ELK 6.3.1 + ZooKeeper + Kafka + Filebeat: collecting Docker Swarm container logs
ZooKeeper mainly provides distributed locking and coordination, guaranteeing consistency, atomicity, and isolation for the Kafka cluster.
Kafka is the message queue in the middle: the producer (Filebeat) publishes log messages, and the consumer (Logstash) reads them and ships them into Elasticsearch. Kafka acts as a buffer and relieves pressure on the backend.
Alright, let's get to it.
First, download ZooKeeper and Kafka:
wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
wget http://mirror.bit.edu.cn/apache/kafka/1.1.0/kafka_2.12-1.1.0.tgz
Note that I installed everything on a single server, so the hostname has to be added to /etc/hosts.
For the ZooKeeper and Kafka installation itself, you can follow this guide:
http://www.cnblogs.com/saneri/p/8822116.html
Only the ZooKeeper and Kafka parts are needed; remember to change the IPs and hostnames to match your environment.
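Since the Filebeat output below reaches the broker as node1, the /etc/hosts entry might look like this (the IP-to-hostname mapping is an assumption based on this single-server setup):

```
# /etc/hosts - map the broker hostname to this server's IP (assumed value)
192.168.9.36  node1
```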
Once that's done, verify it:
ZooKeeper + Kafka cluster test
Create a topic:
List topics:
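The exact verification commands aren't shown above; assuming a Kafka 1.1.0 tarball install with ZooKeeper listening on node1:2181 (the install path and port are assumptions), creating and listing a topic would look roughly like this:

```shell
cd /usr/local/kafka   # assumed install directory

# Create a test topic with one partition and one replica
bin/kafka-topics.sh --create --zookeeper node1:2181 \
  --replication-factor 1 --partitions 1 --topic test

# List all topics known to the cluster
bin/kafka-topics.sh --list --zookeeper node1:2181
```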
Once that succeeds, move on to configuring Filebeat. Here we are still collecting the logs of the Tomcat and Nginx containers in the Docker Swarm cluster.
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/docker-nginx/access_json.log
  # Per-prospector field used below to route each log to its own Kafka topic
  fields:
    log_topics: 192.168.9.36-nginx
- type: log
  enabled: true
  paths:
    - /var/log/docker-tomcat/catalina.out
  fields:
    log_topics: 192.168.9.36-tomcat
  #include_lines: ['ERROR','WARN']
  #exclude_lines: ['DEBUG']

output.kafka:
  enabled: true
  hosts: ["node1:9092"]
  # Topic is taken from the log_topics field set on each prospector above
  topic: '%{[fields][log_topics]}'
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
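To sanity-check the pipeline as far as Kafka, you can run Filebeat in the foreground and watch one of the topics with the console consumer (the config path and Kafka directory are assumptions for this setup):

```shell
# Run Filebeat in the foreground with the config above (assumed path)
filebeat -e -c /etc/filebeat/filebeat.yml

# In another shell, from the Kafka directory: consume the nginx topic
# to confirm messages are arriving
bin/kafka-console-consumer.sh --bootstrap-server node1:9092 \
  --topic 192.168.9.36-nginx --from-beginning
```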
Next, configure Logstash:
input {
  kafka {
    bootstrap_servers => "node1:9092"
    topics => ["192.168.9.36-nginx","192.168.9.36-tomcat"]
    codec => "json"
    consumer_threads => 1
    decorate_events => true
    auto_offset_reset => "latest"
  }
}
filter {
  date {
    match => ["logdate","MMM dd HH:mm:ss yyyy"]
    target => "@timestamp"
    timezone => "Asia/Shanghai"
  }
  # Shift @timestamp forward 8 hours so Elasticsearch/Kibana show local (CST)
  # time. Note: the commonly copied one-liner "event.timestamp.time.localtime+8*60*60"
  # computes a value but never stores it, so the event has to be updated via event.set.
  ruby {
    code => "event.set('@timestamp', LogStash::Timestamp.new(event.get('@timestamp').time + 8*60*60))"
  }
}
output {
  if [fields][log_topics] == "192.168.9.36-nginx" {
    elasticsearch {
      hosts => ["http://192.168.9.142:9200"]
      index => "192.168.9.36-nginx-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
  if [fields][log_topics] == "192.168.9.36-tomcat" {
    elasticsearch {
      hosts => ["http://192.168.9.142:9200"]
      index => "192.168.9.36-tomcat-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
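Assuming the pipeline above is saved as kafka-es.conf (the filename is an assumption), you can validate the syntax first and then start Logstash:

```shell
# Validate the pipeline configuration without starting the pipeline
bin/logstash -f kafka-es.conf -t

# Then run it for real
bin/logstash -f kafka-es.conf
```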
Once everything is in place, start the services and run a quick test.
Then go and check the results.
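A quick way to check is to ask Elasticsearch for its index list; the new per-host indices from the output section above should show up:

```shell
# List indices; expect entries like 192.168.9.36-nginx-6.3.1-YYYY.MM.dd
curl -s 'http://192.168.9.142:9200/_cat/indices?v'
```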