
Kafka+Zookeeper+Filebeat+ELK: Building a Log Collection System


ELK

ELK is the mainstream logging stack these days, so it needs little introduction. The data flow in this setup:
- Filebeat collects logs and ships them to Kafka, which buffers them against loss during network problems
- Kafka holds the log messages until Logstash consumes them
- Logstash forwards the logs from Kafka to Elasticsearch
- Kibana visualizes the log data stored in Elasticsearch


Environment:

Software versions:
- Centos 7.4
- java 1.8.0_45
- Elasticsearch 6.4.0
- Logstash 6.4.0
- Filebeat 6.4.0
- Kibana 6.4.0
- Kafka 2.12
- Zookeeper 3.4.13

Servers:
- 10.241.0.1  squid (package distribution, central control)
- 10.241.0.10 node1
- 10.241.0.11 node2
- 10.241.0.12 node3

Deployment roles
- elasticsearch: 10.241.0.10(master),10.241.0.11,10.241.0.12
  https://www.elastic.co/cn/products/elasticsearch
  Elasticsearch lets you run and combine many types of searches (structured, unstructured, geo, metrics)

- logstash: 10.241.0.10,10.241.0.11,10.241.0.12
  https://www.elastic.co/cn/products/logstash
  Logstash supports a wide range of inputs and can capture events from many common sources at the same time

- filebeat: 10.241.0.10,10.241.0.11,10.241.0.12
  https://www.elastic.co/cn/products/beats/filebeat
  Filebeat's built-in modules (auditd, Apache, NGINX, System, MySQL) offer one-step collection, parsing, and visualization of common log formats.

- kibana: 10.241.0.10
  https://www.elastic.co/cn/products/kibana
  Kibana lets you visualize the data in Elasticsearch and navigate the Elastic Stack

- kafka: 10.241.0.10,10.241.0.11,10.241.0.12
  http://kafka.apache.org/
  Kafka is a high-throughput distributed publish-subscribe messaging system that can handle all the activity-stream data of a consumer-scale website
  Kafka cluster deployment was covered in an earlier post: https://www.jianshu.com/p/a9ff97dcfe4e

Installing and deploying ELK

1. Download the packages and verify their integrity

[root@squid ~]# cat /etc/hosts
10.241.0.1  squid
10.241.0.10 node1
10.241.0.11 node2
10.241.0.12 node3

[root@squid ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz
[root@squid ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.tar.gz.sha512
[root@squid ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-linux-x86_64.tar.gz
[root@squid ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-linux-x86_64.tar.gz.sha512
[root@squid ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.tar.gz
[root@squid ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.tar.gz.sha512
[root@squid ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-linux-x86_64.tar.gz
[root@squid ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-linux-x86_64.tar.gz.sha512

[root@squid ~]# yum install perl-Digest-SHA
[root@squid ~]# shasum -a 512 -c  elasticsearch-6.4.0.tar.gz.sha512
elasticsearch-6.4.0.tar.gz: OK
[root@squid ~]# shasum -a 512 -c  filebeat-6.4.0-linux-x86_64.tar.gz.sha512
filebeat-6.4.0-linux-x86_64.tar.gz: OK
[root@squid ~]# shasum -a 512 -c  kibana-6.4.0-linux-x86_64.tar.gz.sha512
kibana-6.4.0-linux-x86_64.tar.gz: OK
[root@squid ~]# shasum -a 512 -c  logstash-6.4.0.tar.gz.sha512
logstash-6.4.0.tar.gz: OK

2. Deploying Elasticsearch

1) Ansible inventory
[root@squid ~]# cat /etc/ansible/hosts 
[client]
10.241.0.10 es_master=true
10.241.0.11 es_master=false
10.241.0.12 es_master=false

2) Create the es user and group
[root@squid ~]# ansible client -m group -a 'name=elk'
[root@squid ~]# ansible client -m user -a 'name=es group=elk home=/home/es shell=/bin/bash'

3) Unpack elasticsearch onto the target hosts
[root@squid ~]# ansible client -m unarchive -a 'src=/root/elasticsearch-6.4.0.tar.gz  dest=/usr/local owner=es group=elk'

4) The prepared es configuration template, to be distributed to every node
[root@squid ~]# cat elasticsearch.yml.j2 
# Cluster name and data/log locations
cluster.name: my_es_cluster
node.name: es-{{ansible_hostname}}
path.data: /data/elk/es/data
path.logs: /data/elk/es/logs
# Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
# Role within the cluster
node.master: {{es_master}}
node.data: true
# Bind address and transport port
network.host: 0.0.0.0
transport.tcp.port: 9300
# Compress TCP transport traffic
transport.tcp.compress: true
http.port: 9200
# Use unicast to discover the other nodes
discovery.zen.ping.unicast.hosts: ["node1","node2","node3"]

5) Run ansible to distribute the config file
[root@squid ~]# ansible client -m template -a 'src=/root/elasticsearch.yml.j2 dest=/usr/local/elasticsearch-6.4.0/config/elasticsearch.yml owner=es group=elk'

6) Raise the system limits (max open file handles, process counts, etc.)
[root@squid ~]# cat change_system_args.sh
#!/bin/bash
# Append the limits only if they are not already present, so the script is idempotent
if [ "`grep 65536 /etc/security/limits.conf`" = "" ]
then
cat >> /etc/security/limits.conf << EOF
* - nofile 1800000
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
EOF
fi

if [ "`grep 655360 /etc/sysctl.conf`" = "" ]
then
echo "vm.max_map_count=655360"  >> /etc/sysctl.conf
fi
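The `grep` guards above make the script idempotent. Here is the same pattern as a self-contained sketch, pointed at a scratch file (the path is a stand-in) so it can be exercised anywhere without touching /etc:

```shell
LIMITS=/tmp/limits.conf.test   # stand-in for /etc/security/limits.conf
: > "$LIMITS"
append_limits() {
  # Append the entries only if the marker value is not already present.
  if ! grep -q 65536 "$LIMITS"; then
cat >> "$LIMITS" << 'EOF'
* soft nofile 65536
* hard nofile 65536
EOF
  fi
}
append_limits
append_limits                 # second run is a no-op
grep -c nofile "$LIMITS"      # prints 2, not 4
```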

7) Run the script via ansible
[root@squid ~]# ansible client -m script -a '/root/change_system_args.sh'

8) Reboot the target hosts so the parameters take effect (the UNREACHABLE errors below are expected: the hosts drop the SSH connection as they go down)
[root@squid ~]# ansible client -m shell -a 'reboot'
10.241.0.11 | UNREACHABLE! => {
    "changed": false, 
    "msg": "SSH Error: data could not be sent to remote host \"10.241.0.11\". Make sure this host can be reached over ssh", 
    "unreachable": true
}
10.241.0.12 | UNREACHABLE! => {
    "changed": false, 
    "msg": "SSH Error: data could not be sent to remote host \"10.241.0.12\". Make sure this host can be reached over ssh",
    "unreachable": true
}
10.241.0.10 | UNREACHABLE! => {
    "changed": false, 
    "msg": "SSH Error: data could not be sent to remote host \"10.241.0.10\". Make sure this host can be reached over ssh",
    "unreachable": true
}
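Because the connection drops mid-reboot, it helps to poll until the nodes answer again before continuing; `wait_until` is a hypothetical helper, and the ansible ping in the comment is the natural probe:

```shell
wait_until() {
  # Re-run a command every $2 seconds (default 10) until it exits 0.
  cmd=$1; interval=${2:-10}
  until eval "$cmd" > /dev/null 2>&1; do
    sleep "$interval"
  done
}
# e.g. wait_until 'ansible client -m ping' 10   # returns once every node answers
```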

9) Create the elk directory
[root@squid ~]# ansible client -m file -a 'name=/data/elk/  state=directory owner=es group=elk'

10) Start es
[root@squid ~]# ansible client -m shell -a 'su - es -c "/usr/local/elasticsearch-6.4.0/bin/elasticsearch -d"'

10.241.0.11 | SUCCESS | rc=0 >>

10.241.0.10 | SUCCESS | rc=0 >>

10.241.0.12 | SUCCESS | rc=0 >>

11) Confirm it is running
[root@squid ~]# ansible client -m shell -a 'ps -ef|grep elasticsearch'
10.241.0.12 | SUCCESS | rc=0 >>
es        3553     1 19 20:35 ?        00:00:48 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.eFvx2dMC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.4.0 -Des.path.conf=/usr/local/elasticsearch-6.4.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/elasticsearch-6.4.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
es        3594  3553  0 20:35 ?        00:00:00 /usr/local/elasticsearch-6.4.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root      3711  3710  0 20:39 ?        00:00:00 /bin/sh -c ps -ef|grep elasticsearch
root      3713  3711  0 20:39 ?        00:00:00 grep elasticsearch

10.241.0.10 | SUCCESS | rc=0 >>
es        4899     1 22 20:35 ?        00:00:54 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.1uRdvBGd -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.4.0 -Des.path.conf=/usr/local/elasticsearch-6.4.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/elasticsearch-6.4.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
es        4940  4899  0 20:35 ?        00:00:00 /usr/local/elasticsearch-6.4.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root      5070  5069  0 20:39 ?        00:00:00 /bin/sh -c ps -ef|grep elasticsearch
root      5072  5070  0 20:39 ?        00:00:00 grep elasticsearch

10.241.0.11 | SUCCESS | rc=0 >>
es        3556     1 19 20:35 ?        00:00:47 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.fnAavDi0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.4.0 -Des.path.conf=/usr/local/elasticsearch-6.4.0/config -Des.distribution.flavor=default -Des.distribution.type=tar -cp /usr/local/elasticsearch-6.4.0/lib/* org.elasticsearch.bootstrap.Elasticsearch -d
es        3597  3556  0 20:35 ?        00:00:00 /usr/local/elasticsearch-6.4.0/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root      3710  3709  0 20:39 ?        00:00:00 /bin/sh -c ps -ef|grep elasticsearch
root      3712  3710  0 20:39 ?        00:00:00 grep elasticsearch

12) Check the cluster status
[root@squid ~]# curl -s http://node1:9200/_nodes/process?pretty |grep -C 5 _nodes
{
  "_nodes" : {
    "total" : 3,
    "successful" : 3,
    "failed" : 0
  },
  "cluster_name" : "my_es_cluster",
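`_cluster/health` gives a quicker verdict than the nodes API. The tiny parser below (an illustrative helper, not part of Elasticsearch) pulls just the `status` field out of that JSON so the check stays a one-liner:

```shell
es_status() {
  # Read _cluster/health JSON on stdin, print only the status value.
  grep -o '"status" *: *"[a-z]*"' | grep -oE 'green|yellow|red'
}
# Usage: curl -s http://node1:9200/_cluster/health | es_status
# "green" means all shards are allocated across the three nodes.
```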

3. Deploying Filebeat

1) Distribute the package to the client hosts
[root@squid ~]# ansible client -m unarchive -a 'src=/root/filebeat-6.4.0-linux-x86_64.tar.gz dest=/usr/local'

2) Rename the unpacked directory
[root@squid ~]# ansible client -m shell -a 'mv /usr/local/filebeat-6.4.0-linux-x86_64 /usr/local/filebeat-6.4.0'
10.241.0.12 | SUCCESS | rc=0 >>

10.241.0.11 | SUCCESS | rc=0 >>

10.241.0.10 | SUCCESS | rc=0 >>

3) Edit the configuration file
[root@squid ~]# cat filebeat.yml.j2 
filebeat.prospectors:
- type: log
  paths:
    - /var/log/supervisor/kafka

output.kafka:
  enabled: true
  hosts: ["10.241.0.10:9092","10.241.0.11:9092","10.241.0.12:9092"]
  topic: kafka_run_log

## Parameter notes
enabled  turns this output module on
hosts    the Kafka brokers filebeat sends its data to
topic    important: the Kafka topic to publish to; if it does not exist, it is created automatically
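Before starting filebeat everywhere, filebeat 6.x can validate the config and probe the kafka output itself (`test config` / `test output`). The guard below only runs when the binary is actually present, since this is meant for one of the nodes rather than the squid host:

```shell
FB=/usr/local/filebeat-6.4.0
if [ -x "$FB/filebeat" ]; then
  # Syntax-check the config, then try to connect to the kafka brokers.
  "$FB/filebeat" test config -c "$FB/filebeat.yml"
  "$FB/filebeat" test output -c "$FB/filebeat.yml"
else
  echo "filebeat not found at $FB - run this on one of the nodes"
fi
```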

4) Push it to the clients, backing up the original config
[root@squid ~]# ansible client -m copy -a 'src=/root/filebeat.yml.j2 dest=/usr/local/filebeat-6.4.0/filebeat.yml backup=yes'

5) Start filebeat
[root@squid ~]# ansible client -m shell -a '/usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml &'
10.241.0.11 | SUCCESS | rc=0 >>

10.241.0.10 | SUCCESS | rc=0 >>

10.241.0.12 | SUCCESS | rc=0 >>

6) Check the filebeat processes
[root@squid ~]# ansible client -m shell -a 'ps -ef|grep filebeat| grep -v grep'
10.241.0.12 | SUCCESS | rc=0 >>
root      4890     1  0 22:50 ?        00:00:00 /usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml

10.241.0.10 | SUCCESS | rc=0 >>
root      6881     1  0 22:50 ?        00:00:00 /usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml

10.241.0.11 | SUCCESS | rc=0 >>
root      4939     1  0 22:50 ?        00:00:00 /usr/local/filebeat-6.4.0/filebeat -c /usr/local/filebeat-6.4.0/filebeat.yml

7) Verify the topic was created
[root@node1 local]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper  10.241.0.10:2181
ConsumerTest
__consumer_offsets
kafka_run_log # topic created by filebeat
topicTest
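To confirm that messages are actually flowing into the topic (not just that it exists), the console consumer can peek at a few of them; the paths and broker address follow the cluster described above:

```shell
KAFKA=/usr/local/kafka
if [ -x "$KAFKA/bin/kafka-console-consumer.sh" ]; then
  # Print the first three messages filebeat published to the topic.
  "$KAFKA/bin/kafka-console-consumer.sh" --bootstrap-server 10.241.0.10:9092 \
      --topic kafka_run_log --from-beginning --max-messages 3
else
  echo "kafka tools not found at $KAFKA - run this on node1"
fi
```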

4. Deploying Logstash

1) Unpack the package onto the target hosts
[root@squid ~]# ansible client -m unarchive -a 'src=/root/logstash-6.4.0.tar.gz dest=/usr/local owner=es group=elk'

2) Logstash configuration
[root@squid ~]# cat logstash.conf.j2
input {
    kafka {
        type => "kafka-logs"
        bootstrap_servers => "10.241.0.10:9092,10.241.0.11:9092,10.241.0.12:9092"
        group_id => "logstash"
        auto_offset_reset => "earliest"
        topics => "kafka_run_log"
        consumer_threads => 5
        decorate_events => true
        }
}

output {
    elasticsearch {
        index => "kafka-run-log-%{+YYYY.MM.dd}"
        hosts => ["10.241.0.10:9200","10.241.0.11:9200","10.241.0.12:9200"]
    }
}
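The `%{+YYYY.MM.dd}` sprintf in the index name means Logstash creates one index per day. A purely illustrative helper that prints the name today's events land in (`date -u` approximates Logstash's UTC `@timestamp` formatting):

```shell
todays_index() {
  # Mirror the kafka-run-log-%{+YYYY.MM.dd} pattern from the output block.
  printf 'kafka-run-log-%s\n' "$(date -u +%Y.%m.%d)"
}
todays_index   # e.g. kafka-run-log-2018.09.04
```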

3) Push the logstash config file to the target hosts with ansible
[root@squid ~]# ansible client -m copy -a 'src=/root/logstash.conf.j2 dest=/usr/local/logstash-6.4.0/config/logstash.conf owner=es group=elk'

4) Start Logstash
[root@squid ~]# ansible client -m shell -a 'su - es -c "/usr/local/logstash-6.4.0/bin/logstash -f /usr/local/logstash-6.4.0/config/logstash.conf &"'

5) Check the Logstash processes
[root@squid ~]# ansible client -m shell -a 'ps -ef|grep logstash|grep -v grep'
10.241.0.11 | SUCCESS | rc=0 >>
es        6040     1 99 23:39 ?        00:02:11 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.0/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.
4.0/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.0/config/logstash.conf

10.241.0.12 | SUCCESS | rc=0 >>
es        5970     1 99 23:39 ?        00:02:13 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.0/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.
4.0/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.0/config/logstash.conf

10.241.0.10 | SUCCESS | rc=0 >>
es        9095     1 98 23:39 ?        00:02:10 /usr/local/jdk1.8.0_45/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.0/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.
4.0/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.0/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.0/config/logstash.conf

5. Deploying Kibana

1) Copy the package to node1
[root@squid ~]# scp kibana-6.4.0-linux-x86_64.tar.gz root@10.241.0.10:/root
kibana-6.4.0-linux-x86_64.tar.gz                 100%  179MB  59.7MB/s   00:03

2) Unpack kibana
[root@node1 ~]# tar  -zxf kibana-6.4.0-linux-x86_64.tar.gz  -C /usr/local
[root@node1 ~]# mv /usr/local/kibana-6.4.0-linux-x86_64/ /usr/local/kibana-6.4.0

3) Edit the configuration
[root@node1 ~]# cat /usr/local/kibana-6.4.0/config/kibana.yml
server.port: 5601
server.host: "10.241.0.10"
kibana.index: ".kibana"

4) Start kibana (in the foreground)
[root@node1 ~]# /usr/local/kibana-6.4.0/bin/kibana

5) Access Kibana
http://10.241.0.10:5601

6) Add the index pattern for the logs
Management -> Kibana: Index Patterns -> Create index pattern (e.g. kafka-run-log-*, matching the index name in the Logstash output)

7) Send messages into the kafka_run_log topic and confirm they show up in Kibana
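One simple end-to-end probe, assuming the watched path from filebeat.yml above: append a marker line to the file and search for it in Kibana's Discover view. The `/tmp` fallback is only so the snippet can be dry-run on a host without that directory:

```shell
LOG=/var/log/supervisor/kafka
[ -w "$(dirname "$LOG")" ] || LOG=/tmp/kafka-pipeline-test.log   # dry-run fallback
MARK="pipeline-test-$(date +%s)"
echo "$MARK" >> "$LOG"
tail -n 1 "$LOG"   # the same marker should appear in Kibana shortly after
```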


Author: baiyongjie
Link: https://www.jianshu.com/p/d072a55aa844
Source: Jianshu (简书)
Copyright belongs to the author; for any form of reproduction, please contact the author for authorization and credit the source.
