Installing and Deploying Zabbix to Monitor ELK Logs (CentOS 7): Complete Guide

  Today my manager asked that Zabbix monitor the error and alert logs of all of our servers in real time.
  Because we have a large number of servers producing a large volume of logs, writing a standalone script to parse and report them would likely consume too many resources, so I used our existing ELK deployment to collect only the error logs and forward them, following a Zabbix + ELK setup I found online. After a dozen or so hours of trial and testing, error and warning logs are now monitored successfully. Since I hit quite a few pitfalls along the way, I wrote up the whole procedure in this post for others to reference.

Install the JDK:

# tar xf jdk-15_linux-aarch64_bin.tar.gz -C /usr/local/
# mv /usr/local/jdk-15/ /usr/local/jdk1.8.0

Register the Java binaries with alternatives:

# alternatives --install /usr/bin/java java /usr/local/jdk1.8.0/jre/bin/java 3000
# alternatives --install /usr/bin/jar jar /usr/local/jdk1.8.0/bin/jar 3000
# alternatives --install /usr/bin/javac javac /usr/local/jdk1.8.0/bin/javac 3000
# alternatives --install /usr/bin/javaws javaws /usr/local/jdk1.8.0/jre/bin/javaws 3000
# alternatives --set java /usr/local/jdk1.8.0/jre/bin/java
# alternatives --set jar /usr/local/jdk1.8.0/bin/jar
# alternatives --set javac /usr/local/jdk1.8.0/bin/javac
# alternatives --set javaws /usr/local/jdk1.8.0/jre/bin/javaws

Check the Java version:

# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

Install Logstash:

Download the package from the official site, then extract and install it:

https://www.elastic.co/cn/downloads/logstash

# unzip logstash-7.9.2.zip
# mv logstash-7.9.2 /usr/local/logstash

Install the logstash-integration-jdbc, logstash-output-zabbix, and logstash-input-beats (master) plugins:

# /usr/local/logstash/bin/logstash-plugin install logstash-integration-jdbc
Validating logstash-integration-jdbc
Installing logstash-integration-jdbc
Installation successful
# /usr/local/logstash/bin/logstash-plugin install logstash-output-zabbix
Validating logstash-output-zabbix
Installing logstash-output-zabbix
Installation successful
# wget https://github.com/logstash-plugins/logstash-input-beats/archive/master.zip -O /opt/master.zip
# unzip -d /usr/local/logstash /opt/master.zip

Install Elasticsearch:

# yum install elasticsearch-6.6.2.rpm

Edit the main configuration file:

# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-application        # line 17
node.name: node-1                   # line 23
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.191.130       # line 55
http.port: 9200                     # line 59

Start the Elasticsearch service:

# systemctl enable elasticsearch
# systemctl start elasticsearch

Verify the service:

# netstat -lptnu | grep java
tcp6  0  0  192.168.191.130:9200  :::*  LISTEN  14671/java
tcp6  0  0  192.168.191.130:9300  :::*  LISTEN  14671/java

The approach: Logstash reads the system log files and filters the entries, keeping only those that contain the error keywords. Those matching entries are output to Zabbix, where Zabbix's alerting mechanism fires the trigger. In short, Logstash pulls the logs, filters them, and sends the results to Zabbix.
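The keep-only-error-lines logic that the pipeline implements can be sketched outside Logstash. The following Python snippet illustrates the idea; the sample log lines and the keyword list are invented for illustration:

```python
# A minimal sketch (not Logstash itself) of the pipeline's filtering idea:
# keep only log lines that contain an alert keyword.
# Sample lines and keywords are made up for illustration.
sample_log = [
    "Oct  8 20:37:47 web01 sshd[1203]: Accepted password for ops",
    "Oct  8 20:37:48 web01 sshd[1207]: Failed password for root",
    "Oct  8 20:37:49 web01 kernel: error: disk I/O failure",
]

KEYWORDS = ("Failed", "error", "ERROR", "WARN")

def filter_alerts(lines):
    """Return only the lines carrying one of the alert keywords."""
    return [line for line in lines if any(k in line for k in KEYWORDS)]

for line in filter_alerts(sample_log):
    print(line)
```

Logstash expresses the same logic declaratively in its filter block; anything that does not match is dropped before it ever reaches the zabbix output.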

Add a test configuration file:

# vim /usr/local/logstash/config/from_beat.conf
input {
    beats {
        port => 5044
    }
}
filter {
    # filter the access log
    if ([source] =~ "localhost\_access\_log") {
        grok {
            match => { message => ["%{COMMONAPACHELOG}"] }
        }
        date {
            match => ["request_time", "ISO8601"]
            locale => "cn"
            target => "request_time"
        }
    # filter the tomcat log
    } else if ([source] =~ "catalina") {
        # use a regular expression to extract the content into fields
        grok {
            match => { message => ["(?<webapp_name>\[\w+\])\s+(?<request_time>\d{4}\-\d{2}\-\d{2}\s+\w{2}\:\w{2}\:\w{2}\,\w{3})\s+(?<log_level>\w+)\s+(?<class_package>[^.^\s]+(?:\.[^.\s]+)+)\.(?<class_name>[^\s]+)\s+(?<message_content>.+)"] }
        }
        # parse the request time
        date {
            match => ["request_time", "ISO8601"]
            locale => "cn"
            target => "request_time"
        }
    } else {
        drop {}
    }
}
output {
    if ([source] =~ "localhost_access_log") {
        elasticsearch {
            hosts => ["192.168.132.129:9200"]
            index => "access_log"
        }
    } else {
        elasticsearch {
            hosts => ["192.168.132.129:9200"]
            index => "tomcat_log"
        }
    }
    stdout { codec => rubydebug }
}
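The catalina grok pattern is easy to get wrong, so it helps to sanity-check it against a sample line before deploying. Below is a rough Python translation of the same regular expression (note that Python spells named groups `(?P<name>...)` where grok/Oniguruma accepts `(?<name>...)`; the sample log line is invented):

```python
import re

# Rough Python equivalent of the catalina grok pattern above,
# for sanity-checking against a sample line.
CATALINA = re.compile(
    r"(?P<webapp_name>\[\w+\])\s+"
    r"(?P<request_time>\d{4}-\d{2}-\d{2}\s+\w{2}:\w{2}:\w{2},\w{3})\s+"
    r"(?P<log_level>\w+)\s+"
    r"(?P<class_package>[^.^\s]+(?:\.[^.\s]+)+)\.(?P<class_name>[^\s]+)\s+"
    r"(?P<message_content>.+)"
)

# Hypothetical line in the format the pattern expects.
line = "[myapp] 2020-10-08 20:37:52,766 INFO org.apache.catalina.core.StandardService Starting service"
m = CATALINA.match(line)
print(m.group("log_level"), m.group("class_package"), m.group("class_name"))
```

Note how backtracking splits the class path: `class_package` captures everything up to the last dot and `class_name` takes the final component.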

Start Logstash in the foreground to test whether it receives the log content forwarded by Filebeat:

# /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/from_beat.conf

On a successful start you should see log output like the following:

Sending Logstash logs to /usr/local/logstash/logs which is now configured via log4j2.properties
[2020-10-08T20:37:47,334][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc Java HotSpot(TM) 64-Bit Server VM 25.131-b11 on 1.8.0_131-b11 +indy +jit [linux-x86_64]"}
[2020-10-08T20:37:47,923][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-10-08T20:37:50,204][INFO ][org.reflections.Reflections] Reflections took 42 ms to scan 1 urls, producing 22 keys and 45 values
[2020-10-08T20:37:51,436][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/zabbix.conf"], :thread=>"#<Thread:0x2ef7b133 run>"}
[2020-10-08T20:37:52,520][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.06}
[2020-10-08T20:37:52,766][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/local/logstash/data/plugins/inputs/file/.sincedb_730aea1d074d4636ec2eacfacc10f882", :path=>["/var/log/secure"]}
[2020-10-08T20:37:52,830][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-10-08T20:37:52,921][INFO ][filewatch.observingtail  ][main][5ffcc74b3b6be0e4daa892ae39a07dc20fdbc1d05bd5cedc4b4290930274f61e] START, creating Discoverer, Watch with file and sincedb collections
[2020-10-08T20:37:52,963][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-10-08T20:37:53,369][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

If there are no errors after startup, wait for Logstash to finish initializing; this can take quite a long time!

Pitfalls I ran into!

Errors that showed up in the logs:

[2020-10-08T09:06:42,311][WARN ][logstash.outputs.zabbix ][main][630c433ba0be0739e8ee72ca91d03f00695f05873b64e12c7488b8f2c32a8e05] Zabbix server at 192.168.132.130 rejected all items sent. {:zabbix_host=>"192.168.132.129"}

This means the Zabbix server rejected everything Logstash sent:

Solution: adjust the firewall rules to allow the Logstash host's IP through. Important: do not simply disable the firewall! Also verify that the Zabbix address and the Logstash address in the configuration files match.

[2020-10-08T20:41:45,154][WARN ][logstash.outputs.zabbix ][main][cf6b448e829beca8b4ffbd64e71c6e510108015eec5933f7b4675d79d5f09f03] Field referenced by 192.168.132.130 is missing

The referenced field is missing:

Solution: write an entry that carries a message field into the secure log.
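One quick way to produce such an entry is to emit a test message through syslog to the authpriv facility, which a default CentOS 7 rsyslog setup routes to /var/log/secure. A minimal Python sketch; the ident string and message text are arbitrary, and it assumes a local syslog daemon is running:

```python
import syslog

# Emit a test entry via the authpriv facility; on a default CentOS 7
# rsyslog configuration, authpriv.* is written to /var/log/secure,
# giving the pipeline a fresh line with a message payload to pick up.
syslog.openlog("logstash-test", facility=syslog.LOG_AUTHPRIV)
syslog.syslog(syslog.LOG_WARNING, "Failed password test entry for the zabbix pipeline")
syslog.closelog()
```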

A problem caused by a missing plugin:

The configuration file failed to start, which caused an endless restart loop.  # for reference only

Add a configuration file under /usr/local/logstash/config:

# vim /usr/local/logstash/config/file_to_zabbix.conf
input {
    file {
        path => ["/var/log/secure"]
        type => "system"
        start_position => "beginning"
        add_field => ["[zabbix_key]", "oslogs"]              # adds a field named zabbix_key whose value is oslogs
        add_field => ["[zabbix_host]", "192.168.132.129"]    # adds a field named zabbix_host; the value (this host's IP) must match the "Host name" configured in the Zabbix web UI
    }
}
output {
    zabbix {
        zabbix_host => "[zabbix_host]"            # reads the zabbix_host field (the IP) added in the input above
        zabbix_key => "[zabbix_key]"              # reads the zabbix_key field (oslogs) added in the input above
        zabbix_server_host => "192.168.132.130"   # the Zabbix server's IP address (the monitoring side)
        zabbix_server_port => "10051"             # the Zabbix server's listening port
        zabbix_value => "message"                 # important: the field whose value is sent to the Zabbix item (oslogs); "message" is the default
    }
    # stdout { codec => rubydebug }    # enable this for the first test run; disable it once everything works
}
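For context, the logstash-output-zabbix plugin talks the Zabbix sender (trapper) protocol to port 10051: a "ZBXD" magic header, a flags byte, a little-endian body length, then a JSON body. As a rough illustration of that wire format only, not the plugin's actual code, here is a Python sketch that builds such a packet; the host, key, and value mirror the configuration above:

```python
import json
import struct

def build_zabbix_packet(host: str, key: str, value: str) -> bytes:
    """Build one Zabbix sender-protocol packet: "ZBXD" magic,
    flags byte 0x01, body length as a 64-bit little-endian
    integer, then the JSON body."""
    body = json.dumps({
        "request": "sender data",
        "data": [{"host": host, "key": key, "value": value}],
    }).encode()
    return b"ZBXD\x01" + struct.pack("<Q", len(body)) + body

# Values mirroring file_to_zabbix.conf; the log line is invented.
pkt = build_zabbix_packet("192.168.132.129", "oslogs",
                          "Oct  8 20:41:45 web01 sshd: Failed password")
print(len(pkt), "bytes")
```

Sending this to the server and reading back the JSON response is what the plugin (or the zabbix_sender CLI tool) does for you.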

Testing (when you are not sure the configuration file is correct, run Logstash in the foreground):

# /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/file_to_zabbix.conf    # foreground test

# /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/file_to_zabbix.conf &>/dev/null &    # background run: append &>/dev/null &

Install zabbix-agent on this host:

# yum install zabbix-agent

Modify the zabbix-agent configuration file:

# vim /etc/zabbix/zabbix_agentd.conf
Server=192.168.132.130          # line 98: the Zabbix server's IP
ServerActive=192.168.132.130    # line 139: the Zabbix server's IP

Create the needed template in the Zabbix web UI:

Create an application under the template:

Create the monitoring item:

The item type must be "Zabbix trapper", and the item key must match the zabbix_key defined in the configuration file.

Create a trigger:

Link the finished template to the client, i.e. to host 192.168.132.129 whose log data we are monitoring; when an abnormal log entry appears, an alert will fire.

Under Monitoring → Latest data, fetch the client's newest log entries:

Open the history to see the detailed log content:

The content highlighted in the red box is exactly the message field we defined in Logstash!
