
Quickly Setting Up a Log Platform with ELK

1.  Preface


1.1.  Current situation

Previously, looking at logs meant logging into the server over an SSH client and paging through them with less or tail. With a service deployed on several machines, you had to log in to each one separately, and pay attention to the timestamps: a single operation might produce log lines scattered across machines, so reconstructing what a user did meant ordering those lines by time while hopping between hosts. Searching was awkward too: you had to be fluent with vi and less, and it was easy for your eyes to glaze over. To simplify log retrieval, the logs can be collected and indexed; anyone who has used Lucene knows how fast that kind of search is. Practically every internet company has its own log management platform and monitoring platform (Zabbix, for instance), whether self-built or provided by a cloud vendor such as Alibaba Cloud. Below, we use ELK to build a fairly realistic log management platform.

1.2.  Log format

Each line of our logs is currently formatted like this:

2018-08-22 00:34:51.952 [INFO ] [org.springframework.kafka.KafkaListenerEndpointContainer#0-1-C-1] [com.cjs.handler.MessageHandler][39] - Registration event message received:

1.3.  logback.xml

The Logback configuration defines this output format.
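A minimal sketch of such a configuration, assuming one rolling file appender per level with daily archiving; the appender names and the oh-coupon path are illustrative assumptions:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_HOME" value="/data/logs/oh-coupon"/>
    <!-- Matches the sample line: 2018-08-22 00:34:51.952 [INFO ] [thread] [logger][line] - message -->
    <property name="PATTERN"
              value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5level] [%thread] [%logger][%line] - %msg%n"/>

    <appender name="INFO_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/info.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Archive once a day, as described in section 1.4 -->
            <fileNamePattern>${LOG_HOME}/info.%d{yyyy-MM-dd}.log</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>${PATTERN}</pattern>
        </encoder>
    </appender>

    <appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/error.log</file>
        <!-- Only ERROR events go to error.log -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/error.%d{yyyy-MM-dd}.log</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>${PATTERN}</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="INFO_FILE"/>
        <appender-ref ref="ERROR_FILE"/>
    </root>
</configuration>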

1.4.  Environment

In this example, every application writes its logs under /data/logs/${projectName}, e.g. /data/logs/oh-coupon and /data/logs/oh-promotion.

Filebeat, Logstash, Elasticsearch, and Kibana all run on a single virtual machine, each as a single instance, with no other middleware involved.

Since the logs are archived daily and live output always goes to info.log or error.log, Filebeat only needs to watch those two files.

 

2.  Filebeat configuration


Filebeat's configuration lives mainly in the filebeat.inputs and output.logstash sections of filebeat.yml:

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  # Paths of the log files to harvest
  paths:
    - /data/logs/oh-coupon/info.log
    - /data/logs/oh-coupon/error.log
  # Extra fields to attach to each event
  fields:
    log_source: oh-coupon
  fields_under_root: true
  # Multiline handling: merge lines that do not start with a "yyyy-MM-dd"
  # date (e.g. stack trace frames) into the preceding line
  multiline.pattern: ^\d{4}-\d{1,2}-\d{1,2}
  multiline.negate: true
  multiline.match: after

  # Check the files for new lines every 5 seconds
  scan_frequency: 5s
  # Close the file handle if the file has not changed for 1 hour
  close_inactive: 1h  
  # Ignore files last modified more than 24 hours ago
  #ignore_older: 24h


- type: log
  enabled: true
  paths:
    - /data/logs/oh-promotion/info.log
    - /data/logs/oh-promotion/error.log
  fields:
    log_source: oh-promotion
  fields_under_root: true
  multiline.pattern: ^\d{4}-\d{1,2}-\d{1,2}
  multiline.negate: true
  multiline.match: after
  scan_frequency: 5s
  close_inactive: 1h  
  ignore_older: 24h

#================================ Outputs =====================================

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
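
Before wiring everything together, the configuration can be sanity-checked with Filebeat's built-in test subcommands (present in the 6.x series); the second one checks that the Logstash endpoint is reachable:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml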

 

3.  Logstash configuration


3.1.  logstash.yml

# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "123456"
xpack.monitoring.elasticsearch.url: ["http://localhost:9200"]

3.2.  Pipeline configuration

input {
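    # Receive events shipped by Filebeat on port 5044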
    beats {
        port => "5044"
    }
}
filter {
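    # Pull the timestamp and level out of the head of each line, e.g.
    # "2018-08-22 00:34:51.952 [INFO ] ..." -> log_date, log_level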
    grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:log_date}\s+\[%{LOGLEVEL:log_level}" }
    }
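    # Overwrite @timestamp (by default the ingest time) with the time
    # parsed from the log line itself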
    date {
        match => ["log_date", "yyyy-MM-dd HH:mm:ss.SSS"]
        target => "@timestamp"
    }
}
output {
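    # Route each application's events to its own daily index, keyed on
    # the log_source field that Filebeat attached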
    
    if [log_source] == "oh-coupon" {
        elasticsearch {
            hosts => [ "localhost:9200" ]
            index => "oh-coupon-%{+YYYY.MM.dd}"
            user => "logstash_internal"
            password => "123456"
        }
    }

    if [log_source] == "oh-promotion" {
        elasticsearch {
            hosts => [ "localhost:9200" ]
            index => "oh-promotion-%{+YYYY.MM.dd}"
            user => "logstash_internal"
            password => "123456"
        }
    }

}

3.3.  Plugins

Logstash offers a large number of plugins for inputs, filters, and outputs.

I haven't covered Logstash plugins in earlier posts, and since it all comes down to configuration I won't write a separate post about them either; a brief pointer here will do. The following pages are particularly helpful:

https://www.elastic.co/guide/en/logstash/current/input-plugins.html

https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html

https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html

https://www.elastic.co/guide/en/logstash/current/filebeat-modules.html

https://www.elastic.co/guide/en/logstash/current/output-plugins.html

https://www.elastic.co/guide/en/logstash/current/logstash-config-for-filebeat-modules.html

https://www.elastic.co/guide/en/logstash/current/filter-plugins.html

This example uses the beats input plugin, the grok and date filter plugins, and the elasticsearch output plugin.

The most important of these is grok, which lets us extract the fields we want from the message; a worked example follows the links below.

grok

https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns

date

https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-target

Field references

https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#logstash-config-field-references
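
To make this concrete: applied to the sample line from section 1.2, the grok pattern above yields

log_date  => "2018-08-22 00:34:51.952"
log_level => "INFO"

and the date filter then parses log_date with the "yyyy-MM-dd HH:mm:ss.SSS" format and writes the result into @timestamp, so sorting and filtering in Kibana follow the time a line was logged rather than the time it was ingested.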

 

4.  Elasticsearch configuration


4.1.  elasticsearch.yml

xpack.security.enabled: true

Everything else is left at its defaults.
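
One consequence of enabling security is that the users referenced in the Logstash and Kibana configs must actually exist. The built-in users (elastic, kibana, logstash_system) get their passwords via bin/elasticsearch-setup-passwords; logstash_internal is not built in and has to be created, e.g. through the 6.x security API as sketched below. The logstash_writer role is an assumption modelled on the Logstash security documentation:

bin/elasticsearch-setup-passwords interactive

curl -u elastic -XPOST "localhost:9200/_xpack/security/role/logstash_writer" -H 'Content-Type: application/json' -d'
{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [{
    "names": ["oh-coupon-*", "oh-promotion-*"],
    "privileges": ["write", "create_index"]
  }]
}'

curl -u elastic -XPOST "localhost:9200/_xpack/security/user/logstash_internal" -H 'Content-Type: application/json' -d'
{
  "password": "123456",
  "roles": ["logstash_writer"]
}'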

 

5.  Kibana configuration


5.1.  kibana.yml

server.port: 5601

server.host: "192.168.101.5"

elasticsearch.url: "http://localhost:9200"

kibana.index: ".kibana"

elasticsearch.username: "kibana"
elasticsearch.password: "123456"

xpack.security.enabled: true
xpack.security.encryptionKey: "4297f44b13955235245b2497399d7a93"

 

6.  Starting the services


6.1.  Start Elasticsearch

Elasticsearch refuses to run as root, which is why the session switches to an ordinary user first:

[root@localhost ~]# su - cheng
[cheng@localhost ~]$ cd $ES_HOME
[cheng@localhost elasticsearch-6.3.2]$ bin/elasticsearch

6.2.  Start Kibana

[cheng@localhost kibana-6.3.2-linux-x86_64]$ bin/kibana

6.3.  Start Logstash

The first command merely validates second-pipeline.conf and exits; the second starts Logstash for real, reloading the pipeline automatically whenever the file changes:

[root@localhost logstash-6.3.2]# bin/logstash -f second-pipeline.conf --config.test_and_exit
[root@localhost logstash-6.3.2]# bin/logstash -f second-pipeline.conf --config.reload.automatic

6.4.  Start Filebeat

Deleting data/registry wipes Filebeat's record of how far it has read each file, so the watched logs are shipped again from the beginning; convenient for this walkthrough, but not something to do casually in production:

[root@localhost filebeat-6.3.2-linux-x86_64]# rm -f data/registry
[root@localhost filebeat-6.3.2-linux-x86_64]# ./filebeat -e -c filebeat.yml -d "publish"

 

7.  Demo

With everything running, create the index patterns oh-coupon-* and oh-promotion-* in Kibana under Management > Index Patterns; the incoming logs can then be browsed and searched in Discover.


 
