Unified Logging: ELK Deployment and Configuration (3) — logstash
1. Download
Download from the official site: https://www.elastic.co/downloads/logstash
2. Configuration
1. Edit jvm.options under config:
① Adjust the minimum and maximum heap sizes as needed.
② I am running JDK 1.8 and using G1 for garbage collection, so the GC flags need to be reconfigured:
-XX:+UnlockDiagnosticVMOptions
-XX:+UseCompressedOops
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-XX:G1ReservePercent=20
-XX:+G1SummarizeConcMark
-XX:InitiatingHeapOccupancyPercent=40
-XX:+AlwaysPreTouch
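Put together, the relevant part of config/jvm.options might look like the sketch below. The 4g heap is a placeholder — size it for your machine — and remember to comment out the default CMS flags (such as -XX:+UseConcMarkSweepGC) that ship with Logstash when switching to G1:

```
# Heap: set min and max to the same value (4g is an example — adjust to your host)
-Xms4g
-Xmx4g

# G1 settings, replacing the default CMS flags
-XX:+UnlockDiagnosticVMOptions
-XX:+UseCompressedOops
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-XX:G1ReservePercent=20
-XX:+G1SummarizeConcMark
-XX:InitiatingHeapOccupancyPercent=40
-XX:+AlwaysPreTouch
```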
2. Create a new configuration file, service.conf.
3. In service.conf, configure data collection, parsing, and output:
#== Input: consume from the Kafka cluster
input{
  kafka{
    bootstrap_servers => "10.31.140.96:9092,10.31.140.99:9092,10.31.140.93:9092" #== comma-separated string
    topics => ["elk-service-log"]
    group_id => "service-log-group"
    codec => "json"
    auto_offset_reset => "earliest"
    max_partition_fetch_bytes => "52428700"
    max_poll_records => "300"
    consumer_threads => "16"
  }
}
#== Parsing — the log payload here is JSON, so parse it as follows
filter{
mutate{
    # Replace control characters with spaces. Note there are no "|" separators:
    # inside a character class "|" is literal and would also get replaced.
    gsub => ["message", "[\n\r\f\t]", " "]
    # Replace backslashes with forward slashes (the backslash must be escaped)
    gsub => ["message", "[\\]", "/"]
}
json{
    source => "message" #== filebeat puts the collected log line, as JSON, in the message field
#target => "doc"
}
  if [type] == "nginx-log" { #== match on the type field and drop the filebeat fields we don't need
    grok{
      match => { "@version" => "1" }
      remove_field => ["@version", "offset", "beat", "source", "input_type", "message"]
      #remove_field => ["request_body"]
    }
  } else {
    grok{
      match => { "@version" => "1" }
      remove_field => ["@version", "offset", "beat", "source", "input_type"]
    }
  }
}
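To see why the mutate/gsub step has to run before the json filter, here is a small Python sketch (not part of logstash — purely an illustration, with a made-up log line): raw control characters inside a JSON string are illegal, so parsing fails until they are flattened to spaces.

```python
import json
import re

# A filebeat-style event whose "msg" payload contains a raw tab and a raw
# newline — illegal inside a JSON string, so parsing fails as-is.
raw = '{"level": "ERROR", "msg": "boom\tat Foo.bar()\nat Baz.qux()"}'

try:
    json.loads(raw)
except json.JSONDecodeError:
    print("raw line is not parseable JSON")

# The same two substitutions as the gsub rules above:
# control characters -> space, backslash -> forward slash
sanitized = re.sub(r"[\n\r\f\t]", " ", raw)
sanitized = re.sub(r"[\\]", "/", sanitized)

doc = json.loads(sanitized)  # now parses cleanly
print(doc["msg"])            # prints "boom at Foo.bar() at Baz.qux()"
```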
#== Output to the Elasticsearch cluster
output{
  if [type] == "service-log" { #== type is a custom field added in filebeat, used to tell log sources apart
    elasticsearch{
      hosts => ["es1:9200","es2:9200","es3:9200"]
      index => "service-log-%{+yyyy.MM.dd}" #== one index per day
      codec => "json"
      manage_template => true
      template => "/mnt/logstash-5.6.4/templates/service-log.json" #== index template for this log
      template_name => "service-log"
      template_overwrite => true
    }
  }
  if [type] == "nginx-log" {
    elasticsearch{
      hosts => ["es1:9200","es2:9200","es3:9200"]
      index => "nginx-log-%{+yyyy.MM.dd}"
      codec => "json"
      manage_template => true
      template => "/mnt/logstash-5.6.4/templates/nginx-log.json"
      template_name => "nginx-log"
      template_overwrite => true
    }
  }
}
3. Log index templates:
An index template fixes field types and index settings at index-creation time; tuned well, it improves Elasticsearch performance and reduces the resources Elasticsearch consumes.
The template looks like this:
{
  "template": "service-log*",
  "settings": {
    "index.number_of_shards": 12,
    "index.number_of_replicas": 0,
    "index.refresh_interval": "10s"
  },
  "mappings": {
    "java": {
      "_all": {
        "analyzer": "ik_max_word", #== ik tokenizer
        "enabled": false #== disable _all (these #== comments are annotations only — strip them from the real JSON file)
      },
      "properties": {
        "@timestamp": {
          "format": "dateOptionalTime",
          "type": "date"
        },
        "date": {
          "type": "keyword"
        },
        "tranceId": {
          "type": "keyword"
        },
        "sequenceId": {
          "type": "keyword"
        },
        "level": {
          "type": "keyword"
        },
        "appName": {
          "type": "text",
          "analyzer": "ik_max_word"
        },
        "serverName": {
          "type": "text",
          "analyzer": "ik_max_word"
        },
        "port": {
          "type": "integer"
        },
        "class": {
          "type": "text",
          "analyzer": "ik_max_word"
        },
        "method": {
          "type": "text",
          "analyzer": "ik_max_word"
        },
        "line": {
          "type": "integer"
        },
        "message": {
          "type": "text",
          "analyzer": "ik_max_word"
        }
      }
    }
  }
}
4. Startup:
bin/logstash -f service.conf &
5. Maintenance:
If logs in a different format need to be fed into ELK, add a type identifier on the filebeat side, branch on that type in logstash, and then create a new index template for the new index when writing to Elasticsearch.
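As a sketch (the "order-log" type name here is hypothetical), a new log source would only need one more branch in the output section of service.conf, plus its own template file:

```
# Added alongside the existing branches in output{} — "order-log" is an example type
if [type] == "order-log" {
  elasticsearch{
    hosts => ["es1:9200","es2:9200","es3:9200"]
    index => "order-log-%{+yyyy.MM.dd}"
    codec => "json"
    manage_template => true
    template => "/mnt/logstash-5.6.4/templates/order-log.json"
    template_name => "order-log"
    template_overwrite => true
  }
}
```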
Notes:
The template pattern must match the index configured in the logstash Elasticsearch output; otherwise the template will not take effect, although documents will still be written to Elasticsearch.
For example, with "template": "service-log*", configuring index => "service-log-%{+yyyy.MM.dd}" works, whereas index => "nginx-log-%{+yyyy.MM.dd}" will report that the template cannot be found.
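The template's "template" field is matched against each newly created index name with a simple shell-style wildcard. Python's fnmatchcase behaves the same way and is used here purely as an illustration (the dates are made up):

```python
from fnmatch import fnmatchcase

# "template": "service-log*" is a wildcard pattern matched against
# the name of each newly created index.
pattern = "service-log*"

print(fnmatchcase("service-log-2018.01.15", pattern))  # True  -> template applies
print(fnmatchcase("nginx-log-2018.01.15", pattern))    # False -> template not applied
```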