
Filebeat keyword and multi-line log collection (multiline and include_lines)

Many colleagues assume that Filebeat cannot handle multi-line logs during collection, so today let's look at Filebeat's multiline and include_lines options.

Let's start with a case. Given the log below, we only want to collect the entries containing error:

2017/06/22 11:26:30 [error] 26067#0: *17918 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.32.17, server: localhost, request: "GET /wss/ HTTP/1.1", upstream: "http://192.168.12.106:8010/", host: "192.168.12.106"
2017/06/22 11:26:30 [info] 26067#0:
2017/06/22 12:05:10 [error] 26067#0: *17922 open() "/data/programs/nginx/html/ws" failed (2: No such file or directory), client: 192.168.32.17, server: localhost, request: "GET /ws HTTP/1.1", host: "192.168.12.106"

filebeat.yml is configured as follows:

filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/test.log
  include_lines: ['error']
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-errors-log

Check the Kafka queue:

Sure enough, only the entries containing the "error" keyword were collected:

{"@timestamp":"2017-06-23T08:57:25.227Z","beat":{"name":"192.168.12.106"},"input_type":"log","message":"2017/06/22 12:05:10 [error] 26067#0: *17922 open() /data/programs/nginx/html/ws failed (2: No such file or directory), client: 192.168.32.17, server: localhost, request: GET /ws HTTP/1.1, host: 192.168.12.106","offset":30926,"source":"/tmp/test.log","type":"log"}
{"@timestamp":"2017-06-23T08:57:32.228Z","beat":{"name":"192.168.12.106"},"input_type":"log","message":"2017/06/22 12:05:10 [error] 26067#0: *17922 open() /data/programs/nginx/html/ws failed (2: No such file or directory), client: 192.168.32.17, server: localhost, request: GET /ws HTTP/1.1, host: 192.168.12.106","offset":31342,"source":"/tmp/test.log","type":"log"}
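
One thing to keep in mind: the entries in include_lines are regular expressions matched against each raw line, not plain substrings, so a bare 'error' would also match if the word appeared inside a request path or message body. As a minimal sketch (output section unchanged and omitted here, and the pattern itself is only an illustration), the filter can be anchored on nginx's bracketed level instead:

filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/test.log
  # match the bracketed nginx level "[error]" rather than the bare word "error"
  include_lines: ['\[error\]']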

Now for a multi-line case:

[2016-05-25 12:39:04,744][DEBUG][action.bulk              ] [Set] [***][3] failed to execute bulk item (index) index {[***][***][***], source[{***}}
MapperParsingException[Field name [events.created] cannot contain '.']
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:305)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
    at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)
    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
    at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

filebeat.yml is configured as follows:

filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/test.log
  multiline:
    pattern: '^\['
    negate: true
    match: after
  fields:
    beat.name: 192.168.12.106
  fields_under_root: true
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-errors-log

The Kafka queue then looks like this:

{"@timestamp":"2017-06-23T09:09:02.887Z","beat":{"name":"192.168.12.106"},"input_type":"log",
"message":"[2016-05-25 12:39:04,744][DEBUG][action.bulk              ] [Set] [***][3] failed to execute bulk item (index) index {[***][***][***], source[{***}}\n
MapperParsingException[Field name [events.created] cannot contain ‘.‘]\n    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)\n    
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)\n    
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)\n    
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:305)\n    
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)\n    
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)\n    
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)\n    
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)\n    
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)\n    
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)\n    
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)\n   
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)\n    
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)\n    
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)\n    
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)\n   
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n    
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n    
at java.lang.Thread.run(Thread.java:745)\n\n\n\n","offset":35737,"source":"/tmp/test.log","type":"log"}

As you can see, multiline merges the multi-line log into a single event.
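
Here pattern: '^\[' with negate: true and match: after means that any line which does not start with '[' is appended to the previous line that does, which is exactly what a Java stack trace needs. As a minimal sketch, assuming your Filebeat version supports the standard multiline safeguards (the values below are only illustrative), the same prospector can also cap and flush merged events:

  multiline:
    pattern: '^\['
    negate: true
    match: after
    max_lines: 500   # upper bound on how many lines are merged into one event
    timeout: 5s      # flush a pending multiline event if no new line arrives in time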

Now let's use multiline and include_lines together.

filebeat.yml is configured as follows:

filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/test.log
  include_lines: ['error']
  multiline:
    pattern: '^\['
    negate: true
    match: after
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: logstash-errors-log

That is, any log entry containing the "error" keyword is merged with its continuation lines and sent to Kafka.

In testing, when log lines are written continuously, lines that do not contain "error" also end up merged into the shipped events; when writes are spaced out, the filtering works noticeably better. This is expected: multiline merging runs before the include_lines filter, so once a merged event contains "error" anywhere, the whole event, including its non-error lines, is sent. Apply it with your own traffic pattern in mind.
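
If that over-collection matters, one option is a stricter include_lines pattern anchored to the start of the merged event, since the filter sees the whole combined message. A hypothetical sketch for the Elasticsearch-style sample above (the regex is an illustration tied to that log format; the rest of the prospector stays as before) keeps only events whose first line is at ERROR level:

  # keep only merged events whose first line carries the ERROR level
  include_lines: ['^\[[^\]]+\]\[ERROR']
  multiline:
    pattern: '^\['
    negate: true
    match: after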

In short, Filebeat can both merge multi-line logs and collect logs by keyword.