Filebeat: reading nginx logs and writing to Kafka
阿新 • Published: 2018-11-29
Filebeat configuration for writing to Kafka:
filebeat.inputs:
- type: log
  paths:
    - /tmp/access.log
  tags: ["nginx-test"]
  fields:
    type: "nginx-test"
    log_topic: "nginxmessages"
  fields_under_root: true

processors:
- drop_fields:
    fields: ["beat", "input", "source", "offset"]

name: 10.10.5.119

output.kafka:
  enabled: true
  hosts: ["10.78.1.85:9092", "10.78.1.87:9092", "10.78.1.71:9092"]
  topic: "%{[log_topic]}"
  partition.round_robin:
    reachable_only: true
  worker: 2
  required_acks: 1
  compression: gzip
  max_message_bytes: 10000000
Logstash configuration for reading from Kafka (the topic name matches the log_topic field set in the Filebeat config above):
input {
kafka {
bootstrap_servers => "10.78.1.85:9092,10.78.1.87:9092,10.78.1.71:9092"
topics => ["nginxmessages"]
codec => "json"
}
}
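The input block above only consumes events from Kafka; a complete Logstash pipeline also needs an output section. A minimal sketch for verifying that events arrive (printing each decoded event to the console; in production you would typically send to Elasticsearch instead) could look like:

output {
  stdout {
    codec => rubydebug   # pretty-print each event for debugging
  }
}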