
Parsing nginx logs with logstash


Use logstash to parse, filter, and transform nginx logs.
The configuration below is suitable for production use. The architecture: filebeat reads the log files and pushes them into redis, then logstash pulls the events from redis and processes them.
The user_agent string and the client IP are also parsed, which makes later statistics easier.
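The filter section below assumes that nginx writes its access log as JSON, with field names matching the ones used in the filters (access_time, remote_addr, agent, body_bytes_sent, up_response_time, request_time). A minimal log_format sketch that would produce such lines (the format name json_log is arbitrary; escape=json requires nginx 1.11.8 or newer):

log_format json_log escape=json
    '{"access_time":"$time_local",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":"$status",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"request_time":"$request_time",'
    '"up_response_time":"$upstream_response_time",'
    '"agent":"$http_user_agent"}';
access_log /var/log/nginx/access.log json_log;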

input {
    redis {
        host => "192.168.1.109"
        port => 6379
        db => "0"
        data_type => "list"
        key => "test"
    }
}
filter {
    # the nginx access log line arrives as JSON in the "message" field; parse it into top-level fields
    json {
        source => "message"
        remove_field => "message"
    }
    # split the user agent string into browser/OS fields, dropping the parts we do not need
    useragent {
        source => "agent"
        target => "agent"
        remove_field => ["[agent][build]","[agent][os_name]","[agent][device]","[agent][minor]","[agent][patch]"]
    }
    # use the nginx access time as the event @timestamp
    date {
        match => ["access_time", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
    # drop filebeat metadata and convert numeric fields to proper types
    mutate {
        remove_field => ["beat","host","prospector","@version","offset","input","source","access_time"]
        convert => {
            "body_bytes_sent" => "integer"
            "up_response_time" => "float"
            "request_time" => "float"
        }
    }
    # resolve the client IP to geo information and build a [longitude, latitude] array in
    # [geoip][coordinates]; add_field is applied before remove_field, so the values are still available
    geoip {
        source => "remote_addr"
        target => "geoip"
        remove_field => ["[geoip][country_code3]","[geoip][location]","[geoip][longitude]","[geoip][latitude]","[geoip][region_code]"]
        add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
        add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
    }
    mutate {
        convert => ["[geoip][coordinates]","float"]
    }
}
output {
    if [tags][0] == "newvp" {
        elasticsearch {
                hosts  => ["192.168.1.110:9200","192.168.1.111:9200","192.168.1.112:9200"]
                index  => "%{type}-%{+YYYY.MM.dd}"
        }
        stdout {
                codec => rubydebug
        }
        # stdout is only for debugging; remove it in production
    }
}
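
Once logstash is running, a quick sanity check is to confirm that the daily index shows up on one of the Elasticsearch nodes from the config above (the index name here assumes the newvp type set by filebeat):

curl '192.168.1.110:9200/_cat/indices/newvp-*?v'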

The filebeat configuration for reading the logs:

filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log
  tags: ["newvp"]             # matched by the conditional in the logstash output
  fields:
    type: newvp               # used as the index name prefix (%{type})
  fields_under_root: true     # put "type" at the top level of the event
output.redis:
  hosts: ["192.168.1.109"]
  key: "test"                 # the redis list that logstash reads from
  datatype: list
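
To confirm that filebeat is actually pushing events into redis before logstash consumes them, the length of the list and a sample entry can be inspected with redis-cli (host and key from the configs above):

redis-cli -h 192.168.1.109 LLEN test
redis-cli -h 192.168.1.109 LRANGE test 0 0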
