
Docker (8): Installing ELK

The Evolution of Service Deployment

Traditional architecture: single-application deployment

The application runs on a single node, and its logs are written to the storage media of that same physical machine.

Microservice architecture deployment

Applications are deployed in a distributed, clustered manner across multiple physical machines, and each instance writes its logs to the machine it runs on.

Deploying microservices on Kubernetes

Applications are deployed as Docker containers on a Kubernetes platform, and application logs end up inside the individual Pods.

System Architecture

Elasticsearch

A distributed search and analytics engine that aggregates and enriches your data and stores it. In this stack, Elasticsearch stores the logs and handles queries.

Logstash

An open-source data collection engine with real-time pipelining that can dynamically unify data from different sources. In this stack, Logstash receives the logs, parses them, and sends them to Elasticsearch for storage.

Kibana

An open-source analytics and visualization platform. Kibana provides the UI.

Filebeat

A lightweight shipper for forwarding and centralizing log data. In this stack, Filebeat collects the log files you specify and forwards them to the destination you choose.

Deploying ELK

Write the docker-compose file

version: '2'
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    ports:
      - "9200:9200"
    volumes:
      - /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /mydata/elasticsearch/data:/usr/share/elasticsearch/data
      - /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins
    environment:
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
      - "discovery.type=single-node"
      - "COMPOSE_PROJECT_NAME=elasticsearch-server"
    restart: 'no'

  kibana:
    depends_on:
      - elasticsearch
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.8.1
    ports:
      - "5601:5601"
    restart: 'no'
    environment:
      - ELASTICSEARCH_HOSTS=http://192.168.1.20:9200

  filebeat:
    container_name: filebeat
    image: docker.elastic.co/beats/filebeat:7.8.1
    user: root
    volumes:
      - /home/chinda/log:/var/log
      - /mydata/filebeat/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: filebeat -e -strict.perms=false

  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:7.8.1
    ports:
      - 5044:5044
    restart: 'no'
    volumes:
      - /mydata/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - /mydata/logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml
    command: bin/logstash --config.reload.automatic --http.port 9600
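Every bind mount above expects its host-side file to exist already; if one is missing, Docker will typically create an empty directory in its place and the container will fail to start. A quick, optional pre-flight sketch (the path list simply mirrors the volumes above; adjust it to your layout):

```python
import os

def missing_mount_sources(paths):
    """Return the subset of host paths that do not exist yet."""
    return [p for p in paths if not os.path.exists(p)]

# Host-side files referenced by the compose file above.
mount_sources = [
    "/mydata/elasticsearch/config/elasticsearch.yml",
    "/mydata/filebeat/filebeat.docker.yml",
    "/mydata/logstash/pipeline/logstash.conf",
    "/mydata/logstash/settings/logstash.yml",
]

for p in missing_mount_sources(mount_sources):
    print(f"create before starting: {p}")
```

Run this before `docker-compose up -d`; an empty output means all mount sources are in place.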

Configure Filebeat

filebeat.inputs:
  - type: log
    multiline:
      pattern: '^\d{4}-\d{2}-\d{2}'
      negate: true
      match: after
    tags: ['chinda']
    fields:
      app_id: chinda_app
    exclude_lines: ['^DBG']
    paths:
      - /var/log/*/*.log

output.logstash:
  hosts: ["192.168.1.20:5044"]
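The multiline settings above mean: any line that does NOT start with a date (`negate: true`) is appended after the previous matching line (`match: after`), so a stack trace stays attached to the log line that produced it. A small Python sketch (not Filebeat itself) of that grouping behavior:

```python
import re

# Same pattern as the Filebeat config: a line beginning with yyyy-mm-dd
# starts a new log event; anything else continues the previous event.
pattern = re.compile(r'^\d{4}-\d{2}-\d{2}')

def group_events(lines):
    """Group raw lines into log events the way the multiline config would."""
    events = []
    for line in lines:
        if pattern.match(line) or not events:
            events.append(line)          # a dated line opens a new event
        else:
            events[-1] += "\n" + line    # continuation line: append after
    return events

lines = [
    "2020-10-13 14:58:26.801 ERROR 25810 --- [main] com.example.App : boom",
    "java.lang.NullPointerException",
    "    at com.example.Foo.bar(Foo.java:42)",
    "2020-10-13 14:58:27.001 INFO 25810 --- [main] com.example.App : next",
]
print(len(group_events(lines)))  # → 2: the stack trace folds into event 1
```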

Configure Logstash

# The # character at the beginning of a line indicates a comment. Use
# comments to describe your configuration.
input {
    beats {
        port => "5044"
    }
}
# The filter part of this file is commented out to indicate that it is
# optional.
filter {
    grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:log_date}\s*%{LOGLEVEL:log_level}\s*%{POSINT}\s*---\s*\[%{GREEDYDATA}\]\s*%{JAVAFILE:log_class}(.*?[:])\s*(?<log_content>.*$)" }
    }

    date {
        timezone => "Asia/Shanghai"
        match => [ "log_date", "yyyy-MM-dd HH:mm:ss.SSS" ]
    }
}

output {
    elasticsearch {
        hosts => [ "192.168.1.20:9200" ]
        index => "chinda_index"
    }
}

Note: the grok expression matches log lines in this format: 2020-10-13 14:58:26.801 WARN 25810 --- [o-auto-1-exec-7] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection com.mysql.cj.jdbc.ConnectionImpl@2512a45f (No operations allowed after connection closed.). Possibly consider using a shorter maxLifetime value.
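To see which fields the grok expression pulls out of such a line, here is a rough Python regex equivalent (an approximation for illustration, not grok itself) applied to a shortened version of the sample line:

```python
import re

# Approximates the grok pattern above: timestamp, level, PID, thread in
# brackets, logger class, then the free-form message after the colon.
log_re = re.compile(
    r'(?P<log_date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\s*'
    r'(?P<log_level>[A-Z]+)\s*\d+\s*---\s*\[[^\]]*\]\s*'
    r'(?P<log_class>\S+)\s*:\s*(?P<log_content>.*)$'
)

sample = ("2020-10-13 14:58:26.801 WARN 25810 --- [o-auto-1-exec-7] "
          "com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - "
          "Failed to validate connection")

m = log_re.match(sample)
print(m.group("log_level"), m.group("log_class"))
# → WARN com.zaxxer.hikari.pool.PoolBase
```

The named groups correspond to the `log_date`, `log_level`, `log_class`, and `log_content` fields that the `date` filter and Elasticsearch output then consume.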