Collecting MySQL Slow Logs with ELK and Alerting
This setup uses Filebeat to collect the logs, Redis as the log buffer, and Logstash to consume and process them, writing the processed events into Elasticsearch. Kibana handles visualization, and ElastAlert raises alerts on long-running slow queries.
1. Installing the ELK stack
Reference: https://www.cnblogs.com/98record/p/13648570.html
2. Installing ElastAlert
2.1 The official Git repository
It is deployed with Docker.
```shell
[root@centos2 opt]# git clone https://github.com/Yelp/elastalert.git
[root@centos2 opt]# cd elastalert
[root@centos2 elastalert]# ls
changelog.md         docs           Makefile              requirements.txt          tests
config.yaml.example  elastalert     pytest.ini            setup.cfg                 tox.ini
docker-compose.yml   example_rules  README.md             setup.py
Dockerfile-test      LICENSE        requirements-dev.txt  supervisord.conf.example

# Create the Dockerfile
[root@centos2 elastalert]# cat Dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade -y
RUN apt-get -y install build-essential python3 python3-dev python3-pip libssl-dev git
WORKDIR /home/elastalert
ADD requirements*.txt ./
RUN pip3 install -r requirements-dev.txt

# Build the image and start the container
[root@centos2 elastalert]# docker build -t elastalert:1 .
[root@centos2 elastalert]# docker run -itd --name elastalert -v `pwd`/:/home/elastalert/ elastalert:1
[root@centos2 elastalert]# docker exec -it elastalert bash
root@45f77d2936d4:/home/elastalert# pip install elastalert
```
2.2 The integrated Git repository
The official Docker setup has not been updated in years, which causes a number of problems, and it does not include the DingTalk plugin. So, to fit my own needs, I integrated the DingTalk plugin and rewrote the Dockerfile. I have uploaded the relevant files to my Gitee, merged with the official code; if you need it, just pull it directly.
```shell
git clone https://gitee.com/rubbishes/elastalert-dingtalk.git
cd elastalert-dingtalk
docker build -t elastalert:1 .
docker run -itd --name elastalert -v `pwd`/:/home/elastalert/ elastalert:1
```
3. Configuration
3.1 Filebeat configuration
[root@mysql-178 filebeat-7.6.0-linux-x86_64]# vim filebeat.yml

```yaml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/mysql/data/mysql-178-slow.log
    #- c:\programdata\elasticsearch\logs\*
  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that match any regular expression from the list.
  #exclude_lines: ['^\# Time']
  exclude_lines: ['^\# Time|^/usr/local/mysql/bin/mysqld|^Tcp port|^Time']
  multiline.pattern: '^\# Time|^\# User'
  multiline.negate: true
  multiline.match: after
  # Whether Filebeat re-reads the log from the beginning; by default it does.
  #tail_files: true
  tags: ["mysql-slow-log"]

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: true
  # Period on which files under path should be checked for changes
  reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to
# group all the transactions sent by a single shipper in the web interface.
name: 10.228.81.178

#============================== Kibana ========================================
setup.kibana:
  # Scheme and port can be left out and default to http and 5601
  #host: "localhost:5601"
  #space.id:

#================================ Outputs =====================================
# Elasticsearch and Logstash outputs are unused here; we ship to Redis instead.
#output.elasticsearch:
  # hosts: ["localhost:9200"]
#output.logstash:
  # hosts: ["localhost:5044"]

#================================ Processors ==================================
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  # Drop fields we do not need
  - drop_fields:
      fields: ["beat", "offset", "prospector"]

#================================ Logging =====================================
# Available log levels are: error, warning, info, debug.
# Turn on debug while first setting things up, then comment it out again.
#logging.level: debug

#================================ Output to Redis =============================
output.redis:
  hosts: ["10.228.81.51:6379"]
  password: "123456"
  db: "1"
  key: "mysqllog"
  timeout: 5
  datatype: list
```
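To make the hand-off between Filebeat and Logstash concrete, here is a minimal Python sketch of roughly what lands on the Redis list: one JSON document per multiline-joined slow-log event, tagged `mysql-slow-log`. The field values and the in-memory list are illustrative stand-ins, not captured output from a real deployment.

```python
import json

# Illustrative shape of a Filebeat event after the multiline join;
# values here are invented for the sketch.
event = {
    "@timestamp": "2020-09-10T02:15:04.000Z",
    "name": "10.228.81.178",
    "tags": ["mysql-slow-log"],
    "message": (
        "# User@Host: root[root] @  [10.228.81.20]  Id:    77\n"
        "# Query_time: 12.000123  Lock_time: 0.000082  "
        "Rows_sent: 1  Rows_examined: 5000000\n"
        "SET timestamp=1599704104;\n"
        "select count(*) from big_table;"
    ),
}

# Filebeat effectively does RPUSH mysqllog '<json>'; a plain list stands in
# for the Redis list so this sketch runs without a server.
redis_list = []
redis_list.append(json.dumps(event))

# Logstash's redis input pops entries; its json filter decodes "message".
decoded = json.loads(redis_list.pop(0))
print(decoded["tags"][0])  # mysql-slow-log
```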
3.2 Logstash configuration
Deploy Logstash with Docker or the binary tarball; when deployed from the rpm package it complained that the ruby filter statement was not supported.
```
input {
    redis {
        host => "10.228.81.51"
        port => 6379
        password => "123456"
        db => "1"
        data_type => "list"
        key => "mysqllog"
    }
}
filter {
    json {
        source => "message"
    }
    grok {
        match => [ "message" , "(?m)^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IPV4:clientip})?\]\s+Id:\s+%{NUMBER:row_id:int}\n#\s+Query_time:\s+%{NUMBER:query_time:float}\s+Lock_time:\s+%{NUMBER:lock_time:float}\s+Rows_sent:\s+%{NUMBER:rows_sent:int}\s+Rows_examined:\s+%{NUMBER:rows_examined:int}\n\s*(?:use %{DATA:database};\s*\n)?SET\s+timestamp=%{NUMBER:timestamp};\n\s*(?<sql>(?<action>\w+)\b.*;)\s*(?:\n#\s+Time)?.*$" ]
    }
    # Replace the timestamp
    date {
        locale => "en"
        match => ["timestamp","UNIX"]
        target => "@timestamp"
    }
    # MySQL logs in UTC, eight hours behind us, so add 8 hours to the
    # timestamp before handing it to ES
    ruby {
        code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*3600)"
    }
}
output {
    stdout {
        # Debug output; useful while testing, but disable it afterwards or the
        # log volume gets huge, especially when monitoring something like mysql-binlog
        codec => rubydebug
    }
    # If the first tag equals mysql-slow-log, write to ES with an index named
    # mysql-slow-log-<year.month.day>
    if [tags][0] == "mysql-slow-log" {
        elasticsearch {
            hosts => ["10.228.81.51:9200"]
            index => "%{[tags][0]}-%{+YYYY.MM.dd}"
        }
    }
}
```
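Before wiring the grok pattern into Logstash, it can help to check the capture logic against a sample entry. Below is an approximate Python translation of the pattern (grok macros such as `%{USER}` and `%{NUMBER}` replaced with plain regex, and the trailing `# Time` alternation dropped); the sample log text is invented for illustration.

```python
import re

# Approximate Python equivalent of the grok pattern used above.
pattern = re.compile(
    r"^#\s+User@Host:\s+(?P<user>\S+)\[[^\]]+\]\s+@\s+"
    r"(?:(?P<clienthost>\S*) )?\[(?P<clientip>[\d.]*)\]\s+Id:\s+(?P<row_id>\d+)\n"
    r"#\s+Query_time:\s+(?P<query_time>[\d.]+)\s+Lock_time:\s+(?P<lock_time>[\d.]+)\s+"
    r"Rows_sent:\s+(?P<rows_sent>\d+)\s+Rows_examined:\s+(?P<rows_examined>\d+)\n"
    r"\s*(?:use (?P<database>\w+);\s*\n)?"
    r"SET\s+timestamp=(?P<timestamp>\d+);\n"
    r"\s*(?P<sql>(?P<action>\w+)\b.*;)",
    re.MULTILINE | re.DOTALL,
)

# Invented sample entry in the slow-log format Filebeat forwards.
sample = (
    "# User@Host: root[root] @  [10.228.81.20]  Id:    77\n"
    "# Query_time: 12.000123  Lock_time: 0.000082  Rows_sent: 1  Rows_examined: 5000000\n"
    "SET timestamp=1599704104;\n"
    "select count(*) from big_table;"
)

m = pattern.search(sample)
print(m.group("user"), m.group("query_time"), m.group("action"))
# root 12.000123 select
```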
3.3 ElastAlert configuration
3.3.1 Configuring config.yaml
First copy the shipped example:

```shell
cp config.yaml.example config.yaml
```
Then adjust it as needed, for example:
```yaml
# Mainly point ElastAlert at your ES host and port; the rest can stay as-is

# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: example_rules

run_every:
  minutes: 1

buffer_time:
  minutes: 15

# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: 10.228.81.51

# The Elasticsearch port
es_port: 9200

# The index on es_host which is used for metadata storage
# This can be a unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
writeback_alias: elastalert_alerts

# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
```
If you pulled from my Git repo, just edit `config.yaml` directly; the changes are much the same as above.
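As a quick sanity check of what `run_every` and `buffer_time` mean: ElastAlert wakes up every minute and, for rules that query over a window, looks back fifteen minutes. A tiny sketch with `datetime` (the clock value is fixed so the sketch is repeatable):

```python
from datetime import datetime, timedelta

# run_every / buffer_time values from the config.yaml above
run_every = timedelta(minutes=1)
buffer_time = timedelta(minutes=15)

now = datetime(2020, 9, 10, 12, 0, 0)  # fixed "current time" for the sketch
window_start = now - buffer_time       # start of the query window for this run
next_run = now + run_every             # when ElastAlert wakes up again

print(window_start)  # 2020-09-10 11:45:00
print(next_run)      # 2020-09-10 12:01:00
```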
3.3.2 Configuring rule.yaml
This is where your alert rules are configured.
DingTalk notification
```shell
cd example_rules
cat mysql_rule.yaml
```
```yaml
# ES host and port
es_host: 10.228.81.51
es_port: 9200
# Do not use https
use_ssl: False
# Rule identifier; must be unique.
name: My-Product Exception Alert
# Rule type
## Supported types: any, blacklist, whitelist, change, frequency, spike,
## flatline, new_term, cardinality
### frequency: fires when num_events matching events occur within timeframe
### for the same query_key
type: frequency
# Index name; wildcards and regex work the same as in Kibana
index: mysql-*
# Number of events that triggers the alert
num_events: 1
# Paired with num_events: one hit within 5 minutes raises an alert
timeframe:
  minutes: 5
# Alert filter
filter:
- query:
    query_string:
      # ES query syntax; test the query in Kibana first, then paste it here
      query: "user:eopuser OR user:root"
# Fields to include; all fields by default
include: ["message","clientip","query_time"]
# Alert channel; DingTalk here, email and WeChat Work are also supported
alert:
- "elastalert_modules.dingtalk_alert.DingTalkAlerter"
# Your robot's webhook API
dingtalk_webhook: "https://oapi.dingtalk.com/robot/send?access_token=96eabeeaf956bb26128fed1259cxxxxxxxxxxfa6b2baeb"
# DingTalk message type; "text" also serves as the robot keyword
dingtalk_msgtype: "text"
#alert_subject: "test"
# Alert body format
alert_text: "
text: 1 \n
IP: {}\n
QUERYTIME: {}
"
alert_text_args:
- clientip
- query_time
```
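The frequency semantics above can be illustrated with a toy check: alert when at least `num_events` events fall into any `timeframe`-long window. This is a simplified model for intuition, not ElastAlert's actual implementation:

```python
from datetime import datetime, timedelta

def should_alert(event_times, num_events, timeframe):
    """True if any timeframe-long window holds at least num_events events."""
    event_times = sorted(event_times)
    for start in event_times:
        hits = [t for t in event_times if start <= t <= start + timeframe]
        if len(hits) >= num_events:
            return True
    return False

# With num_events: 1 and timeframe: 5 minutes, a single matching slow query
# is enough to trigger an alert.
t0 = datetime(2020, 9, 10, 12, 0, 0)
print(should_alert([t0], 1, timedelta(minutes=5)))  # True
print(should_alert([], 1, timedelta(minutes=5)))    # False
```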
Email notification
```yaml
# Much like the DingTalk rule; you just need to fill in some email settings
root@45f77d2936d4:/home/elastalert/example_rules# cat myrule_email.yaml
es_host: 10.228.81.51
es_port: 9200
use_ssl: False
# name must be unique; ideally it identifies your product
name: My-Product Exception Alert
# Rule type; "any" sends a mail for every matching event
type: any
# Index to monitor; wildcards supported
index: mysql-*
num_events: 50
timeframe:
  hours: 4
filter:
- query:
    query_string:
      query: "user:eopuser OR user:root"
# Alert via email
alert:
- "email"
# Mail body
alert_text: "test"
# SMTP server settings (Aliyun enterprise mail in my case)
smtp_host: smtp.mxhichina.com
smtp_port: 25
# Credentials file; needs user and password attributes
smtp_auth_file: smtp_auth_file.yaml
email_reply_to: [email protected]
from_addr: [email protected]
# Recipient list
email:
- "[email protected]"
```
Because the account and password also live in a yaml file, create it in the same directory:

```shell
root@45f77d2936d4:/home/elastalert/example_rules# cat smtp_auth_file.yaml
user: "[email protected]"
password: "123456"
```
Note: if you built from my code, you must edit the `example_rules/myrule.yaml` rule file; other rule file names will not take effect unless you also adjust my `run.sh` script.
3.3.3 Installing the DingTalk plugin
The stock image ships without the DingTalk plugin, so it has to be installed by hand. If you built from my Dockerfile it is already included and you can skip this.
```shell
git clone https://github.com.cnpmjs.org/xuyaoqiang/elastalert-dingtalk-plugin.git
cd elastalert-dingtalk-plugin/
# Copy the elastalert_modules directory into the elastalert root directory
cp -r elastalert_modules ../elastalert/
```
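For reference, a DingTalk "text" robot expects a JSON body of the shape below, which the plugin builds from `alert_text`. The payload shape is DingTalk's documented webhook format; the plugin's internals are not reproduced here, and the sample content is illustrative.

```python
import json

def build_dingtalk_payload(content):
    """Body for POSTing to the robot webhook with Content-Type: application/json."""
    return {"msgtype": "text", "text": {"content": content}}

# Illustrative content in the alert_text format from the rule above.
payload = build_dingtalk_payload("text: 1\nIP: 10.228.81.20\nQUERYTIME: 12.0")
body = json.dumps(payload, ensure_ascii=False)
print(body)
# Sending would be an HTTP POST of `body` to dingtalk_webhook, e.g. via
# urllib.request; omitted so this sketch stays offline.
```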
4. Startup
Startup order:
ES > Kibana > elastalert > Redis > Filebeat > Logstash
The key point is to start ES first so that Kibana can come up. Then start elastalert so alerting is in place, and Redis so Filebeat has somewhere to ship logs. Filebeat then starts collecting into Redis, and Logstash starts last, consuming the data from Redis and writing it to ES.
Starting the other components is covered in the document referenced at the beginning, so I won't repeat it; only the elastalert startup deserves a few extra words.
As before, if the Docker image was built from my code, this step is unnecessary.
```shell
# Enter the container
[root@centos2 elastalert]# docker exec -it elastalert bash
# First test that the rule file works
root@45f77d2936d4:/home/elastalert# elastalert-test-rule example_rules/myrule.yaml
# If it looks good, run ElastAlert in the background
root@45f77d2936d4:/home/elastalert# nohup python3 -m elastalert.elastalert --verbose --rule example_rules/myrule.yaml &
root@45f77d2936d4:/home/elastalert# exit
```
5. Extras
Writing the elastalert Dockerfile
```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade -y && apt-get install -y build-essential python3 python3-dev python3-pip libssl-dev git && echo "Asia/Shanghai" > /etc/timezone
WORKDIR /home/elastalert
ADD ./* ./
# Link the log file to stdout so `docker logs` shows ElastAlert's output
RUN pip install elastalert && ln -sf /dev/stdout elastalert.log
CMD ["/bin/bash","run.sh"]
```
Run it:
```shell
docker run -itd --name elastalert -v /root/elastalert/:/home/elastalert/ -v /etc/localtime:/etc/localtime elastalert:1
```