Elasticsearch Configuration and Usage
阿新 • Published: 2018-06-22

1. Lab Environment
Notes before configuring: 1. Disable the firewall. 2. Disable SELinux. 3. Synchronize time across all hosts.
This lab uses 8 CentOS hosts to build a filebeat + redis + logstash + els cluster (3 nodes) + kibana pipeline for searching log content. The roles: filebeat collects the local httpd logs and ships them to redis; redis buffers the stream so that logstash is not overwhelmed when data volume spikes; logstash formats the collected data into a specified structure; the els cluster analyzes the formatted documents, builds indexes, and serves queries; kibana provides the graphical query interface.
Logical topology (diagram omitted): httpd+filebeat -> redis -> logstash -> els cluster (3 nodes) <- nginx reverse proxy <- kibana
2. Lab Steps
All four software packages used in this lab are version 5.6.
Download page: https://www.elastic.co/cn/products
Step 1. Collect the httpd service's log files and send the data to the redis service
httpd + filebeat server configuration
[root@filebeat ~]# yum install -y httpd
[root@filebeat ~]# echo test > /var/www/html/index.html
[root@filebeat ~]# systemctl start httpd
[root@filebeat ~]# rpm -ivh filebeat-5.6.10-x86_64.rpm

Relevant configuration files:
/etc/filebeat/filebeat.full.yml    # template configuration file
/etc/filebeat/filebeat.yml         # main configuration file

To configure the redis output, copy the corresponding section from the template file into the main configuration file:

output.redis:
  enabled: true                  # enable this output
  hosts: ["172.18.100.2:6379"]   # redis server
  port: 6379
  key: filebeat                  # name of the redis key
  password: centos               # omit if no password is set
  db: 0                          # which database to write to
  datatype: list                 # data type
  worker: 1                      # number of workers writing data
  loadbalance: true              # load-balance writes across multiple redis servers

[root@filebeat ~]# systemctl start filebeat
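With datatype: list, filebeat serializes each log line as a JSON document and appends it to the redis list named by key (here "filebeat"); logstash later pops documents off the same list. A minimal Python sketch of that queue behavior, with field names modeled on the sample logstash output later in this post (no real redis involved):

```python
import json
from collections import deque

# Stand-in for the redis list "filebeat" (datatype: list).
queue = deque()

def rpush(event: dict) -> None:
    """What filebeat's redis output does: append one JSON document."""
    queue.append(json.dumps(event))

def lpop() -> dict:
    """What logstash's redis input does: pop the oldest document."""
    return json.loads(queue.popleft())

# One access-log line as filebeat would ship it (fields per the sample output).
rpush({
    "message": '127.0.0.1 - - [20/Jun/2018:11:21:19 -0400] '
               '"GET / HTTP/1.1" 200 5 "-" "curl/7.29.0"',
    "source": "/var/www/html -> /var/log/httpd/access_log".split(" -> ")[1],
    "type": "log",
    "beat": {"hostname": "filebeat.test.com", "version": "5.6.10"},
})

event = lpop()
print(event["source"])  # /var/log/httpd/access_log
```

Because redis is a first-in-first-out buffer here, a burst of log traffic simply grows the list; logstash drains it at its own pace.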
redis configuration
[root@redis ~]# yum install -y redis
[root@redis ~]# vim /etc/redis.conf
bind 0.0.0.0
port 6379
requirepass centos
[root@nginx1 ~]# systemctl start redis
Generate some access-log entries, then check for them in redis:
[root@nginx1 ~]# redis-cli -a centos
127.0.0.1:6379> KEYS *
1) "filebeat" #即可驗證成功
Step 2. Configure logstash to pull data from redis, format it, store it in elasticsearch, and display it
[root@nginx2 ~]# rpm -ivh logstash-5.6.10.rpm
[root@nginx2 ~]# cd /etc/logstash/conf.d/
[root@nginx2 conf.d]# vim redis-logstash-els.conf    # create the file; any name ending in .conf works
input {
redis {
batch_count => 1
data_type => "list"
key => "filebeat"
host => "172.18.100.2"
port => 6379
password => "centos"
threads => 5
}
}
filter {
grok {
match => {
"message" => "%{HTTPD_COMBINEDLOG}"
}
remove_field => "message"
}
date {
match => ["timestamp","dd/MM/YYYY:H:m:s Z"]
remove_field => "timestamp"
}
}
output {
stdout {
codec => rubydebug
}
}
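The %{HTTPD_COMBINEDLOG} grok pattern splits an Apache combined-log line into named fields (clientip, verb, response, and so on), which is where the fields in the output below come from. A rough Python equivalent using a hand-written regex (an approximation for illustration, not the bundled grok pattern itself):

```python
import re

# Simplified stand-in for grok's HTTPD_COMBINEDLOG pattern.
COMBINED = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>\S+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = '127.0.0.1 - - [20/Jun/2018:11:21:19 -0400] "GET / HTTP/1.1" 200 5 "-" "curl/7.29.0"'
fields = COMBINED.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])  # 127.0.0.1 GET 200
```

The remove_field => "message" setting then drops the raw line, since all of its information now lives in the extracted fields.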
Display the formatted content on the terminal. (Note: the "_dateparsefailure" tag in the output below means the date filter's pattern did not match the Apache timestamp; the correct Joda pattern for it is "dd/MMM/yyyy:HH:mm:ss Z".)
[root@nginx2 conf.d]# /usr/share/logstash/bin/logstash -f redis-logstash-els.conf
{
"request" => "/",
"agent" => "\"curl/7.29.0\"",
"offset" => 93516,
"auth" => "-",
"ident" => "-",
"input_type" => "log",
"verb" => "GET",
"source" => "/var/log/httpd/access_log",
"type" => "log",
"tags" => [
[0] "_dateparsefailure"
],
"referrer" => "\"-\"",
"@timestamp" => 2018-06-20T15:21:20.094Z,
"response" => "200",
"bytes" => "5",
"clientip" => "127.0.0.1",
"beat" => {
"name" => "filebeat.test.com",
"hostname" => "filebeat.test.com",
"version" => "5.6.10"
},
"@version" => "1",
"httpversion" => "1.1",
"timestamp" => "20/Jun/2018:11:21:19 -0400"
}
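The _dateparsefailure tag above appears because the date pattern "dd/MM/YYYY:H:m:s Z" expects a numeric month, while the Apache timestamp uses an abbreviated month name ("Jun"), which needs MMM. The same distinction in Python, where %b is the abbreviated month and %m the numeric one:

```python
from datetime import datetime

ts = "20/Jun/2018:11:21:19 -0400"   # the "timestamp" field from the output above

# Joda "dd/MMM/yyyy:HH:mm:ss Z" corresponds to this strptime format:
parsed = datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z")
print(parsed.isoformat())  # 2018-06-20T11:21:19-04:00

# The numeric-month pattern (like the original "dd/MM/YYYY:H:m:s Z") fails:
try:
    datetime.strptime(ts, "%d/%m/%Y:%H:%M:%S %z")
except ValueError:
    print("parse failed")  # logstash tags such events _dateparsefailure
```

With the corrected pattern, @timestamp is set from the log line itself instead of the time logstash happened to process it.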
Change the output section to send the data to the els cluster instead:
output {
elasticsearch {
hosts => ["http://172.18.100.4:9200/","http://172.18.100.5:9200/","http://172.18.100.6:9200/"]
index => "logstash-%{+YYYY.MM.dd}"
document_type => "apache_logs"
}
}
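The %{+YYYY.MM.dd} in the index name is expanded from each event's @timestamp, so logstash writes one index per day (which makes retention a matter of deleting old indices). The naming logic, sketched in Python:

```python
from datetime import datetime, timezone

def index_for(ts: datetime) -> str:
    # Mirrors logstash's "logstash-%{+YYYY.MM.dd}" sprintf on @timestamp.
    return ts.strftime("logstash-%Y.%m.%d")

# The @timestamp from the sample event above.
event_time = datetime(2018, 6, 20, 15, 21, 20, tzinfo=timezone.utc)
print(index_for(event_time))  # logstash-2018.06.20
```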
Test the configuration; it should report no errors:
[root@nginx2 conf.d]# /usr/share/logstash/bin/logstash -f redis-logstash-els.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK
Step 3. Configure the els cluster service; a JVM (Java) must be installed first on each node
Node 1:
[root@tomcat1 ~]# rpm -ivh elasticsearch-5.6.10.rpm
[root@els1 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: myels
node.name: els1.test.com    # must be unique on each node
network.host: 172.18.100.4
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.18.100.4", "172.18.100.5","172.18.100.6"]
discovery.zen.minimum_master_nodes: 2
Node 2:
[root@tomcat1 ~]# rpm -ivh elasticsearch-5.6.10.rpm
[root@els1 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: myels
node.name: els2.test.com    # must be unique on each node
network.host: 172.18.100.5
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.18.100.4", "172.18.100.5","172.18.100.6"]
discovery.zen.minimum_master_nodes: 2
Node 3:
[root@tomcat1 ~]# rpm -ivh elasticsearch-5.6.10.rpm
[root@els1 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: myels
node.name: els3.test.com    # must be unique on each node
network.host: 172.18.100.6
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.18.100.4", "172.18.100.5","172.18.100.6"]
discovery.zen.minimum_master_nodes: 2
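discovery.zen.minimum_master_nodes: 2 is the split-brain guard: a node may only form or join a cluster if it can see a majority of the master-eligible nodes. The usual rule is (N / 2) + 1, which for this 3-node cluster gives 2:

```python
def minimum_master_nodes(master_eligible: int) -> int:
    # Majority quorum: more than half of the master-eligible nodes.
    return master_eligible // 2 + 1

print(minimum_master_nodes(3))  # 2
# With 2 required, a partitioned single node cannot elect itself master,
# so a network split cannot produce two independent clusters.
```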
View the data from any one of the els nodes:
[root@els1 ~]# curl -XGET 'http://172.18.100.4:9200/logstash-2018.06.21?pretty=true'    # shows the data sent over (excerpt below)
"settings" : {
"index" : {
"refresh_interval" : "5s",
"number_of_shards" : "5",
"provided_name" : "logstash-2018.06.21",
"creation_date" : "1529545212157",
"number_of_replicas" : "1",
"uuid" : "3n74gNpCQUyCLq58vAwL6A",
"version" : {
"created" : "5061099"
}
}
}
}
}
Step 4. Configure an Nginx reverse proxy so that queries still succeed if one els node fails
[root@mysql1 ~]# yum install -y nginx
[root@mysql1 ~]# vim /etc/nginx/conf.d/test.conf
upstream ser {
server 172.18.100.4:9200;
server 172.18.100.5:9200;
server 172.18.100.6:9200;
}
server {
listen 80;
server_name www.test.com;
root /app/;
index index.html;
location / {
proxy_pass http://ser;
}
}
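The upstream block round-robins requests across the three els nodes and skips a node that stops responding, which is what keeps queries working after a single failure. A toy Python model of that selection (hypothetical helper names; real nginx health handling is more involved):

```python
from itertools import cycle

servers = ["172.18.100.4:9200", "172.18.100.5:9200", "172.18.100.6:9200"]
down = {"172.18.100.5:9200"}          # pretend one els node has failed

def pick(rotation, unavailable):
    """Return the next server in round-robin order, skipping failed ones."""
    for server in rotation:
        if server not in unavailable:
            return server

rotation = cycle(servers)
chosen = [pick(rotation, down) for _ in range(4)]
print(chosen)  # .5 is skipped each round; requests alternate between .4 and .6
```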
Step 5. Configure kibana for graphical querying (settings below go in /etc/kibana/kibana.yml):
server.host: "0.0.0.0"
server.basePath: ""
server.name: "172.18.100.8"
elasticsearch.url: "http://172.18.100.7:80"    # the nginx reverse proxy
elasticsearch.preserveHost: true
kibana.index: ".kibana"