
ELK Installation


1. Installing the base software

yum -y localinstall elasticsearch-2.1.1.rpm

chkconfig --add elasticsearch

rpm -ivh jdk-8u111-linux-x64.rpm (Elasticsearch requires JDK 1.8 or later)

[root@rabbitmq-node2 ELK]# java -version

java version "1.8.0_111"

Java(TM) SE Runtime Environment (build 1.8.0_111-b14)

Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

Configure the new environment variables:

[root@rabbitmq-node2 profile.d]# cat /etc/profile.d/java.sh

JAVA_HOME=/usr/java/jdk1.8.0_111

JRE_HOME=/usr/java/jdk1.8.0_111/jre

CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib

PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

export JAVA_HOME JRE_HOME CLASS_PATH PATH
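To apply the variables in the current shell without logging out and back in, source the script and verify (a quick check using the paths above):

source /etc/profile.d/java.sh
echo $JAVA_HOME    # should print /usr/java/jdk1.8.0_111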

Edit the /etc/elasticsearch/elasticsearch.yml configuration file:

[root@rabbitmq-node2 elasticsearch]# egrep -v "^$|#" elasticsearch.yml

cluster.name: gaoyang # every machine in the cluster must use the same cluster name

node.name: node-1

path.data: /data/es-data # create this data directory and chown it to the elasticsearch user; the yum-installed service runs as the elasticsearch user by default

path.logs: /var/log/elasticsearch

bootstrap.mlockall: true # lock the process memory

network.host: 0.0.0.0

http.port: 9200

[root@rabbitmq-node2 elasticsearch]# mkdir -p /data/es-data

chown -R elasticsearch.elasticsearch /data/es-data/

[root@rabbitmq-node2 elasticsearch]# cat /etc/security/limits.conf |grep elasticsearch

elasticsearch soft memlock unlimited

elasticsearch hard memlock unlimited

The /etc/security/limits.conf entries above give the elasticsearch user permission to lock memory (required for bootstrap.mlockall).

service elasticsearch status

service elasticsearch start # start elasticsearch

Then check the port and the process:

[root@rabbitmq-node2 elasticsearch]# ss -tnulp|grep 9200

tcp LISTEN 0 50 :::9200 :::* users:(("java",55424,140))

[root@rabbitmq-node2 elasticsearch]# ps aux |grep elasticsearch

497 55424 5.5 3.5 4682452 583948 ? SLl 10:58 0:07 /usr/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.1.1.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -p /var/run/elasticsearch/elasticsearch.pid -d -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.conf=/etc/elasticsearch

root 55516 0.0 0.0 105488 956 pts/1 S+ 11:00 0:00 grep elasticsearch

Access it through a browser; if a JSON document comes back, Elasticsearch was installed successfully.
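The same check can be done from the command line (using this node's IP from the setup above):

curl http://10.83.22.86:9200/

A successful install returns a JSON document containing the node name, cluster_name, and version fields.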


After installation you can also inspect the log to check whether Elasticsearch started cleanly:

[root@rabbitmq-node1 profile.d]# tail -f /var/log/elasticsearch/xx.log

[2017-11-08 11:11:56,935][INFO ][node ] [node-2] initialized

[2017-11-08 11:11:56,936][INFO ][node ] [node-2] starting ...

[2017-11-08 11:11:57,013][WARN ][common.network ] [node-2] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {10.83.22.86}

[2017-11-08 11:11:57,014][INFO ][transport ] [node-2] publish_address {10.83.22.86:9300}, bound_addresses {[::]:9300}

[2017-11-08 11:11:57,022][INFO ][discovery ] [node-2] gaoyang/1--F-NyXSHi6jMxdnQT-7A

[2017-11-08 11:12:00,061][INFO ][cluster.service ] [node-2] new_master {node-2}{1--F-NyXSHi6jMxdnQT-7A}{10.83.22.86}{10.83.22.86:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)

[2017-11-08 11:12:00,087][WARN ][common.network ] [node-2] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {10.83.22.86}

[2017-11-08 11:12:00,087][INFO ][http ] [node-2] publish_address {10.83.22.86:9200}, bound_addresses {[::]:9200}

[2017-11-08 11:12:00,087][INFO ][node ] [node-2] started

[2017-11-08 11:12:00,121][INFO ][gateway ] [node-2] recovered [0] indices into cluster

Access the other node, node-2, through the browser as well.


[root@rabbitmq-node2 elasticsearch]# curl -i -XGET 'http://10.83.22.86:9200/_count?pretty' -d '{

> "query": {

> "match_all": {}

> }

> }'

HTTP/1.1 200 OK

Content-Type: application/json; charset=UTF-8

Content-Length: 95

{

"count" : 0,

"_shards" : {

"total" : 0,

"successful" : 0,

"failed" : 0

}

}

[root@rabbitmq-node2 elasticsearch]#

Querying Elasticsearch data through the HTTP RESTful API:

pretty: tells Elasticsearch to pretty-print the JSON response

query: defines the query

match_all: a simple query type that matches every document in the specified index

http://blog.csdn.net/stark_summer/article/details/48830493

Installing the elasticsearch-head plugin:

/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

First create an index; choose 5 shards and 1 replica.

Then you can POST documents into that index.

Then you can GET the document you just POSTed by its ID (curl equivalents are sketched below).
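The same steps can be done with curl instead of the head UI. A minimal sketch against the ES 2.x API (the index name test-index, the type test-type, and the document are made up for illustration):

curl -XPUT 'http://10.83.22.86:9200/test-index' -d '{"settings": {"number_of_shards": 5, "number_of_replicas": 1}}'
curl -XPOST 'http://10.83.22.86:9200/test-index/test-type/1' -d '{"message": "hello"}'
curl -XGET 'http://10.83.22.86:9200/test-index/test-type/1?pretty'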

After the index is created, yellow means the primary shards are fine but the replicas have a problem.

In a healthy cluster both nodes should show green.

To form a cluster from two servers, besides using the same cluster name as described above, you also need to configure unicast discovery. Multicast is the default, but when multicast fails, unicast must be configured:

[root@rabbitmq-node2 elasticsearch]# cat /etc/elasticsearch/elasticsearch.yml |grep discovery

# Pass an initial list of hosts to perform discovery when new node is started:

discovery.zen.ping.multicast.enabled: false

discovery.zen.ping.unicast.hosts: ["10.83.22.85", "10.83.22.86"]

# discovery.zen.minimum_master_nodes: 3

# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>

[root@rabbitmq-node2 elasticsearch]#

List the cluster IPs in the unicast hosts, and open the cluster transport port 9300 between the two machines in the firewall; note that 9200 is only the HTTP access port (a firewall sketch follows).
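For instance, with iptables (assuming iptables rather than firewalld; adapt to your environment):

iptables -A INPUT -p tcp --dport 9300 -j ACCEPT # cluster transport between nodes
iptables -A INPUT -p tcp --dport 9200 -j ACCEPT # HTTP access port
service iptables save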

Installing the Elasticsearch monitoring plugin:

/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

The argument after install is the GitHub repository path; by default the plugin is downloaded from GitHub.

Installing Logstash:

wget ftp://bqjrftp:Pass123$%^@10.83.20.27:9020/software/ELK/logstash-2.1.1-1.noarch.rpm

yum -y localinstall logstash-2.1.1-1.noarch.rpm

/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }' # start logstash

Type hello, and the console echoes the event.

Ctrl+C stops logstash.

Then run the command again with the rubydebug codec:

/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'

codec: prints events to the console in a readable form, convenient for hands-on testing

# rubydebug is the codec usually used for console output while testing
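After typing hello, the rubydebug output looks roughly like this (the timestamp and host will differ):

hello
{
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => "2017-11-08T03:00:00.000Z",
          "host" => "rabbitmq-node2"
}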

/opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch {hosts => ["10.83.22.85:9200"] } stdout{ codec => rubydebug } }'

This sends events to Elasticsearch and prints them to the console at the same time; the data is also visible in Elasticsearch.

You can also put the pipeline into a configuration file and point logstash at it on startup:

[root@SZ33SITSIM00AP0003 software]# cat /etc/profile.d/logstash.sh # add the logstash binary to PATH so the absolute path is no longer needed

LOGSTASH_HOME=/opt/logstash/bin

export PATH=$LOGSTASH_HOME:$PATH

[root@SZ33SITSIM00AP0003 software]# source /etc/profile

[root@SZ33SITSIM00AP0003 software]# logstash -f /confs/logstash-simple.conf


[root@SZ33SITSIM00AP0003 ~]# cat /confs/logstash-simple.conf

input {

stdin { }

}

output {

elasticsearch { hosts => ["10.83.22.85:9200"] }

stdout { codec => rubydebug }

}

[root@SZ33SITSIM00AP0003 ~]#

Now, the configuration to ship the system log /var/log/messages and the nginx access log access.log into Elasticsearch for querying:

input {

file {

path => "/var/log/messages"

type => "syslog"

start_position => "beginning" #表示從文件的開頭開始

}

file {

path => "/usr/local/nginx/logs/access.log"

type => "nginx"

codec => "json"

start_position => "beginning"

}

}

output {

if[type] == "syslog" { #根據文件的類型創建不同的索引

elasticsearch {

hosts => ["10.83.22.85:9200"]

index => ['syslog-%{+YYYY-MM-dd}']

workers => 5 # use 5 worker threads

}

}

if[type] == "nginx" {

elasticsearch {

hosts => ["10.83.22.85:9200"]

index => ['nginx-%{+YYYY-MM-dd}']

workers => 5

}

}

}

logstash -f /confs/logstash-simple.conf # start logstash with the configuration file
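To confirm the indices are being created, query the _cat API (the names follow the index patterns configured above):

curl 'http://10.83.22.85:9200/_cat/indices?v'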

Installing Kibana:

wget ftp://bqjrftp:Pass123$%^@10.83.20.27:9020/software/ELK/kibana-4.3.1-linux-x64.tar.gz

tar xzvf kibana-4.3.1-linux-x64.tar.gz

mv kibana-4.3.1-linux-x64 /usr/local/

ln -sv kibana-4.3.1-linux-x64/ kibana

vim /usr/local/kibana/config/kibana.yml

server.port: 5601

server.host: "0.0.0.0"

server.basePath: ""

elasticsearch.url: "http://10.83.22.85:9200"

kibana.index: ".kibana"

screen -S kibana

/usr/local/kibana/bin/kibana &

Ctrl+a d # detach from the screen session

screen -ls

This leaves Kibana running in the background.
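A quick way to confirm Kibana is listening (same ss usage as earlier):

ss -tnlp | grep 5601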

ab -n 1000 -c 20 http://10.83.36.35:80/ # simulate browser traffic

Note that the URL after the ab command takes the form http://ip:port/path.

ab is not installed by default; to install it:

yum install yum-utils

cd /opt

mkdir abtmp

cd abtmp


yumdownloader httpd-tools*

rpm2cpio httpd-*.rpm | cpio -idmv
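rpm2cpio extracts into the current directory, so the ab binary ends up under /opt/abtmp/usr/bin/. A hedged sketch of using it:

/opt/abtmp/usr/bin/ab -V # print the version to confirm it runs
cp /opt/abtmp/usr/bin/ab /usr/local/bin/ # optional: put it on the PATH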

Modify the nginx configuration file (mainly the log format):

log_format access_log_json '{"user_ip":"$http_x_real_ip","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sents":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}';

Reference the log format in the server block:

access_log logs/host.access.log access_log_json;
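With this format, each access log line is a single JSON object, roughly like the following (all field values here are illustrative):

{"user_ip":"10.83.22.1","lan_ip":"10.83.22.90","log_time":"2017-11-08T11:20:01+08:00","user_req":"GET / HTTP/1.0","http_code":"200","body_bytes_sents":"612","req_time":"0.000","user_ua":"ApacheBench/2.3"}

Because the line is already JSON, logstash can parse it with codec => "json" instead of writing grok patterns.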

Then set up the logstash configuration file:

[root@SZ33SITSIM00AP0003 ~]# cat /confs/logstash-nginx.conf

input{

file {

path => "/usr/local/nginx/logs/host.access.log"

codec => "json"

type => "nginx-json"

start_position => "beginning"

}

}

filter{

}

output{

elasticsearch {

hosts => ["10.83.22.85:9200"]

index => "nginx-json-%{+YYYY-MM-dd}"

}

}

Then simulate access from another machine:

ab -n 1000 -c 20 http://10.83.36.35:80/

The end result in Elasticsearch: the log fields are displayed as separate fields.

# Configure the JVM heap

vim /etc/sysconfig/elasticsearch

# Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g

ES_HEAP_SIZE=4g # this machine has 8 GB of RAM

Installing and configuring Filebeat

wget ftp://bqjrftp:Pass123$%^@10.83.20.27:9020/software/ELK/filebeat-5.0.1-x86_64.rpm

rpm -ivh filebeat-5.0.1-x86_64.rpm

Configuration file: /etc/filebeat/filebeat.yml

filebeat.prospectors:

- input_type: log

paths:

- /home/weblogic/scm_server/logs/logger.log # path of the log file

encoding: plain # encoding of the log (plain UTF-8)

document_type: scm_server_msg # classification; the logstash server can reference it through the type field

- input_type: log

paths:

- /home/weblogic/scm_server/logs/logger_error.log

encoding: plain

document_type: scm_server_error

tail_files: false # read from the head of the file rather than the tail; if changing this option seems to have no effect, delete the file that records the read offsets: rm -rf /var/lib/filebeat/registry

multiline: # this block merges multi-line tomcat error output into a single event

pattern: '^\#\#\#\s' # regular expression: every line of the scm_server error log starts with ### followed by whitespace (\s)

negate: true # lines that do not match the pattern are treated as continuations

match: after # append continuation lines to the preceding matching line

timeout: 10s # if no new matching line arrives within the timeout, send the event anyway; the default is 5s

- input_type: log

paths:

- /home/weblogic/bla_server/logs/logger_error.log

encoding: plain

document_type: bla_server_error

tail_files: false

multiline:

pattern: '^\[' # regular expression: every bla_server error line starts with [

negate: true

match: after

timeout: 10s

- input_type: log

paths:

- /home/weblogic/bla_server/logs/logger.log

encoding: plain

document_type: bla_server_msg

processors:

- drop_fields:

fields: ["input_type", "beat", "offset", "source","tags","@timestamp"]

fields:

ip_address: 172.16.8.11 # custom fields whose values can be referenced in logstash

host: 172.16.8.11

fields_under_root: true # place the custom fields at the top level of the event

output.logstash: # ship the logs filebeat harvests to logstash

hosts: ["10.83.22.118:5044"]

Logstash configuration (logstash is driven by a custom configuration file):

[root@SZ3FUATIMS00AP0001 ~]# cat /confs/logstash/conf.d/filebeat.conf

input { # define the port logstash listens on; the filebeat output above points at this port

beats {

port => 5044

}

}

output {

if [type] == "scm_server_msg" { #這個地方就是根據filebeat裏面的document_type定義的類型來設置的,通過if來實現不同的日誌文件,輸出到elasticsearch裏面為不同的索引

elasticsearch {

hosts => [ "10.83.22.118:9200" ] #定義輸出到elasticsearch,端口是9200

index => "scm_server_msg-%{+YYYY.MM.dd}" #定義elasticsearch裏面的index的名稱

}

}

if [type] == "scm_server_error" {

elasticsearch {

hosts => [ "10.83.22.118:9200" ]

index => "scm_server_error-%{+YYYY.MM.dd}"

}

}

if [type] == "bla_server_error" {

elasticsearch {

hosts => [ "10.83.22.118:9200" ]

index => "bla_server_error-%{+YYYY.MM.dd}"

}

}

if [type] == "bla_server_msg" {

elasticsearch {

hosts => [ "10.83.22.118:9200" ]

index => "bla_server_msg-%{+YYYY.MM.dd}"

}

}

# This block implements email alerting: when the type matches and the message field (the event body) contains ERROR, the email plugin fires an alert

if [type] =~ /bla_server_error|scm_server_error/ and [message] =~ /ERROR/ {

email {

port => 25

address => "smtp.163.com"

username => "[email protected]"

password => "chenbin42"

authentication => "plain"

from => "[email protected]"

codec => "plain" 這裏是指定日誌的編碼UTF-8

contenttype => "text/html; charset=UTF-8"

subject => "%{type}:應用錯誤日誌!%{host}" # mail subject, built from the type variable and the host IP

to => "[email protected]"

cc => "[email protected],[email protected],[email protected]" # carbon-copy recipients

via => "smtp"

body => "%{message}" #郵件的內容為message的內容

}

}

}

[root@SZ3FUATIMS00AP0001 ~]#
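Logstash 2.x can validate a configuration before (re)starting; a quick sanity check:

/opt/logstash/bin/logstash -f /confs/logstash/conf.d/filebeat.conf --configtest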

Installing and configuring Elasticsearch 5.x:

1、wget ftp://bqjrftp:Pass123$%^@10.83.20.27:9020/software/APM/elasticsearch-5.5.0.tar.gz

2、tar xzvf elasticsearch-5.5.0.tar.gz -C /usr/local/

3、cd /usr/local/ && ln -sv elasticsearch-5.5.0/ elasticsearch

4、mkdir -p /var/log/elasticsearch

5、mkdir -p /var/lib/elasticsearch

6、mkdir -p /var/run/elasticsearch

7、mkdir -p /data/es-data

8、chattr -i /etc/passwd

9、chattr -i /etc/shadow

10、chattr -i /etc/group

11、chattr -i /etc/gshadow

12、useradd elasticsearch

13、chown -R elasticsearch.elasticsearch /var/lib/elasticsearch/

14、chown -R elasticsearch.elasticsearch /var/log/elasticsearch/

15、chown -R elasticsearch.elasticsearch /var/run/elasticsearch/

16、chown -R elasticsearch.elasticsearch /data/es-data/

17、chown -R elasticsearch:elasticsearch /usr/local/elasticsearch
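When ES 5.x binds to a non-loopback address (as below), it enforces bootstrap checks at startup, so two limits usually have to be raised first. A sketch of the commonly required settings:

# /etc/security/limits.conf -- max open files for the elasticsearch user
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536

# /etc/sysctl.conf -- mmap count required by ES 5.x
vm.max_map_count=262144

sysctl -p # reload kernel parameters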

ES 5.x configuration files

Master node configuration:

cluster.name: apm

node.name: node-2

node.master: true

node.data: false

path.data: /data/es-data

path.logs: /var/log/elasticsearch

bootstrap.memory_lock: false # on 5.x, setting this to true always caused startup errors for me; the cause has not been found yet

indices.fielddata.cache.size: 50mb

network.host: 10.83.64.102

http.port: 9200

http.cors.enabled: true # specific to 5.x: since 5.0 the head plugin can no longer be installed via the plugin mechanism, so elasticsearch-head runs as its own service and talks to ES over HTTP; when the HTTP port is enabled, this property controls whether cross-origin REST requests are allowed

http.cors.allow-origin: "*" 如果 http.cors.enabled 的值為 true,那麽該屬性會指定允許 REST 請求來自何處

discovery.zen.minimum_master_nodes: 1

discovery.zen.ping.unicast.hosts: # written as a YAML list in 5.x

- 10.83.64.101:9300

- 10.83.64.102:9300

Data node configuration:

cluster.name: apm

node.name: node-1

node.master: false # note: on a data node, node.data is true and node.master is false; on the master node it is the reverse

node.data: true

path.data: /data/es-data

path.logs: /var/log/elasticsearch

bootstrap.memory_lock: false

indices.fielddata.cache.size: 50mb

network.host: 10.83.64.101

http.port: 9200

http.cors.enabled: true

http.cors.allow-origin: "*"

discovery.zen.minimum_master_nodes: 1

discovery.zen.ping.unicast.hosts: # for a data node's unicast discovery, listing the master node's IP and port is enough

- 10.83.64.102:9300

You can configure one master node and two data nodes.
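Once both nodes are up, membership and health can be verified through the master's HTTP port:

curl 'http://10.83.64.102:9200/_cat/nodes?v' # should list node-1 and node-2
curl 'http://10.83.64.102:9200/_cluster/health?pretty'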

Configuring the ES 5.x service script:

  1. On CentOS 7.2 the service scripts live under /usr/lib/systemd/system, but this attempt did not work:

vim elasticsearch.service

[Unit]

Description=elasticsearch

After=network.target

[Service]

Type=forking

PIDFile=/var/run/elasticsearch/elasticsearch.pid

ExecStart=/usr/local/elasticsearch/bin/elasticsearch &

ExecReload=

ExecStop=kill -9 `ps aux |grep elasticsearch|grep -v su|grep -v grep|awk '{print $2}'`

PrivateTmp=true

User=elasticsearch

Group=elasticsearch

[Install]

WantedBy=multi-user.target
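For reference, a unit along the following lines usually works on CentOS 7, since bin/elasticsearch runs in the foreground by default; this is a hedged sketch, not the script used here:

[Unit]
Description=elasticsearch
After=network.target

[Service]
Type=simple
User=elasticsearch
Group=elasticsearch
ExecStart=/usr/local/elasticsearch/bin/elasticsearch
LimitNOFILE=65536
Restart=on-failure

[Install]
WantedBy=multi-user.target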

2、In the end I wrote a script by hand and added it under /etc/init.d/ to get service start/stop working:

[root@SZ3FUAT0APM00AP ~]# cat /etc/init.d/es

#!/bin/bash

#

# elasticsearch <summary>

#

# chkconfig: 2345 80 20

# description: Starts and stops a single elasticsearch instance on this system

#

### BEGIN INIT INFO

# Provides: Elasticsearch

# Required-Start: $network $named

# Required-Stop: $network $named

# Default-Start: 2 3 4 5

# Default-Stop: 0 1 6

# Short-Description: This service manages the elasticsearch daemon

# Description: Elasticsearch is a very scalable, schema-free and high-performance search solution supporting multi-tenancy and near realtime search.

### END INIT INFO

pid_num=$(ps aux |grep elasticsearch|grep -v grep|awk '{print $2}')

start() {

su - elasticsearch -c "nohup /usr/local/elasticsearch/bin/elasticsearch >/dev/null 2>&1 &"

}

stop() {

if [ `ps aux |grep elasticsearch|grep -v grep|wc -l` -eq 1 ];then

kill -9 ${pid_num}

fi

}

status() {

if [ `ps aux |grep elasticsearch|grep -v grep|wc -l` -eq 1 ];then

echo "elasticsearch service is starting"

else

echo "elasticsearch service is stoping"

fi

}

case $1 in

start)

start

;;

stop)

stop

;;

status)

status

;;

*)

echo "service accept arguments start|stop|status"

esac

chkconfig --add es

chkconfig es on

service es start # start ES

service es status # check ES status

service es stop # stop ES

Installing the head plugin for ES 5.x:

  • cd /usr/local
  • git clone git://github.com/mobz/elasticsearch-head.git
  • cd elasticsearch-head
  • npm install
  • npm run start
  • npm install -g grunt-cli
  •  "CLI" stands for "command line". To use grunt, grunt-cli must first be installed into the global environment with npm ("npm install ..."). If you are not familiar with installing packages through npm, just follow the steps as written for now.
  • grunt server
  • If the npm command is missing, install it with yum install npm. If you still get libssl.so errors afterwards, update openssl with yum update openssl.

Access it at http://localhost:9100/. By default it cannot be reached through an external IP, only through 127.0.0.1; to allow access on the management IP, the listen address must be configured (see the sketch below).
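The listen address is set in Gruntfile.js inside the elasticsearch-head directory; adding a hostname option to the connect task makes it listen on all interfaces (a sketch; the surrounding options come from the stock Gruntfile):

connect: {
    server: {
        options: {
            hostname: '*',   // listen on all interfaces instead of localhost only
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}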

Also, for head to be allowed to connect to the ES cluster, the following must be present in the ES configuration file:

http.cors.enabled: true # specific to 5.x: the head plugin can no longer be installed via the plugin mechanism, so elasticsearch-head runs as its own service; when the HTTP port is enabled, this property controls whether cross-origin REST requests are allowed

http.cors.allow-origin: "*" # when http.cors.enabled is true, this specifies which origins REST requests may come from

Installing the ZooKeeper server:

cd /software

wget ftp://bqjrftpadm:sldya123$%25%[email protected]:9020/software/zookeeper/zookeeper-3.4.9.tar.gz

tar xzvf zookeeper-3.4.9.tar.gz -C /usr/local/

cd /usr/local/

ln -sv zookeeper-3.4.9/ zookeeper

cd zookeeper

cp -r conf/zoo_sample.cfg conf/zoo.cfg

vim /etc/profile.d/zk.sh

ZK_HOME=/usr/local/zookeeper/

PATH=$PATH:$ZK_HOME/bin

export ZK_HOME PATH

source /etc/profile

Configure the zoo.cfg file:

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/usr/local/zookeeper/data

dataLogDir=/usr/local/zookeeper/logs

clientPort=2181

autopurge.snapRetainCount=500

autopurge.purgeInterval=24

server.1= 10.83.64.102:2888:3888

server.2= 10.83.64.101:2888:3888

server.3= 10.83.64.105:2888:3888

mkdir -p /usr/local/zookeeper/{data,logs}

echo "1" >/usr/local/zookeeper/data/myid

tickTime is the heartbeat interval, in milliseconds, between ZooKeeper servers or between clients and servers; a heartbeat is sent every tickTime.

initLimit limits how many tickTime intervals ZooKeeper tolerates for an initial connection (the "client" here is not a user client but a follower server in the ensemble connecting to the leader).

If more than 10 heartbeat intervals (tickTime) pass without a response, the connection attempt is considered failed; the total allowance is 10*2000 ms = 20 seconds.

syncLimit caps the time for request/response message exchange between leader and follower, in tickTime units; the total is 5*2000 ms = 10 seconds.

dataDir is, as the name suggests, the directory where ZooKeeper stores its data; by default the transaction log is written there as well.

clientPort is the port clients connect to; ZooKeeper listens on it for client requests.

In server.A=B:C:D, A is a number identifying which server this is, B is the server's IP address, C is the port used to exchange information with the ensemble leader, and D is the port used to elect a new leader when the current leader goes down.

3.3 Creating the server ID

Besides editing zoo.cfg, cluster mode also needs a myid file placed in the dataDir directory.

The file contains a single value: the A from server.A=B:C:D in zoo.cfg. Create the myid file under the dataDir path configured in zoo.cfg.

# The first server (server.1) already had its myid set to 1 above. On the second server, create myid with the value 2, matching server.2 in zoo.cfg:

echo "2" > /usr/local/zookeeper/data/myid

# On the third server, create myid with the value 3, matching server.3 in zoo.cfg:

echo "3" > /usr/local/zookeeper/data/myid

At this point the configuration is complete.
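To verify the ensemble, start ZooKeeper on every node and check its role; one node should report leader and the others follower:

zkServer.sh start
zkServer.sh status # prints Mode: leader or Mode: follower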

vim /etc/init.d/zk

#!/bin/bash

#

# zookeeper <summary>

#

# chkconfig: 2345 90 30

# description: Starts and stops a single zookeeper instance on this system

#

### BEGIN INIT INFO

# Provides: zookeeper

# Required-Start: $network $named

# Required-Stop: $network $named

# Default-Start: 2 3 4 5

# Default-Stop: 0 1 6

# Short-Description: This service manages the zookeeper daemon

# Description: ZooKeeper is a centralized coordination service for distributed applications.

### END INIT INFO

pid_num=$(ps aux |grep zookeeper|grep -v grep|awk '{print $2}')

start() {

nohup /usr/local/zookeeper/bin/zkServer.sh start >/dev/null 2>&1 &

}

stop() {

if [ `ps aux |grep zookeeper|grep -v grep|wc -l` -eq 1 ];then

kill -9 ${pid_num}

fi

}

status() {

if [ `ps aux |grep zookeeper|grep -v grep|wc -l` -eq 1 ];then

echo "zookeeper service is starting"

/usr/local/zookeeper/bin/zkServer.sh status

else

echo "zookeeper service is stoping"

fi

}

case $1 in

start)

start

;;

stop)

stop

;;

status)

status

;;

*)

echo "service accept arguments start|stop|status"

esac

chkconfig --add zk

chkconfig zk on

service zk status

service zk start

service zk stop
