Elasticsearch Cluster Deployment and Operations Commands
阿新 · Published: 2021-07-08
Elasticsearch Cluster Deployment
Download the tarball
The "https://www.elastic.co/cn/downloads/elasticsearch" page has a "past releases" link where older versions can be downloaded.
Download:
# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.0-linux-x86_64.tar.gz
Extract:
# tar -zxvf elasticsearch-7.4.0-linux-x86_64.tar.gz -C /usr/local/
Edit the configuration file
# cd /usr/local/elasticsearch-7.4.0/
# vi config/elasticsearch.yml
cluster.name: cluster-233
node.name: node_233_101
network.host: 10.233.27.103
network.publish_host: 10.233.27.103
http.port: 9500
transport.tcp.port: 9501
node.master: true
node.data: true
path.data: /usr/local/elasticsearch-7.4.0/data
path.logs: /usr/local/elasticsearch-7.4.0/logs
path.repo: ["/usr/local/elasticsearch-7.4.0/reposity"]
# the head plugin needs the following CORS settings enabled
http.cors.allow-origin: "*"
http.cors.enabled: true
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
http.max_content_length: 200mb
# master-eligible nodes
cluster.initial_master_nodes: ["10.233.27.101","10.233.27.103","10.233.27.104"]
discovery.seed_hosts: ["10.233.27.101","10.233.27.103","10.233.27.104"]
gateway.recover_after_nodes: 2
network.tcp.keep_alive: true
network.tcp.no_delay: true
transport.tcp.compress: true
# concurrent shard rebalances allowed cluster-wide, default 2
cluster.routing.allocation.cluster_concurrent_rebalance: 16
# concurrent recoveries per node when nodes are added/removed or during rebalancing, default 2
cluster.routing.allocation.node_concurrent_recoveries: 16
# concurrent primary recoveries per node during initial cluster recovery, default 4
cluster.routing.allocation.node_initial_primaries_recoveries: 16
bootstrap.system_call_filter: false
### Configure every node the same way; on each node set node.name, network.host, and network.publish_host to that node's own values
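Since every node shares the same elasticsearch.yml apart from those three per-node keys, the per-node fragment can be stamped out mechanically. A minimal sketch, assuming the three IPs from the config above and an illustrative output directory under /tmp (not part of the original article):

```shell
#!/bin/sh
# Generate the per-node portion of elasticsearch.yml for each host.
# The output directory is illustrative only.
OUT=/tmp/es-node-config
mkdir -p "$OUT"
for ip in 10.233.27.101 10.233.27.103 10.233.27.104; do
  suffix=${ip##*.}     # last octet of the IP, e.g. 101
  cat > "$OUT/elasticsearch-$suffix.yml" <<EOF
node.name: node_233_$suffix
network.host: $ip
network.publish_host: $ip
EOF
done
ls "$OUT"
```

Each generated file would be merged into (or appended onto) the shared config on the matching host.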
# vi config/jvm.options
-Xms1g #set to the desired heap size
-Xmx1g #keep the same value as -Xms
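A common rule of thumb (an addition here, not stated in the article) is to give the heap roughly half of physical RAM, capped below 32 GB so the JVM keeps compressed object pointers. A sketch of that arithmetic, with the RAM size hard-coded for illustration:

```shell
#!/bin/sh
# Pick a heap size: half of RAM, capped at 31 GB (compressed-oops limit).
MEM_MB=16384   # illustrative; on Linux: awk '/MemTotal/{print int($2/1024)}' /proc/meminfo
HEAP_MB=$((MEM_MB / 2))
CAP_MB=31744   # 31 GB
if [ "$HEAP_MB" -gt "$CAP_MB" ]; then
  HEAP_MB=$CAP_MB
fi
# These two lines are what would go into jvm.options:
echo "-Xms${HEAP_MB}m"
echo "-Xmx${HEAP_MB}m"
```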
Create the elastic user
# useradd elastic
### make elastic the owner of the Elasticsearch install directory
# chown -R elastic:elastic /usr/local/elasticsearch-7.4.0
Start Elasticsearch
Log in as the elastic user:
# su - elastic
Start Elasticsearch; add -d to run it as a daemon:
# /usr/local/elasticsearch-7.4.0/bin/elasticsearch -d
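After starting with -d, the node takes a little while before its HTTP port answers, so scripts usually poll until the health endpoint responds. A generic poll-until-up loop; the probe below is a stub that "succeeds" on the third try, where a real node would use something like `curl -sf http://localhost:9500/_cluster/health` (9500 being the http.port configured above):

```shell
#!/bin/sh
# Poll a probe command until it succeeds or the attempt limit is reached.
probe() {
  # Stub: pretend the node is up from the 3rd attempt onward.
  [ "$1" -ge 3 ]
}
STATUS="never came up"
i=0
while [ $i -lt 10 ]; do
  i=$((i + 1))
  if probe "$i"; then
    STATUS="up after $i attempts"
    break
  fi
  sleep 0   # on a real node, sleep a couple of seconds between probes
done
echo "$STATUS"
```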
Checking Elasticsearch status
Check cluster health:
# curl -XGET "http://10.2.XX.XX:9200/_cluster/health?pretty"
* "status" : "green" means healthy; "yellow" means the primaries are fine but some replicas are not; "red" means some primaries are not allocated
List the cluster nodes:
# curl -XGET "http://10.2.2.1:9200/_cat/nodes?v&format=json&pretty"
* in the "master" column, "*" marks the elected master and "-" an ordinary node
Index overview:
# curl -XGET "http://10.2.2.1:9200/_cat/indices?v"
* health is the index health status; pri.store.size is the size of the primary shards
Index health down to shard level:
# curl -XGET "http://10.2.2.1:9200/_cluster/health?pretty&level=indices"
* "number_of_shards" : 1 is the number of primary shards per index; "number_of_replicas" : 1 is the number of replicas per primary; "unassigned_shards" : 0 is the count of unassigned shards (typically disk usage above the 85% watermark, or a node down)
JVM heap usage:
# curl -XGET "http://10.2.2.1:9200/_cat/nodes?v=true&h=name,node*,heap*"
* heap.current current usage, heap.percent usage percentage, heap.max maximum
How index shards are distributed over the nodes:
# curl -XGET "http://10.2.2.1:9200/_cat/shards?v=true&s=state"
Diagnose allocation with _cluster/allocation/explain
### Explains why an unassigned shard is unassigned, and why an assigned shard is not being rebalanced or moved to another node
# curl -XGET "http://10.3.4.1:9200/_cluster/allocation/explain?pretty"
Retry allocating shards whose allocation previously failed:
# curl -XPOST "http://10.3.4.1:9200/_cluster/reroute?retry_failed=true"
# curl -XGET "http://10.34.4.153:9200/_cluster/allocation/explain?pretty&filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*"
Manually allocate shards
1) First, get the node IDs:
# curl -XGET 'http://10.34.4.153:9200/_nodes/process?pretty=true'
2) Create a script that allocates every unassigned shard (note: the pre-5.x "allocate"/"allow_primary" reroute command no longer exists in 7.x; allocate_empty_primary with accept_data_loss is its replacement for a lost primary, and it discards any previous data in that shard):
#!/bin/bash
NODE="0gniN6q6S4GVuCXtWTRbwQ"
IFS=$'\n'
for line in $(curl -s 'http://10.34.4.153:9200/_cat/shards' | fgrep UNASSIGNED); do
  INDEX=$(echo "$line" | awk '{print $1}')
  SHARD=$(echo "$line" | awk '{print $2}')
  echo "$INDEX $SHARD"
  curl -XPOST 'http://10.34.4.153:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
    "commands": [ {
      "allocate_empty_primary": {
        "index": "'$INDEX'",
        "shard": '$SHARD',
        "node": "'$NODE'",
        "accept_data_loss": true
      }
    } ]
  }'
done
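The allocation script above scrapes _cat/shards for UNASSIGNED rows; that parsing step can be exercised offline against captured output. The sample rows below are made up, standing in for `curl -s 'http://<host>:9200/_cat/shards'`:

```shell
#!/bin/sh
# Extract "index shard" pairs for unassigned shards from _cat/shards output.
SAMPLE='logs-2021.07 0 p STARTED    1024 1mb 10.233.27.101 node_233_101
logs-2021.07 1 p UNASSIGNED
metrics-a    0 r UNASSIGNED'
# Unassigned rows have no node columns, so UNASSIGNED is the last field.
UNASSIGNED=$(printf '%s\n' "$SAMPLE" | awk '$NF == "UNASSIGNED" {print $1, $2}')
printf '%s\n' "$UNASSIGNED"
```

Matching on the last field rather than grepping the whole line avoids false hits on index names that happen to contain the word.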
Install Kibana
Kibana acts as a client for Elasticsearch. Install exactly the same version as Elasticsearch; a mismatched version will not work.
Download the Kibana package
# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.4.0-linux-x86_64.tar.gz
Extract:
# tar -zxvf kibana-7.4.0-linux-x86_64.tar.gz -C /usr/local
Edit the configuration file:
# vi /usr/local/kibana-7.4.0-linux-x86_64/config/kibana.yml
server.port: 5601
server.host: "10.23.2.1"
elasticsearch.hosts: ["http://10.23.2.101:9500","http://10.23.2.103:9500","http://10.23.2.104:9500"]
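Because the versions must match exactly, a quick sanity check before starting Kibana can compare the version numbers embedded in the two install directory names. A sketch, assuming the install paths used earlier in this article:

```shell
#!/bin/sh
# Compare the Elasticsearch and Kibana versions parsed from their directory names.
ES_DIR=/usr/local/elasticsearch-7.4.0
KB_DIR=/usr/local/kibana-7.4.0-linux-x86_64
es_ver=$(basename "$ES_DIR" | sed 's/^elasticsearch-//')
kb_ver=$(basename "$KB_DIR" | sed -E 's/^kibana-([0-9.]+)-.*$/\1/')
if [ "$es_ver" = "$kb_ver" ]; then
  echo "versions match: $es_ver"
else
  echo "MISMATCH: es=$es_ver kibana=$kb_ver"
fi
```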
Install Logstash
To be continued...