Redis cluster: adding nodes, removing nodes, and reassigning slots
Published: 2018-12-23
This post is largely a repost explaining how to add and remove nodes in a Redis cluster; for how to set up the cluster itself, see the earlier Redis cluster setup post.
Once a Redis cluster is configured and has been running for a while, how do we go about adding or removing nodes?
I. Redis cluster commands
These commands are specific to the cluster, and you must log in to a node before running them.

// cluster
CLUSTER INFO    Print information about the cluster.
CLUSTER NODES   List all nodes currently known to the cluster, along with their details.

// node
CLUSTER MEET <ip> <port>       Add the node at ip:port to the cluster, making it a member.
CLUSTER FORGET <node_id>       Remove the node identified by node_id from the cluster.
CLUSTER REPLICATE <node_id>    Make the current node a slave of the node identified by node_id.
CLUSTER SAVECONFIG             Save the node's configuration file to disk.

// slot
CLUSTER ADDSLOTS <slot> [slot ...]    Assign one or more hash slots to the current node.
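As a minimal illustration, any of these commands can be issued through redis-cli once you are connected to a cluster node (the addresses match the cluster used later in this post):

# redis-cli -c -h 192.168.10.219 -p 6379 cluster info    // overall cluster health and slot counts
# redis-cli -c -h 192.168.10.219 -p 6379 cluster nodes   // one line per known node: id, address, role, slots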
II. Adding a node

# redis-cli -c -p 6382 -h 192.168.10.220    // log in
192.168.10.220:6382> cluster info           // check the cluster state
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:8
cluster_my_epoch:4
cluster_stats_messages_sent:82753
cluster_stats_messages_received:...
1. Configure two new test nodes

# cd /etc/redis
// add the configs
# cp redis-6379.conf redis-6378.conf && sed -i "s/6379/6378/g" redis-6378.conf
# cp redis-6382.conf redis-6385.conf && sed -i "s/6382/6385/g" redis-6385.conf
// start them
# redis-server /etc/redis/redis-6385.conf > /var/log/redis/redis-6385.log 2>&1 &
# redis-server /etc/redis/redis-6378.conf > /var/log/redis/redis-6378.log 2>&1 &
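Before joining them to the cluster, it is worth a quick sanity check that both new instances are up and running in cluster mode (assuming, as in the commands below, that 6378 runs on 192.168.10.219 and 6385 on 192.168.10.220):

# redis-cli -h 192.168.10.219 -p 6378 cluster info | grep cluster_known_nodes
# redis-cli -h 192.168.10.220 -p 6385 cluster info | grep cluster_known_nodes
// a freshly started, unjoined node knows only itself: cluster_known_nodes:1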
2. Add the master node

# redis-trib.rb add-node 192.168.10.219:6378 192.168.10.219:6379
Note: 192.168.10.219:6378 is the node being added;
192.168.10.219:6379 is any existing node in the cluster.
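If add-node succeeds, the new node appears in the cluster's node table; one way to confirm (the grep on the port is just for illustration):

# redis-cli -h 192.168.10.219 -p 6379 cluster nodes | grep 6378
// expect one line showing the new node as a master that owns no slots yet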
3. Add a slave node
# redis-trib.rb add-node --slave --master-id 03ccad2ba5dd1e062464bc7590400441fafb63f2 192.168.10.220:6385 192.168.10.219:6379
Note: --slave means the node being added is a slave;
--master-id 03ccad2ba5dd1e062464bc7590400441fafb63f2 is the node id of its master, here the id of the 6378 node we just added;
192.168.10.220:6385 is the new node;
192.168.10.219:6379 is any existing node in the cluster.
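If you do not have the master's node id at hand, you can read it off the node itself; a quick way (run against the 6378 instance):

# redis-cli -p 6378 cluster nodes | grep myself
// the first field of the "myself" line is this node's id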
4. Reshard slots
# redis-trib.rb reshard 192.168.10.219:6378    // the key prompts are shown below
How many slots do you want to move (from 1 to 16384)? 1000    // number of slots to move, here 1000
What is the receiving node ID? 03ccad2ba5dd1e062464bc7590400441fafb63f2    // node id of the new node
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:all    // 'all' means every existing master contributes slots
Do you want to proceed with the proposed reshard plan (yes/no)? yes    // confirm the reshard
Note that before the reshard, the newly added master holds no slots:
M: 03ccad2ba5dd1e062464bc7590400441fafb63f2 192.168.10.219:6378
slots: (0 slots) master
A master that owns no slots is never selected when data is stored or fetched.
You can picture slot allocation as dealing playing cards: 'all' means everyone reshuffles and re-deals; entering one master's node id and then typing 'done' is like drawing cards from that node only.
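After the reshard you can see which slot, and therefore which node, a given key maps to; a small illustration with a made-up key name "foo":

# redis-cli -c -h 192.168.10.219 -p 6379 cluster keyslot foo
// returns the hash slot for "foo"; with -c, reads and writes are automatically
// redirected to whichever master now owns that slot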
5. Check the cluster state
# redis-trib.rb check 192.168.10.219:6379
Connecting to node 192.168.10.219:6379: OK
Connecting to node 192.168.10.220:6385: OK
Connecting to node 192.168.10.219:6378: OK
Connecting to node 192.168.10.220:6382: OK
Connecting to node 192.168.10.220:6383: OK
Connecting to node 192.168.10.219:6380: OK
Connecting to node 192.168.10.219:6381: OK
Connecting to node 192.168.10.220:6384: OK
>>> Performing Cluster Check (using node 192.168.10.219:6379)
M: 5d8ef5a7fbd72ac586bef04fa6de8a88c0671052 192.168.10.219:6379
slots:5795-10922 (5128 slots) master
1 additional replica(s)
S: 9c240333476469e8e2c8e80b089c48f389827265 192.168.10.220:6385
slots: (0 slots) slave
replicates 03ccad2ba5dd1e062464bc7590400441fafb63f2
M: 03ccad2ba5dd1e062464bc7590400441fafb63f2 192.168.10.219:6378
slots:0-332,5461-5794,10923-11255 (1000 slots) master
1 additional replica(s)
M: 19b042c17d2918fade18a4ad2efc75aa81fd2422 192.168.10.220:6382
slots:333-5460 (5128 slots) master
1 additional replica(s)
M: b2c50113db7bd685e316a16b423c9b8abc3ba0b7 192.168.10.220:6383
slots:11256-16383 (5128 slots) master
1 additional replica(s)
S: 6475e4c8b5e0c0ea27547ff7695d05e9af0c5ccb 192.168.10.219:6380
slots: (0 slots) slave
replicates 19b042c17d2918fade18a4ad2efc75aa81fd2422
S: 1ee01fe95bcfb688a50825d54248eea1e6133cdc 192.168.10.219:6381
slots: (0 slots) slave
replicates b2c50113db7bd685e316a16b423c9b8abc3ba0b7
S: 9a2a1d75b8eb47e05eee1198f81a9edd88db5aa1 192.168.10.220:6384
slots: (0 slots) slave
replicates 5d8ef5a7fbd72ac586bef04fa6de8a88c0671052
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
III. Changing a slave's master
// check 6378's slaves
# redis-cli -p 6378 cluster nodes | grep slave | grep 03ccad2ba5dd1e062464bc7590400441fafb63f2
// attach 6385 to a new master
# redis-cli -c -p 6385 -h 192.168.10.220
192.168.10.220:6385> cluster replicate 5d8ef5a7fbd72ac586bef04fa6de8a88c0671052    // node id of the new master
OK
192.168.10.220:6385> quit
// check the new master's slaves
# redis-cli -p 6379 cluster nodes | grep slave | grep 5d8ef5a7fbd72ac586bef04fa6de8a88c0671052
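To double-check that replication has switched over, you can also ask the slave itself; a quick check against the 6385 instance:

# redis-cli -h 192.168.10.220 -p 6385 info replication | grep -E "role|master_host|master_port"
// expect role:slave, with master_host/master_port pointing at 192.168.10.219:6379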
IV. Removing a node
1. Remove a slave node
# redis-trib.rb del-node 192.168.10.220:6385 '9c240333476469e8e2c8e80b089c48f389827265'
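Once removed, the node should no longer show up in the cluster's node table; a quick confirmation:

# redis-cli -h 192.168.10.219 -p 6379 cluster nodes | grep 6385
// expect no output after the slave has been removed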
2. Remove a master node

If the master has slaves, move them to another master first.
If the master holds slots, move its assigned slots to other masters first, and only then remove the master.
# redis-trib.rb reshard 192.168.10.219:6378    // move the assigned slots away; the key prompts are below
How many slots do you want to move (from 1 to 16384)? 1000    // the full slot count of the master being removed
What is the receiving node ID? 5d8ef5a7fbd72ac586bef04fa6de8a88c0671052    // the master that will receive 6378's slots
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:03ccad2ba5dd1e062464bc7590400441fafb63f2    // node id of the master being removed
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes    // confirm; this empties the node's slots
We ran the same reshard step after adding the master node; back then slots were assigned to it, and now they are taken away. It is the same operation in reverse.
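Before running del-node, it is worth confirming the master really holds no slots any more; a quick check (look at the 6378 entry in the output):

# redis-trib.rb check 192.168.10.219:6379
// the 6378 line should now read: slots: (0 slots) master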
# redis-trib.rb del-node 192.168.10.219:6378 '03ccad2ba5dd1e062464bc7590400441fafb63f2'
With the new master removed, the cluster is back where this article started, in the state before any nodes were added.

References:
http://blog.51yip.com/nosql/1726.html
http://blog.csdn.net/xu470438000/article/details/42972123
http://blog.csdn.net/vtopqx/article/details/50235891