Modifying Redis Cluster Data Nodes
阿新 · Published 2020-08-07
I. Redis Cluster Node Modification
# The flow when slots move during node addition or removal (the underlying commands for each step are sketched after this list)
1. The new node is marked as the target for a slot
2. The data in that slot is migrated out of the source node
3. The source node finishes migrating the slot's data
4. The next slot's data is migrated, and the cycle repeats
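For reference, the per-slot handshake behind that loop can be driven by hand with plain redis-cli commands. This is a minimal sketch, not this cluster's actual session: the slot number 6885 and the <source>/<target> addresses and node IDs are placeholders.
# On the target node: mark slot 6885 as importing from the source
redis-cli -h <target-ip> -p 6381 cluster setslot 6885 importing <source-node-id>
# On the source node: mark the same slot as migrating to the target
redis-cli -h <source-ip> -p 6379 cluster setslot 6885 migrating <target-node-id>
# On the source node: list up to 100 keys still in the slot...
redis-cli -h <source-ip> -p 6379 cluster getkeysinslot 6885 100
# ...and move each one (database 0, 5000 ms timeout)
redis-cli -h <source-ip> -p 6379 migrate <target-ip> 6381 <key> 0 5000
# Finally, assign the slot to the target node cluster-wide
redis-cli -h <source-ip> -p 6379 cluster setslot 6885 node <target-node-id>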
1. Adding a node
1) Prepare the new instances
[root@db02 ~]# mkdir /service/redis/{6381,6382}
[root@db02 ~]# vim /service/redis/6381/redis.conf
[root@db02 ~]# vim /service/redis/6382/redis.conf
# Start the new nodes
[root@db02 ~]# redis-server /service/redis/6381/redis.conf
[root@db02 ~]# redis-server /service/redis/6382/redis.conf
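The contents of the two config files are not shown above. As a minimal sketch, a cluster-enabled node needs roughly the following; the paths and timeout here are assumptions matching this lab's layout, not the author's actual files:
# /service/redis/6381/redis.conf (sketch; adjust port and paths per node)
port 6381
daemonize yes
dir /service/redis/6381
cluster-enabled yes
cluster-config-file nodes-6381.conf
cluster-node-timeout 15000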
2) Add the new node to the cluster
[root@db01 ~]# redis-trib.rb add-node 172.16.1.52:6381 172.16.1.51:6379    # using the redis-trib.rb tool
# Or, with redis-cli (note that CLUSTER MEET takes the IP and port as separate arguments):
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster meet 172.16.1.52 6381
# View node information
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster nodes
3) Reshard slots to the new node
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
# How many slots to move to the new node (16384 slots / 4 masters = 4096)
How many slots do you want to move (from 1 to 16384)? 4096
# The ID of the new (receiving) node
What is the receiving node ID? a298dbd22c10b8492d9ff4295504c50666f4fb2e
# Enter the source node IDs; 'all' pulls a share of slots from every existing master
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all
# Confirm the proposed plan
Do you want to proceed with the proposed reshard plan (yes/no)? yes
# When the reshard completes, check the result
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster nodes
e4794215d9d3548e9c514c10626ce618be19ebfb 172.16.1.53:6380 slave 5ad7bd957133eac9c3a692b35f8ae72258cf0ece 0 1596769469815 5 connected
d27553035a3e91c78d375208c72b756e9b2523d4 172.16.1.53:6379 master - 0 1596769468805 3 connected 12288-16383
5ad7bd957133eac9c3a692b35f8ae72258cf0ece 172.16.1.51:6379 myself,master - 0 0 1 connected 1365-5460
fee551a90c8646839f66fa0cd1f6e5859e9dd8e0 172.16.1.52:6380 slave d27553035a3e91c78d375208c72b756e9b2523d4 0 1596769467797 4 connected
1d10edbc5ed08f85d2afc21cd338b023b9dd61b4 172.16.1.51:6380 slave 7c79559b280db9d9c182f3a25c718efe9e934fc7 0 1596769469513 6 connected
a298dbd22c10b8492d9ff4295504c50666f4fb2e 172.16.1.52:6381 master - 0 1596769468302 7 connected 0-1364 5461-6826 10923-12287
7c79559b280db9d9c182f3a25c718efe9e934fc7 172.16.1.52:6379 master - 0 1596769468302 2 connected 6827-10922
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 499 keys | 4096 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 501 keys | 4096 slots | 1 slaves.
172.16.1.52:6381 (a298dbd2...) -> 502 keys | 4096 slots | 0 slaves.
172.16.1.52:6379 (7c79559b...) -> 498 keys | 4096 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.
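To check which slot (and therefore which master) a given key now maps to, CLUSTER KEYSLOT can be compared against the ranges printed by cluster nodes; the key name k1 below is hypothetical:
# Returns the slot number that key k1 hashes to
[root@db01 ~]# redis-cli -h 172.16.1.51 -p 6379 cluster keyslot k1
# Compare the result against the slot ranges in the 'cluster nodes' output above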
4) Add a replica for the new node
[root@db02 ~]# redis-cli -h 172.16.1.52 -p 6382 cluster replicate a298dbd22c10b8492d9ff4295504c50666f4fb2e
# Or (if 172.16.1.52:6382 has not yet been introduced to the cluster, add-node does the meet for you):
[root@db01 ~]# redis-trib.rb add-node --slave --master-id a298dbd22c10b8492d9ff4295504c50666f4fb2e 172.16.1.52:6382 172.16.1.51:6379
# (172.16.1.51:6379 here can be any node already in the cluster)
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster nodes
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 499 keys | 4096 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 501 keys | 4096 slots | 1 slaves.
172.16.1.52:6381 (a298dbd2...) -> 502 keys | 4096 slots | 1 slaves.
172.16.1.52:6379 (7c79559b...) -> 498 keys | 4096 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.
# Adjust master/replica placement so that, as far as possible, no replica lives on the same machine as its master (a sketch follows)
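If a replica does end up on the same machine as its master, it can be re-pointed at a different master with CLUSTER REPLICATE; this is a sketch, and the node ID is a placeholder:
# Run on the replica that should follow a different master
[root@db02 ~]# redis-cli -h 172.16.1.52 -p 6380 cluster replicate <new-master-node-id>
# Verify the new topology
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster nodes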
5) Simulating a failure
# Start a reshard, and while slots are being migrated...
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
# ...press Ctrl+C to interrupt it mid-migration
# Some cluster status commands will not show anything wrong
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster info
[root@db01 ~]# redis-cli -p 6379 -h 172.16.1.51 cluster nodes
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
# The cluster must be checked with the redis-trib.rb tool to see the problem
[root@db01 ~]# redis-trib.rb check 172.16.1.51:6379
# Error 1: node 172.16.1.52:6379 is stuck exporting a slot, and node 172.16.1.52:6381 is stuck importing it
>>> Check for open slots...
[WARNING] Node 172.16.1.52:6381 has slots in importing state (6885).
[WARNING] Node 172.16.1.52:6379 has slots in migrating state (6885).
[WARNING] The following slots are open: 6885
>>> Check slots coverage...
# Error 2: only the source node, 172.16.1.52:6379, is stuck exporting a slot
>>> Check for open slots...
[WARNING] Node 172.16.1.52:6379 has slots in migrating state (6975).
[WARNING] The following slots are open: 6975
>>> Check slots coverage...
# Error 3: only the target node, 172.16.1.52:6381, is stuck importing a slot
>>> Check for open slots...
[WARNING] Node 172.16.1.52:6381 has slots in importing state (7093).
[WARNING] The following slots are open: 7093
>>> Check slots coverage...
6) Fixing the failure
# Repair the cluster with fix
[root@db01 ~]# redis-trib.rb fix 172.16.1.52:6379
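fix re-drives or rolls back the interrupted migration for each open slot. If the tool is unavailable, the stuck flags can also be cleared by hand with CLUSTER SETSLOT ... STABLE on both affected nodes, shown here for the slot 6885 case from error 1. Note this only clears the flags, so run redis-trib.rb check again afterwards:
# Clear the stuck migrating/importing state for slot 6885 on both nodes
[root@db01 ~]# redis-cli -h 172.16.1.52 -p 6379 cluster setslot 6885 stable
[root@db01 ~]# redis-cli -h 172.16.1.52 -p 6381 cluster setslot 6885 stable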
# Re-distribute the slots evenly
1. Before rebalancing
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 499 keys | 4096 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 501 keys | 4096 slots | 1 slaves.
172.16.1.52:6381 (a298dbd2...) -> 648 keys | 5320 slots | 1 slaves.
172.16.1.52:6379 (7c79559b...) -> 352 keys | 2872 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.
2. Rebalance (if the slot counts of the nodes differ only slightly, nothing is moved)
[root@db01 ~]# redis-trib.rb rebalance 172.16.1.51:6379
>>> Performing Cluster Check (using node 172.16.1.51:6379)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Rebalancing across 4 nodes. Total weight = 4
Moving 1224 slots from 172.16.1.52:6381 to 172.16.1.52:6379
#####################################################################################################
3. After rebalancing
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 499 keys | 4096 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 501 keys | 4096 slots | 1 slaves.
172.16.1.52:6381 (a298dbd2...) -> 492 keys | 4096 slots | 1 slaves.
172.16.1.52:6379 (7c79559b...) -> 508 keys | 4096 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.
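rebalance also takes a few flags worth knowing; as far as I recall from redis-trib.rb's option list (verify against your version), --simulate prints the plan without moving anything, and --threshold sets how large the imbalance must be before slots move, which is why near-even clusters are left alone:
# Dry run: show the rebalance plan without moving any slots
[root@db01 ~]# redis-trib.rb rebalance --simulate 172.16.1.51:6379
# Only move slots if the imbalance exceeds 5%
[root@db01 ~]# redis-trib.rb rebalance --threshold 5 172.16.1.51:6379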
2. Removing a node
1) Reshard the slots away
# The outgoing node holds 4096 slots, which are handed back to the three remaining masters in three passes (1365 + 1365 + 1366 = 4096)
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
How many slots do you want to move (from 1 to 16384)? 1365    # the number of slots to move
What is the receiving node ID?    # the ID of the node that will receive the slots
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: <ID of the node being removed>
Source node #2: done
Do you want to proceed with the proposed reshard plan (yes/no)? yes
# Second pass
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
How many slots do you want to move (from 1 to 16384)? 1365
What is the receiving node ID? 7c79559b280db9d9c182f3a25c718efe9e934fc7
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:a298dbd22c10b8492d9ff4295504c50666f4fb2e
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes
# Third pass
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
How many slots do you want to move (from 1 to 16384)? 1366
What is the receiving node ID? d27553035a3e91c78d375208c72b756e9b2523d4
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:a298dbd22c10b8492d9ff4295504c50666f4fb2e
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes
2) View the node information after resharding (172.16.1.53:6379 now reports 2 slaves: once 172.16.1.52:6381 was emptied of slots, its replica re-attached itself to another master, which is why 6381 shows 0 slaves below)
[root@db01 ~]# redis-trib.rb info 172.16.1.51:6379
172.16.1.51:6379 (5ad7bd95...) -> 664 keys | 5461 slots | 1 slaves.
172.16.1.53:6379 (d2755303...) -> 665 keys | 5462 slots | 2 slaves.
172.16.1.52:6381 (a298dbd2...) -> 0 keys | 0 slots | 0 slaves.
172.16.1.52:6379 (7c79559b...) -> 671 keys | 5461 slots | 1 slaves.
[OK] 2000 keys in 4 masters.
0.12 keys per slot on average.
3) Delete the nodes
# Delete the now-empty master
[root@db01 ~]# redis-trib.rb del-node 172.16.1.52:6381 a298dbd22c10b8492d9ff4295504c50666f4fb2e
>>> Removing node a298dbd22c10b8492d9ff4295504c50666f4fb2e from cluster 172.16.1.52:6381
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
# Delete the replica
[root@db01 ~]# redis-trib.rb del-node 172.16.1.52:6382 47e3638a203488218d8c62deb82e768598977ba4
>>> Removing node 47e3638a203488218d8c62deb82e768598977ba4 from cluster 172.16.1.52:6382
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
# A master that still holds data cannot be deleted; replicas can be deleted at any time
[root@db01 ~]# redis-trib.rb del-node 172.16.1.53:6379 d27553035a3e91c78d375208c72b756e9b2523d4
>>> Removing node d27553035a3e91c78d375208c72b756e9b2523d4 from cluster 172.16.1.53:6379
[ERR] Node 172.16.1.53:6379 is not empty! Reshard data away and try again.
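As the error says, a master must be empty before it can be removed. The sequence is the same reshard-then-delete pattern used above; a sketch:
# First move all of the node's slots to the remaining masters...
[root@db01 ~]# redis-trib.rb reshard 172.16.1.51:6379
# ...then deleting the (now empty) master succeeds
[root@db01 ~]# redis-trib.rb del-node 172.16.1.53:6379 d27553035a3e91c78d375208c72b756e9b2523d4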