Redis 5.0 Cluster Setup and Scale-Out
阿新 • Published: 2020-12-09
Since Redis 3.0, Redis has shipped an official cluster solution, Redis Cluster. It is decentralized and covers sharding (partitioning), replication, and failover. Before Redis 5.0, clusters were created and managed with the redis-trib tool, which required Ruby; from Redis 5.0 on, redis-cli can create and manage clusters directly. This article walks through building a Redis Cluster with Redis 5.0.10.
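Redis Cluster shards data by hash slot: each key maps to one of 16384 slots via CRC16(key) mod 16384, and each master owns a contiguous slot range. A minimal sketch of a naive even three-way split (redis-cli's actual boundaries, visible in the creation output later, can differ from this by a slot or so):

```shell
# Keys map to one of 16384 hash slots: slot = CRC16(key) mod 16384.
# Naive even split of the slot space across 3 masters; redis-cli's
# exact range boundaries may differ slightly from this arithmetic.
total=16384
masters=3
for i in 0 1 2; do
  start=$(( i * total / masters ))
  end=$(( (i + 1) * total / masters - 1 ))
  echo "Master[$i] -> Slots $start - $end"
done
```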
1. Environment Preparation
1.1 Cluster plan
Master1 - 192.168.1.161:6379
Slave1 - 192.168.1.161:6380
Master2 - 192.168.1.162:6379
Slave2 - 192.168.1.162:6380
Master3 - 192.168.1.163:6379
Slave3 - 192.168.1.163:6380
Master4 (added during scale-out) - 192.168.1.165:6379
Slave4 (added during scale-out) - 192.168.1.165:6380
A Redis Cluster needs at least six nodes, which can live on one host or be spread across several. This walkthrough uses four virtual machines: first build a three-master/three-slave cluster on 161, 162, and 163, installing two nodes (6379 and 6380) on each machine. Once the cluster is up, create two more nodes (one master, one slave) on 165 and add them to the cluster to demonstrate scaling out.
1.2 Download and install
# download and unpack the source
wget https://download.redis.io/releases/redis-5.0.10.tar.gz
tar -zxvf redis-5.0.10.tar.gz
cd redis-5.0.10
# one install directory per node
mkdir -p /opt/redis-cluster/6379 /opt/redis-cluster/6380
make install PREFIX=/opt/redis-cluster/6379
make install PREFIX=/opt/redis-cluster/6380
# give each node its own copy of the default config
cp redis.conf /opt/redis-cluster/6379/bin
cp redis.conf /opt/redis-cluster/6380/bin
2. Cluster Configuration
2.1 Edit the node configuration file on every server
Edit /opt/redis-cluster/6379/bin/redis.conf:
### 1. Comment out bind
#bind 127.0.0.1
### 2. Set protected-mode to no
protected-mode no
### 3. Set cluster-enabled to yes
cluster-enabled yes
### 4. Set daemonize to yes
daemonize yes
Edit /opt/redis-cluster/6380/bin/redis.conf:
### 1. Comment out bind
#bind 127.0.0.1
### 2. Set protected-mode to no
protected-mode no
### 3. Set cluster-enabled to yes
cluster-enabled yes
### 4. Set daemonize to yes
daemonize yes
### 5. Change port to 6380
port 6380
### 6. Change pidfile to /var/run/redis_6380.pid
pidfile /var/run/redis_6380.pid
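Instead of hand-editing each file, the edits above can be scripted with sed. A sketch, demonstrated on a throwaway copy of a minimal config so it can be dry-run safely (the `edit_conf` helper name is illustrative; on a real host, point it at /opt/redis-cluster/&lt;port&gt;/bin/redis.conf):

```shell
# Sketch: apply the config edits above with sed instead of hand-editing.
# Demonstrated on temp copies of a minimal config fragment.
tmpdir=$(mktemp -d)
cat > "$tmpdir/redis-6379.conf" <<'EOF'
bind 127.0.0.1
protected-mode yes
port 6379
daemonize no
pidfile /var/run/redis_6379.pid
# cluster-enabled yes
EOF
cp "$tmpdir/redis-6379.conf" "$tmpdir/redis-6380.conf"

edit_conf() {  # edit_conf <conf-file> <port>  -- hypothetical helper
  sed -i \
      -e 's/^bind 127.0.0.1/#bind 127.0.0.1/' \
      -e 's/^protected-mode yes/protected-mode no/' \
      -e 's/^# *cluster-enabled yes/cluster-enabled yes/' \
      -e 's/^daemonize no/daemonize yes/' \
      -e "s/^port 6379/port $2/" \
      -e "s/^pidfile .*/pidfile \/var\/run\/redis_$2.pid/" \
      "$1"
}

edit_conf "$tmpdir/redis-6379.conf" 6379
edit_conf "$tmpdir/redis-6380.conf" 6380
grep -E '^(port|cluster-enabled|pidfile)' "$tmpdir/redis-6380.conf"
```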
2.2 Start the nodes
cd /opt/redis-cluster/6379/bin/
./redis-server redis.conf
cd /opt/redis-cluster/6380/bin/
./redis-server redis.conf
## check that both nodes are running
ps -ef|grep redis
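Beyond checking the process list, each node can be probed directly with PING. A small check sketch (assumes redis-cli is on the PATH; it degrades to "not-reachable" when the binary or server is unavailable):

```shell
# Probe both nodes with PING; a node that answers PONG is up.
status=""
for port in 6379 6380; do
  if command -v redis-cli >/dev/null 2>&1 &&
     redis-cli -p "$port" ping 2>/dev/null | grep -q PONG; then
    status="$status node-$port:up"
  else
    status="$status node-$port:not-reachable"
  fi
done
echo "$status"
```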
2.3 Create the Redis cluster
Pick any one node and run the cluster-creation command:
[root@master bin]# cd /opt/redis-cluster/6379/bin/
[root@master bin]# ./redis-cli --cluster create 192.168.1.161:6379 192.168.1.162:6379 192.168.1.163:6379 192.168.1.161:6380 192.168.1.162:6380 192.168.1.163:6380 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.162:6380 to 192.168.1.161:6379
Adding replica 192.168.1.163:6380 to 192.168.1.162:6379
Adding replica 192.168.1.161:6380 to 192.168.1.163:6379
M: 64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379
slots:[0-5460] (5461 slots) master
M: 9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379
slots:[5461-10922] (5462 slots) master
M: 041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379
slots:[10923-16383] (5461 slots) master
S: f59d4108d4fbe145c2d9c6f2d70e06991a4d63be 192.168.1.161:6380
replicates 041fcc81090840f67efed70d3cd623076d15dbc4
S: 113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380
replicates 64364a4b2de9a82653b46be040e86f600fb5ac2d
S: 65ad86c316f6452ca6a7429febe4a9d96f6c2e43 192.168.1.163:6380
replicates 9ee1767e39b07033b480c82337620ed006162c8a
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 192.168.1.161:6379)
M: 64364a4b2de9a82653b46be040e86f600fb5ac2d 192.168.1.161:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 113134e4cedc9069db1af667b2f6cf23097d0b3e 192.168.1.162:6380
slots: (0 slots) slave
replicates 64364a4b2de9a82653b46be040e86f600fb5ac2d
M: 041fcc81090840f67efed70d3cd623076d15dbc4 192.168.1.163:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 9ee1767e39b07033b480c82337620ed006162c8a 192.168.1.162:6379
slots:[5461-10922] (5462 slots)
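A quick consistency check on the allocation above: the three master ranges should partition all 16384 slots with no gap or overlap. The arithmetic below uses the ranges printed by redis-cli; on a live cluster, `redis-cli --cluster check 192.168.1.161:6379` performs this check and more:

```shell
# Slot counts for the three ranges printed above (end - start + 1 each).
m0=$(( 5460 - 0 + 1 ))        # slots 0-5460
m1=$(( 10922 - 5461 + 1 ))    # slots 5461-10922
m2=$(( 16383 - 10923 + 1 ))   # slots 10923-16383
echo "$m0 + $m1 + $m2 = $(( m0 + m1 + m2 ))"
```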