
# Redis Data Migration and Sync Tool (redis-shake)

## Preface

Recently, a server running a self-hosted Redis instance kept triggering alerts because its memory usage was getting high. It is a rather modest box (2C8G), and I have been planning to retire it soon. Before it goes, the existing data has to be migrated; both the full data set and the incremental writes need to be covered so that no data is lost. A quick search online turned up RedisShake, a tool developed in-house at Alibaba that is said to work very well, so let's give it a try.

## Hands-on

Before doing this for real, let's run through it once in a test environment to see how well it works. First, the environment:

- Source: 192.168.28.142
- Target: 192.168.147.128

Step 1: download the release with wget: `wget https://github.com/alibaba/RedisShake/releases/download/release-v2.0.2-20200506/redis-shake-v2.0.2.tar.gz`

Step 2: extract the archive, go into the directory, and see what is inside

![](https://img2020.cnblogs.com/blog/451497/202005/451497-20200526201310573-1086783385.png)

```bash
[root@dev ~]# cd /opt/redis-shake/
[root@dev redis-shake]# ls
redis-shake-v2.0.2.tar.gz
[root@dev redis-shake]# tar -zxvf redis-shake-v2.0.2.tar.gz
redis-shake-v2.0.2/
redis-shake-v2.0.2/redis-shake.darwin
redis-shake-v2.0.2/redis-shake.windows
redis-shake-v2.0.2/redis-shake.conf
redis-shake-v2.0.2/ChangeLog
redis-shake-v2.0.2/stop.sh
redis-shake-v2.0.2/start.sh
redis-shake-v2.0.2/hypervisor
redis-shake-v2.0.2/redis-shake.linux
[root@dev redis-shake]# ls
redis-shake-v2.0.2  redis-shake-v2.0.2.tar.gz
[root@dev redis-shake]# cd redis-shake-v2.0.2
[root@dev redis-shake-v2.0.2]# ls
ChangeLog  hypervisor  redis-shake.conf  redis-shake.darwin  redis-shake.linux  redis-shake.windows  start.sh  stop.sh
```

Step 3: edit the configuration file redis-shake.conf

Log output:

```bash
# log file; if not set, logs are printed to stdout (e.g. /var/log/redis-shake.log)
log.file = /opt/redis-shake/redis-shake.log
```

Source connection settings:

```bash
# ip:port
# the source address can be the following:
# 1. single db address. for "standalone" type.
# 2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
# 3. cluster that has several db nodes split by semicolon(;). for "cluster" type. e.g., 10.1.1.1:20331;10.1.1.2:20441.
# 4. proxy address(used in "rump" mode only). for "proxy" type.
# Source Redis address. For sentinel or open-source cluster mode, the format is
# "master_name:role to pull (master or slave)@sentinel address". For other cluster
# architectures, such as codis, twemproxy, or aliyun proxy, configure the db
# addresses of all masters or slaves.
source.address = 192.168.28.142:6379
# password of db/proxy. even if type is sentinel.
source.password_raw = xxxxxxx
```

Target settings:

```bash
# ip:port
# the target address can be the following:
# 1. single db address. for "standalone" type.
# 2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
# 3. cluster that has several db nodes split by semicolon(;). for "cluster" type.
# 4. proxy address. for "proxy" type.
target.address = 192.168.147.128:6379
# password of db/proxy. even if type is sentinel.
target.password_raw = xxxxxx
# auth type, don't modify it
target.auth_type = auth
# all the data will be written into this db. < 0 means disable.
target.db = -1
```
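Before starting the sync, it is worth a quick check that both endpoints are reachable from the machine that will run redis-shake. Below is a minimal sketch, assuming redis-cli is installed on this host and that the placeholder passwords are the same source.password_raw / target.password_raw values configured above:

```bash
# Reachability check from the redis-shake host (assumption: redis-cli is installed
# here and the placeholder passwords match the ones in redis-shake.conf).
redis-cli -h 192.168.28.142 -p 6379 -a 'xxxxxxx' ping    # source, should reply PONG
redis-cli -h 192.168.147.128 -p 6379 -a 'xxxxxx' ping    # target, should reply PONG
```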
Step 4: start the sync with `./start.sh redis-shake.conf sync` and check the log file

![](https://img2020.cnblogs.com/blog/451497/202005/451497-20200526201337186-33210766.png)

```bash
2020/05/15 09:00:29 [INFO] DbSyncer[0] starts syncing data from 192.168.28.142:6379 to [192.168.147.128:6379] with http[9321], enableResumeFromBreakPoint[false], slot boundary[-1, -1]
2020/05/15 09:00:29 [INFO] DbSyncer[0] psync connect '192.168.28.142:6379' with auth type[auth] OK!
2020/05/15 09:00:29 [INFO] DbSyncer[0] psync send listening port[9320] OK!
2020/05/15 09:00:29 [INFO] DbSyncer[0] try to send 'psync' command: run-id[?], offset[-1]
2020/05/15 09:00:29 [INFO] Event:FullSyncStart Id:redis-shake
2020/05/15 09:00:29 [INFO] DbSyncer[0] psync runid = 0a08aa75b91f8724014e056cd2c3068eebf81ec4, offset = 126, fullsync
2020/05/15 09:00:30 [INFO] DbSyncer[0] +
2020/05/15 09:00:30 [INFO] DbSyncer[0] rdb file size = 45173748
2020/05/15 09:00:30 [INFO] Aux information key:redis-ver value:4.0.10
2020/05/15 09:00:30 [INFO] Aux information key:redis-bits value:64
2020/05/15 09:00:30 [INFO] Aux information key:ctime value:1589521609
2020/05/15 09:00:30 [INFO] Aux information key:used-mem value:66304824
2020/05/15 09:00:30 [INFO] Aux information key:repl-stream-db value:0
2020/05/15 09:00:30 [INFO] Aux information key:repl-id value:0a08aa75b91f8724014e056cd2c3068eebf81ec4
2020/05/15 09:00:30 [INFO] Aux information key:repl-offset value:126
2020/05/15 09:00:30 [INFO] Aux information key:aof-preamble value:0
2020/05/15 09:00:30 [INFO] db_size:8 expire_size:0
2020/05/15 09:00:31 [INFO] DbSyncer[0] total = 43.08MB - 10.87MB [ 25%] entry=0 filter=4
2020/05/15 09:00:32 [INFO] DbSyncer[0] total = 43.08MB - 21.78MB [ 50%] entry=0 filter=5
2020/05/15 09:00:33 [INFO] DbSyncer[0] total = 43.08MB - 32.64MB [ 75%] entry=0 filter=5
2020/05/15 09:00:34 [INFO] DbSyncer[0] total = 43.08MB - 42.92MB [ 99%] entry=0 filter=6
2020/05/15 09:00:34 [INFO] db_size:1 expire_size:0
2020/05/15 09:00:34 [INFO] db_size:48 expire_size:12
2020/05/15 09:00:34 [INFO] db_size:533 expire_size:468
2020/05/15 09:00:34 [INFO] DbSyncer[0] total
```

Checking the sync results (see the figure below), every database has been synced over. Very nice.

![](https://img2020.cnblogs.com/blog/451497/202005/451497-20200526201355932-1639761344.png)

But what if you only want to sync one particular database? A quick look at the configuration file and the official documentation shows that a small adjustment is all it takes:

| Option | Description |
|--|--|
| target.db | The logical database in the target Redis that the migrated data is written to. For example, to migrate all data into DB10 on the target, set this to 10. When it is set to -1, the logical database numbers stay the same on both sides: DB0 on the source goes to DB0 on the target, DB1 to DB1, and so on. |
| filter.db.whitelist | Only the listed databases pass through; for example, 0;5;10 lets db0, db5 and db10 through and filters out everything else. |

So, for example, to sync only DB10 on the source to DB10 on the target, the configuration file only needs the following change:

```bash
target.db = 10
filter.db.whitelist = 10
```

Rerun the command from step 4; the result is shown below. Mission accomplished.

![](https://img2020.cnblogs.com/blog/451497/202005/451497-20200526201413524-1347251028.png)

One more configuration option is worth pointing out (a config sketch is shown at the end of this post):

| Option | Description |
|--|--|
| key_exists | What to do when a key exists on both the source and the target. rewrite means the source overwrites the target; none means the process exits as soon as a conflict occurs; ignore means the target's key is kept and the source key is skipped. This option has no effect in rump mode. |

This only covers syncing from a single node to a single node. For clusters and other scenarios, please refer to the official documentation and test accordingly.

[Reference documentation](https://github.com/alibaba/RedisShake/wiki/%E7%AC%AC%E4%B8%80%E6%AC%A1%E4%BD%BF%E7%94%A8%EF%BC%8C%E5%A6%82%E4%BD%95%E8%BF%9B%E8%A1%8C%E9%85%8D%E7%BD%AE%EF%BC%9F)
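For completeness, here is how the key_exists option from the table above might look in redis-shake.conf. A minimal sketch, with rewrite chosen purely as an example; pick whichever policy matches your migration:

```bash
# How to handle keys that already exist on the target:
#   rewrite - the value from the source overwrites the one on the target
#   none    - the process exits as soon as a duplicate key is found
#   ignore  - keep the target's value and skip the source key (no effect in rump mode)
key_exists = rewrite
```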