
MongoDB 5-node failover and recovery test case across two geographically separate datacenters

Architecture: 5 nodes, primary datacenter (2 data nodes + 1 arbiter), backup datacenter (1 data node + 1 arbiter)

 

1 Basic information

 

Operating system: Red Hat Enterprise Linux Server release 6.3 (Santiago)

 

MongoDB version: db version v3.6.3

 

MongoDB architecture: one replica set "MyMongo" with 1 primary, 2 secondaries and 2 arbiters, distributed across the two datacenters as planned below.

IP and port plan

 

"hosts" : [##資料節點

 

"10.15.7.114:28001",#主中心

 

"10.15.7.114:28002",#主中心

 

"10.15.7.114:28004"#備份中心

 

],

 

"arbiters" : [##仲裁節點

 

"10.15.7.114:28003",#主中心

 

"10.15.7.114:28005"#備份中心

 

],
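The article does not show how this replica set was originally created. For reference, a minimal initiation sketch that matches the plan above (an assumed example, not taken from the original environment): once the five mongod processes from section 2 are running, connect to one data node and run rs.initiate() with the arbiter members flagged as arbiterOnly. As with all of these commands, it has only succeeded if it returns "ok" : 1.

/usr/local/mongodb/bin/mongo 10.15.7.114:28001/admin
> rs.initiate({
    "_id" : "MyMongo",
    members : [
        {"_id" : 0, host : "10.15.7.114:28001"},                      // primary datacenter, data
        {"_id" : 1, host : "10.15.7.114:28002"},                      // primary datacenter, data
        {"_id" : 2, host : "10.15.7.114:28003", arbiterOnly : true},  // primary datacenter, arbiter
        {"_id" : 3, host : "10.15.7.114:28004"},                      // backup datacenter, data
        {"_id" : 4, host : "10.15.7.114:28005", arbiterOnly : true}   // backup datacenter, arbiter
    ]
})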

 

2 MongoDB configuration

Configuration file for node 28001; the other nodes use the same file with 28001 replaced by 28002~28007.

Make sure the corresponding data and log directories exist before starting. Also remember: every command executed inside MongoDB has only succeeded if it returns "ok" : 1.

[[email protected] ~]# cat /data/mongodb/conf/28001.conf

port=28001

bind_ip=10.15.7.114

logpath=/data/mongodb/log/28001.log

dbpath=/data/mongodb/data/28001/

logappend=true

pidfilepath=/data/mongodb/28001.pid

fork=true

oplogSize=1024

replSet=MyMongo

[[email protected] conf]# ll /data/mongodb/conf/

total 32

-rw-r--r-- 1 root root 192 Oct 16 02:48 28001.conf

-rw-r--r-- 1 root root 225 Oct 16 07:21 28002.conf

-rw-r--r-- 1 root root 192 Oct 16 02:48 28003.conf

-rw-r--r-- 1 root root 192 Oct 11 03:37 28004.conf

-rw-r--r-- 1 root root 192 Oct 11 03:38 28005.conf

-rw-r--r-- 1 root root 192 Oct 16 07:18 28006.conf

-rw-r--r-- 1 root root 192 Oct 16 08:15 28007.conf

Start the 5 nodes:

[[email protected] data]# /usr/local/mongodb/bin/mongod -f /data/mongodb/conf/28001.conf

[[email protected] data]# /usr/local/mongodb/bin/mongod -f /data/mongodb/conf/28002.conf

[[email protected] data]# /usr/local/mongodb/bin/mongod -f /data/mongodb/conf/28003.conf

[[email protected] data]# /usr/local/mongodb/bin/mongod -f /data/mongodb/conf/28004.conf

[[email protected] data]# /usr/local/mongodb/bin/mongod -f /data/mongodb/conf/28005.conf
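Before starting the failover test it is worth confirming that the replica set is healthy. A quick check (a sketch; the MyMongo:PRIMARY prompt assumes 28001 happens to be the primary at this point):

/usr/local/mongodb/bin/mongo 10.15.7.114:28001/admin
MyMongo:PRIMARY> rs.status().members.map(function(m) { return m.name + " " + m.stateStr; })
// expect one PRIMARY, two SECONDARY and two ARBITER members, all with "health" : 1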

 

3 Test: complete outage of the primary datacenter

[[email protected] ~]# /usr/local/mongodb/bin/mongod  --shutdown -f /data/mongodb/conf/28001.conf

[[email protected] ~]# /usr/local/mongodb/bin/mongod  --shutdown -f /data/mongodb/conf/28002.conf

[[email protected] ~]# /usr/local/mongodb/bin/mongod  --shutdown -f /data/mongodb/conf/28003.conf

Only the 2 backup-datacenter nodes are left. With just 2 of the 5 voting members reachable there is no majority, so the surviving data node stays SECONDARY: the backup datacenter can only serve reads and cannot accept writes.
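This can be verified on the surviving data node (a sketch of the checks; exact output depends on the environment):

/usr/local/mongodb/bin/mongo 10.15.7.114:28004/admin
MyMongo:SECONDARY> db.isMaster().ismaster   // false: this node is not a primary
MyMongo:SECONDARY> db.isMaster().primary    // undefined: no primary exists in the set
MyMongo:SECONDARY> rs.status().members      // the 3 primary-datacenter members now report "health" : 0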

There are two possible solutions:

1 Run the backup datacenter's SECONDARY as a standalone node: remove the replSet parameter and restart it. The application can keep working, but because a standalone node keeps no oplog usable by the set, the data written to it cannot later be folded back into the replica set once the primary datacenter recovers; you would have to resort to a backup/restore approach (complicated, so see the second option).

2 In the backup datacenter, start 2 new arbiter nodes and force them into the replica set. The backup datacenter then holds 4 of the 7 voting members, a majority, so its SECONDARY node can be elected PRIMARY. The detailed steps:

Step 1: start the 2 new nodes in the backup datacenter (28006 and 28007).

Step 2: on the backup datacenter's SECONDARY node, rebuild the replica set configuration to include the 2 new arbiter nodes:

MyMongo:SECONDARY> use admin

MyMongo:SECONDARY> config = {
    "_id" : "MyMongo",
    members : [
        {"_id" : 0, host : "10.15.7.114:28001"},
        {"_id" : 1, host : "10.15.7.114:28002"},
        {"_id" : 2, host : "10.15.7.114:28003", arbiterOnly : true},
        {"_id" : 3, host : "10.15.7.114:28004"},
        {"_id" : 4, host : "10.15.7.114:28005", arbiterOnly : true},
        {"_id" : 5, host : "10.15.7.114:28006", arbiterOnly : true},
        {"_id" : 6, host : "10.15.7.114:28007", arbiterOnly : true}
    ]
}

MyMongo:SECONDARY> rs.reconfig(config,{force:true});

MyMongo:PRIMARY> rs.status() # check the status of the replica set and of each member

MyMongo:PRIMARY> db.isMaster()
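If the forced reconfig worked, 28004 is elected PRIMARY within a few seconds and the shell prompt changes from SECONDARY to PRIMARY. The key fields to check (a sketch; other values are environment specific):

MyMongo:PRIMARY> db.isMaster().ismaster      // true: this node now accepts writes
MyMongo:PRIMARY> db.isMaster().primary       // "10.15.7.114:28004"
MyMongo:PRIMARY> rs.status().members.length  // 7: the forced config still lists the unreachable primary-datacenter members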

Bulk-insert data from the client (a simple program). The connection can be configured as a replica-set/cluster connection, or it can point directly at the primary node; here it points directly at the primary.

 

#coding:utf-8
# Python 2 / legacy-pymongo example: insert 100 test documents, one per second,
# connecting directly to the current primary (28004).
import time
from pymongo import MongoClient
#conn = MongoClient()
# keyword argument
conn = MongoClient('10.15.7.114', 28004)
# MongoDB URI
#conn = MongoClient('mongodb://localhost:27017/')
#from pymongo import ReplicaSetConnection
#conn = ReplicaSetConnection("10.15.7.114:28001,10.15.7.114:28002,10.15.7.114:28004", replicaSet="MyMongo", read_preference=2, safe=True)

for i in xrange(100):
    try:
        conn.test.tt3.insert({"name": "test" + str(i)})
        time.sleep(1)
        print conn.primary      # (host, port) of the current primary, as seen by the driver
        print conn.secondaries  # set of current secondaries
    except Exception:
        # ignore transient errors (e.g. during an election) and keep inserting
        pass

 

On the primary node, the count confirms the 100 inserts:

MyMongo:PRIMARY> db.tt3.find().count()

100

Start the 3 primary-datacenter nodes again:

[[email protected] conf]# /usr/local/mongodb/bin/mongod -f /data/mongodb/conf/28001.conf

[[email protected] conf]# /usr/local/mongodb/bin/mongod -f /data/mongodb/conf/28002.conf

[[email protected] conf]# /usr/local/mongodb/bin/mongod -f /data/mongodb/conf/28003.conf

[[email protected] ~]#  /usr/local/mongodb/bin/mongo 10.15.7.114:28002/admin

MyMongo:SECONDARY> rs.slaveOk(true)

MyMongo:SECONDARY> use test;

switched to db test

MyMongo:SECONDARY> db.tt3.find().count() # the data synchronized successfully

100

The set that used to have 5 nodes now has 7, so remove the 2 newly added arbiter nodes:

MyMongo:PRIMARY> rs.remove("10.15.7.114:28007");

MyMongo:PRIMARY> rs.remove("10.15.7.114:28006");

MyMongo:PRIMARY> db.isMaster() # back to the original 5 nodes: 1 primary, 2 secondaries, 2 arbiters

{

"hosts" : [

"10.15.7.114:28001",

"10.15.7.114:28002",

"10.15.7.114:28004"

],

"arbiters" : [

"10.15.7.114:28003",

"10.15.7.114:28005"

MyMongo:PRIMARY> rs.status()

{

"set" : "MyMongo",

"date" : ISODate("2018-10-16T18:16:15.512Z"),

"myState" : 1,

"term" : NumberLong(7),

"heartbeatIntervalMillis" : NumberLong(2000),

"optimes" : {

"lastCommittedOpTime" : {

"ts" : Timestamp(1539713766, 1),

"t" : NumberLong(7)

},

"readConcernMajorityOpTime" : {

"ts" : Timestamp(1539713766, 1),

"t" : NumberLong(7)

},

"appliedOpTime" : {

"ts" : Timestamp(1539713766, 1),

"t" : NumberLong(7)

},

"durableOpTime" : {

"ts" : Timestamp(1539713766, 1),

"t" : NumberLong(7)

}

},

"members" : [

{

"_id" : 0,

"name" : "10.15.7.114:28001",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 237,

"optime" : {

"ts" : Timestamp(1539713766, 1),

"t" : NumberLong(7)

},

"optimeDurable" : {

"ts" : Timestamp(1539713766, 1),

"t" : NumberLong(7)

},

"optimeDate" : ISODate("2018-10-16T18:16:06Z"),

"optimeDurableDate" : ISODate("2018-10-16T18:16:06Z"),

"lastHeartbeat" : ISODate("2018-10-16T18:16:13.929Z"),

"lastHeartbeatRecv" : ISODate("2018-10-16T18:16:14.928Z"),

"pingMs" : NumberLong(0),

"syncingTo" : "10.15.7.114:28004",

"configVersion" : 102086

},

{

"_id" : 1,

"name" : "10.15.7.114:28002",

"health" : 1,

"state" : 2,

"stateStr" : "SECONDARY",

"uptime" : 269,

"optime" : {

"ts" : Timestamp(1539713766, 1),

"t" : NumberLong(7)

},

"optimeDurable" : {

"ts" : Timestamp(1539713766, 1),

"t" : NumberLong(7)

},

"optimeDate" : ISODate("2018-10-16T18:16:06Z"),

"optimeDurableDate" : ISODate("2018-10-16T18:16:06Z"),

"lastHeartbeat" : ISODate("2018-10-16T18:16:13.929Z"),

"lastHeartbeatRecv" : ISODate("2018-10-16T18:16:14.928Z"),

"pingMs" : NumberLong(0),

"syncingTo" : "10.15.7.114:28004",

"configVersion" : 102086

},

{

"_id" : 2,

"name" : "10.15.7.114:28003",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 193,

"lastHeartbeat" : ISODate("2018-10-16T18:16:13.929Z"),

"lastHeartbeatRecv" : ISODate("2018-10-16T18:16:11.917Z"),

"pingMs" : NumberLong(0),

"configVersion" : 102086

},

{

"_id" : 3,

"name" : "10.15.7.114:28004",

"health" : 1,

"state" : 1,

"stateStr" : "PRIMARY",

"uptime" : 68054,

"optime" : {

"ts" : Timestamp(1539713766, 1),

"t" : NumberLong(7)

},

"optimeDate" : ISODate("2018-10-16T18:16:06Z"),

"electionTime" : Timestamp(1539712874, 1),

"electionDate" : ISODate("2018-10-16T18:01:14Z"),

"configVersion" : 102086,

"self" : true

},

{

"_id" : 4,

"name" : "10.15.7.114:28005",

"health" : 1,

"state" : 7,

"stateStr" : "ARBITER",

"uptime" : 66987,

"lastHeartbeat" : ISODate("2018-10-16T18:16:13.929Z"),

"lastHeartbeatRecv" : ISODate("2018-10-16T18:16:11.921Z"),

"pingMs" : NumberLong(0),

"configVersion" : 102086

}

],

"ok" : 1,

"operationTime" : Timestamp(1539713766, 1),

"$clusterTime" : {

"clusterTime" : Timestamp(1539713766, 1),

"signature" : {

"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),

"keyId" : NumberLong(0)

}

}

}