MongoDB Distributed Cluster: Replica Set + Sharding
Introduction
1. Replica Sets
After replication is enabled, the primary node creates a collection named oplog.rs in the local database. This is a capped collection, meaning its size is fixed. It records every data change (insert/update/delete) made on the mongod instance over a period of time; when the space is full, new records automatically overwrite the oldest ones.
A MongoDB replica set consists of a group of instances (processes): one Primary node and several Secondary nodes. All client writes go to the Primary, and the Secondaries replicate the Primary's data through the oplog. A heartbeat mechanism detects failures: once the Primary goes down, a new primary is elected from the Secondaries (with the arbiter taking part in the vote).
Primary: the primary node, chosen by election. It handles client writes and produces the oplog.
Secondary: a secondary node. It handles client reads.
Arbiter: an arbiter node. It only votes in elections and never becomes Primary or Secondary. In a set with only a Primary and one Secondary, if either node goes down the replica set can no longer elect a Primary and stops serving; adding an Arbiter means a Primary can still be elected even when one node is down.
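For instance, an arbiter could be added to an existing replica set from the mongo shell on the PRIMARY; a minimal sketch (the host 172.16.245.105:27019 is a placeholder and is not part of the deployment described below):

rs.addArb("172.16.245.105:27019")   // the member joins with arbiterOnly:true and stores no data
rs.conf()                           // verify the new member shows "arbiterOnly" : true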
1.1 How MongoDB Elections Work
MongoDB members come in three types: standard nodes (host), passive nodes (passive), and arbiter nodes (arbiter).
Only a standard node can be elected as the active node (primary), and it has voting rights. A passive node holds a complete copy of the data and can vote, but can never become the active node. An arbiter node does not replicate data and can never become the active node; it only votes. In short, only standard nodes can ever be elected primary; even if every standard node in a replica set goes down, the passive and arbiter nodes will not become primary.
The difference between a standard node and a passive node is priority: the member with the higher value is a standard node, the one with the lower value is a passive node.
The election is won by the highest vote count. priority is a value from 0 to 1000 and effectively adds 0 to 1000 extra votes. Election result: the member with the most votes wins; if the votes are tied, the member with the newest data wins.
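As a hedged sketch of how priority is adjusted in practice (the member index 2 here is just an example), this can be run on the PRIMARY:

cfg = rs.conf()
cfg.members[2].priority = 0.5   // lower than the default 1, so this member is less likely to win an election
rs.reconfig(cfg)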
1.2 The Replication Process
- Client data arrives.
- The operation is written to the journal buffer.
- The data is written to the data buffer.
- The operation in the journal buffer is appended to the oplog.
- The result is returned to the client (asynchronously).
- A background thread replicates the oplog to the secondaries. This runs at a very high frequency, even higher than the journal flush; the secondaries keep watching the primary and copy the oplog as soon as it changes.
- A background thread flushes the journal buffer to disk, very frequently (every 100 ms by default; the interval can also be tuned, e.g. 30-60 ms).
- A background thread flushes the data buffer to disk, every 60 seconds by default.
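To observe this, the oplog can be inspected directly on the PRIMARY; a read-only sketch:

use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(3).pretty()   // the three most recent operations
db.getReplicationInfo()                                       // oplog size and the time window it covers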
2. Sharding
Replica sets mainly provide automatic failover, and therefore high availability. However, as the business grows over time the data volume keeps increasing: today it may be only a few hundred GB and a single DB server can handle all the work, but once it grows to several TB or hundreds of TB, one server can no longer store it. At that point the data must be distributed across different servers for storage and querying according to some rule; that is a sharded cluster. What a sharded cluster does is store data in a distributed way.
Storage model: the data set is split into chunks, each chunk containing many documents, and the chunks are stored distributed across the shards of the cluster.
2.1 Roles
Config server: MongoDB tracks how chunks are distributed over the shards, i.e. which chunks each shard holds. This metadata is stored in the config database on the config servers. Normally three config servers are used, and the config database must be identical on all of them (it is recommended to place the config servers on different machines for stability).
Shard server: holds the sharded data, split into chunks; a chunk is 64 MB by default and is the real unit of data placement.
Mongos server: the entry point for all requests to the cluster. Every request is coordinated through mongos, which consults the sharding metadata to find which shard holds each chunk; mongos itself is a request router. In production there are usually several mongos instances acting as entry points, so that if one goes down MongoDB requests can still be served.
Summary: applications send their create/read/update/delete requests to mongos; the config servers store the cluster metadata and stay in sync with mongos; the data itself is stored on the shards. To guard against data loss, each shard is a replica set that keeps additional copies, and the arbiter of that replica set only takes part in electing the shard's primary.
2.2 The Shard Key
Overview: the shard key is a field of the document or a compound index field, and once chosen it cannot be changed. It is the key criterion for splitting the data: with very large data sets, the shard key determines where each document is placed during sharding and directly affects cluster performance.
Note: when creating a shard key, there must be an index that supports it, as in the sketch below.
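A minimal sketch of that rule (the database, collection and field names here are placeholders and do not appear in the deployment below):

use mydb
sh.enableSharding("mydb")                              // allow sharding for this database
db.orders.createIndex({ customerId: 1 })               // index that supports the intended shard key
sh.shardCollection("mydb.orders", { customerId: 1 })   // succeeds because a supporting index exists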
2.3 Shard Key Types
1. Ascending shard keys: a timestamp, date, auto-incrementing primary key, ObjectId, _id, and so on. With such a key, writes concentrate on a single shard server, so the write load is not spread out and that server is under heavier pressure; chunks split easily, but the server may become a performance bottleneck.
2. Hashed shard keys: also called hashed indexes; a hashed index field is used as the shard key. The advantage is that data is distributed fairly evenly across the nodes, so writes are randomly spread over the shard servers and the write pressure is shared. Reads are also scattered and may hit more shards; the drawback is that range queries cannot be served efficiently.
3. Compound shard keys: when the database has no single suitable field for the shard key, or the intended key has too small a cardinality (few distinct values, e.g. a weekday can only take 7 values), another field can be combined with it as a compound key; a redundant field can even be added just for the combination.
4. Tagged shard keys: data is stored on designated shard servers. Tags are added to shards and tag ranges assigned, for example so that keys starting with 10... (tag T) land on shard0000 and keys starting with 11... (tag Q) land on shard0001 or shard0002; the balancer then places chunks according to the tags, as sketched below.
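A sketch of the tag commands involved, with placeholder shard names, namespace and ranges:

sh.addShardTag("shard0000", "T")                                  // tag the shards
sh.addShardTag("shard0001", "Q")
sh.addShardTag("shard0002", "Q")
sh.addTagRange("mydb.users", { uid: "10" }, { uid: "11" }, "T")   // keys starting with 10 go to tag T
sh.addTagRange("mydb.users", { uid: "11" }, { uid: "12" }, "Q")   // keys starting with 11 go to tag Q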
Environment
Distributed MongoDB cluster: replica set + sharding
CentOS Linux release 7.9.2009
MongoDB: 4.0.21
IP | mongos port | config server port | shard1 port | shard2 port | shard3 port
---|---|---|---|---|---
172.16.245.102 | 27017 | 27018 | 27001 | 27002 | 27003
172.16.245.103 | 27017 | 27018 | 27001 | 27002 | 27003
172.16.245.104 | 27017 | 27018 | 27001 | 27002 | 27003
1. Download the software package
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-4.0.21.tgz
2. Create the directories and files for the router, config servers and shards
Run the same commands on all three servers
mkdir -p /data/mongodb/conf
mkdir -p /data/mongodb/data/config
mkdir -p /data/mongodb/data/shard1
mkdir -p /data/mongodb/data/shard2
mkdir -p /data/mongodb/data/shard3
mkdir -p /data/mongodb/log/config.log
mkdir -p /data/mongodb/log/mongos.log
mkdir -p /data/mongodb/log/shard1.log
mkdir -p /data/mongodb/log/shard2.log
mkdir -p /data/mongodb/log/shard3.log
touch /data/mongodb/log/config.log/config.log
touch /data/mongodb/log/mongos.log/mongos.log
touch /data/mongodb/log/shard1.log/shard1.log
touch /data/mongodb/log/shard2.log/shard2.log
touch /data/mongodb/log/shard3.log/shard3.log
3. Deploy mongod on the config servers
Run the same steps on all three servers
[root@node5 conf]# vim /data/mongodb/conf/config.conf
[root@node5 conf]# cat /data/mongodb/conf/config.conf
dbpath=/data/mongodb/data/config
logpath=/data/mongodb/log/config.log/config.log
port=27018 #port number
logappend=true
fork=true
maxConns=5000
replSet=configs #replica set name
configsvr=true
bind_ip=0.0.0.0
4. Configure the config server replica set
Start the config service on each of the three servers
[root@node5 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/config.conf
Connect with the mongo shell; this only needs to be done on one machine
[root@node5 conf]# /data/mongodb/bin/mongo --host 172.16.245.102 --port 27018
After entering the shell, switch to the admin database
use admin
Initialize the replica set
rs.initiate({_id:"configs",members:[{_id:0,host:"172.16.245.102:27018"},{_id:1,host:"172.16.245.103:27018"}, {_id:2,host:"172.16.245.104:27018"}]})
Here configs in _id:"configs" is the replica set name from the config.conf file above; this joins the config services of the three servers (with their respective IPs) into one replica set.
Check the status
configs:PRIMARY> rs.status()
{
"set" : "configs", #副本集名稱
"date" : ISODate("2020-12-22T06:39:04.184Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"configsvr" : true,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1608619122, 1),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2020-12-22T05:31:42.975Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1608615092, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 2,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"numCatchUpOps" : NumberLong(0),
"newTermStartDate" : ISODate("2020-12-22T05:31:42.986Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2020-12-22T05:31:44.134Z")
},
"members" : [
{
"_id" : 0,
"name" : "172.16.245.102:27018", #副本1
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4383,
"optime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-12-22T06:39:02Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1608615102, 1),
"electionDate" : ISODate("2020-12-22T05:31:42Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "172.16.245.103:27018", #副本2
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4052,
"optime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-12-22T06:39:02Z"),
"optimeDurableDate" : ISODate("2020-12-22T06:39:02Z"),
"lastHeartbeat" : ISODate("2020-12-22T06:39:02.935Z"),
"lastHeartbeatRecv" : ISODate("2020-12-22T06:39:03.044Z"),
"pingMs" : NumberLong(85),
"lastHeartbeatMessage" : "",
"syncingTo" : "172.16.245.102:27018",
"syncSourceHost" : "172.16.245.102:27018",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "172.16.245.104:27018", #副本3
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4052,
"optime" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1608619142, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-12-22T06:39:02Z"),
"optimeDurableDate" : ISODate("2020-12-22T06:39:02Z"),
"lastHeartbeat" : ISODate("2020-12-22T06:39:03.368Z"),
"lastHeartbeatRecv" : ISODate("2020-12-22T06:39:03.046Z"),
"pingMs" : NumberLong(85),
"lastHeartbeatMessage" : "",
"syncingTo" : "172.16.245.102:27018",
"syncSourceHost" : "172.16.245.102:27018",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1608619142, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608619142, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1608619142, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
configs:PRIMARY>
Wait a few tens of seconds and run the command above again to check the status: the config services on the three machines now form a replica set, with one PRIMARY and the other two as SECONDARY.
5. Deploy the shard servers
Run the same steps on all three servers
In the /data/mongodb/conf directory create shard1.conf, shard2.conf and shard3.conf with the following contents
[root@node3 conf]# ls
config.conf mongos.conf shard1.conf shard2.conf shard3.conf
[root@node3 conf]# cat shard1.conf
dbpath=/data/mongodb/data/shard1
logpath=/data/mongodb/log/shard1.log/shard1.log
port=27001
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
shardsvr=true
replSet=shard1
bind_ip=0.0.0.0
[root@node3 conf]# cat shard2.conf
dbpath=/data/mongodb/data/shard2
logpath=/data/mongodb/log/shard2.log/shard2.log
port=27002
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
shardsvr=true
replSet=shard2
bind_ip=0.0.0.0
[root@node3 conf]# cat shard3.conf
dbpath=/data/mongodb/data/shard3
logpath=/data/mongodb/log/shard3.log/shard3.log
port=27003
logappend=true
fork=true
maxConns=5000
storageEngine=mmapv1
shardsvr=true
replSet=shard3
bind_ip=0.0.0.0
The ports are 27001, 27002 and 27003, corresponding to shard1.conf, shard2.conf and shard3.conf.
The same port across the three machines forms one shard's replica set. Since all three machines need all three files, the shard services are started from these nine configuration files.
All three machines must start the shard services: node 1 starts shard1, shard2 and shard3, and nodes 2 and 3 do the same.
[root@node3 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/shard1.conf
[root@node3 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/shard2.conf
[root@node3 conf]# /data/mongodb/bin/mongod -f /data/mongodb/conf/shard3.conf
6. Configure each shard as a replica set
Connect with the mongo shell; this only needs to be done on one machine
mongo --host 172.16.245.103 --port 27001
Here shard1 is used as the example; for the other two shards, connect to ports 27002 and 27003 respectively and repeat the operation.
Switch to the admin database
use admin
Initialize the three shard replica sets
rs.initiate({_id:"shard1",members:[{_id:0,host:"172.16.245.102:27001"},{_id:1,host:"172.16.245.103:27001"},{_id:2,host:"172.16.245.104:27001"}]})
rs.initiate({_id:"shard2",members:[{_id:0,host:"172.16.245.102:27002"},{_id:1,host:"172.16.245.103:27002"},{_id:2,host:"172.16.245.104:27002"}]})
rs.initiate({_id:"shard3",members:[{_id:0,host:"172.16.245.102:27003"},{_id:1,host:"172.16.245.103:27003"},{_id:2,host:"172.16.245.104:27003"}]})
7. Deploy the router (mongos)
Run the same steps on all three servers
In the /data/mongodb/conf directory create mongos.conf with the following contents
[root@node4 conf]# cat mongos.conf
logpath=/data/mongodb/log/mongos.log/mongos.log
logappend = true
port = 27017
fork = true
configdb = configs/172.16.245.102:27018,172.16.245.103:27018,172.16.245.104:27018
maxConns=20000
bind_ip=0.0.0.0
Start mongos
Start it on each of the three servers:
[root@node4 conf]# /data/mongodb/bin/mongos -f /data/mongodb/conf/mongos.conf
8. Enable sharding
Connect with the mongo shell
mongo --host 172.16.245.102 --port 27017
mongos>use admin
Add the shards; this only needs to be run on one machine
mongos>sh.addShard("shard1/172.16.245.102:27001,172.16.245.103:27001,172.16.245.104:27001")
mongos>sh.addShard("shard2/172.16.245.102:27002,172.16.245.103:27002,172.16.245.104:27002")
mongos>sh.addShard("shard3/172.16.245.102:27003,172.16.245.103:27003,172.16.245.104:27003")
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5fe184bf29ea91799b557a8b")
}
shards:
{ "_id" : "shard1", "host" : "shard1/172.16.245.102:27001,172.16.245.103:27001,172.16.245.104:27001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/172.16.245.102:27002,172.16.245.103:27002,172.16.245.104:27002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/172.16.245.102:27003,172.16.245.103:27003,172.16.245.104:27003", "state" : 1 }
active mongoses:
"4.0.21" : 3
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "calon", "primary" : "shard1", "partitioned" : true, "version" : { "uuid" : UUID("2a4780da-8f33-4214-88f8-c9b1a3140299"), "lastMod" : 1 } }
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard1 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
{ "_id" : "test", "primary" : "shard2", "partitioned" : false, "version" : { "uuid" : UUID("d59549a4-3e68-4a7d-baf8-67a4d8372b76"), "lastMod" : 1 } }
{ "_id" : "ycsb", "primary" : "shard3", "partitioned" : true, "version" : { "uuid" : UUID("6d491868-245e-4c86-a5f5-f8fcd308b45e"), "lastMod" : 1 } }
ycsb.usertable
shard key: { "_id" : "hashed" }
unique: false
balancing: true
chunks:
shard1 2
shard2 2
shard3 2
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(1, 0)
{ "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(1, 1)
{ "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(1, 2)
{ "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(1, 3)
{ "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard3 Timestamp(1, 4)
{ "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 5)
9. Configure the chunk size
Set the shard chunk size
mongos>use config
mongos>db.settings.save({"_id":"chunksize","value":1}) #set the chunk size to 1 MB to keep the experiment small; otherwise a huge amount of data would have to be inserted
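A quick check that the setting took effect (assuming the save above succeeded):

mongos> db.settings.find({ _id: "chunksize" })   // expected: { "_id" : "chunksize", "value" : 1 }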
10. Enable sharding for a database and test
mongos> use shardbtest;
switched to db shardbtest
mongos>
mongos>
mongos> sh.enableSharding("shardbtest");
{
"ok" : 1,
"operationTime" : Timestamp(1608620190, 4),
"$clusterTime" : {
"clusterTime" : Timestamp(1608620190, 4),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> sh.shardCollection("shardbtest.usertable",{"_id":"hashed"}); #shard the usertable collection in the shardbtest database using a hashed shard key on _id
{
"collectionsharded" : "shardbtest.usertable",
"collectionUUID" : UUID("2b5a8bcf-6e31-4dac-831f-5fa414253655"),
"ok" : 1,
"operationTime" : Timestamp(1608620216, 36),
"$clusterTime" : {
"clusterTime" : Timestamp(1608620216, 36),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> for(i=1;i<=3000;i++){db.usertable.insert({"id":i})} #insert 3000 documents to simulate data
WriteResult({ "nInserted" : 1 })
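Besides db.usertable.stats() below, the per-shard spread can also be summarized from mongos:

mongos> db.usertable.getShardDistribution()   // prints document count, data size and percentage per shard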
11. Check and verify the sharding
mongos> db.usertable.stats();
{
"sharded" : true,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"ns" : "shardbtest.usertable",
"count" : 3000, #總3000
"numExtents" : 9,
"size" : 144096,
"storageSize" : 516096,
"totalIndexSize" : 269808,
"indexSizes" : {
"_id_" : 122640,
"_id_hashed" : 147168
},
"avgObjSize" : 48,
"maxSize" : NumberLong(0),
"nindexes" : 2,
"nchunks" : 6,
"shards" : {
"shard3" : {
"ns" : "shardbtest.usertable",
"size" : 48656,
"count" : 1013, #shard3寫入1013
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620309, 1),
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1608620272, 38),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620309, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620307, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620309, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
},
"shard2" : {
"ns" : "shardbtest.usertable",
"size" : 49232,
"count" : 1025, #shard2寫入1025
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620306, 1),
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1608620272, 32),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620306, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620307, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620309, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
},
"shard1" : {
"ns" : "shardbtest.usertable",
"size" : 46208,
"count" : 962, #shard1寫入962
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620308, 1),
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1608620292, 10),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620308, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620307, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620309, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
},
"ok" : 1,
"operationTime" : Timestamp(1608620309, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1608620309, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
12. Check whether the secondary nodes have synchronized the data
mongos> show dbs
admin 0.000GB
calon 0.078GB
config 0.235GB
shardbtest 0.234GB
test 0.078GB
ycsb 0.234GB
mongos> use shardbtest
switched to db shardbtest
mongos> db.usertable.stats();
{
"sharded" : true,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"ns" : "shardbtest.usertable",
"count" : 3000,
"numExtents" : 9,
"size" : 144096,
"storageSize" : 516096,
"totalIndexSize" : 269808,
"indexSizes" : {
"_id_" : 122640,
"_id_hashed" : 147168
},
"avgObjSize" : 48,
"maxSize" : NumberLong(0),
"nindexes" : 2,
"nchunks" : 6,
"shards" : {
"shard2" : {
"ns" : "shardbtest.usertable",
"size" : 49232,
"count" : 1025,
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620886, 6),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620886, 6),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620888, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620888, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
},
"shard3" : {
"ns" : "shardbtest.usertable",
"size" : 48656,
"count" : 1013,
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620889, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620889, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620888, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620889, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
},
"shard1" : {
"ns" : "shardbtest.usertable",
"size" : 46208,
"count" : 962,
"avgObjSize" : 48,
"numExtents" : 3,
"storageSize" : 172032,
"lastExtentSize" : 131072,
"paddingFactor" : 1,
"paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
"userFlags" : 1,
"capped" : false,
"nindexes" : 2,
"totalIndexSize" : 89936,
"indexSizes" : {
"_id_" : 40880,
"_id_hashed" : 49056
},
"ok" : 1,
"operationTime" : Timestamp(1608620888, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1608620888, 1),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1608620888, 1),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1608620888, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
},
"ok" : 1,
"operationTime" : Timestamp(1608620889, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1608620889, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
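To confirm that the secondaries inside each shard replica set really hold the data, you can also connect with the mongo shell to a shard member directly (pick whichever node rs.status() reports as SECONDARY; the port comes from the environment table above) and allow reads on it; a sketch for shard1:

shard1:SECONDARY> rs.slaveOk()           // permit reads on a secondary (mongo shell 4.0)
shard1:SECONDARY> use shardbtest
shard1:SECONDARY> db.usertable.count()   // should match that shard's count shown above (e.g. 962 for shard1)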
This completes a highly available, sharded MongoDB deployment built on replica sets.