Sharding Concepts
Sharding is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high-throughput operations.
In other words, sharding is the process of splitting data up and spreading it across different machines; the term partitioning is sometimes used for the same concept. By spreading data across machines, you can store more data and handle more load without needing a single powerful, expensive server.
Database systems with large data sets or high-throughput applications can challenge the capacity of a single server. For example, high query rates can exhaust the server's CPU capacity, and a working set larger than the system's RAM stresses the I/O capacity of its disk drives.
There are two approaches to accommodating system growth: vertical scaling and horizontal scaling.
- Vertical scaling means increasing the capacity of a single server, such as using a more powerful CPU, adding more RAM, or increasing the amount of storage. Limitations in available technology may prevent a single machine from being powerful enough for a given workload, and cloud providers impose hard upper limits based on the hardware configurations they offer. As a result, there is a practical maximum to vertical scaling.
- Horizontal scaling means dividing the system's data set and load over multiple servers, adding more servers as needed to increase capacity. Although the overall speed or capacity of any single machine may not be high, each machine handles only a subset of the total workload, which can be more efficient than a single high-speed, high-capacity server. Expanding the deployment's capacity only requires adding servers as needed, which can be cheaper overall than high-end hardware for a single machine. The trade-off is increased complexity of infrastructure and deployment maintenance.
MongoDB supports horizontal scaling through sharding.
Components of a Sharded Cluster
A MongoDB sharded cluster consists of the following components:
- shard (storage): each shard contains a subset of the sharded data. Each shard can be deployed as a replica set.
- mongos (routing): mongos acts as a query router, providing an interface between client applications and the sharded cluster.
- config servers (configuration): config servers store the cluster's metadata and configuration settings. Starting in MongoDB 3.4, config servers must be deployed as a replica set (CSRS).
The following diagram describes the interaction of the components in a sharded cluster:
MongoDB shards data at the collection level, distributing a collection's data across the shards in the cluster.
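As a minimal sketch of what that per-collection declaration looks like (the names mydb and mycoll here are hypothetical; the actual commands for this cluster are run step by step later in this article):

sh.enableSharding("mydb")
sh.shardCollection("mydb.mycoll", { "field": 1 })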
Sharded Cluster Architecture Goal
Two shard replica sets (3 nodes + 3 nodes) + one config-server replica set (3 nodes) + two router nodes (2), for a total of 11 service nodes.
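All nodes in this walkthrough run on 192.168.0.253, with the following port layout:
- myshardrs01 (shard 1): 27018 (primary), 27118 (secondary), 27218 (arbiter)
- myshardrs02 (shard 2): 27318 (primary), 27418 (secondary), 27518 (arbiter)
- myconfigrs (config servers): 27019, 27119, 27219
- mongos (routers): 27017, 27117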
Creating the Shard (Storage) Replica Sets
All configuration files go directly into the corresponding subdirectories of sharded_cluster; the default configuration file name is mongod.conf.
The First Replica Set
Prepare the directories for data and logs:
#-----------myshardrs01
mkdir -p /home/mongodb/sharded_cluster/myshardrs01_27018/log && mkdir -p /home/mongodb/sharded_cluster/myshardrs01_27018/data/db
mkdir -p /home/mongodb/sharded_cluster/myshardrs01_27118/log && mkdir -p /home/mongodb/sharded_cluster/myshardrs01_27118/data/db
mkdir -p /home/mongodb/sharded_cluster/myshardrs01_27218/log && mkdir -p /home/mongodb/sharded_cluster/myshardrs01_27218/data/db
Create or edit the configuration files:
# vim /home/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/myshardrs01_27018/log/mongod.log
logAppend: true
storage:
dbPath: /home/mongodb/sharded_cluster/myshardrs01_27018/data/db
journal:
enabled: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/myshardrs01_27018/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27018
replication:
replSetName: myshardrs01
sharding:
clusterRole: shardsvr
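Here replSetName names the replica set that all three members of this shard will share, and clusterRole: shardsvr marks the mongod as a shard member (27018 is also the default port for shardsvr instances). The other two members use the same settings, differing only in port and paths.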
# vim /home/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/myshardrs01_27118/log/mongod.log
logAppend: true
storage:
dbPath: /home/mongodb/sharded_cluster/myshardrs01_27118/data/db
journal:
enabled: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/myshardrs01_27118/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27118
replication:
replSetName: myshardrs01
sharding:
clusterRole: shardsvr
# vim /home/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/myshardrs01_27218/log/mongod.log
logAppend: true
storage:
dbPath: /home/mongodb/sharded_cluster/myshardrs01_27218/data/db
journal:
enabled: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/myshardrs01_27218/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27218
replication:
replSetName: myshardrs01
sharding:
clusterRole: shardsvr
Start the first replica set: one primary, one secondary, one arbiter
Start the three mongod services one by one:
/usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
/usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf
/usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf
Check that the services started:
# ps -ef |grep mongod
root 4080 1 3 14:45 ? 00:00:01 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
root 4133 1 4 14:45 ? 00:00:01 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf
root 4186 1 6 14:45 ? 00:00:01 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf
# ss -tulnp |grep 27*
tcp LISTEN 0 128 192.168.0.253:27018 *:* users:(("mongod",pid=4080,fd=12))
tcp LISTEN 0 128 192.168.0.253:27118 *:* users:(("mongod",pid=4133,fd=12))
tcp LISTEN 0 128 192.168.0.253:27218 *:* users:(("mongod",pid=4186,fd=12))
(1) Initialize the replica set and create the primary node:
Use the client command to connect to the node you intend to become the primary (here the 27018 member):
/usr/bin/mongo --host 192.168.0.253 --port 27018
Run the replica set initialization command:
> rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "192.168.0.253:27018",
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1605163725, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605163725, 1)
}
Check the replica set status:
myshardrs01:PRIMARY> rs.status()
{
"set" : "myshardrs01",
"date" : ISODate("2020-11-12T06:49:37.873Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 1,
"writeMajorityCount" : 1,
"votingMembersCount" : 1,
"writableVotingMembersCount" : 1,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1605163776, 1),
"t" : NumberLong(1)
},
"lastCommittedWallTime" : ISODate("2020-11-12T06:49:36.443Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1605163776, 1),
"t" : NumberLong(1)
},
"readConcernMajorityWallTime" : ISODate("2020-11-12T06:49:36.443Z"),
"appliedOpTime" : {
"ts" : Timestamp(1605163776, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1605163776, 1),
"t" : NumberLong(1)
},
"lastAppliedWallTime" : ISODate("2020-11-12T06:49:36.443Z"),
"lastDurableWallTime" : ISODate("2020-11-12T06:49:36.443Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1605163726, 4),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2020-11-12T06:48:46.210Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1605163725, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 1,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"newTermStartDate" : ISODate("2020-11-12T06:48:46.333Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2020-11-12T06:48:46.449Z")
},
"members" : [
{
"_id" : 0,
"name" : "192.168.0.253:27018",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 232,
"optime" : {
"ts" : Timestamp(1605163776, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-11-12T06:49:36Z"),
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "Could not find member to sync from",
"electionTime" : Timestamp(1605163726, 1),
"electionDate" : ISODate("2020-11-12T06:48:46Z"),
"configVersion" : 1,
"configTerm" : -1,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1605163776, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605163776, 1)
}
(2) View the primary node's configuration:
myshardrs01:PRIMARY> rs.conf()
{
"_id" : "myshardrs01",
"version" : 1,
"protocolVersion" : NumberLong(1),
"writeConcernMajorityJournalDefault" : true,
"members" : [
{
"_id" : 0,
"host" : "192.168.0.253:27018",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5facdacda5ab79463a9265e4")
}
}
(3) Add a secondary node:
myshardrs01:PRIMARY> rs.add("192.168.0.253:27118")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1605163982, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605163982, 1)
}
(4) Add an arbiter node:
myshardrs01:PRIMARY> rs.addArb("192.168.0.253:27218")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1605164036, 2),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605164036, 2)
}
Check the replica set configuration:
myshardrs01:PRIMARY> rs.conf()
{
"_id" : "myshardrs01",
"version" : 3,
"protocolVersion" : NumberLong(1),
"writeConcernMajorityJournalDefault" : true,
"members" : [
{
"_id" : 0,
"host" : "192.168.0.253:27018",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "192.168.0.253:27118",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "192.168.0.253:27218",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 0,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5facdacda5ab79463a9265e4")
}
}
The Second Replica Set
Prepare the directories for data and logs:
#-----------myshardrs02
mkdir -p /home/mongodb/sharded_cluster/myshardrs02_27318/log && mkdir -p /home/mongodb/sharded_cluster/myshardrs02_27318/data/db
mkdir -p /home/mongodb/sharded_cluster/myshardrs02_27418/log && mkdir -p /home/mongodb/sharded_cluster/myshardrs02_27418/data/db
mkdir -p /home/mongodb/sharded_cluster/myshardrs02_27518/log && mkdir -p /home/mongodb/sharded_cluster/myshardrs02_27518/data/db
Create or edit the configuration files:
# vim /home/mongodb/sharded_cluster/myshardrs02_27318/mongod.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/myshardrs02_27318/log/mongod.log
logAppend: true
storage:
dbPath: /home/mongodb/sharded_cluster/myshardrs02_27318/data/db
journal:
enabled: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/myshardrs02_27318/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27318
replication:
replSetName: myshardrs02
sharding:
clusterRole: shardsvr
# vim /home/mongodb/sharded_cluster/myshardrs02_27418/mongod.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/myshardrs02_27418/log/mongod.log
logAppend: true
storage:
dbPath: /home/mongodb/sharded_cluster/myshardrs02_27418/data/db
journal:
enabled: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/myshardrs02_27418/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27418
replication:
replSetName: myshardrs02
sharding:
clusterRole: shardsvr
# vim /home/mongodb/sharded_cluster/myshardrs02_27518/mongod.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/myshardrs02_27518/log/mongod.log
logAppend: true
storage:
dbPath: /home/mongodb/sharded_cluster/myshardrs02_27518/data/db
journal:
enabled: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/myshardrs02_27518/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27518
replication:
replSetName: myshardrs02
sharding:
clusterRole: shardsvr
Start the second replica set: one primary, one secondary, one arbiter
Start the three mongod services one by one:
/usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs02_27318/mongod.conf
/usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs02_27418/mongod.conf
/usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs02_27518/mongod.conf
Check that the services started:
# ps -ef |grep mongod
root 4080 1 0 14:45 ? 00:00:08 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
root 4133 1 0 14:45 ? 00:00:07 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf
root 4186 1 0 14:45 ? 00:00:07 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf
root 4527 1 5 14:59 ? 00:00:01 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs02_27318/mongod.conf
root 4580 1 7 14:59 ? 00:00:00 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs02_27418/mongod.conf
root 4633 1 11 14:59 ? 00:00:00 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs02_27518/mongod.conf
# ss -tulnp |grep 27
tcp LISTEN 0 128 192.168.0.253:27018 *:* users:(("mongod",pid=4080,fd=12))
tcp LISTEN 0 128 192.168.0.253:27118 *:* users:(("mongod",pid=4133,fd=12))
tcp LISTEN 0 128 192.168.0.253:27218 *:* users:(("mongod",pid=4186,fd=12))
tcp LISTEN 0 128 192.168.0.253:27318 *:* users:(("mongod",pid=4527,fd=12))
tcp LISTEN 0 128 192.168.0.253:27418 *:* users:(("mongod",pid=4580,fd=12))
tcp LISTEN 0 128 192.168.0.253:27518 *:* users:(("mongod",pid=4633,fd=12))
(1) Initialize the replica set and create the primary node:
Use the client command to connect to the node you intend to become the primary (here the 27318 member):
/usr/bin/mongo --host 192.168.0.253 --port 27318
Run the replica set initialization command:
> rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "192.168.0.253:27318",
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1605164586, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605164586, 1)
}
Check the replica set status:
myshardrs02:PRIMARY> rs.status()
{
"set" : "myshardrs02",
"date" : ISODate("2020-11-12T07:03:24.108Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 1,
"writeMajorityCount" : 1,
"votingMembersCount" : 1,
"writableVotingMembersCount" : 1,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1605164596, 1),
"t" : NumberLong(1)
},
"lastCommittedWallTime" : ISODate("2020-11-12T07:03:16.910Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1605164596, 1),
"t" : NumberLong(1)
},
"readConcernMajorityWallTime" : ISODate("2020-11-12T07:03:16.910Z"),
"appliedOpTime" : {
"ts" : Timestamp(1605164596, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1605164596, 1),
"t" : NumberLong(1)
},
"lastAppliedWallTime" : ISODate("2020-11-12T07:03:16.910Z"),
"lastDurableWallTime" : ISODate("2020-11-12T07:03:16.910Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1605164586, 5),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2020-11-12T07:03:06.619Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1605164586, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 1,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"newTermStartDate" : ISODate("2020-11-12T07:03:06.768Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2020-11-12T07:03:06.917Z")
},
"members" : [
{
"_id" : 0,
"name" : "192.168.0.253:27318",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 231,
"optime" : {
"ts" : Timestamp(1605164596, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-11-12T07:03:16Z"),
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "Could not find member to sync from",
"electionTime" : Timestamp(1605164586, 2),
"electionDate" : ISODate("2020-11-12T07:03:06Z"),
"configVersion" : 1,
"configTerm" : -1,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1605164596, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605164596, 1)
}
(2) View the primary node's configuration:
myshardrs02:PRIMARY> rs.conf()
{
"_id" : "myshardrs02",
"version" : 1,
"protocolVersion" : NumberLong(1),
"writeConcernMajorityJournalDefault" : true,
"members" : [
{
"_id" : 0,
"host" : "192.168.0.253:27318",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5facde29ec2cfe3457df95cd")
}
}
(3) Add a secondary node:
myshardrs02:PRIMARY> rs.add("192.168.0.253:27418")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1605164678, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605164678, 1)
}
(4) Add an arbiter node:
myshardrs02:PRIMARY> rs.addArb("192.168.0.253:27518")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1605164699, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605164699, 1)
}
Check the replica set configuration:
myshardrs02:PRIMARY> rs.conf()
{
"_id" : "myshardrs02",
"version" : 3,
"protocolVersion" : NumberLong(1),
"writeConcernMajorityJournalDefault" : true,
"members" : [
{
"_id" : 0,
"host" : "192.168.0.253:27318",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "192.168.0.253:27418",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "192.168.0.253:27518",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 0,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5facde29ec2cfe3457df95cd")
}
}
Check the replica set status:
myshardrs02:PRIMARY> rs.status()
{
"set" : "myshardrs02",
"date" : ISODate("2020-11-12T07:07:02.377Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"votingMembersCount" : 3,
"writableVotingMembersCount" : 2,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1605164816, 1),
"t" : NumberLong(1)
},
"lastCommittedWallTime" : ISODate("2020-11-12T07:06:56.916Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1605164816, 1),
"t" : NumberLong(1)
},
"readConcernMajorityWallTime" : ISODate("2020-11-12T07:06:56.916Z"),
"appliedOpTime" : {
"ts" : Timestamp(1605164816, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1605164816, 1),
"t" : NumberLong(1)
},
"lastAppliedWallTime" : ISODate("2020-11-12T07:06:56.916Z"),
"lastDurableWallTime" : ISODate("2020-11-12T07:06:56.916Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1605164766, 1),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2020-11-12T07:03:06.619Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1605164586, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 1,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"newTermStartDate" : ISODate("2020-11-12T07:03:06.768Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2020-11-12T07:03:06.917Z")
},
"members" : [
{
"_id" : 0,
"name" : "192.168.0.253:27318",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 449,
"optime" : {
"ts" : Timestamp(1605164816, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-11-12T07:06:56Z"),
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1605164586, 2),
"electionDate" : ISODate("2020-11-12T07:03:06Z"),
"configVersion" : 3,
"configTerm" : -1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "192.168.0.253:27418",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 144,
"optime" : {
"ts" : Timestamp(1605164816, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1605164816, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-11-12T07:06:56Z"),
"optimeDurableDate" : ISODate("2020-11-12T07:06:56Z"),
"lastHeartbeat" : ISODate("2020-11-12T07:07:01.068Z"),
"lastHeartbeatRecv" : ISODate("2020-11-12T07:07:02.150Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncSourceHost" : "192.168.0.253:27318",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 3,
"configTerm" : -1
},
{
"_id" : 2,
"name" : "192.168.0.253:27518",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 123,
"lastHeartbeat" : ISODate("2020-11-12T07:07:01.067Z"),
"lastHeartbeatRecv" : ISODate("2020-11-12T07:07:01.250Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 3,
"configTerm" : -1
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1605164816, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605164816, 1)
}
Creating the Config-Server Replica Set
Step 1: prepare the directories for data and logs:
#-----------configrs
# Create the data and log directories
mkdir -p /home/mongodb/sharded_cluster/myconfigrs_27019/log && mkdir -p /home/mongodb/sharded_cluster/myconfigrs_27019/data/db
mkdir -p /home/mongodb/sharded_cluster/myconfigrs_27119/log && mkdir -p /home/mongodb/sharded_cluster/myconfigrs_27119/data/db
mkdir -p /home/mongodb/sharded_cluster/myconfigrs_27219/log && mkdir -p /home/mongodb/sharded_cluster/myconfigrs_27219/data/db
Create or edit the configuration files:
# vim /home/mongodb/sharded_cluster/myconfigrs_27019/mongod.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/myconfigrs_27019/log/mongod.log
logAppend: true
storage:
dbPath: /home/mongodb/sharded_cluster/myconfigrs_27019/data/db
journal:
enabled: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/myconfigrs_27019/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27019
replication:
replSetName: myconfigrs
sharding:
clusterRole: configsvr
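Here clusterRole: configsvr marks the mongod as a config server (27019 is also the default port for configsvr instances); as noted earlier, config servers must be deployed as a replica set (CSRS).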
# vim /home/mongodb/sharded_cluster/myconfigrs_27119/mongod.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/myconfigrs_27119/log/mongod.log
logAppend: true
storage:
dbPath: /home/mongodb/sharded_cluster/myconfigrs_27119/data/db
journal:
enabled: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/myconfigrs_27119/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27119
replication:
replSetName: myconfigrs
sharding:
clusterRole: configsvr
# vim /home/mongodb/sharded_cluster/myconfigrs_27219/mongod.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/myconfigrs_27219/log/mongod.log
logAppend: true
storage:
dbPath: /home/mongodb/sharded_cluster/myconfigrs_27219/data/db
journal:
enabled: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/myconfigrs_27219/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27219
replication:
replSetName: myconfigrs
sharding:
clusterRole: configsvr
Start the config replica set: one primary, two secondaries
Start the three mongod services one by one:
/usr/bin/mongod -f /home/mongodb/sharded_cluster/myconfigrs_27019/mongod.conf
/usr/bin/mongod -f /home/mongodb/sharded_cluster/myconfigrs_27119/mongod.conf
/usr/bin/mongod -f /home/mongodb/sharded_cluster/myconfigrs_27219/mongod.conf
Check that the services started:
# ps -ef |grep mongod
root 4080 1 0 14:45 ? 00:00:17 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
root 4133 1 0 14:45 ? 00:00:15 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27118/mongod.conf
root 4186 1 0 14:45 ? 00:00:11 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs01_27218/mongod.conf
root 4527 1 0 14:59 ? 00:00:11 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs02_27318/mongod.conf
root 4580 1 0 14:59 ? 00:00:09 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs02_27418/mongod.conf
root 4633 1 0 14:59 ? 00:00:07 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myshardrs02_27518/mongod.conf
root 5026 1 3 15:17 ? 00:00:01 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myconfigrs_27019/mongod.conf
root 5087 1 4 15:17 ? 00:00:01 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myconfigrs_27119/mongod.conf
root 5148 1 5 15:18 ? 00:00:01 /usr/bin/mongod -f /home/mongodb/sharded_cluster/myconfigrs_27219/mongod.conf
# ss -tulnp |grep 27
tcp LISTEN 0 128 192.168.0.253:27018 *:* users:(("mongod",pid=4080,fd=12))
tcp LISTEN 0 128 192.168.0.253:27019 *:* users:(("mongod",pid=5026,fd=12))
tcp LISTEN 0 128 192.168.0.253:27118 *:* users:(("mongod",pid=4133,fd=12))
tcp LISTEN 0 128 192.168.0.253:27119 *:* users:(("mongod",pid=5087,fd=12))
tcp LISTEN 0 128 192.168.0.253:27218 *:* users:(("mongod",pid=4186,fd=12))
tcp LISTEN 0 128 192.168.0.253:27219 *:* users:(("mongod",pid=5148,fd=12))
tcp LISTEN 0 128 192.168.0.253:27318 *:* users:(("mongod",pid=4527,fd=12))
tcp LISTEN 0 128 192.168.0.253:27418 *:* users:(("mongod",pid=4580,fd=12))
tcp LISTEN 0 128 192.168.0.253:27518 *:* users:(("mongod",pid=4633,fd=12))
(1) Initialize the replica set and create the primary node:
Use the client command to connect to the node you intend to become the primary (here the 27019 member):
/usr/bin/mongo --host 192.168.0.253 --port 27019
Run the replica set initialization command:
> rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "192.168.0.253:27019",
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(1605165618, 1),
"electionId" : ObjectId("000000000000000000000000")
},
"lastCommittedOpTime" : Timestamp(0, 0),
"$clusterTime" : {
"clusterTime" : Timestamp(1605165618, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605165618, 1)
}
Check the replica set status:
myconfigrs:PRIMARY> rs.status()
{
"set" : "myconfigrs",
"date" : ISODate("2020-11-12T07:20:39.883Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncSourceHost" : "",
"syncSourceId" : -1,
"configsvr" : true,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 1,
"writeMajorityCount" : 1,
"votingMembersCount" : 1,
"writableVotingMembersCount" : 1,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1605165638, 1),
"t" : NumberLong(1)
},
"lastCommittedWallTime" : ISODate("2020-11-12T07:20:38.905Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1605165638, 1),
"t" : NumberLong(1)
},
"readConcernMajorityWallTime" : ISODate("2020-11-12T07:20:38.905Z"),
"appliedOpTime" : {
"ts" : Timestamp(1605165638, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1605165638, 1),
"t" : NumberLong(1)
},
"lastAppliedWallTime" : ISODate("2020-11-12T07:20:38.905Z"),
"lastDurableWallTime" : ISODate("2020-11-12T07:20:38.905Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1605165619, 11),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2020-11-12T07:20:18.310Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1605165618, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 1,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"newTermStartDate" : ISODate("2020-11-12T07:20:18.441Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2020-11-12T07:20:19.907Z")
},
"members" : [
{
"_id" : 0,
"name" : "192.168.0.253:27019",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 167,
"optime" : {
"ts" : Timestamp(1605165638, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-11-12T07:20:38Z"),
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "Could not find member to sync from",
"electionTime" : Timestamp(1605165618, 2),
"electionDate" : ISODate("2020-11-12T07:20:18Z"),
"configVersion" : 1,
"configTerm" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(1605165618, 1),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1605165638, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1605165638, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605165638, 1)
}
(2) View the primary node's configuration:
myconfigrs:PRIMARY> rs.conf()
{
"_id" : "myconfigrs",
"version" : 1,
"term" : 1,
"configsvr" : true,
"protocolVersion" : NumberLong(1),
"writeConcernMajorityJournalDefault" : true,
"members" : [
{
"_id" : 0,
"host" : "192.168.0.253:27019",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5face231793c0e43a638ac4b")
}
}
(3) Add the secondary nodes:
myconfigrs:PRIMARY> rs.add("192.168.0.253:27119")
{
"ok" : 1,
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1605165745, 1),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1605165745, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1605165746, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605165745, 1)
}
myconfigrs:PRIMARY> rs.add("192.168.0.253:27219")
{
"ok" : 1,
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1605165756, 1),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1605165757, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1605165757, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605165756, 1)
}
Check the replica set configuration:
myconfigrs:PRIMARY> rs.conf()
{
"_id" : "myconfigrs",
"version" : 3,
"term" : 1,
"configsvr" : true,
"protocolVersion" : NumberLong(1),
"writeConcernMajorityJournalDefault" : true,
"members" : [
{
"_id" : 0,
"host" : "192.168.0.253:27019",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "192.168.0.253:27119",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "192.168.0.253:27219",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5face231793c0e43a638ac4b")
}
}
myconfigrs:PRIMARY> rs.status()
{
"set" : "myconfigrs",
"date" : ISODate("2020-11-12T07:23:22.429Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncSourceHost" : "",
"syncSourceId" : -1,
"configsvr" : true,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"votingMembersCount" : 3,
"writableVotingMembersCount" : 3,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1605165801, 1),
"t" : NumberLong(1)
},
"lastCommittedWallTime" : ISODate("2020-11-12T07:23:21.940Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1605165801, 1),
"t" : NumberLong(1)
},
"readConcernMajorityWallTime" : ISODate("2020-11-12T07:23:21.940Z"),
"appliedOpTime" : {
"ts" : Timestamp(1605165801, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1605165801, 1),
"t" : NumberLong(1)
},
"lastAppliedWallTime" : ISODate("2020-11-12T07:23:21.940Z"),
"lastDurableWallTime" : ISODate("2020-11-12T07:23:21.940Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1605165800, 1),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2020-11-12T07:20:18.310Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1605165618, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 1,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"newTermStartDate" : ISODate("2020-11-12T07:20:18.441Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2020-11-12T07:20:19.907Z")
},
"members" : [
{
"_id" : 0,
"name" : "192.168.0.253:27019",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 330,
"optime" : {
"ts" : Timestamp(1605165801, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-11-12T07:23:21Z"),
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1605165618, 2),
"electionDate" : ISODate("2020-11-12T07:20:18Z"),
"configVersion" : 3,
"configTerm" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "192.168.0.253:27119",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 56,
"optime" : {
"ts" : Timestamp(1605165801, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1605165801, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-11-12T07:23:21Z"),
"optimeDurableDate" : ISODate("2020-11-12T07:23:21Z"),
"lastHeartbeat" : ISODate("2020-11-12T07:23:22.210Z"),
"lastHeartbeatRecv" : ISODate("2020-11-12T07:23:22.304Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncSourceHost" : "192.168.0.253:27019",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 3,
"configTerm" : 1
},
{
"_id" : 2,
"name" : "192.168.0.253:27219",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 46,
"optime" : {
"ts" : Timestamp(1605165801, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1605165801, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-11-12T07:23:21Z"),
"optimeDurableDate" : ISODate("2020-11-12T07:23:21Z"),
"lastHeartbeat" : ISODate("2020-11-12T07:23:22.209Z"),
"lastHeartbeatRecv" : ISODate("2020-11-12T07:23:20.934Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncSourceHost" : "192.168.0.253:27119",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 3,
"configTerm" : 1
}
],
"ok" : 1,
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1605165756, 1),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1605165801, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1605165801, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1605165801, 1)
}
Creating and Operating the Router Nodes
Creating and Connecting to the First Router Node
Step 1: prepare the directory for logs:
#-----------mongos01
mkdir -p /home/mongodb/sharded_cluster/mymongos_27017/log
Create or edit the configuration file:
# vim /home/mongodb/sharded_cluster/mymongos_27017/mongos.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/mymongos_27017/log/mongod.log
logAppend: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/mymongos_27017/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27017
sharding:
configDB: myconfigrs/192.168.0.253:27019,192.168.0.253:27119,192.168.0.253:27219
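Note that the mongos configuration has no storage section or dbPath, since mongos stores no data itself; configDB points it at the config-server replica set, in the form <replSetName>/<host:port>,<host:port>,...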
Start mongos:
/usr/bin/mongos -f /home/mongodb/sharded_cluster/mymongos_27017/mongos.conf
Tip: if startup fails, check the log file under the log directory for the reason.
Log in to mongos with the client:
# /usr/bin/mongo --host 192.168.0.253 --port 27017
MongoDB shell version v4.4.0
connecting to: mongodb://192.168.0.253:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("ec6d17bf-2ae9-40c5-b690-7b0f0c30d40f") }
MongoDB server version: 4.4.0
---
The server generated these startup warnings when booting:
2020-11-12T15:34:52.266+08:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
2020-11-12T15:34:52.266+08:00: You are running this process as the root user, which is not recommended
---
mongos>
At this point data cannot be written; writes through the router will fail, because so far mongos is connected only to the config servers and no shard (data) nodes have been added, so there is nowhere to store business data.
Configuring Sharding on the Router Node
Add shards with the following command:
(1) Add a shard:
Syntax:
sh.addShard("<replSetName>/<IP:Port>[,<IP:Port>...]")
Add the first shard replica set:
mongos> sh.addShard("myshardrs01/192.168.0.253:27018,192.168.0.253:27118,192.168.0.253:27218")
{
"shardAdded" : "myshardrs01",
"ok" : 1,
"operationTime" : Timestamp(1605166795, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1605166795, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5face233793c0e43a638ac50")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/192.168.0.253:27018,192.168.0.253:27118", "state" : 1 }
active mongoses:
"4.4.0" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
Now add the second shard replica set:
mongos> sh.addShard("myshardrs02/192.168.0.253:27318,192.168.0.253:27418,192.168.0.253:27518")
{
"shardAdded" : "myshardrs02",
"ok" : 1,
"operationTime" : Timestamp(1605166905, 3),
"$clusterTime" : {
"clusterTime" : Timestamp(1605166905, 3),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5face233793c0e43a638ac50")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/192.168.0.253:27018,192.168.0.253:27118", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/192.168.0.253:27318,192.168.0.253:27418", "state" : 1 }
active mongoses:
"4.4.0" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
Tip: if adding a shard fails, remove the shard manually first, verify that the shard information is correct, and then add it again.
Removing a shard, for reference:
use admin
db.runCommand({removeShard:"myshardrs02" })
Note: the last remaining shard cannot be removed. Removal automatically migrates the shard's data elsewhere, which takes time; once migration completes, run the remove command again to actually delete the shard.
(2) Enable sharding: sh.enableSharding("<database>"), then sh.shardCollection("<database>.<collection>", {"key": 1})
Enable sharding for the articledb database on mongos:
mongos> sh.enableSharding("articledb")
{
"ok" : 1,
"operationTime" : Timestamp(1605167077, 4),
"$clusterTime" : {
"clusterTime" : Timestamp(1605167077, 4),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5face233793c0e43a638ac50")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/192.168.0.253:27018,192.168.0.253:27118", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/192.168.0.253:27318,192.168.0.253:27418", "state" : 1 }
active mongoses:
"4.4.0" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: yes
Collections with active migrations:
config.system.sessions started at Thu Nov 12 2020 15:45:02 GMT+0800 (CST)
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
59 : Success
databases:
{ "_id" : "articledb", "primary" : "myshardrs01", "partitioned" : true, "version" : { "uuid" : UUID("238a84e2-d22e-4159-8cf2-4d38e77a0bba"), "lastMod" : 1 } }
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 964
myshardrs02 60
too many chunks to print, use verbose if you want to force print
(3) Shard a collection
To shard a collection, you must specify the collection and the shard key with the sh.shardCollection() method.
Syntax:
sh.shardCollection(namespace, key, unique)
Parameters:
- namespace: a string of the form "<database>.<collection>" naming the collection to shard.
- key: a document specifying the shard key, e.g. { field: 1 } for ranged sharding or { field: "hashed" } for hashed sharding.
- unique: optional boolean, default false; when true, MongoDB enforces uniqueness on the shard key (not supported with hashed shard keys).
When sharding a collection you must choose a shard key. The shard key is an indexed single field, or an indexed compound of fields, that every document in the collection must contain. MongoDB divides the data into chunks by shard key value and distributes the chunks evenly across the shards. To divide the data into chunks by shard key, MongoDB uses either hashed sharding (random, even distribution) or ranged sharding (distribution by value).
Any field can serve as the shard key, e.g. nickname, but it must be a field that is always present.
Sharding Rule 1: Hashed Strategy
With hashed sharding, MongoDB computes a hash of a field's value and uses the hash to create chunks.
In a system using hashed sharding, documents with "close" shard key values are very unlikely to be stored in the same chunk, so the data is dispersed better.
Use nickname as the shard key, partitioning the data by the hash of its value:
mongos> sh.shardCollection("articledb.comment",{"nickname":"hashed"})
{
"collectionsharded" : "articledb.comment",
"collectionUUID" : UUID("87f5c846-4222-4d27-858f-073b7569be2d"),
"ok" : 1,
"operationTime" : Timestamp(1605167364, 13),
"$clusterTime" : {
"clusterTime" : Timestamp(1605167364, 13),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Check the sharding status with sh.status():
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5face233793c0e43a638ac50")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/192.168.0.253:27018,192.168.0.253:27118", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/192.168.0.253:27318,192.168.0.253:27418", "state" : 1 }
active mongoses:
"4.4.0" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
200 : Success
databases:
{ "_id" : "articledb", "primary" : "myshardrs01", "partitioned" : true, "version" : { "uuid" : UUID("238a84e2-d22e-4159-8cf2-4d38e77a0bba"), "lastMod" : 1 } }
articledb.comment
shard key: { "nickname" : "hashed" }
unique: false
balancing: true
chunks:
myshardrs01 2
myshardrs02 2
{ "nickname" : { "$minKey" : 1 } } -->> { "nickname" : NumberLong("-4611686018427387902") } on : myshardrs01 Timestamp(1, 0)
{ "nickname" : NumberLong("-4611686018427387902") } -->> { "nickname" : NumberLong(0) } on : myshardrs01 Timestamp(1, 1)
{ "nickname" : NumberLong(0) } -->> { "nickname" : NumberLong("4611686018427387902") } on : myshardrs02 Timestamp(1, 2)
{ "nickname" : NumberLong("4611686018427387902") } -->> { "nickname" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 3)
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 824
myshardrs02 200
too many chunks to print, use verbose if you want to force print
Sharding Rule 2: Range Strategy
With ranged sharding, MongoDB divides the data into parts by ranges of the shard key. For a numeric shard key, imagine a line from negative infinity to positive infinity, where every shard key value marks a point on the line. MongoDB partitions this line into shorter, non-overlapping segments called chunks; each chunk contains the data whose shard key falls within a certain range.
In a system using range partitioning, documents with "close" shard key values are likely to be stored in the same chunk, and therefore on the same shard.
For example, use the author's age field as the shard key, partitioning the data by the value of age:
mongos> sh.shardCollection("articledb.author",{"age":1})
{
"collectionsharded" : "articledb.author",
"collectionUUID" : UUID("15698590-02ae-45ca-9a91-ebb695c9b2ad"),
"ok" : 1,
"operationTime" : Timestamp(1605167437, 18),
"$clusterTime" : {
"clusterTime" : Timestamp(1605167437, 18),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Note:
1) A collection can have only one shard key; specifying more than one is an error.
2) Once a collection is sharded, the choice of shard key is fixed: you cannot select a different shard key for the collection. (Before MongoDB 4.2, updating a document's shard key value was also forbidden.)
3) Data is distributed according to the index on age.
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5face233793c0e43a638ac50")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/192.168.0.253:27018,192.168.0.253:27118", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/192.168.0.253:27318,192.168.0.253:27418", "state" : 1 }
active mongoses:
"4.4.0" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
244 : Success
databases:
{ "_id" : "articledb", "primary" : "myshardrs01", "partitioned" : true, "version" : { "uuid" : UUID("238a84e2-d22e-4159-8cf2-4d38e77a0bba"), "lastMod" : 1 } }
articledb.author
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 1
{ "age" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 } } on : myshardrs01 Timestamp(1, 0)
articledb.comment
shard key: { "nickname" : "hashed" }
unique: false
balancing: true
chunks:
myshardrs01 2
myshardrs02 2
{ "nickname" : { "$minKey" : 1 } } -->> { "nickname" : NumberLong("-4611686018427387902") } on : myshardrs01 Timestamp(1, 0)
{ "nickname" : NumberLong("-4611686018427387902") } -->> { "nickname" : NumberLong(0) } on : myshardrs01 Timestamp(1, 1)
{ "nickname" : NumberLong(0) } -->> { "nickname" : NumberLong("4611686018427387902") } on : myshardrs02 Timestamp(1, 2)
{ "nickname" : NumberLong("4611686018427387902") } -->> { "nickname" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 3)
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 780
myshardrs02 244
too many chunks to print, use verbose if you want to force print
Performance comparison of ranged vs hashed sharding:
Ranged sharding provides more efficient range queries: given a range of shard key values, the router can easily determine which chunks hold the requested data and forward the request to the corresponding shards.
However, ranged sharding can lead to uneven data distribution across shards, and sometimes the downside outweighs the query performance benefit. For example, if the shard key field grows monotonically, all requests within a given time window land in one fixed chunk and therefore end up on the same shard. In that case a small number of shards carry most of the cluster's data, and the system does not scale well.
By contrast, hashed sharding trades range query performance for even data distribution across the cluster. The randomness of hash values spreads data randomly across chunks, and therefore across shards. But precisely because of that randomness, a range query cannot easily determine which shards to target; typically all shards must be queried to return the results. Unless there are special circumstances, hashed sharding is generally recommended.
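The targeting difference can be illustrated with explain() on mongos, using the two collections sharded above (a sketch; the exact plan output varies by version):

db.author.find({ "age": { "$gte": 20, "$lt": 30 } }).explain()
// ranged key: the winning plan should name only the shard(s) owning this age range
db.comment.find({ "nickname": { "$gte": "a" } }).explain()
// hashed key: a range predicate generally fans out to every shard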
Using _id as the shard key is also a good choice, because it is always present: you can use the hash of the document's _id as the shard key. This spreads both reads and writes evenly, and because every document has a distinct key, chunks can be split very finely. It is still not perfect, since a query for multiple documents will inevitably hit all shards; even so, it is a fairly good scheme.
An ideal shard key allows documents to be distributed evenly across the cluster.
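For example (a minimal sketch, using a hypothetical collection articledb.post that is not part of this walkthrough):

sh.shardCollection("articledb.post", { "_id": "hashed" })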
Display detailed cluster information:
mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5face233793c0e43a638ac50")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/192.168.0.253:27018,192.168.0.253:27118", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/192.168.0.253:27318,192.168.0.253:27418", "state" : 1 }
active mongoses:
"4.4.0" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
298 : Success
databases:
{ "_id" : "articledb", "primary" : "myshardrs01", "partitioned" : true, "version" : { "uuid" : UUID("238a84e2-d22e-4159-8cf2-4d38e77a0bba"), "lastMod" : 1 } }
articledb.author
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 1
{ "age" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 } } on : myshardrs01 Timestamp(1, 0)
articledb.comment
shard key: { "nickname" : "hashed" }
unique: false
balancing: true
chunks:
myshardrs01 2
myshardrs02 2
{ "nickname" : { "$minKey" : 1 } } -->> { "nickname" : NumberLong("-4611686018427387902") } on : myshardrs01 Timestamp(1, 0)
{ "nickname" : NumberLong("-4611686018427387902") } -->> { "nickname" : NumberLong(0) } on : myshardrs01 Timestamp(1, 1)
{ "nickname" : NumberLong(0) } -->> { "nickname" : NumberLong("4611686018427387902") } on : myshardrs02 Timestamp(1, 2)
{ "nickname" : NumberLong("4611686018427387902") } -->> { "nickname" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 3)
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 726
myshardrs02 298
too many chunks to print, use verbose if you want to force print
Check whether the balancer is running (it starts automatically only when rebalancing is needed; you do not need to manage it):
mongos> sh.isBalancerRunning()
false
Check the current balancer state:
mongos> sh.getBalancerState()
true
Insert Tests After Sharding
Test 1 (hashed rule): log in to mongos and insert 1000 documents into comment in a loop:
mongos> use articledb
switched to db articledb
mongos> for(var i=1;i<=1000;i++){db.comment.insert({_id:i+"",nickname:"BoBo"+i})}
WriteResult({ "nInserted" : 1 })
mongos> db.comment.count()
1000
Tip: the loop is JavaScript syntax, because the mongo shell is a JavaScript shell.
Note: data inserted through the router must contain the shard key, otherwise it cannot be inserted.
Log in to the primary node of each shard and count the documents:
# first shard replica set
myshardrs01:PRIMARY> use articledb
switched to db articledb
myshardrs01:PRIMARY> db.comment.count()
507
# second shard replica set
myshardrs02:PRIMARY> use articledb
switched to db articledb
myshardrs02:PRIMARY> db.comment.count()
493
As you can see, the 1000 documents are distributed near-evenly across the 2 shards, allocated by the hash of the shard key.
This kind of distribution is very easy to scale horizontally: once the data needs more storage space, simply add another shard, which also improves performance.
Use db.comment.stats() to view the complete picture of a single collection; run on mongos, it shows how that collection's data is sharded.
Use sh.status() to view the sharding information of all collections in the current database.
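For example (a sketch run on mongos; only the relevant fields of the output are shown):

db.comment.stats().sharded
// true for a sharded collection
db.comment.stats().shards
// per-shard statistics, keyed by shard name (here myshardrs01 and myshardrs02)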
Test 2 (range rule): log in to mongos and insert 20,000 documents into author in a loop:
mongos> use articledb
switched to db articledb
mongos> for(var i=1;i<=20000;i++){db.author.save({"name":"BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo"+i,"age":NumberInt(i%120)})}
WriteResult({ "nInserted" : 1 })
mongos> db.author.count()
20000
After the insert succeeds, again check the data on the two shard replica sets separately.
Sharding result:
# first shard replica set
myshardrs01:PRIMARY> db
articledb
myshardrs01:PRIMARY> db.author.count()
20000
# second shard replica set
myshardrs02:PRIMARY> db
articledb
myshardrs02:PRIMARY> db.author.count()
0
Tip:
If you check the status and find the data has not been sharded, the possible causes are:
1) The system is busy and sharding is still in progress.
2) The chunks are not yet full. The default chunk size (chunksize) is 64 MB; data spills over to chunks on other shards only after a chunk fills up. For testing you can therefore reduce it, here to 1 MB:
use config
db.settings.save({_id:"chunksize",value:1})
Change it back after testing:
db.settings.save({_id:"chunksize",value: 64})
Note: reduce the chunk size first, then set up sharding. For testing, you can drop the collection, re-create the collection's sharding strategy, and then insert test data.
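To confirm the current value (run on mongos; the chunksize document does not exist until you set it):

use config
db.settings.find({ "_id": "chunksize" })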
Adding Another Router Node
Create the directory for logs:
#-----------mongos02
mkdir -p /home/mongodb/sharded_cluster/mymongos_27117/log
Create or edit the configuration file:
# vim /home/mongodb/sharded_cluster/mymongos_27117/mongos.conf
systemLog:
destination: file
path: /home/mongodb/sharded_cluster/mymongos_27117/log/mongod.log
logAppend: true
processManagement:
fork: true
pidFilePath: /home/mongodb/sharded_cluster/mymongos_27117/log/mongod.pid
net:
bindIp: 192.168.0.253
port: 27117
sharding:
configDB: myconfigrs/192.168.0.253:27019,192.168.0.253:27119,192.168.0.253:27219
Start mongos2:
/usr/bin/mongos -f /home/mongodb/sharded_cluster/mymongos_27117/mongos.conf
Log in to 27117 with the mongo client: you will find that the second router needs no sharding configuration at all, because the sharding configuration is already stored on the config servers.
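A quick check (same client command as above, but port 27117):

sh.status()
// should list the same myshardrs01 and myshardrs02 shards that were added through the first router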
Connecting Compass to the Sharded Cluster
See the documentation for details.
Clearing All Node Data (for Reference)
If an operation failed or something was misconfigured while building the sharded cluster and you need to start over, proceed as follows:
Step 1: find the processes of all test service nodes and, using the process IDs shown above, terminate them one by one with kill -9 pid.
Step 2: clear the data of all nodes, i.e. delete the contents of the directories that store data.
Step 3: review and fix the problematic configuration.
Step 4: start all nodes in turn, excluding the router nodes.
Step 5: initialize and configure the two shard replica sets and the config replica set.
Step 6: check the mongos router configuration and start mongos.
Step 7: log in to mongos with the mongo client and perform the related operations there.