MongoDB Cluster Solutions: Sharding
MongoDB is a NoSQL database written in C++ and built on distributed file storage. It mainly addresses the problem of efficient access to massive data sets, providing a scalable, high-performance database storage solution for web applications.
Ways to implement a MongoDB cluster:
1. Replica Set: simply put, the servers in the cluster hold multiple copies of the data, so that if the primary node goes down, a secondary node can continue to provide service. The prerequisite is that the secondary's data must be consistent with the primary's, as shown in the diagram below:
MongoDB(M) denotes the primary node, MongoDB(S) a secondary node, and MongoDB(A) an arbiter node.
The M node stores the data and serves all reads and writes (insert, delete, update, query). S nodes do not serve requests by default, but can be configured to serve queries, which reduces load on the primary. The A node is a special node that serves no data at all; it only takes part in elections. When the primary goes down, the A node helps elect which S node is promoted to primary.
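To make the arbiter's role concrete, here is a toy sketch (not MongoDB's actual election protocol; `electPrimary` and the member list are invented for illustration): a primary can only be chosen while a strict majority of voting members is reachable, which is why two data nodes plus one arbiter can survive a single failure.

```javascript
// Toy model: a member becomes primary only if a strict majority of
// voting members is reachable; the arbiter votes but holds no data.
function electPrimary(members) {
  const alive = members.filter(m => m.healthy);
  const majority = Math.floor(members.length / 2) + 1;
  if (alive.length < majority) return null; // no quorum, no primary
  // pick the reachable data-bearing member with the highest priority
  const candidates = alive.filter(m => !m.arbiterOnly);
  if (candidates.length === 0) return null;
  return candidates.reduce((a, b) => (b.priority > a.priority ? b : a)).host;
}

const set = [
  { host: "192.168.1.109:11001", priority: 1, healthy: false, arbiterOnly: false }, // old primary, down
  { host: "192.168.1.107:11001", priority: 1, healthy: true,  arbiterOnly: false },
  { host: "192.168.1.110:11001", priority: 0, healthy: true,  arbiterOnly: true  }, // arbiter
];
console.log(electPrimary(set)); // "192.168.1.107:11001" — the surviving secondary steps up
```

With the old primary down, the secondary plus the arbiter still form a 2-of-3 majority, so a new primary can be elected.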
2. Master-Slave: similar to MySQL's master-slave replication. The configuration is straightforward, so I won't go into it here.
3. Sharding: similar to a Replica Set in that it also needs arbiter nodes, but Sharding additionally requires config servers and router nodes, as shown in the diagram below:
MongoDB(R) is the router node (mongos), the entry point for all requests to the cluster. Every request is coordinated through mongos: it is a request dispatcher, responsible for forwarding each data request to the appropriate shard server.
MongoDB(C1) is a config server. It stores the cluster's metadata, which describes the cluster's state and organization, including which chunks each shard stores and the key range of each chunk. mongos caches this metadata and uses it to route reads and writes.
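A rough sketch of the routing idea (the chunk ranges and the `routeToShard` helper are hypothetical illustrations, not mongos internals): mongos looks up the cached chunk whose key range contains the document's shard key value and forwards the request to that chunk's shard.

```javascript
// Hypothetical cached chunk metadata: each chunk is a half-open key
// range [min, max) assigned to one shard.
const chunks = [
  { min: -Infinity, max: 1000,     shard: "shard1" },
  { min: 1000,      max: 2000,     shard: "shard2" },
  { min: 2000,      max: Infinity, shard: "shard3" },
];

// Route a shard-key value to the shard owning the containing chunk.
function routeToShard(shardKeyValue) {
  const chunk = chunks.find(c => shardKeyValue >= c.min && shardKeyValue < c.max);
  return chunk.shard;
}

console.log(routeToShard(42));    // "shard1"
console.log(routeToShard(1500));  // "shard2"
console.log(routeToShard(99999)); // "shard3"
```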
Deployment environment:
Hostname | IP
node1    | 192.168.1.109
node2    | 192.168.1.107
node3    | 192.168.1.110
Step 1: Plan the port for each service
config server: 11000; router (mongos): 10000; shard1: 11001; shard2: 11002; shard3: 11003
Step 2: Create the corresponding directories on node1, node2, and node3 (the operations below are performed as the mongodb user)
[root@node1 ~]# cat /etc/passwd | grep mongodb
mongodb:x:10001:10001::/data/mongodb:/bin/bash
[root@node1 ~]# su - mongodb
[mongodb@node1 ~]$ pwd
/data/mongodb
[mongodb@node1 ~]$ mkdir -p config/{data,log}   ## config server data and log paths
[mongodb@node1 ~]$ mkdir -p mongos/log          ## router (mongos) log path
[mongodb@node1 ~]$ mkdir -p shard1/{data,log}   ## replica set shard1 data and log paths
[mongodb@node1 ~]$ mkdir -p shard2/{data,log}
[mongodb@node1 ~]$ mkdir -p shard3/{data,log}
[mongodb@node1 ~]$ tar -xf mongodb-linux-x86_64-rhel62-3.2.7.tgz   ## version 3.2.7 is used here; 3.2.9 is the latest on the official site at the time of writing
[mongodb@node1 ~]$ ll
drwxr-xr-x 4 mongodb dev     4096 Sep 30 20:50 config
drwxr-xr-x 3 mongodb dev     4096 Sep 30 20:55 mongodb-linux-x86_64-rhel62-3.2.7
-rw-r--r-- 1 mongodb dev 74938432 Sep 30 20:40 mongodb-linux-x86_64-rhel62-3.2.7.tgz
drwxr-xr-x 3 mongodb dev     4096 Sep 30 20:50 mongos
drwxr-xr-x 4 mongodb dev     4096 Sep 30 20:50 shard1
drwxr-xr-x 4 mongodb dev     4096 Sep 30 20:51 shard2
drwxr-xr-x 4 mongodb dev     4096 Sep 30 20:51 shard3
### Do the same on node2 and node3
Step 3: Start the config servers on node1, node2, and node3
[mongodb@node1 bin]$ pwd
/data/mongodb/mongodb-linux-x86_64-rhel62-3.2.7/bin
[mongodb@node1 bin]$ ./mongod --configsvr --port 11000 --dbpath /data/mongodb/config/data/ --logpath /data/mongodb/config/log/config.log --fork   ## "--fork" runs it in the background; run on node1
[mongodb@node2 bin]$ ./mongod --configsvr --port 11000 --dbpath /data/mongodb/config/data/ --logpath /data/mongodb/config/log/config.log --fork   ## run on node2
[mongodb@node3 bin]$ ./mongod --configsvr --port 11000 --dbpath /data/mongodb/config/data/ --logpath /data/mongodb/config/log/config.log --fork   ## run on node3
Step 4: Start the routers (mongos) on node1, node2, and node3
[mongodb@node1 bin]$ ./mongos --configdb 192.168.1.109:11000,192.168.1.107:11000,192.168.1.110:11000 --port 10000 --logpath /data/mongodb/mongos/log/mongos.log --fork   ## start the router on node1
[mongodb@node2 bin]$ ./mongos --configdb 192.168.1.109:11000,192.168.1.107:11000,192.168.1.110:11000 --port 10000 --logpath /data/mongodb/mongos/log/mongos.log --fork   ## start the router on node2
[mongodb@node3 bin]$ ./mongos --configdb 192.168.1.109:11000,192.168.1.107:11000,192.168.1.110:11000 --port 10000 --logpath /data/mongodb/mongos/log/mongos.log --fork   ## start the router on node3
Step 5: Set up and start the shards on node1, node2, and node3
./mongod --shardsvr --replSet shard1 --port 11001 --dbpath /data/mongodb/shard1/data --logpath /data/mongodb/shard1/log/shard1.log --fork --oplogSize 10240 --logappend   ## start shard1
./mongod --shardsvr --replSet shard2 --port 11002 --dbpath /data/mongodb/shard2/data --logpath /data/mongodb/shard2/log/shard2.log --fork --oplogSize 10240 --logappend   ## start shard2
./mongod --shardsvr --replSet shard3 --port 11003 --dbpath /data/mongodb/shard3/data --logpath /data/mongodb/shard3/log/shard3.log --fork --oplogSize 10240 --logappend   ## start shard3
### Do the same on node2 and node3
Step 6: Log in to a node and configure each shard on its corresponding port
1. Configure shard1
[mongodb@node1 bin]$ ./mongo --port 11001   ## connect to replica set shard1
> use admin
> config = { _id: "shard1", members: [ { _id: 0, host: "192.168.1.109:11001" }, { _id: 1, host: "192.168.1.107:11001" }, { _id: 2, host: "192.168.1.110:11001", arbiterOnly: true } ] }   ## "arbiterOnly" marks which member is the arbiter
> rs.initiate(config);   ## initialize shard1
2. Configure shard2
[mongodb@node1 bin]$ ./mongo --port 11002   ## connect to replica set shard2
> use admin
> config2 = { _id: "shard2", members: [ { _id: 0, host: "192.168.1.109:11002" }, { _id: 1, host: "192.168.1.107:11002", arbiterOnly: true }, { _id: 2, host: "192.168.1.110:11002" } ] }
> rs.initiate(config2);   ## initialize shard2
3. Configure shard3
[mongodb@node2 bin]$ ./mongo --port 11003   ## connect to replica set shard3
> use admin
> config3 = { _id: "shard3", members: [ { _id: 0, host: "192.168.1.109:11003", arbiterOnly: true }, { _id: 1, host: "192.168.1.107:11003" }, { _id: 2, host: "192.168.1.110:11003" } ] }
> rs.initiate(config3);   ## initialize shard3
### Note: a shard cannot be configured from the node that serves as its own arbiter, or the command errors out; you must run it from one of the other nodes.
### shard1 and shard2 were both configured on node1 because their arbiters are node3 and node2 respectively. For shard3, node1 is the arbiter, so it has to be configured from node2 or node3.
Step 7: With the replica sets configured, the router still needs to be linked to them, because every request goes through the router first and only then reaches the config servers
[mongodb@node1 bin]$ ./mongo --port 10000   ## connect to mongos
mongos> use admin
mongos> db.runCommand({ addshard: "shard1/192.168.1.109:11001,192.168.1.107:11001,192.168.1.110:11001" });   ## link the router to replica set shard1
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({ addshard: "shard2/192.168.1.109:11002,192.168.1.107:11002,192.168.1.110:11002" });   ## link the router to replica set shard2
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> db.runCommand({ addshard: "shard3/192.168.1.109:11003,192.168.1.107:11003,192.168.1.110:11003" });   ## link the router to replica set shard3
{ "shardAdded" : "shard3", "ok" : 1 }
Step 8: Check whether the configuration succeeded
1. Connect to mongos and check the sharding status
[mongodb@node1 bin]$ ./mongo --port 10000
mongos> use admin
mongos> sh.status()
shards:
{ "_id" : "shard1", "host" : "shard1/192.168.1.107:11001,192.168.1.109:11001" }
{ "_id" : "shard2", "host" : "shard2/192.168.1.109:11002,192.168.1.110:11002" }
{ "_id" : "shard3", "host" : "shard3/192.168.1.107:11003,192.168.1.110:11003" }
2. Connect to each shard's port and check the replica set status
[mongodb@node1 bin]$ ./mongo --port 11001
shard1:PRIMARY> rs.status()
"name" : "192.168.1.109:11001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"name" : "192.168.1.107:11001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"name" : "192.168.1.110:11001",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
## The other shards are checked the same way
Step 9: Insert data and test whether it is automatically sharded
[mongodb@node1 bin]$ ./mongo --port 10000
mongos> use admin
mongos> db.runCommand({ enablesharding: "testdb" });   ## enable sharding for the database "testdb"
{ "ok" : 1 }
mongos> db.runCommand({ shardcollection: "testdb.table1", key: { apId: 1, _id: 1 } })   ## MongoDB supports several kinds of shard keys; here a compound key is declared so that data in the "table1" collection of "testdb" is partitioned by it
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> use testdb;
switched to db testdb
mongos> for (var i = 1; i < 100000; i++) db.table1.save({ id: i, "test1": "testval1" });   ## insert roughly 100,000 documents into "table1" in "testdb"
[mongodb@node2 bin]$ ./mongo --port 11003   ## connect to shard3 from node2
shard3:PRIMARY> use testdb
switched to db testdb
shard3:PRIMARY> db.table1.find()
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1ce"), "id" : 1, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1cf"), "id" : 2, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d1"), "id" : 4, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d2"), "id" : 5, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d6"), "id" : 9, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d9"), "id" : 12, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1db"), "id" : 14, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1dd"), "id" : 16, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1de"), "id" : 17, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1e0"), "id" : 19, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1e3"), "id" : 22, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1e4"), "id" : 23, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1e5"), "id" : 24, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1e8"), "id" : 27, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1ea"), "id" : 29, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1eb"), "id" : 30, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1ee"), "id" : 33, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1ef"), "id" : 34, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1f2"), "id" : 37, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1f4"), "id" : 39, "test1" : "testval1" }
[mongodb@node3 bin]$ ./mongo --port 11002   ## connect to shard2 from node3
shard2:PRIMARY> use testdb
switched to db testdb
shard2:PRIMARY> db.table1.find()
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d4"), "id" : 7, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1d8"), "id" : 11, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1da"), "id" : 13, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1df"), "id" : 18, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2e4eec2b8ef67ad1e2"), "id" : 21, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1e9"), "id" : 28, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1f0"), "id" : 35, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1fa"), "id" : 45, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1fc"), "id" : 47, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad1fe"), "id" : 49, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad200"), "id" : 51, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad202"), "id" : 53, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad203"), "id" : 54, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad206"), "id" : 57, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad208"), "id" : 59, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad204"), "id" : 55, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad209"), "id" : 60, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad20c"), "id" : 63, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad20f"), "id" : 66, "test1" : "testval1" }
{ "_id" : ObjectId("57ef6d2f4eec2b8ef67ad210"), "id" : 67, "test1" : "testval1" }
### node1 is checked the same way
### Note: the data can only be viewed on each shard's primary node
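Each shard shows only a subset of the documents because chunks are contiguous ranges of the shard key, not round-robin assignments. A toy sketch of how a compound key such as { apId: 1, _id: 1 } orders documents (the comparator and sample docs are invented for illustration):

```javascript
// Compound shard-key ordering: compare on apId first, then on _id.
// Chunks are contiguous ranges of documents in this order.
function compareShardKey(a, b) {
  if (a.apId !== b.apId) return a.apId < b.apId ? -1 : 1;
  return a._id < b._id ? -1 : a._id > b._id ? 1 : 0;
}

const docs = [
  { apId: 2, _id: "c" },
  { apId: 1, _id: "b" },
  { apId: 1, _id: "a" },
];
docs.sort(compareShardKey);
console.log(docs.map(d => d.apId + "/" + d._id).join(",")); // "1/a,1/b,2/c"
```

Note that the inserted test documents above carry an `id` field but no `apId`, so they all share the same (missing) leading key value; choosing a shard key that actually appears in the documents gives the balancer more useful ranges to split on.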
Troubleshooting notes:
1. The clocks of every server in the cluster must be kept in sync (e.g. via NTP); otherwise starting mongos fails with "Error Number 5".
2. When configuring a Replica Set here, three nodes are specified, which is the minimum the official documentation recommends for production.
3. When deploying the cluster, if you don't want to use an arbiter, you can instead set member priorities ("priority") in the replica set to define which node is the primary and which are secondaries.
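As a sketch of that priority-based alternative (the hosts reuse the article's addresses, but this config is illustrative, not taken from the deployment above; the rs.initiate() call would be run inside the mongo shell):

```javascript
// All three members hold data; "priority" steers elections instead
// of an arbiter. Higher priority wins when the member is healthy.
var cfg = {
  _id: "shard1",
  members: [
    { _id: 0, host: "192.168.1.109:11001", priority: 2 },  // preferred primary
    { _id: 1, host: "192.168.1.107:11001", priority: 1 },
    { _id: 2, host: "192.168.1.110:11001", priority: 1 }
  ]
};
// rs.initiate(cfg);   // run this in the mongo shell
console.log(cfg.members.filter(m => m.priority === 2).length); // 1 — exactly one preferred primary
```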
Reposted from: https://blog.51cto.com/wangtianci/1858287