ZooKeeper Cluster Deployment (Distributed)
Description
ZooKeeper can be used to guarantee the transactional consistency of data across the nodes of a ZooKeeper cluster.
How to Set Up a ZooKeeper Cluster
1. A ZooKeeper cluster must have no fewer than three nodes, and the system clocks of all nodes must be kept in sync.
2. On hadoop0, extract the ZooKeeper archive under /usr/local (run the command tar -zxvf zookeeper.tar.gz).
3. Set the environment variables.
Open /etc/profile and add the following:
# set java & hadoop
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=.:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin:$JAVA_HOME/bin:$PATH
Note: after modifying /etc/profile, remember to run source /etc/profile.
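A quick way to confirm the change took effect is to check that the ZooKeeper bin directory is on the PATH. In this sketch the export lines are inlined so the check is self-contained; on a real node, `source /etc/profile` is what sets them.

```shell
# Inline the same exports /etc/profile would provide (sketch only;
# on a real node these come from `source /etc/profile`).
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH
# Verify the ZooKeeper scripts are now reachable via PATH.
echo "$PATH" | grep -q "$ZOOKEEPER_HOME/bin" && echo "zookeeper bin on PATH"
```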
4. Enter the conf directory under the extracted ZooKeeper directory to modify the configuration file.
Rename the sample file: mv zoo_sample.cfg zoo.cfg
5. Edit zoo.cfg (vi zoo.cfg):
Change dataDir=/usr/local/zookeeper/data
Add server.0=hadoop0:2888:3888
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
The file should then look like this:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/data
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.0=hadoop0:2888:3888
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
Note:
server.0=hadoop0:2888:3888
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
These three lines configure the machines in the ZooKeeper cluster (hadoop0, hadoop1, hadoop2), identified as server.0, server.1, and server.2. The cluster consists of one leader and several followers. 2888 and 3888 are port numbers fixed in the configuration, not assigned at startup: 2888 is the quorum port that followers use to connect to the leader, and 3888 is the port used for leader election.
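The role of the two ports can be made explicit by parsing the three server.N entries from zoo.cfg. This is pure text processing, so it can be run anywhere:

```shell
# Parse each server.N line of zoo.cfg on '=' and ':' and label the
# two port fields (quorum port, then election port).
ports=$(
  printf '%s\n' \
    'server.0=hadoop0:2888:3888' \
    'server.1=hadoop1:2888:3888' \
    'server.2=hadoop2:2888:3888' |
  while IFS='=:' read -r id host quorum election; do
    echo "$id on $host: quorum port $quorum, election port $election"
  done
)
echo "$ports"
```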
6. Create the data directory: mkdir /usr/local/zookeeper/data
7. In the data directory, create a file named myid containing the value 0 (0 identifies the ZooKeeper instance on hadoop0).
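Steps 6 and 7 together amount to the following. A scratch directory is used here so the sketch can be run safely anywhere; on the real hadoop0 node, ZK_DATA would be /usr/local/zookeeper/data.

```shell
# Scratch directory stands in for /usr/local/zookeeper/data (sketch).
ZK_DATA=$(mktemp -d)
mkdir -p "$ZK_DATA"
# myid holds this node's server id: 0 on hadoop0 (1 on hadoop1, 2 on hadoop2).
echo 0 > "$ZK_DATA/myid"
cat "$ZK_DATA/myid"
```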
At this point the configuration on hadoop0 is complete; next, configure hadoop1 and hadoop2.
8. Copy the zookeeper directory to hadoop1 and hadoop2 (scp -r /usr/local/zookeeper hadoop1:/usr/local).
9. Copy the modified /etc/profile file to hadoop1 and hadoop2
(after copying, remember to run source /etc/profile on hadoop1 and hadoop2).
10. Change the value in myid to 1 on hadoop1 and to 2 on hadoop2.
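The per-host myid assignment in step 10 can be scripted. The sketch below only prints the commands (it assumes the hostnames hadoop0 to hadoop2 from this guide, and ssh access to them if you pipe the output to sh), so it is safe to run anywhere:

```shell
# Generate one ssh command per host, writing that host's id to myid.
# Pipe the output to sh to execute for real (requires ssh access).
cmds=$(
  id=0
  for host in hadoop0 hadoop1 hadoop2; do
    echo "ssh $host \"echo $id > /usr/local/zookeeper/data/myid\""
    id=$((id + 1))
  done
)
echo "$cmds"
```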
11. Start: run zkServer.sh start on each of the three nodes.
12. Verify: run zkServer.sh status on each of the three nodes; one node should report Mode: leader and the other two Mode: follower.
ZooKeeper Shell Operations
Start ZooKeeper: zkServer.sh start
Enter the ZooKeeper CLI: zkCli.sh