
ZooKeeper Overview


ZooKeeper is a distributed coordination service.

Download link: http://mirror.cogentco.com/pub/apache/zookeeper/


1. ZooKeeper Cluster Structure

(1) leader

The master node of the zk cluster. When a client registers data with zk, the write goes through the leader, which synchronizes the data to all the slave nodes in the cluster.

(2) follower

A slave node of the zk cluster: it keeps a copy of the data, accepts requests from the leader, and takes part in leader elections.

(3) observer

A slave node of the zk cluster: it keeps a copy of the data and accepts requests from the leader, but does not take part in leader elections.


2. ZooKeeper Leader Election

(1) At cluster startup

server1 starts and checks whether the cluster already has a leader; finding none, it votes for itself as leader;

server2 starts and likewise checks whether the cluster has a leader; finding none, it also votes for itself;

in the second round of voting, server1 and server2 both vote for the server with the larger id, so server2 is elected leader.

(2) While the cluster is running

If the leader goes down while the cluster is running, the remaining machines enter the election state. Re-election is decided as follows:

first, by the version of the data each node holds: the node with the newest data becomes leader;

if every node's data is equally up to date, the node with the larger id is elected leader.
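The two re-election rules above can be sketched in plain Java (toy types of our own, not ZooKeeper internals): compare the data version first, then break ties with the larger server id.

```java
import java.util.List;

public class ElectionSketch {

    static class Candidate {
        final int serverId;
        final long zxid;   // transaction id: higher means newer data
        Candidate(int serverId, long zxid) {
            this.serverId = serverId;
            this.zxid = zxid;
        }
    }

    static Candidate electLeader(List<Candidate> candidates) {
        Candidate best = candidates.get(0);
        for (Candidate c : candidates) {
            // newer data wins; on equally new data, the larger id wins
            if (c.zxid > best.zxid
                    || (c.zxid == best.zxid && c.serverId > best.serverId)) {
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // servers 1 and 2 are equally up to date, server 3 is stale,
        // so server 2 wins on the larger id
        Candidate leader = electLeader(List.of(
                new Candidate(1, 100), new Candidate(2, 100), new Candidate(3, 90)));
        System.out.println("elected leader: server " + leader.serverId);
    }
}
```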


3. Installation and Configuration

Unpack and install

[root@Darren2 zookeeper-3.4.10]# tar -zxvf zookeeper-3.4.10.tar.gz

[root@Darren2 zookeeper-3.4.10]# mkdir -p /usr/local/zookeeper-3.4.10/data

[root@Darren2 zookeeper-3.4.10]# echo 1 > data/myid

Edit the configuration file

The `1` in `server.1` corresponds to the content of the `myid` file; every ZooKeeper node must have its own `myid` file.

[root@Darren2 zookeeper-3.4.10]# cd /usr/local/zookeeper-3.4.10/conf/

[root@Darren2 conf]# cp zoo_sample.cfg zoo.cfg

[root@Darren2 conf]# vim zoo.cfg

# tickTime: the basic time unit, in milliseconds
tickTime=2000
# initLimit: ticks a follower may take to initially connect and sync with the leader
initLimit=10
# syncLimit: ticks a follower may fall behind the leader before being dropped
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.10/data
clientPort=2181
# server.N=host:quorumPort:electionPort
server.1=Darren2:2888:3888
server.2=Darren3:2888:3888
server.3=Darren4:2888:3888
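initLimit and syncLimit are measured in ticks, not milliseconds. A quick check (plain Java, names ours) of what the values above allow in wall-clock time:

```java
public class ZkTimeouts {

    // wall-clock milliseconds for a limit expressed in ticks
    static int windowMs(int tickTimeMs, int ticks) {
        return tickTimeMs * ticks;
    }

    public static void main(String[] args) {
        int tickTime = 2000;  // ms, from the zoo.cfg above
        System.out.println("initLimit window: " + windowMs(tickTime, 10) + " ms");  // 20000
        System.out.println("syncLimit window: " + windowMs(tickTime, 5) + " ms");   // 10000
    }
}
```

So a follower gets 20 seconds to connect and sync at startup, and may lag at most 10 seconds afterwards.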


# Start ZooKeeper

[root@Darren2 bin]# ./zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED


[root@Darren2 bin]# ./zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg

Mode: Leader


[root@Darren3 bin]# ./zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg

Mode: Follower


4. ZooKeeper Commands

Data in ZooKeeper is essentially key-value; a single piece of data is called a znode. A znode should not be too large, typically under 10 KB (the official limit is 1 MB). Oversized znodes keep the cluster's nodes from synchronizing the data in real time and maintaining consistency.

The key is expressed as a path, e.g. `/dir1` with value `value`.

znode types

(1) persistent: the default node type; once created it remains until the data is deleted explicitly;

(2) ephemeral: a short-lived node; if the client that created it loses its connection to the zk service, the node is deleted automatically by the server;

(3) sequential: a node with an auto-incrementing suffix; when sequential children are created under the same parent node, zk automatically appends an increasing sequence number to each child's name.
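The sequence numbers seen in the session below (e.g. `/dir30000000003`) come from a 10-digit, zero-padded counter that zk appends to the requested name. A minimal sketch of the formatting (the counter itself is maintained by zk per parent node; the values here are illustrative):

```java
public class SequentialName {

    // zk appends a 10-digit, zero-padded counter to the requested path
    static String sequentialName(String path, int counter) {
        return String.format("%s%010d", path, counter);
    }

    public static void main(String[] args) {
        System.out.println(sequentialName("/dir3", 3));  // /dir30000000003
        System.out.println(sequentialName("/dir3", 4));  // /dir30000000004
    }
}
```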

# Log in with the zk client

[root@Darren2 bin]# ./zkCli.sh

[zk: localhost:2181(CONNECTED) 0] help

ZooKeeper -server host:port cmd args

stat path [watch]

set path data [version]

ls path [watch]

delquota [-n|-b] path

ls2 path [watch]

setAcl path acl

setquota -n|-b val path

history

redo cmdno

printwatches on|off

delete path [version]

sync path

listquota path

rmr path

get path [watch]

create [-s] [-e] path data acl

addauth scheme auth

quit

getAcl path

close

connect host:port


[zk: localhost:2181(CONNECTED) 13] create /dir1 a1

[zk: localhost:2181(CONNECTED) 16] ls /

[dir1, zookeeper]


[zk: localhost:2181(CONNECTED) 15] get /dir1

a1

cZxid = 0x7

ctime = Sun Nov 26 13:05:34 CST 2017

mZxid = 0x7

mtime = Sun Nov 26 13:05:34 CST 2017

pZxid = 0x7

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x0

dataLength = 2

numChildren = 0


# Create an ephemeral znode; it is deleted automatically once the client quits

[zk: localhost:2181(CONNECTED) 17] create -e /dir2 b1


# Create a sequential znode

[zk: localhost:2181(CONNECTED) 1] create -s /dir3 c1

Created /dir30000000003

[zk: localhost:2181(CONNECTED) 2] ls /

[dir1, zookeeper, dir30000000003]

[zk: localhost:2181(CONNECTED) 3] create -s /dir3 c1

Created /dir30000000004

[zk: localhost:2181(CONNECTED) 4] ls /

[dir1, zookeeper, dir30000000003, dir30000000004]


# Create an ephemeral sequential znode

[zk: localhost:2181(CONNECTED) 5] create -s -e /dir3 c1


# Watch for events (`ls /dir1 watch` sets a one-time watch that fires on the next change to /dir1's children)

[zk: localhost:2181(CONNECTED) 4] ls /dir1 watch

WATCHER::

WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/dir1
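ZooKeeper watches are one-shot: once a watch fires, it is consumed and must be set again to observe further changes. A toy model of that behavior (our own classes, not the real client API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WatchModel {

    interface Watcher {
        void process(String event, String path);
    }

    private final Map<String, List<Watcher>> watches = new HashMap<>();

    void watch(String path, Watcher w) {
        watches.computeIfAbsent(path, k -> new ArrayList<>()).add(w);
    }

    void fire(String path, String event) {
        // remove the watchers before notifying them: watches are one-shot
        List<Watcher> ws = watches.remove(path);
        if (ws != null) {
            ws.forEach(w -> w.process(event, path));
        }
    }

    public static void main(String[] args) {
        WatchModel zk = new WatchModel();
        zk.watch("/dir1", (event, path) ->
                System.out.println("WATCHER:: " + event + " " + path));
        zk.fire("/dir1", "NodeChildrenChanged");  // triggers the watcher once
        zk.fire("/dir1", "NodeChildrenChanged");  // no output: watch already consumed
    }
}
```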


5. Basic Use of the ZooKeeper Client API

package zkdemo1;

import org.apache.zookeeper.ZooKeeper;

public class TestConnection {

    public static void main(String[] args) throws Exception {
        // connect string (the port defaults to 2181 if omitted);
        // 2000 is the session timeout in ms; no watcher, so the third argument is null
        ZooKeeper zk = new ZooKeeper("192.168.163.102:2181", 2000, null);

        // read the value stored at /dir1 (no watch, no Stat)
        byte[] data = zk.getData("/dir1", false, null);
        System.out.println(new String(data));

        zk.close();
    }
}

Output:

a1

