Setting Up an etcd Cluster with Docker
Contents:
- Host installation
- Cluster setup
- API operations
- API reference and etcdctl command reference
etcd is an open-source project started by the CoreOS team (written in Go; many projects of this kind are implemented in Go, which says a lot about the language). It provides a distributed key-value store and service discovery. etcd is very similar to ZooKeeper and Consul: it offers comparable functionality and a REST API, with the following characteristics:
- Simple: easy to install and use, with a REST API for interaction
- Secure: supports HTTPS with SSL certificates
- Fast: handles on the order of 10k reads/writes per second
- Reliable: uses the Raft algorithm to keep data in a distributed system available and consistent
etcd can run as a single instance or as a cluster. Since many projects, such as CoreOS and Kubernetes, use etcd for service discovery, below we use Docker to put together a simple etcd cluster.
1. Host Installation
Without Docker, installing etcd directly on a host is also very simple.
Install on Linux:
$ curl -L https://github.com/coreos/etcd/releases/download/v3.3.0-rc.0/etcd-v3.3.0-rc.0-linux-amd64.tar.gz -o etcd-v3.3.0-rc.0-linux-amd64.tar.gz && sudo tar xzvf etcd-v3.3.0-rc.0-linux-amd64.tar.gz && cd etcd-v3.3.0-rc.0-linux-amd64 && sudo cp etcd* /usr/local/bin/
Install on macOS:
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null
$ brew install etcd
Run the following command to check that etcd installed successfully:
$ etcd --version
etcd Version: 3.2.12
Git SHA: GitNotFound
Go Version: go1.9.2
Go OS/Arch: darwin/amd64
2. Cluster Setup
To build the etcd cluster, we first need Docker Machine to create three Docker hosts:
$ docker-machine create -d virtualbox manager1 &&
docker-machine create -d virtualbox worker1 &&
docker-machine create -d virtualbox worker2
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
manager1 - virtualbox Running tcp://192.168.99.100:2376 v17.11.0-ce
worker1 - virtualbox Running tcp://192.168.99.101:2376 v17.11.0-ce
worker2 - virtualbox Running tcp://192.168.99.102:2376 v17.11.0-ce
To avoid slow pulls of the official image from inside the Docker hosts, we also tag the etcd image and push it to a private registry:
$ docker tag quay.io/coreos/etcd 192.168.99.1:5000/quay.io/coreos/etcd:latest &&
docker push 192.168.99.1:5000/quay.io/coreos/etcd:latest &&
docker pull 192.168.99.1:5000/quay.io/coreos/etcd:latest
In addition, the private registry address needs to be configured on each Docker host, and the three hosts restarted; for the specific configuration, refer to the earlier article on Docker Swarm in this series.
Once the Docker hosts are ready, we use `docker-machine ssh` to enter each of the three hosts in turn and run the etcd container commands below.
On the manager1 host (node1, 192.168.99.100):
$ docker run -d --name etcd \
-p 2379:2379 \
-p 2380:2380 \
--volume=etcd-data:/etcd-data \
192.168.99.1:5000/quay.io/coreos/etcd \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name node1 \
--initial-advertise-peer-urls http://192.168.99.100:2380 --listen-peer-urls http://0.0.0.0:2380 \
--advertise-client-urls http://192.168.99.100:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-cluster-state new \
--initial-cluster-token docker-etcd \
--initial-cluster node1=http://192.168.99.100:2380,node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380
On the worker1 host (node2, 192.168.99.101):
$ docker run -d --name etcd \
-p 2379:2379 \
-p 2380:2380 \
--volume=etcd-data:/etcd-data \
192.168.99.1:5000/quay.io/coreos/etcd \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name node2 \
--initial-advertise-peer-urls http://192.168.99.101:2380 --listen-peer-urls http://0.0.0.0:2380 \
--advertise-client-urls http://192.168.99.101:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-cluster-state new \
--initial-cluster-token docker-etcd \
--initial-cluster node1=http://192.168.99.100:2380,node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380
On the worker2 host (node3, 192.168.99.102):
$ docker run -d --name etcd \
-p 2379:2379 \
-p 2380:2380 \
--volume=etcd-data:/etcd-data \
192.168.99.1:5000/quay.io/coreos/etcd \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name node3 \
--initial-advertise-peer-urls http://192.168.99.102:2380 --listen-peer-urls http://0.0.0.0:2380 \
--advertise-client-urls http://192.168.99.102:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-cluster-state new \
--initial-cluster-token docker-etcd \
--initial-cluster node1=http://192.168.99.100:2380,node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380
First, a quick explanation of each etcd flag (adapted from the etcd getting-started guide):
- `--name`: node name; defaults to `default`.
- `--data-dir`: path where the server stores its data; defaults to `${name}.etcd`.
- `--snapshot-count`: number of committed transactions after which a snapshot is written to disk.
- `--heartbeat-interval`: how often the leader sends heartbeats to the followers; defaults to 100 ms.
- `--election-timeout`: election timeout; if a follower receives no heartbeat within this interval, it starts a new election; defaults to 1000 ms.
- `--listen-peer-urls`: address for peer communication, e.g. `http://ip:2380`; comma-separated if there are several. All nodes must be able to reach it, so do not use localhost!
- `--listen-client-urls`: address for serving clients, e.g. `http://ip:2379,http://127.0.0.1:2379`; clients connect here to talk to etcd.
- `--advertise-client-urls`: the node's client listening address as announced to the rest of the cluster.
- `--initial-advertise-peer-urls`: the node's peer listening address as announced to the rest of the cluster.
- `--initial-cluster`: information about every node in the cluster, in the form `node1=http://ip1:2380,node2=http://ip2:2380,…`. Note that `node1` here is the name given by that node's `--name`, and `ip1:2380` is the value of its `--initial-advertise-peer-urls`.
- `--initial-cluster-state`: `new` when bootstrapping a new cluster; `existing` when joining a cluster that already exists.
- `--initial-cluster-token`: a token for creating the cluster, unique per cluster. With a unique token, recreating a cluster generates fresh cluster and member UUIDs even if the configuration is otherwise identical, avoiding conflicts between clusters and the unpredictable errors they cause.
These options can also be placed in a configuration file, which defaults to `/etc/etcd/etcd.conf`.
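To make the relationship between these flags concrete, here is a minimal sketch (Python, a hypothetical helper, not part of etcd) that derives each node's bootstrap flags from a single shared peer map; the names and IPs match the cluster above:

```python
# Sketch: build etcd bootstrap flags from one shared peer map.
# Hypothetical helper for illustration; names/IPs match the cluster above.

PEERS = {
    "node1": "192.168.99.100",
    "node2": "192.168.99.101",
    "node3": "192.168.99.102",
}

def etcd_flags(name, peers=PEERS, state="new", token="docker-etcd"):
    ip = peers[name]
    # Every node shares the same --initial-cluster string.
    initial_cluster = ",".join(
        f"{n}=http://{host}:2380" for n, host in peers.items()
    )
    return [
        "--data-dir=/etcd-data", "--name", name,
        "--initial-advertise-peer-urls", f"http://{ip}:2380",
        "--listen-peer-urls", "http://0.0.0.0:2380",
        "--advertise-client-urls", f"http://{ip}:2379",
        "--listen-client-urls", "http://0.0.0.0:2379",
        "--initial-cluster-state", state,
        "--initial-cluster-token", token,
        "--initial-cluster", initial_cluster,
    ]

print(" ".join(etcd_flags("node1")))
```

Generating the flags this way makes it obvious that only `--name`, the two advertise URLs, and (for a joining node) `--initial-cluster-state` differ between the three `docker run` commands.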
We can run `docker ps` to check that the etcd container came up:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
463380d23dfe 192.168.99.1:5000/quay.io/coreos/etcd "/usr/local/bin/et..." 2 hours ago Up 2 hours 0.0.0.0:2379-2380->2379-2380/tcp etcd
Then open a shell inside one of the etcd containers:
$ docker exec -it etcd bin/sh
Run the following command (to list the cluster members):
$ etcdctl member list
773d30c9fc6640b4: name=node2 peerURLs=http://192.168.99.101:2380 clientURLs=http://192.168.99.101:2379 isLeader=true
b2b0bca2e0cfcc19: name=node3 peerURLs=http://192.168.99.102:2380 clientURLs=http://192.168.99.102:2379 isLeader=false
c88e2cccbb287a01: name=node1 peerURLs=http://192.168.99.100:2380 clientURLs=http://192.168.99.100:2379 isLeader=false
As shown, the cluster has three members: `node2` is the leader, while `node1` and `node3` are followers.
etcdctl is etcd's command-line client (also written in Go). It wraps etcd's REST API calls, making etcd convenient to work with from the shell; a detailed list of etcdctl commands appears at the end of this article.
The commands above used version 2 of the etcd API; we can switch to version 3 manually:
$ export ETCDCTL_API=3 && /usr/local/bin/etcdctl put foo bar
OK
Quite a few commands and their output differ from the v2 API. For example, listing the cluster members under v3 gives:
$ etcdctl member list
773d30c9fc6640b4, started, node2, http://192.168.99.101:2380, http://192.168.99.101:2379
b2b0bca2e0cfcc19, started, node3, http://192.168.99.102:2380, http://192.168.99.102:2379
c88e2cccbb287a01, started, node1, http://192.168.99.100:2380, http://192.168.99.100:2379
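The v2 output is line-oriented and easy to post-process. As an illustrative sketch (not part of etcdctl), a few lines of Python can parse it and pick out the leader; the sample text is the v2 output shown earlier:

```python
import re

# Sketch: parse `etcdctl member list` (v2 output) and find the leader.
# Illustrative only; SAMPLE is the output shown above.

SAMPLE = """\
773d30c9fc6640b4: name=node2 peerURLs=http://192.168.99.101:2380 clientURLs=http://192.168.99.101:2379 isLeader=true
b2b0bca2e0cfcc19: name=node3 peerURLs=http://192.168.99.102:2380 clientURLs=http://192.168.99.102:2379 isLeader=false
c88e2cccbb287a01: name=node1 peerURLs=http://192.168.99.100:2380 clientURLs=http://192.168.99.100:2379 isLeader=false
"""

def parse_members(text):
    members = []
    for line in text.strip().splitlines():
        member_id, rest = line.split(":", 1)  # ID comes before the first colon
        fields = dict(re.findall(r"(\w+)=(\S+)", rest))
        fields["id"] = member_id
        members.append(fields)
    return members

def leader(members):
    return next(m["name"] for m in members if m["isLeader"] == "true")

print(leader(parse_members(SAMPLE)))  # node2
```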
Now let's walk through one more scenario: removing a node from the cluster and then adding it back. To demonstrate etcd's use of the Raft algorithm, we'll use the leader, `node2`, as the target.
In the etcd container on any host except `node2`, run the member-removal command (the member ID is required; using the name reports an error):
$ etcdctl member remove 773d30c9fc6640b4
Member 773d30c9fc6640b4 removed from cluster f84185fa5f91bdf6
List the cluster members again (v2 API):
$ etcdctl member list
b2b0bca2e0cfcc19: name=node3 peerURLs=http://192.168.99.102:2380 clientURLs=http://192.168.99.102:2379 isLeader=true
c88e2cccbb287a01: name=node1 peerURLs=http://192.168.99.100:2380 clientURLs=http://192.168.99.100:2379 isLeader=false
The leader `node2` has been removed from the cluster, and through Raft, `node3` has been elected the new leader.
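This works because Raft only needs a majority (a quorum) of members to elect a leader and commit writes. A tiny sketch of the arithmetic:

```python
# Sketch: Raft quorum arithmetic. A cluster of n members tolerates the
# loss of n - quorum(n) members and can still elect a leader.

def quorum(n):
    return n // 2 + 1

def fault_tolerance(n):
    return n - quorum(n)

for n in (1, 3, 5):
    print(n, quorum(n), fault_tolerance(n))
# After removing node2 from our 3-node cluster (quorum 2), the
# remaining node1 and node3 still form a majority and elect a leader.
```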
Before adding `node2` back into the cluster, we first need to run:
$ etcdctl member add node2 --peer-urls="http://192.168.99.101:2380"
Member 22b0de6ffcd98f00 added to cluster f84185fa5f91bdf6
ETCD_NAME="node2"
ETCD_INITIAL_CLUSTER="node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380,node1=http://192.168.99.100:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
Note that `ETCD_INITIAL_CLUSTER_STATE` is `existing`, which corresponds to the `--initial-cluster-state` flag we configure.
List the cluster members again (v2 API):
$ etcdctl member list
22b0de6ffcd98f00[unstarted]: peerURLs=http://192.168.99.101:2380
b2b0bca2e0cfcc19: name=node3 peerURLs=http://192.168.99.102:2380 clientURLs=http://192.168.99.102:2379 isLeader=true
c88e2cccbb287a01: name=node1 peerURLs=http://192.168.99.100:2380 clientURLs=http://192.168.99.100:2379 isLeader=false
The new member `22b0de6ffcd98f00` is in the `unstarted` state.
Now, on the `node2` host, run the Docker etcd cluster command again:
$ docker run -d --name etcd \
-p 2379:2379 \
-p 2380:2380 \
--volume=etcd-data:/etcd-data \
192.168.99.1:5000/quay.io/coreos/etcd \
/usr/local/bin/etcd \
--data-dir=/etcd-data --name node2 \
--initial-advertise-peer-urls http://192.168.99.101:2380 --listen-peer-urls http://0.0.0.0:2380 \
--advertise-client-urls http://192.168.99.101:2379 --listen-client-urls http://0.0.0.0:2379 \
--initial-cluster-state existing \
--initial-cluster-token docker-etcd \
--initial-cluster node1=http://192.168.99.100:2380,node2=http://192.168.99.101:2380,node3=http://192.168.99.102:2380
It does not succeed the way we hoped; check the logs:
$ docker logs etcd
2017-12-25 08:19:30.160967 I | etcdmain: etcd Version: 3.2.12
2017-12-25 08:19:30.161062 I | etcdmain: Git SHA: b19dae0
2017-12-25 08:19:30.161082 I | etcdmain: Go Version: go1.8.5
2017-12-25 08:19:30.161092 I | etcdmain: Go OS/Arch: linux/amd64
2017-12-25 08:19:30.161105 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
2017-12-25 08:19:30.161144 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2017-12-25 08:19:30.161195 I | embed: listening for peers on http://0.0.0.0:2380
2017-12-25 08:19:30.161232 I | embed: listening for client requests on 0.0.0.0:2379
2017-12-25 08:19:30.165269 I | etcdserver: name = node2
2017-12-25 08:19:30.165317 I | etcdserver: data dir = /etcd-data
2017-12-25 08:19:30.165335 I | etcdserver: member dir = /etcd-data/member
2017-12-25 08:19:30.165347 I | etcdserver: heartbeat = 100ms
2017-12-25 08:19:30.165358 I | etcdserver: election = 1000ms
2017-12-25 08:19:30.165369 I | etcdserver: snapshot count = 100000
2017-12-25 08:19:30.165385 I | etcdserver: advertise client URLs = http://192.168.99.101:2379
2017-12-25 08:19:30.165593 I | etcdserver: restarting member 773d30c9fc6640b4 in cluster f84185fa5f91bdf6 at commit index 14
2017-12-25 08:19:30.165627 I | raft: 773d30c9fc6640b4 became follower at term 11
2017-12-25 08:19:30.165647 I | raft: newRaft 773d30c9fc6640b4 [peers: [], term: 11, commit: 14, applied: 0, lastindex: 14, lastterm: 11]
2017-12-25 08:19:30.169277 W | auth: simple token is not cryptographically signed
2017-12-25 08:19:30.170424 I | etcdserver: starting server... [version: 3.2.12, cluster version: to_be_decided]
2017-12-25 08:19:30.171732 I | etcdserver/membership: added member 773d30c9fc6640b4 [http://192.168.99.101:2380] to cluster f84185fa5f91bdf6
2017-12-25 08:19:30.171845 I | etcdserver/membership: added member c88e2cccbb287a01 [http://192.168.99.100:2380] to cluster f84185fa5f91bdf6
2017-12-25 08:19:30.171877 I | rafthttp: starting peer c88e2cccbb287a01...
2017-12-25 08:19:30.171902 I | rafthttp: started HTTP pipelining with peer c88e2cccbb287a01
2017-12-25 08:19:30.175264 I | rafthttp: started peer c88e2cccbb287a01
2017-12-25 08:19:30.175339 I | rafthttp: added peer c88e2cccbb287a01
2017-12-25 08:19:30.178326 I | etcdserver/membership: added member cbd7fa8d01297113 [http://192.168.99.102:2380] to cluster f84185fa5f91bdf6
2017-12-25 08:19:30.178383 I | rafthttp: starting peer cbd7fa8d01297113...
2017-12-25 08:19:30.178410 I | rafthttp: started HTTP pipelining with peer cbd7fa8d01297113
2017-12-25 08:19:30.179794 I | rafthttp: started peer cbd7fa8d01297113
2017-12-25 08:19:30.179835 I | rafthttp: added peer cbd7fa8d01297113
2017-12-25 08:19:30.180062 N | etcdserver/membership: set the initial cluster version to 3.0
2017-12-25 08:19:30.180132 I | etcdserver/api: enabled capabilities for version 3.0
2017-12-25 08:19:30.180255 N | etcdserver/membership: updated the cluster version from 3.0 to 3.2
2017-12-25 08:19:30.180430 I | etcdserver/api: enabled capabilities for version 3.2
2017-12-25 08:19:30.183979 I | rafthttp: started streaming with peer c88e2cccbb287a01 (writer)
2017-12-25 08:19:30.184139 I | rafthttp: started streaming with peer c88e2cccbb287a01 (writer)
2017-12-25 08:19:30.184232 I | rafthttp: started streaming with peer c88e2cccbb287a01 (stream MsgApp v2 reader)
2017-12-25 08:19:30.185142 I | rafthttp: started streaming with peer c88e2cccbb287a01 (stream Message reader)
2017-12-25 08:19:30.186518 I | etcdserver/membership: removed member cbd7fa8d01297113 from cluster f84185fa5f91bdf6
2017-12-25 08:19:30.186573 I | rafthttp: stopping peer cbd7fa8d01297113...
2017-12-25 08:19:30.186614 I | rafthttp: started streaming with peer cbd7fa8d01297113 (writer)
2017-12-25 08:19:30.186786 I | rafthttp: stopped streaming with peer cbd7fa8d01297113 (writer)
2017-12-25 08:19:30.186815 I | rafthttp: started streaming with peer cbd7fa8d01297113 (writer)
2017-12-25 08:19:30.186831 I | rafthttp: stopped streaming with peer cbd7fa8d01297113 (writer)
2017-12-25 08:19:30.186876 I | rafthttp: started streaming with peer cbd7fa8d01297113 (stream MsgApp v2 reader)
2017-12-25 08:19:30.187224 I | rafthttp: started streaming with peer cbd7fa8d01297113 (stream Message reader)
2017-12-25 08:19:30.187647 I | rafthttp: stopped HTTP pipelining with peer cbd7fa8d01297113
2017-12-25 08:19:30.187682 I | rafthttp: stopped streaming with peer cbd7fa8d01297113 (stream MsgApp v2 reader)
2017-12-25 08:19:30.187873 I | rafthttp: stopped streaming with peer cbd7fa8d01297113 (stream Message reader)
2017-12-25 08:19:30.187895 I | rafthttp: stopped peer cbd7fa8d01297113
2017-12-25 08:19:30.187911 I | rafthttp: removed peer cbd7fa8d01297113
2017-12-25 08:19:30.188034 I | etcdserver/membership: added member b2b0bca2e0cfcc19 [http://192.168.99.102:2380] to cluster f84185fa5f91bdf6
2017-12-25 08:19:30.188059 I | rafthttp: starting peer b2b0bca2e0cfcc19...
2017-12-25 08:19:30.188075 I | rafthttp: started HTTP pipelining with peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.188510 I | rafthttp: started peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.188533 I | rafthttp: added peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.188795 I | etcdserver/membership: removed member 773d30c9fc6640b4 from cluster f84185fa5f91bdf6
2017-12-25 08:19:30.193643 I | rafthttp: started streaming with peer b2b0bca2e0cfcc19 (writer)
2017-12-25 08:19:30.193730 I | rafthttp: started streaming with peer b2b0bca2e0cfcc19 (writer)
2017-12-25 08:19:30.193797 I | rafthttp: started streaming with peer b2b0bca2e0cfcc19 (stream MsgApp v2 reader)
2017-12-25 08:19:30.194782 I | rafthttp: started streaming with peer b2b0bca2e0cfcc19 (stream Message reader)
2017-12-25 08:19:30.195663 I | raft: 773d30c9fc6640b4 [term: 11] received a MsgHeartbeat message with higher term from b2b0bca2e0cfcc19 [term: 12]
2017-12-25 08:19:30.195716 I | raft: 773d30c9fc6640b4 became follower at term 12
2017-12-25 08:19:30.195736 I | raft: raft.node: 773d30c9fc6640b4 elected leader b2b0bca2e0cfcc19 at term 12
2017-12-25 08:19:30.196617 E | rafthttp: streaming request ignored (ID mismatch got 22b0de6ffcd98f00 want 773d30c9fc6640b4)
2017-12-25 08:19:30.197064 E | rafthttp: streaming request ignored (ID mismatch got 22b0de6ffcd98f00 want 773d30c9fc6640b4)
2017-12-25 08:19:30.197846 E | rafthttp: streaming request ignored (ID mismatch got 22b0de6ffcd98f00 want 773d30c9fc6640b4)
2017-12-25 08:19:30.198242 E | rafthttp: streaming request ignored (ID mismatch got 22b0de6ffcd98f00 want 773d30c9fc6640b4)
2017-12-25 08:19:30.201771 E | etcdserver: the member has been permanently removed from the cluster
2017-12-25 08:19:30.202060 I | etcdserver: the data-dir used by this member must be removed.
2017-12-25 08:19:30.202307 E | etcdserver: publish error: etcdserver: request cancelled
2017-12-25 08:19:30.202338 I | etcdserver: aborting publish because server is stopped
2017-12-25 08:19:30.202364 I | rafthttp: stopping peer b2b0bca2e0cfcc19...
2017-12-25 08:19:30.202482 I | rafthttp: stopped streaming with peer b2b0bca2e0cfcc19 (writer)
2017-12-25 08:19:30.202504 I | rafthttp: stopped streaming with peer b2b0bca2e0cfcc19 (writer)
2017-12-25 08:19:30.204143 I | rafthttp: stopped HTTP pipelining with peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.204186 I | rafthttp: stopped streaming with peer b2b0bca2e0cfcc19 (stream MsgApp v2 reader)
2017-12-25 08:19:30.204205 I | rafthttp: stopped streaming with peer b2b0bca2e0cfcc19 (stream Message reader)
2017-12-25 08:19:30.204217 I | rafthttp: stopped peer b2b0bca2e0cfcc19
2017-12-25 08:19:30.204228 I | rafthttp: stopping peer c88e2cccbb287a01...
2017-12-25 08:19:30.204241 I | rafthttp: stopped streaming with peer c88e2cccbb287a01 (writer)
2017-12-25 08:19:30.204255 I | rafthttp: stopped streaming with peer c88e2cccbb287a01 (writer)
2017-12-25 08:19:30.204824 I | rafthttp: stopped HTTP pipelining with peer c88e2cccbb287a01
2017-12-25 08:19:30.204860 I | rafthttp: stopped streaming with peer c88e2cccbb287a01 (stream MsgApp v2 reader)
2017-12-25 08:19:30.204878 I | rafthttp: stopped streaming with peer c88e2cccbb287a01 (stream Message reader)
2017-12-25 08:19:30.204891 I | rafthttp: stopped peer c88e2cccbb287a01
What does this long log tell us? Although we re-ran the etcd startup command, etcd read the old data directory and restored the previous member identity; that member has already been removed from the cluster, so the node stays stopped ("the member has been permanently removed from the cluster").
How do we fix it? Simple: delete the `etcd-data` volume we created earlier:
$ docker volume ls
DRIVER VOLUME NAME
local etcd-data
$ docker volume rm etcd-data
etcd-data
Then re-run the Docker etcd command above on the `node2` host; this time it succeeds.
List the cluster members once more (v2 API):
$ etcdctl member list
22b0de6ffcd98f00: name=node2 peerURLs=http://192.168.99.101:2380 clientURLs=http://192.168.99.101:2379 isLeader=false
b2b0bca2e0cfcc19: name=node3 peerURLs=http://192.168.99.102:2380 clientURLs=http://192.168.99.102:2379 isLeader=true
c88e2cccbb287a01: name=node1 peerURLs=http://192.168.99.100:2380 clientURLs=http://192.168.99.100:2379 isLeader=false
3. API Operations
etcd's REST API is used for key-value operations and cluster-member management. A few simple examples follow; see the appendix at the end for the full API.
1. Key-value management
Set a key:
$ curl http://127.0.0.1:2379/v2/keys/hello -XPUT -d value="hello world"
{"action":"set","node":{"key":"/hello","value":"hello world","modifiedIndex":17,"createdIndex":17}}
Get a key:
$ curl http://127.0.0.1:2379/v2/keys/hello
{"action":"get","node":{"key":"/hello","value":"hello world","modifiedIndex":17,"createdIndex":17}}
Delete a key:
$ curl http://127.0.0.1:2379/v2/keys/hello -XDELETE
{"action":"delete","node":{"key":"/hello","modifiedIndex":19,"createdIndex":17},"prevNode":{"key":"/hello","value":"hello world","modifiedIndex":17,"createdIndex":17}}
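These curl calls translate directly into any HTTP client. Below is a minimal Python sketch (a hypothetical helper, not an official client) that builds the equivalent v2 key-space requests; actually sending them requires a running etcd, of course:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Sketch: build v2 key-space requests equivalent to the curl calls above.
# Hypothetical helper; dispatching the request needs a live etcd endpoint.

BASE = "http://127.0.0.1:2379/v2/keys"

def key_request(key, method="GET", value=None):
    url = f"{BASE}/{key}"
    # PUT sends the value as a form-encoded body, mirroring `-d value=...`
    data = urlencode({"value": value}).encode() if value is not None else None
    return Request(url, data=data, method=method)

req = key_request("hello", method="PUT", value="hello world")
print(req.get_method(), req.full_url)
# To execute against a running cluster: urllib.request.urlopen(req).read()
```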
2. Member management
List all cluster members:
$ curl http://127.0.0.1:2379/v2/members
{"members":[{"id":"22b0de6ffcd98f00","name":"node2","peerURLs":["http://192.168.99.101:2380"],"clientURLs":["http://192.168.99.101:2379"]},{"id":"b2b0bca2e0cfcc19","name":"node3","peerURLs":["http://192.168.99.102:2380"],"clientURLs":["http://192.168.99.102:2379"]},{"id":"c88e2cccbb287a01","name":"node1","peerURLs":["http://192.168.99.100:2380"],"clientURLs":["http://192.168.99.100:2379"]}]}
Check whether the current node is the leader:
$ curl http://127.0.0.1:2379/v2/stats/leader
{"leader":"b2b0bca2e0cfcc19","followers":{"22b0de6ffcd98f00":{"latency":{"current":0.001051,"average":0.0029195000000000002,"standardDeviation":0.001646769458667484,"minimum":0.001051,"maximum":0.006367},"counts":{"fail":0,"success":10}},"c88e2cccbb287a01":{"latency":{"current":0.000868,"average":0.0022389999999999997,"standardDeviation":0.0011402923601720172,"minimum":0.000868,"maximum":0.004725},"counts":{"fail":0,"success":12}}}}
View the current node's own statistics:
$ curl http://127.0.0.1:2379/v2/stats/self
{"name":"node3","id":"b2b0bca2e0cfcc19","state":"StateLeader","startTime":"2017-12-25T06:00:28.803429523Z","leaderInfo":{"leader":"b2b0bca2e0cfcc19","uptime":"36m45.45263851s","startTime":"2017-12-25T08:13:02.103896843Z"},"recvAppendRequestCnt":6,"sendAppendRequestCnt":22}
View the store statistics:
$ curl http://127.0.0.1:2379/v2/stats/store
{"getsSuccess":9,"getsFail":4,"setsSuccess":9,"setsFail":0,"deleteSuccess":3,"deleteFail":0,"updateSuccess":0,"updateFail":0,"createSuccess":7,"createFail":0,"compareAndSwapSuccess":0,"compareAndSwapFail":0,"compareAndDeleteSuccess":0,"compareAndDeleteFail":0,"expireCount":0,"watchers":0}
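Since the stats payloads are plain JSON, deriving summary numbers is straightforward. An illustrative sketch (not part of etcd) that computes per-operation success rates from a trimmed version of the store stats shown above:

```python
import json

# Sketch: compute per-operation success rates from /v2/stats/store JSON.
# SAMPLE is a trimmed version of the store-stats response shown above.

SAMPLE = ('{"getsSuccess":9,"getsFail":4,"setsSuccess":9,"setsFail":0,'
          '"deleteSuccess":3,"deleteFail":0}')

def success_rates(stats):
    rates = {}
    for key in stats:
        if key.endswith("Success"):
            op = key[: -len("Success")]          # e.g. "gets", "sets"
            ok, fail = stats[key], stats.get(op + "Fail", 0)
            total = ok + fail
            rates[op] = ok / total if total else None
    return rates

print(success_rates(json.loads(SAMPLE)))
```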
Of course, cluster members can also be added and removed through the API.
4. API Reference and etcdctl Command Reference
etcd REST API reference (v2):
etcdctl command reference:
Command | Description |
---|---|
etcdctl set key value | set a key |
etcdctl get key | get a key's value |
etcdctl update key value | update a key |
etcdctl rm key | delete a key |
etcdctl mkdir dirname | create a directory (only if it does not exist) |
etcdctl setdir | create a directory (whether or not it exists) |
etcdctl updatedir | update a directory |
etcdctl rmdir | delete a directory |
etcdctl ls | list a directory |
etcdctl watch | watch a key for changes |
etcdctl exec-watch | watch a key and execute a command on change |
etcdctl member list | list cluster members |
etcdctl member add | add a cluster member |
etcdctl member remove | remove a cluster member |