Setting up Docker Swarm
阿新 • Posted: 2021-08-20
Environment preparation
172.31.0.201 compo-node1.local
172.31.0.202 compo-node2.local
172.31.0.203 compo-node3.local
Note: Docker Swarm tells nodes apart by hostname, so each host must be given a unique hostname.
# run the matching command on each of the three nodes
[root@long-ubuntu ~]# hostnamectl set-hostname compo-node1.local
[root@long-ubuntu ~]# hostnamectl set-hostname compo-node2.local
[root@long-ubuntu ~]# hostnamectl set-hostname compo-node3.local
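If the nodes should also resolve each other by name, the host entries listed above can be appended to /etc/hosts on every node. A minimal sketch, assuming compo-node3.local sits at 172.31.0.203 as the address plan suggests:

```shell
# Append the cluster hosts to /etc/hosts on each node (run as root).
cat >> /etc/hosts <<'EOF'
172.31.0.201 compo-node1.local
172.31.0.202 compo-node2.local
172.31.0.203 compo-node3.local
EOF
```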
With the hostnames set, initialize the swarm on one of the nodes:
[root@compo-node1 ~]# docker swarm init --advertise-addr 172.31.0.201
Swarm initialized: current node (pjpobxje1z5h2v1ym0lwmszjf) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4ksr4at7jw6op59marbqorzez47vzl6gdbq3j2iehqvyobk3yw-3eks198rxc2dwh3ypw0mwniyd 172.31.0.201:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
List all nodes
[root@compo-node1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
pjpobxje1z5h2v1ym0lwmszjf * compo-node1.local Ready Active Leader 19.03.15
Add nodes
[root@compo-node2 ~]# docker swarm join --token SWMTKN-1-4ksr4at7jw6op59marbqorzez47vzl6gdbq3j2iehqvyobk3yw-3eks198rxc2dwh3ypw0mwniyd 172.31.0.201:2377
This node joined a swarm as a worker.
[root@compo-node3 ~]# docker swarm join --token SWMTKN-1-4ksr4at7jw6op59marbqorzez47vzl6gdbq3j2iehqvyobk3yw-3eks198rxc2dwh3ypw0mwniyd 172.31.0.201:2377
This node joined a swarm as a worker.
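If the original join command has been lost, it can be printed again at any time on a manager node:

```shell
# Print the join command (including the token) for new workers
docker swarm join-token worker
# Likewise for new managers
docker swarm join-token manager
```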
Add labels
# First, list all the nodes
[root@compo-node1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
pjpobxje1z5h2v1ym0lwmszjf * compo-node1.local Ready Active Leader 19.03.15
pjk55jhzcg50jyyvpofyknsoh compo-node2.local Ready Active 19.03.15
m8twd2rx7slhmh3hvwfcufxv9 compo-node3.local Ready Active 19.03.15
# Add the labels
[root@compo-node1 ~]# docker node update --label-add name=compo-node1 compo-node1.local
compo-node1.local
[root@compo-node1 ~]# docker node update --label-add name=compo-node2 compo-node2.local
compo-node2.local
[root@compo-node1 ~]# docker node update --label-add name=compo-node3 compo-node3.local
compo-node3.local
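The labels added above can later be used to pin services to particular nodes through placement constraints. A sketch, where the service name nginx-pinned is purely illustrative:

```shell
# Constrain every replica of a service to the node labelled name=compo-node1
docker service create --name nginx-pinned \
  --constraint 'node.labels.name==compo-node1' \
  nginx:1.18-alpine
```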
Promote the other nodes to the manager role for high availability
[root@compo-node1 ~]# docker node promote compo-node2.local
Node compo-node2.local promoted to a manager in the swarm.
[root@compo-node1 ~]# docker node promote compo-node3.local
Node compo-node3.local promoted to a manager in the swarm.
List all nodes again
[root@compo-node1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
pjpobxje1z5h2v1ym0lwmszjf * compo-node1.local Ready Active Leader 19.03.15
pjk55jhzcg50jyyvpofyknsoh compo-node2.local Ready Active Reachable 19.03.15
m8twd2rx7slhmh3hvwfcufxv9 compo-node3.local Ready Active Reachable 19.03.15
View node info
[root@compo-node1 ~]# docker node inspect compo-node2.local
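docker node inspect prints a large JSON document; a Go template passed with -f can pull out just the fields of interest, for example the labels added earlier:

```shell
# Show only the labels of a node
docker node inspect -f '{{ .Spec.Labels }}' compo-node2.local
# Show the node's availability (Active / Pause / Drain)
docker node inspect -f '{{ .Spec.Availability }}' compo-node2.local
```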
Create a network
# Help
[root@compo-node1 ~]# docker network --help
# Command to create the network
[root@compo-node1 ~]# docker network create -d overlay --subnet=10.200.0.0/21 --gateway=10.200.0.1 --attachable long-net
mqqmq52rjeiv970dkbgbq5rhl
View networks
[root@compo-node3 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
fda09d67e839 bridge bridge local
7dba8f8f069e docker_gwbridge bridge local
663fea23b067 host host local
o7hubsieqobu ingress overlay swarm
mqqmq52rjeiv long-net overlay swarm
e3dd1c687900 none null local
Verify the network
[root@compo-node1 ~]# docker network inspect long-net
[
    {
        "Name": "long-net",
        "Id": "mqqmq52rjeiv970dkbgbq5rhl",
        "Created": "2021-07-22T07:33:45.229235855Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.200.0.0/21",
                    "Gateway": "10.200.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]
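Because the network was created with --attachable ("Attachable": true in the output above), a standalone, non-service container can also join it, which is handy for ad-hoc debugging on the overlay:

```shell
# Run a one-off container attached to the long-net overlay network
docker run --rm -it --network long-net nginx:1.18-alpine sh
```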
Create containers to test
[root@compo-node1 ~]# docker service create --replicas 3 -p 8888:80 --network long-net --name nginx nginx:1.18-alpine
image nginx:1.18-alpine could not be accessed on a registry to record
its digest. Each node will access nginx:1.18-alpine independently,
possibly leading to different nodes running different
versions of the image.
p45rj6a1pb76vrlmsarweyexx
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
An explanation of the options in the command above:
--replicas 3 runs 3 replicas (tasks) of the service
-p 8888:80 publishes port 8888 on every node and forwards it to port 80 in the containers
--network long-net attaches the service to the long-net overlay network created earlier
--name nginx names the service nginx
nginx:1.18-alpine is the image the containers (the service) are created from
Verify
[root@compo-node1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
p45rj6a1pb76 nginx replicated 3/3 nginx:1.18-alpine *:8888->80/tcp
Verify the listening ports; every node in the swarm listens on the service port 8888
[root@compo-node1 ~]# ss -tanl
LISTEN 0 128 [::]:111 [::]:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 128 [::]:35927 [::]:*
LISTEN 0 128 *:8888 *:*
Access test
[root@compo-node1 ~]# curl 172.31.0.201:8888
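Thanks to the ingress routing mesh, the service answers on port 8888 of every node, not only the nodes actually running a replica. A quick loop can confirm this, assuming the node addresses from the environment table (with compo-node3.local at 172.31.0.203):

```shell
# Each node should return HTTP 200 from the nginx service
for ip in 172.31.0.201 172.31.0.202 172.31.0.203; do
  curl -s -o /dev/null -w "%{http_code} $ip\n" "$ip:8888"
done
```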
Verify the service
[root@compo-node1 ~]# docker service ps p45rj6a1pb76
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
bdpx7xicb6ui nginx.1 nginx:1.18-alpine compo-node2.local Running Running 7 minutes ago
wi3gawal92t5 nginx.2 nginx:1.18-alpine compo-node3.local Running Running 7 minutes ago
nhg6bg1b1tnj nginx.3 nginx:1.18-alpine compo-node2.local Running Running 5 minutes ago
Verify high availability: stop Docker (or shut down the server) on a node that is running containers and confirm that the replicas are rescheduled onto the remaining nodes
# Stop
[root@compo-node1 ~]# systemctl stop docker
# Check
[root@compo-node2 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
pjpobxje1z5h2v1ym0lwmszjf compo-node1.local Down Active Unreachable 19.03.15
pjk55jhzcg50jyyvpofyknsoh * compo-node2.local Ready Active Leader 19.03.15
m8twd2rx7slhmh3hvwfcufxv9 compo-node3.local Ready Active Reachable 19.03.15
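While the node is down, the tasks it was running should be rescheduled onto the surviving nodes; this can be confirmed from any remaining manager:

```shell
# Tasks from the failed node show a Shutdown desired state,
# with replacement replicas Running on other nodes
docker service ps nginx
```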
Start
[root@compo-node1 ~]# systemctl start docker
# Check
[root@compo-node1 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
pjpobxje1z5h2v1ym0lwmszjf * compo-node1.local Ready Active Reachable 19.03.15
pjk55jhzcg50jyyvpofyknsoh compo-node2.local Ready Active Leader 19.03.15
m8twd2rx7slhmh3hvwfcufxv9 compo-node3.local Ready Active Reachable 19.03.15
Delete (use with caution: running this on any one machine in the cluster removes the service from every machine)
[root@compo-node1 ~]# docker service rm nginx
nginx
Check again
[root@compo-node1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
tmm1faqhlt7j nginx-web replicated 10/10 nginx:1.18-alpine *:8880->80/tcp
[root@compo-node1 ~]# docker service rm nginx-web
nginx-web
[root@compo-node1 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
[root@compo-node2 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
[root@compo-node3 ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
Example: create a service with 10 replicas
[root@compo-node1 ~]# docker service create --replicas 10 -p 8888:80 --network long-net --name nginx nginx:1.18-alpine
image nginx:1.18-alpine could not be accessed on a registry to record
its digest. Each node will access nginx:1.18-alpine independently,
possibly leading to different nodes running different
versions of the image.
ljsbgphte9wr4oj6f3lvld8vh
overall progress: 10 out of 10 tasks
1/10: running [==================================================>]
2/10: running [==================================================>]
3/10: running [==================================================>]
4/10: running [==================================================>]
5/10: running [==================================================>]
6/10: running [==================================================>]
7/10: running [==================================================>]
8/10: running [==================================================>]
9/10: running [==================================================>]
10/10: running [==================================================>]
verify: Service converged
Check
[root@compo-node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@compo-node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
da7f30c5f7f3 nginx:1.18-alpine "/docker-entrypoint.…" 10 minutes ago Up 10 minutes 80/tcp nginx.4.rh4j5dxi5mb5e9a8sdnufo405
004e071d1288 nginx:1.18-alpine "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 80/tcp nginx.1.25zkndu23ea16hylehzdj9h4m
395219dcd109 nginx:1.18-alpine "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 80/tcp nginx.9.tb5p9u6dlfebj7bqph3vlqoms
2bccdaea3cd8 nginx:1.18-alpine "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 80/tcp nginx.3.ibvdx58d8hrqqqhi78omlgrp2
df741faa77b4 nginx:1.18-alpine "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 80/tcp nginx.5.fbmapne5e78ky0faszhpsevaa
[root@compo-node3 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bfc03216f76 nginx:1.18-alpine "/docker-entrypoint.…" 10 minutes ago Up 10 minutes 80/tcp nginx.8.eoyokyiu4avrzoyh2hx7w49vb
5d86a9864778 nginx:1.18-alpine "/docker-entrypoint.…" 10 minutes ago Up 10 minutes 80/tcp nginx.10.pb1hfo1d05d6xq77r0iq6u45b
6998be3d0dfe nginx:1.18-alpine "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 80/tcp nginx.7.q93goqhwa2cxm96k51simv2xi
5c8720961d0b nginx:1.18-alpine "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 80/tcp nginx.2.tmwt38hhao5w9k5r3cxmzrgsw
a9b3af2e4beb nginx:1.18-alpine "/docker-entrypoint.…" 11 minutes ago Up 11 minutes 80/tcp nginx.6.ux6uqk21zl0ssnxwjncmj8n2y
Scaling
[root@compo-node3 ~]# docker service scale nginx=3
nginx scaled to 3
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: No such image: nginx:1.18-alpine
3/3: running [==================================================>]
verify: Service converged
Check again
[root@compo-node1 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@compo-node2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
004e071d1288 nginx:1.18-alpine "/docker-entrypoint.…" 13 minutes ago Up 13 minutes 80/tcp nginx.1.25zkndu23ea16hylehzdj9h4m
2bccdaea3cd8 nginx:1.18-alpine "/docker-entrypoint.…" 14 minutes ago Up 13 minutes 80/tcp nginx.3.ibvdx58d8hrqqqhi78omlgrp2
[root@compo-node3 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c8720961d0b nginx:1.18-alpine "/docker-entrypoint.…" 13 minutes ago Up 13 minutes 80/tcp nginx.2.tmwt38hhao5w9k5r3cxmzrgsw
Summary
A point worth remembering: the argument to docker service scale is an absolute target replica count, not a delta. If, say, 300 nginx replicas are running, docker service scale nginx=3 leaves exactly 3 replicas; it does not merely remove 3. When shrinking a large service, scale down in stages, for example to 100 first and then to 20, rather than jumping straight from 300 to the final count.
[root@compo-node1 ~]# docker service scale nginx=3
nginx scaled to 3
overall progress: 3 out of 3 tasks
1/3: starting container failed: Address already in use
2/3:
3/3:
verify: Service converged
[root@compo-node1 ~]# docker service scale nginx=100
nginx scaled to 100
overall progress: 100 out of 100 tasks
verify: Service converged
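The staged scale-down described in the summary can be scripted; a minimal sketch, where the step sizes 100, 20 and 3 are illustrative:

```shell
# Scale the nginx service down in stages instead of one large jump.
# Each value is an absolute target replica count, not a delta.
for target in 100 20 3; do
  docker service scale "nginx=$target"
done
```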