
Docker Study Notes (8): Docker Networking


I. Docker Networking: Understanding docker0

  • Preparation: clean up the environment
    • Remove all existing Docker images and containers first, so you start completely fresh!

1. Check the host's network interfaces with ip addr

● Three network interfaces are visible (a sketch of the host-side commands follows this list):

  • lo: the local loopback address;
  • ens*: the virtual machine's or Alibaba Cloud server's (private) network interface;
  • docker0: the Docker network (bridge) address.
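A minimal sketch of how to check these from the host (commands only; exact names and addresses depend on your machine, though 172.17.0.1/16 is Docker's default docker0 subnet):

# Run on the host, not inside a container
ip addr                  # lists lo, the ens*/eth0 host interface, and docker0
ip addr show docker0     # docker0 normally sits at 172.17.0.1/16 by default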

● Question: how does Docker handle network access for containers?


2. Check a container's internal network with ip addr

● The Docker container does not have the ip command, so exec ip addr fails:

OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "ip": executable file not found in $PATH: unknown

《Resolving "exec failed: ... exec: 'ip': executable file not found in $PATH" (the container lacks the ip command)》 (a sketch of the usual fix follows)
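A minimal sketch of the usual fix, assuming a Debian-based Tomcat image (the iproute2 package provides ip, iputils-ping provides ping):

# Enter the container and install the missing tools (package names assume a Debian/Ubuntu base image)
docker exec -it tomcat01 /bin/bash
apt-get update && apt-get install -y iproute2 iputils-ping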


☺ ip addr now runs successfully:

[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

3. The Docker container and the Linux host can ping each other

# The tomcat container pings the host (public IP 120.76.136.52, private IP 172.22.26.169)
root@f1cfb81dedfd:/usr/local/tomcat# ping 120.76.136.52
PING 120.76.136.52 (120.76.136.52) 56(84) bytes of data.
64 bytes from 120.76.136.52: icmp_seq=1 ttl=63 time=2.97 ms
64 bytes from 120.76.136.52: icmp_seq=2 ttl=63 time=2.89 ms

root@f1cfb81dedfd:/usr/local/tomcat# ping 172.22.26.169
PING 172.22.26.169 (172.22.26.169) 56(84) bytes of data.
64 bytes from 172.22.26.169: icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from 172.22.26.169: icmp_seq=2 ttl=64 time=0.072 ms
64 bytes from 172.22.26.169: icmp_seq=3 ttl=64 time=0.086 ms

# The host pings the tomcat container (container interface 172.17.0.2)
[root@iZwz9535z41cmgcpkm7i81Z ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.106 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.083 ms

4. How Docker container networking works:

Every time Docker starts a container, it assigns the container an IP address. As long as Docker is installed, the host has a docker0 interface, which works in bridge mode using the veth-pair technique.

  • Inside the Docker container, check the IP information:

● The ping command is not found in the container: bash: ping: command not found

  • Fix: install iputils-ping with: apt -y install iputils-ping
  • On the host, check the IP information:

■ Start another container and check the interfaces again: another interface pair has appeared

[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat02 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

You can see that the container's interface and a host-side interface appear as a pair (the container's "16: eth0@if17" points to interface index 17 on the host); this is the veth-pair technique.
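A quick way to confirm the pairing from the host (a sketch; the veth interface name is generated by Docker and shown only as an illustration):

# On the host: the container's "16: eth0@if17" pairs with a host interface whose index is 17
ip -o link | grep veth    # e.g. "17: veth1ab2c3d@if16: ..." is the host end of tomcat02's eth0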

  • We find that each container brings its own interface, and these interfaces always come in pairs.

  • A veth-pair is a pair of virtual device interfaces that always appear together: one end attaches to the container's protocol stack, the other end attaches to the peer on the host. Thanks to this property, a veth-pair acts as a bridge connecting virtual network devices.

  • OpenStack, connections between Docker containers, and OVS connections all use the veth-pair technique.


5. Containers can ping each other, using the veth-pair technique:

# The tomcat01 container pings the tomcat02 container
root@f1cfb81dedfd:/usr/local/tomcat# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@f1cfb81dedfd:/usr/local/tomcat# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.125 ms

# The tomcat02 container pings the tomcat01 container
root@23254b923487:/usr/local/tomcat# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@23254b923487:/usr/local/tomcat# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.136 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.105 ms

docker0 acts as a virtual bridge/gateway for the containers: every container is connected to docker0, and communication between containers is forwarded through it.

■ Conclusion: containers tomcat01 and tomcat02 share the same bridge, docker0.

  • Any container started without specifying a network is attached to docker0, and Docker assigns it a free IP from the default range.
  • docker0 and the veth-pair technique (see the inspect sketch below):
    • Docker uses Linux bridging; on the host, docker0 is the bridge for Docker containers.
    • All of Docker's network interfaces are virtual (virtual forwarding is efficient), like an internal network.
    • When a container is deleted, its corresponding veth pair is removed as well.
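To see which containers are attached to the default bridge (docker0), inspecting it works as a quick check (a sketch; the container names match the ones used above):

docker network inspect bridge    # the "Containers" section lists tomcat01/tomcat02 with their 172.17.0.x addresses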



II. Container Interconnection: --link

1. (High-availability problem) Requirement: database url = ip;

Every time a container or the Linux host restarts, the container's IP may change, so anything hard-coded against a fixed IP breaks. How can services connect by name instead of by IP?

--- Answer: access the container by its name.


2. Test pinging by container name

[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known
  • Containers cannot reach each other by container name. How do we solve this?
# --link solves the connectivity problem (connecting by name)
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker run -d -P --name tomcat03 --link tomcat02 tomcat:9.0
81d38e78eea0756c654af6b51ac626ad7c086a7fe56589303ddb108fd0091f8d
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat03 ping tomcat02 
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.182 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.082 ms

# But the reverse direction does NOT work (tomcat02 was created without --link tomcat03)
# tomcat02 tries to ping tomcat03 by name
root@23254b923487:/usr/local/tomcat# ping tomcat03
ping: tomcat03: Name or service not known

3. The docker network / inspect commands:

  • Explore the inspect command

  • Inspect tomcat03:


  • Enter tomcat03 and view its hosts file:


4. Why tomcat03 can reach tomcat02 by container name:

With --link, tomcat03 gets tomcat02's IP written into its own hosts file!

[root@node1 ~]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.3	tomcat02 23254b923487
172.17.0.4	373a2f03bd8d

# --link simply adds a hard-coded entry  "172.17.0.3  tomcat02 23254b923487"  to our hosts file

● In essence, --link just edits the hosts mapping. It is deprecated; custom networks are the recommended approach!

  • Going forward, avoid --link and the default docker0 bridge
    • Reason: docker0 does not support access by container name!



III. Container Interconnection: Custom Networks

● The docker network command:

[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network --help 

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.

1. List all Docker networks with docker network ls

[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
8ddb7e9846c6   bridge    bridge    local
48e785b7efb3   host      host      local
7e07c5b5ae34   none      null      local

2. Network modes:

  • bridge: bridged mode (the default; networks you create yourself also use the bridge driver)
  • host: share the host's network stack
  • none: no network configured
  • container: share another container's network (rarely used; very limited!)

3. Test a custom network

# Starting a container the usual way actually uses docker0 by default [--net bridge], i.e. bridge mode
docker run -d -P --name tomcat01 tomcat:9.0
# which is equivalent to
docker run -d -P --name tomcat01 --net bridge tomcat:9.0

# docker0 characteristics: it is the default, but container names cannot be resolved; --link can work around that
  • Create a custom network with docker network create
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network create --help

Usage:  docker network create [OPTIONS] NETWORK

Create a network

Options:
      --attachable           Enable manual container attachment
      --aux-address map      Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
      --config-from string   The network from which to copy the configuration
      --config-only          Create a configuration only network
  -d, --driver string        Driver to manage the Network (default "bridge")
      --gateway strings      IPv4 or IPv6 Gateway for the master subnet
      --ingress              Create swarm routing-mesh network
      --internal             Restrict external access to the network
      --ip-range strings     Allocate container ip from a sub-range
      --ipam-driver string   IP Address Management Driver (default "default")
      --ipam-opt map         Set IPAM driver specific options (default map[])
      --ipv6                 Enable IPv6 networking
      --label list           Set metadata on a network
  -o, --opt map              Set driver specific options (default map[])
      --scope string         Control the network's scope
      --subnet strings       Subnet in CIDR format that represents a network segment


  • Create a network called mynet (a sketch of the commands is shown below):
  • The custom network is created:
  • Inspect the created network:
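The screenshots for these steps are not reproduced here; a minimal sketch of the commands, assuming the 192.168.0.0/16 subnet that the ping output below shows:

# Create a bridge network named mynet with an explicit subnet and gateway
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
# Verify it exists and inspect its configuration
docker network ls
docker network inspect mynet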

  • When creating containers, connect them to the custom network mynet (see the sketch below)

    • Containers on the same mynet network can find each other; Docker maintains the relationships between them
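A sketch of starting the two test containers on mynet (the names match the containers pinged below):

docker run -d -P --name tomcat-net-01 --net mynet tomcat:9.0
docker run -d -P --name tomcat-net-02 --net mynet tomcat:9.0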

4. Containers ping each other by name

# Now, without using --link, pinging by name works

# tomcat-net-01 pings tomcat-net-02 by name
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.102 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.064 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=4 ttl=64 time=0.070 ms
^C
--- tomcat-net-02 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.062/0.074/0.102/0.016 ms


# tomcat-net-02 pings tomcat-net-01 by name
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat-net-02 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.074 ms
^C
--- tomcat-net-01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.074/0.098/0.123/0.024 ms

5. Why custom networks matter:

With a custom network, Docker maintains all the name-to-address relationships for us. Different clusters can run on different networks, keeping each cluster's network isolated and healthy.

  • For example, a Redis cluster on its own /16 subnet and a MySQL cluster on a different /16 subnet (a sketch follows).
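A minimal sketch of that idea (the mysql-net subnet is illustrative; 172.38.0.0/16 is the one used for the Redis cluster in section V):

docker network create redis-net --subnet 172.38.0.0/16
docker network create mysql-net --subnet 172.39.0.0/16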



IV. Connecting Containers Across Networks

1. Scenario: tomcat01 (on docker0) cannot ping tomcat-net-01 (on mynet)
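A sketch of the failing case, assuming tomcat01 is still on the default bridge and tomcat-net-01 is on mynet:

# tomcat01 (172.17.0.x on docker0) cannot resolve tomcat-net-01 (192.168.0.x on mynet)
docker exec -it tomcat01 ping tomcat-net-01    # fails: the name is not resolvable from the default bridge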


2. Use docker network connect

[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network connect  mynet tomcat01
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker network inspect mynet

■ In the inspect output you can see that mynet has added the tomcat01 container to its network:

  • Test: tomcat01 is now connected to mynet
  • After connecting, tomcat01 is effectively placed on the mynet network as well: one container, two IP addresses (see the check below)!
    • Analogous to an Alibaba Cloud server having both a public IP and a private IP

■ Two networks cannot be joined to each other directly, but a container can be connected to an additional network.
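A quick check that tomcat01 now has two interfaces (a sketch):

docker exec -it tomcat01 ip addr    # expect one eth* interface on 172.17.0.x (docker0) and one on 192.168.0.x (mynet)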


■ Containers on different networks can now ping each other

  • Achieved via docker network connect
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.115 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.062 ms
^C
--- tomcat-net-01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.062/0.080/0.115/0.024 ms
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker exec -it tomcat-net-01 ping tomcat01
PING tomcat01 (192.168.0.4) 56(84) bytes of data.
64 bytes from tomcat01.mynet (192.168.0.4): icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from tomcat01.mynet (192.168.0.4): icmp_seq=2 ttl=64 time=0.064 ms

3. Conclusion:

To reach a container on another network, connect it with docker network connect!
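The general form of the command (as listed in docker network --help above):

docker network connect [OPTIONS] NETWORK CONTAINER
# e.g. docker network connect mynet tomcat01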



V. Hands-on: Deploying a Redis Cluster

1. The cluster needs its own network

2. Sharding + high availability + load balancing

3. Use shell scripting to start the 6 containers


4. The Redis cluster deployment steps:

# Preparation: remove existing containers so that running too many does not overload the system
docker rm -f $(docker ps -aq)
# Create the redis network
docker network create redis --subnet 172.38.0.0/16
# Check the redis network
docker network ls
docker network inspect redis
  • Create the six Redis config files with a shell loop
# Generate six redis configs via the loop
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
  • Check the generated node directories:
[root@iZwz9535z41cmgcpkm7i81Z ~]# cd /mydata/redis/
[root@iZwz9535z41cmgcpkm7i81Z redis]# ls
node-1  node-2  node-3  node-4  node-5  node-6
  • Start the node containers (a templated command; substitute ${port} with 1-6, or use the loop sketch below):
 docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
 -v /mydata/redis/node-${port}/data:/data \
 -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf 
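If you prefer not to substitute ${port} by hand six times, the same template can be wrapped in a loop (a sketch using the exact command above):

for port in $(seq 1 6); do
  docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
  -v /mydata/redis/node-${port}/data:/data \
  -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done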
  • Or start them one by one:
 docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
 -v /mydata/redis/node-1/data:/data \
 -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf 
 
 
  docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
 -v /mydata/redis/node-2/data:/data \
 -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf 
 
 
 docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
 -v /mydata/redis/node-3/data:/data \
 -v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf 
 
 
 docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
 -v /mydata/redis/node-4/data:/data \
 -v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf 
 
 
 docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
 -v /mydata/redis/node-5/data:/data \
 -v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf 
 
 
 docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
 -v /mydata/redis/node-6/data:/data \
 -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf 
  • Create the cluster (from inside the redis-1 container):
docker exec -it redis-1 /bin/sh

redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: 0bd617e83421999d29fb55c25f798d3600495e76 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 8b91a88e817dcff1a5f82d1ea577acf77799bd95 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 8806e059a5c76468aed86fddc1ec9f006c0de203 172.38.0.14:6379
   replicates d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619
S: 155b2b1ef7443e87b944cd745c22584aa5660628 172.38.0.15:6379
   replicates 0bd617e83421999d29fb55c25f798d3600495e76
S: 33e7146e8084a4cb93b1d057612f6a46652e357f 172.38.0.16:6379
   replicates 8b91a88e817dcff1a5f82d1ea577acf77799bd95
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: 0bd617e83421999d29fb55c25f798d3600495e76 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 8806e059a5c76468aed86fddc1ec9f006c0de203 172.38.0.14:6379
   slots: (0 slots) slave
   replicates d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619
S: 33e7146e8084a4cb93b1d057612f6a46652e357f 172.38.0.16:6379
   slots: (0 slots) slave
   replicates 8b91a88e817dcff1a5f82d1ea577acf77799bd95
M: d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 155b2b1ef7443e87b944cd745c22584aa5660628 172.38.0.15:6379
   slots: (0 slots) slave
   replicates 0bd617e83421999d29fb55c25f798d3600495e76
M: 8b91a88e817dcff1a5f82d1ea577acf77799bd95 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
  • Query the cluster info:
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:315
cluster_stats_messages_pong_sent:323
cluster_stats_messages_sent:638
cluster_stats_messages_ping_received:318
cluster_stats_messages_pong_received:315
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:638

127.0.0.1:6379> cluster nodes
8806e059a5c76468aed86fddc1ec9f006c0de203 172.38.0.14:6379@16379 slave d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619 0 1651111739893 4 connected
33e7146e8084a4cb93b1d057612f6a46652e357f 172.38.0.16:6379@16379 slave 8b91a88e817dcff1a5f82d1ea577acf77799bd95 0 1651111741407 6 connected
d5baadcc8b4db9ae93f9c01ed2a204e7d84d0619 172.38.0.13:6379@16379 master - 0 1651111740000 3 connected 10923-16383
155b2b1ef7443e87b944cd745c22584aa5660628 172.38.0.15:6379@16379 slave 0bd617e83421999d29fb55c25f798d3600495e76 0 1651111740000 5 connected
8b91a88e817dcff1a5f82d1ea577acf77799bd95 172.38.0.12:6379@16379 master - 0 1651111740906 2 connected 5461-10922
0bd617e83421999d29fb55c25f798d3600495e76 172.38.0.11:6379@16379 myself,master - 0 165111739000 1 connected 0-5460
  • Test setting a key-value pair:
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK

  • Open another terminal to test high availability
    • Stop redis-3, which is currently running in the cluster; if the high-availability setup works, the replica will take over when its master goes down
# In another terminal, stop the running container redis-3
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker stop redis-3
redis-3
[root@iZwz9535z41cmgcpkm7i81Z ~]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED             STATUS             PORTS                                              NAMES
5c15f03d7a55   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp   redis-6
f375fc1baaec   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp   redis-5
7e335e02b33d   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp   redis-4
4e721d20f8fd   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp   redis-2
e438501487a1   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   2 hours ago         Up 2 hours         0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp   redis-1


# With redis-3 stopped, test whether its replica takes over
172.38.0.13:6379> get a
Could not connect to Redis at 172.38.0.13:6379: Host is unreachable
(32.33s)
not connected> 
/data # redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"b"
172.38.0.14:6379>
  • After redis-3 goes down:

  • This proves the Docker-based Redis cluster setup is complete: the replica took over for the failed master!



☺ Reference:
Kuangshen's Bilibili video 《【狂神說Java】Docker最新超詳細版教程通俗易懂》 https://www.bilibili.com/video/BV1og4y1q7M4



If this article helped you, remember to give it a like. Thanks!