
[Repost] Fixing: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24


 

When a pod was started, inspecting it kept showing the following error:

Warning  FailedCreatePodSandBox  3m18s                  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1506a90c486e2c187e21e8fb4b6888e5d331235f48eebb5cf44121cc587a6f05" network for pod "ds-d58vg": networkPlugin cni failed to set up pod "ds-d58vg_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24
Normal   SandboxChanged          3m1s (x12 over 4m13s)  kubelet  Pod sandbox changed, it will be killed and re-created.
Warning  FailedCreatePodSandBox  2m59s (x4 over 3m14s)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a8dc84257ca6f4543c223735dd44e79c1d001724a54cd20ab33e3a7596fba5c9" network for pod "ds-d58vg": networkPlugin cni failed to set up pod "ds-d58vg_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24

 

Check the ifconfig output:

# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 10.244.0.255
        inet6 fe80::80bc:10ff:feb0:9d1b  prefixlen 64  scopeid 0x20<link>
        ether 82:bc:10:b0:9d:1b  txqueuelen 1000  (Ethernet)
        RX packets 1478990  bytes 119510314 (113.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1486862  bytes 136242849 (129.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

...

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::605e:12ff:feb8:7ce3  prefixlen 64  scopeid 0x20<link>
        ether 62:5e:12:b8:7c:e3  txqueuelen 0  (Ethernet)
        RX packets 55074  bytes 9896264 (9.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 57738  bytes 5642813 (5.3 MiB)
        TX errors 0  dropped 10  overruns 0  carrier 0  collisions 0

 

Check the flannel subnet information:

# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
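
With flannel and the bridge CNI plugin, cni0 must hold the first address of the node's FLANNEL_SUBNET. The error above says cni0 on the failing node differs from its lease of 10.244.2.1/24, i.e. the bridge kept an address from an earlier flannel run. The two values can be compared directly on the affected node (assuming a default flannel install that writes /run/flannel/subnet.env):

# ip -4 addr show cni0 | grep inet
# grep FLANNEL_SUBNET /run/flannel/subnet.env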

 

One straightforward option is to delete the cni0 interface directly:

# ifconfig cni0 down
# ip link delete cni0
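
Once the stale bridge is gone, the next pod sandbox creation recreates cni0 with the address from the current lease. If it does not come back on its own, restarting the flannel pod on that node forces it; note that the label and namespace below assume the older official kube-flannel manifest (newer manifests deploy into a kube-flannel namespace), so adjust to your install:

# kubectl -n kube-system delete pod -l app=flannel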

 

This clears the error and the pod runs normally afterwards, but it knocks the DNS pods into CrashLoopBackOff:

# kubectl get po -o wide -n kube-system
NAME                      READY   STATUS             RESTARTS       AGE   IP             NODE     NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-7lswb   0/1     CrashLoopBackOff   9 (116s ago)   22h   10.244.0.3     master   <none>           <none>
coredns-6d8c4cb4d-84z48   0/1     CrashLoopBackOff   9 (2m6s ago)   22h   10.244.0.2     master   <none>           <none>
ds-4cqxm                  1/1     Running            0              33m   10.244.0.4     master   <none>           <none>
ds-d58vg                  1/1     Running            0              33m   10.244.2.185   node2    <none>           <none>
ds-sjxwn                  1/1     Running            0              33m   10.244.1.48    node1    <none>           <none>

 

Now describe one of the coredns pods:

# kubectl describe po coredns-6d8c4cb4d-84z48 -n kube-system
Name: coredns-6d8c4cb4d-84z48
Namespace: kube-system
Priority: 2000000000
......

Events:
  Type     Reason     Age                     From     Message
  ----     ------     ----                    ----     -------
  Warning  Unhealthy  28m (x5 over 29m)       kubelet  Liveness probe failed: Get "http://10.244.0.2:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    28m                     kubelet  Container coredns failed liveness probe, will be restarted
  Normal   Pulled     28m (x2 over 22h)       kubelet  Container image "registry.aliyuncs.com/google_containers/coredns:v1.8.6" already present on machine
  Normal   Created    28m (x2 over 22h)       kubelet  Created container coredns
  Normal   Started    28m (x2 over 22h)       kubelet  Started container coredns
  Warning  BackOff    9m29s (x27 over 16m)    kubelet  Back-off restarting failed container
  Warning  Unhealthy  4m32s (x141 over 29m)   kubelet  Readiness probe failed: Get "http://10.244.0.2:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
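
The probe failures are plain network timeouts rather than coredns crashing on its own, which the container logs can confirm (using the pod name from above):

# kubectl -n kube-system logs coredns-6d8c4cb4d-84z48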

 

So another fix was needed. Deleting the earlier pods did not help; the DNS pods stayed broken. In the end, the problem was only resolved by deleting the DNS pods and letting the Deployment recreate them:

# kubectl delete pod coredns-6d8c4cb4d-7lswb -n kube-system
pod "coredns-6d8c4cb4d-7lswb" deleted
# kubectl delete pod coredns-6d8c4cb4d-84z48 -n kube-system
pod "coredns-6d8c4cb4d-84z48" deleted

# kubectl get pod -n kube-system -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-8xghq   1/1     Running   0          3m48s   10.244.2.186   node2   <none>           <none>
coredns-6d8c4cb4d-q65vq   1/1     Running   0          3m48s   10.244.1.49    node1   <none>           <none>
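
To verify that cluster DNS is actually serving again, a quick in-cluster lookup helps; this is the standard check from the Kubernetes DNS debugging guide (busybox:1.28 is pinned because nslookup is broken in some newer busybox images):

# kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default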

 

Original article: https://blog.csdn.net/red_sky_blue/article/details/123401541