K8s: configuring the flannel network plugin
docker:
    bridge: the default container network (a private bridge on the host)
    joined: share another container's network namespace (Docker "container" mode)
    open: share the host's network namespace directly (Docker "host" mode)
    none: no network configured at all
Kubernetes network communication:
    Container-to-container: containers inside the same Pod (they share one network namespace)
    Pod-to-Pod: Pod IP <==> Pod IP
    Pod-to-Service: Pod IP <==> ClusterIP
    Service to clients outside the cluster
Kubernetes itself ships no Pod-network implementation; it relies on external plugins to provide one.
The main options are:
    flannel      (uses VXLAN by default for inter-node traffic)
    calico
    canal
    kube-router
-----
Underlying solutions:
    virtual bridge
    multiplexing: MACVLAN
    hardware switching: SR-IOV (Single Root I/O Virtualization)
-------------------------------------------------------------------------
Pod-to-Pod communication between two hosts, using flannel.
vxlan: Virtual eXtensible LAN
    V   - Virtual
    X   - eXtensible
    LAN - Local Area Network
flannel
    supports multiple backends:
    VXLAN
        1. vxlan          (plain VXLAN encapsulation)
        2. DirectRouting  (route directly between hosts on the same subnet, fall back to VXLAN otherwise)
    host-gw: Host Gateway    # not recommended here: all nodes must share a layer-2 network, it cannot route across subnets, and with thousands of Pods it is prone to broadcast storms
    UDP: poor performance
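To confirm which backend a node is actually using, inspect the flannel.1 interface that flannel creates; with the VXLAN backend it shows up as a vxlan-type device (a quick check, not part of the original session):
[root@node1 ~]# ip -d link show flannel.1    # the detail line reports the vxlan id and the UDP port used for encapsulation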
Check the CNI plugin configuration:
[root@master ~]# cat /etc/cni/net.d/10-flannel.conflist
{
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true #埠對映
}
}
]
}
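On this CNI setup the portmap plugin is what implements hostPort for Pods. A minimal, hypothetical Pod spec that relies on it might look like this (the Pod name and host port are made up for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: hostport-demo                # hypothetical example
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    ports:
    - containerPort: 80
      hostPort: 8080                 # published on the node's IP by the portmap CNI plugin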
Note: in a kubeadm-installed cluster, the flannel plugin runs as containers (a DaemonSet in kube-system), not as a host service.
[root@master ~]# kubectl get daemonset -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel-ds-amd64 3 3 3 3 3 beta.kubernetes.io/arch=amd64 12d
kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 12d
kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 12d
kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 12d
kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 12d
kube-proxy 3 3 3 3 3 beta.kubernetes.io/
Check that the other two nodes are also running flannel:
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-27npt 1/1 Running 1 12d 10.244.0.5 master <none>
coredns-78fcdf6894-mbg8n 1/1 Running 1 12d 10.244.0.4 master <none>
etcd-master 1/1 Running 1 12d 192.168.68.10 master <none>
kube-apiserver-master 1/1 Running 1 12d 192.168.68.10 master <none>
kube-controller-manager-master 1/1 Running 1 12d 192.168.68.10 master <none>
kube-flannel-ds-amd64-qdmsx 1/1 Running 0 12d 192.168.68.20 node1 <none>
kube-flannel-ds-amd64-rhb49 1/1 Running 6 12d 192.168.68.30 node2 <none>
kube-flannel-ds-amd64-sd6mr 1/1 Running 1 12d 192.168.68.10 master <none>
kube-proxy-g9n4d 1/1 Running 1 12d 192.168.68.10 master <none>
kube-proxy-wrqt8 1/1 Running 2 12d 192.168.68.30 node2 <none>
kube-proxy-x7vc2 1/1 Running 0 12d 192.168.68.20 node1 <none>
kube-scheduler-master 1/1 Running 1 12d 192.168.68.10 master <none>
kubernetes-dashboard-767dc7d4d-7rmp8 1/1 Running 0 2d 10.244.1.72 node1 <none>
flannel's configuration is stored in the kube-flannel-cfg ConfigMap, which the flannel DaemonSet references:
[root@master ~]# kubectl get configmap -n kube-system
NAME DATA AGE
coredns 1 12d
extension-apiserver-authentication 6 12d
kube-flannel-cfg                     2         12d       # the flannel configuration
kube-proxy 2 12d
kubeadm-config 1 12d
kubelet-config-1.11 1 12d
kubernetes-dashboard-settings 1 2d
Inspect kube-flannel-cfg:
[root@master ~]# kubectl get configmap kube-flannel-cfg -o json -n kube-system
{
"apiVersion": "v1",
"data": {
"cni-conf.json": "{\n \"name\": \"cbr0\",\n \"plugins\": [\n {\n \"type\": \"flannel\",\n \"delegate\": {\n \"hairpinMode\": true,\n \"isDefaultGateway\": true\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\n \"portMappings\": true\n }\n }\n ]\n}\n",
"net-conf.json": "{\n \"Network\": \"10.244.0.0/16\",\n \"Backend\": {\n \"Type\": \"vxlan\"\n }\n}\n"
},
"kind": "ConfigMap",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"cni-conf.json\":\"{\\n \\\"name\\\": \\\"cbr0\\\",\\n \\\"plugins\\\": [\\n {\\n \\\"type\\\": \\\"flannel\\\",\\n \\\"delegate\\\": {\\n \\\"hairpinMode\\\": true,\\n \\\"isDefaultGateway\\\": true\\n }\\n },\\n {\\n \\\"type\\\": \\\"portmap\\\",\\n \\\"capabilities\\\": {\\n \\\"portMappings\\\": true\\n }\\n }\\n ]\\n}\\n\",\"net-conf.json\":\"{\\n \\\"Network\\\": \\\"10.244.0.0/16\\\",\\n \\\"Backend\\\": {\\n \\\"Type\\\": \\\"vxlan\\\"\\n }\\n}\\n\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"flannel\",\"tier\":\"node\"},\"name\":\"kube-flannel-cfg\",\"namespace\":\"kube-system\"}}\n"
},
"creationTimestamp": "2018-09-04T15:21:06Z",
"labels": {
"app": "flannel",
"tier": "node"
},
"name": "kube-flannel-cfg",
"namespace": "kube-system",
"resourceVersion": "1263",
"selfLink": "/api/v1/namespaces/kube-system/configmaps/kube-flannel-cfg",
"uid": "249399d6-b056-11e8-a432-000c29f33006"
}
}
From the output above:
    the default backend is vxlan
    the default Pod network is 10.244.0.0/16
flannel configuration parameters:
    Network: the CIDR network flannel manages; each node's Pod subnet is carved out of it
        10.244.0.0/16 ->
            master: 10.244.0.0/24
            node01: 10.244.1.0/24
            ...
            node255: 10.244.255.0/24
    SubnetLen: the prefix length used when splitting Network into per-node subnets; defaults to 24
    SubnetMin: the lowest subnet that may be allocated to a node, e.g. 10.244.10.0/24
    SubnetMax: the highest subnet that may be allocated to a node, e.g. 10.244.100.0/24
    Backend: vxlan, host-gw, or udp
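A sketch of a net-conf.json that sets these parameters explicitly; the SubnetMin/SubnetMax values below are illustrative, and flannel expects them as plain IPs rather than CIDR prefixes:
{
  "Network": "10.244.0.0/16",
  "SubnetLen": 24,
  "SubnetMin": "10.244.10.0",
  "SubnetMax": "10.244.100.0",
  "Backend": {
    "Type": "vxlan"
  }
}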
##########################
Network test 1
##########################
[root@master manifests]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
namespace: default
spec:
replicas: 3
selector:
matchLabels:
app: myapp
release: canary
template:
metadata:
labels:
app: myapp
release: canary
spec:
containers:
- name: myapp
image: ikubernetes/myapp:v2
ports:
- name: http
containerPort: 80
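The Deployment has to be applied before the Pods appear (the apply step was not captured above; it is the same command used later when the Deployment is re-created):
[root@master manifests]# kubectl apply -f deploy-demo.yaml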
[root@master manifests]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
myapp-deploy-67f6f6b4dc-2dqrp 1/1 Running 0 2m 10.244.2.85 node2 <none>
myapp-deploy-67f6f6b4dc-cqttt 1/1 Running 0 2m 10.244.1.73 node1 <none>
myapp-deploy-67f6f6b4dc-qqv7f 1/1 Running 0 2m 10.244.2.84 node2 <none>
pod-sa-demo 1/1 Running 0 3d 10.244.2.82 node2 <none>
Both node1 and node2 are running myapp Pods.
On master, exec into the Pod on node2 (10.244.2.85):
[root@master manifests]# kubectl exec -it myapp-deploy-67f6f6b4dc-2dqrp /bin/sh
Open a second master window and exec into the Pod on node1 (10.244.1.73):
[root@master ~]# kubectl exec -it myapp-deploy-67f6f6b4dc-cqttt /bin/sh
Install the tcpdump packet-capture tool on node1 and node2:
yum install -y tcpdump
[root@node2 ~]# brctl show cni0
bridge name bridge id STP enabled interfaces
cni0 8000.0a580af40201 no veth09de1518
veth91f026fc
vethb035fae2
Pod traffic on each node is forwarded through the cni0 bridge, so capture there first.
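In the shell of the Pod on node2 (10.244.2.85), start a ping toward the Pod on node1 so there is traffic to capture; the exact command was not recorded in the notes, but it would be along these lines:
/ # ping 10.244.1.73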
Run on node1:
[root@node1 ~]# tcpdump -i cni0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:44:52.773662 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 364, length 64
11:44:52.773690 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 364, length 64
11:44:53.774519 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 365, length 64
11:44:53.774562 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 365, length 64
11:44:54.774933 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 366, length 64
11:44:54.774975 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 366, length 64
Run on node2:
[root@node2 ~]# tcpdump -i cni0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:45:25.798557 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 397, length 64
11:45:25.798958 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 397, length 64
11:45:26.799021 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 398, length 64
11:45:26.799405 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 398, length 64
The packets come in through cni0 and go out through flannel.1; by the time they reach the physical NIC they have been encapsulated as VXLAN.
Capture on flannel.1:
[root@node1 ~]# tcpdump -i flannel.1 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
11:48:55.927311 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 606, length 64
11:48:55.927404 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 606, length 64
11:48:56.927997 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 607, length 64
11:48:56.928074 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 607, length 64
11:48:57.928449 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 608, length 64
11:48:57.928537 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 608, length 64
11:48:58.928862 IP 10.244.2.85 > 10.244.1.73: ICMP echo request, id 4096, seq 609, length 64
11:48:58.928918 IP 10.244.1.73 > 10.244.2.85: ICMP echo reply, id 4096, seq 609, length 64
Capture directly on the physical NIC:
tcpdump -i ens33 -nn
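On the physical NIC the ICMP packets are wrapped inside VXLAN. flannel's VXLAN backend encapsulates in UDP on port 8472 by default, so a filter like the following (a suggested refinement, not from the original session) isolates just the tunnelled traffic:
tcpdump -i ens33 -nn udp port 8472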
Edit the kube-flannel-cfg ConfigMap directly to change the backend mode:
kubectl edit configmap kube-flannel-cfg -n kube-system
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
"Directrouting":true #新加
}
}
[root@master ~]# ip route show
default via 192.168.68.2 dev ens33 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.10 metric 100
As the routes show, traffic to the other nodes' Pod subnets is still sent out through flannel.1.
[root@master ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.68.2 0.0.0.0 UG 100 0 0 ens33
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.2.0 10.244.2.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.68.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
Check the configuration again:
kubectl get configmap kube-flannel-cfg -o json -n kube-system
The output now contains "Directrouting": true, but the setting has not taken effect: the edit above was invalid because the comma after "Type": "vxlan" was missing. The corrected block is:
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan",
"Directrouting":true
}
}
Even after fixing the comma, the routes on node1 still have not changed. With direct routing in effect, the routes to the other nodes' Pod subnets would point at the physical interface instead of flannel.1:
[root@node1 ~]# ip route show
default via 192.168.68.2 dev ens33 proto static metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.20 metric 100
Switch to a different approach and reconfigure the network by re-applying the upstream flannel manifest:
https://github.com/coreos/flannel#flannel
The README there gives:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Download the manifest:
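For example (the exact download command was not recorded):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml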
Open the file and edit the net-conf.json section:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true      # newly added
      }
    }
Apply it:
[root@master flannel]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.extensions/kube-flannel-ds-amd64 unchanged
daemonset.extensions/kube-flannel-ds-arm64 unchanged
daemonset.extensions/kube-flannel-ds-arm unchanged
daemonset.extensions/kube-flannel-ds-ppc64le unchanged
daemonset.extensions/kube-flannel-ds-s390x unchanged
Now check the routes on the node again:
[root@node1 ~]# ip route show
default via 192.168.68.2 dev ens33 proto static metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.20 metric 100
Still no effect. The running flannel Pods only read net-conf.json when flanneld starts, and the apply above left the DaemonSets unchanged, so the new setting was never picked up. Delete the flannel resources and re-create them so the Pods restart with the new configuration:
# Warning: never run this on a production cluster; while flannel is gone, Pods across the whole cluster lose network connectivity.
[root@master flannel]# kubectl delete -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.extensions "kube-flannel-ds-amd64" deleted
daemonset.extensions "kube-flannel-ds-arm64" deleted
daemonset.extensions "kube-flannel-ds-arm" deleted
daemonset.extensions "kube-flannel-ds-ppc64le" deleted
daemonset.extensions "kube-flannel-ds-s390x" deleted
Re-create:
[root@master flannel]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Check again; this time the change has taken effect and routes to the other nodes' Pod subnets go straight out of the physical NIC:
[root@node1 ~]# ip route show
default via 192.168.68.2 dev ens33 proto static metric 100
10.244.0.0/24 via 192.168.68.10 dev ens33
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 192.168.68.30 dev ens33
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.20 metric 100
All the flannel Pods are running normally:
[root@master flannel]# kubectl get pods -n kube-system -w
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-27npt 1/1 Running 1 12d
coredns-78fcdf6894-mbg8n 1/1 Running 1 12d
etcd-master 1/1 Running 1 12d
kube-apiserver-master 1/1 Running 1 12d
kube-controller-manager-master 1/1 Running 1 12d
kube-flannel-ds-amd64-5lrjm 1/1 Running 0 2m
kube-flannel-ds-amd64-b8dfz 1/1 Running 0 2m
kube-flannel-ds-amd64-n45sn 1/1 Running 0 2m
kube-proxy-g9n4d 1/1 Running 1 12d
kube-proxy-wrqt8 1/1 Running 2 12d
kube-proxy-x7vc2 1/1 Running 0 12d
kube-scheduler-master 1/1 Running 1 12d
kubernetes-dashboard-767dc7d4d-7rmp8 1/1 Running 0 2d
##########################
Test again: ping from the Pod on node1 to a Pod on node2
[root@master manifests]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@master manifests]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
myapp-deploy-67f6f6b4dc-6k25w 1/1 Running 0 8s 10.244.1.74 node1 <none>
myapp-deploy-67f6f6b4dc-b28tl 1/1 Running 0 8s 10.244.2.86 node2 <none>
myapp-deploy-67f6f6b4dc-g5n95 1/1 Running 0 8s 10.244.2.87 node2 <none>
pod-sa-demo 1/1 Running 0 4d 10.244.2.82 node2 <none>
On master, open two shells, one into the Pod on node1 and one into a Pod on node2:
[root@master manifests]# kubectl exec -it myapp-deploy-67f6f6b4dc-6k25w /bin/sh
[root@master ~]# kubectl exec -it myapp-deploy-67f6f6b4dc-b28tl /bin/sh
From the node1 Pod, ping the node2 Pod:
/ # ping 10.244.2.86              # node1 Pod pinging the node2 Pod
PING 10.244.2.86 (10.244.2.86): 56 data bytes
64 bytes from 10.244.2.86: seq=0 ttl=62 time=1.020 ms
64 bytes from 10.244.2.86: seq=1 ttl=62 time=0.225 ms
The ping succeeds.
Capture on the physical NIC of both nodes:
[root@node1 ~]# tcpdump -i ens33 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
14:18:29.327728 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 58, length 64
14:18:29.327958 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 58, length 64
14:18:30.328669 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 59, length 64
14:18:30.328904 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 59, length 64
14:18:31.328810 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 60, length 64
14:18:31.329032 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 60, length 64
14:18:32.329177 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 61, length 64
14:18:32.329371 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 61, length 64
[root@node2 ~]# tcpdump -i ens33 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
14:19:17.368560 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 106, length 64
14:19:17.368623 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 106, length 64
14:19:18.369045 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 107, length 64
14:19:18.369105 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 107, length 64
14:19:19.369631 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 108, length 64
14:19:19.369689 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 108, length 64
14:19:20.370102 IP 10.244.1.74 > 10.244.2.86: ICMP echo request, id 3328, seq 109, length 64
14:19:20.370141 IP 10.244.2.86 > 10.244.1.74: ICMP echo reply, id 3328, seq 109, length 64
Success: the ICMP packets now appear on ens33 without any VXLAN encapsulation.
The traffic is routed directly between the nodes, so performance is excellent.
Direct routing works because each node carries a host route for every other node's Pod subnet:
ip route show
10.244.0.0/24 via 192.168.68.10 dev ens33
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
10.244.2.0/24 via 192.168.68.30 dev ens33
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.68.0/24 dev ens33 proto kernel scope link src 192.168.68.20 metric 100
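As a final sanity check (not part of the original session), capture on the VXLAN UDP port while the ping is running; with direct routing active it should stay silent, confirming the packets are no longer encapsulated:
[root@node1 ~]# tcpdump -i ens33 -nn udp port 8472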