
k8s Service load balancing with IPVS

1. k8s Service proxy modes

  • iptables and IPVS workflow diagram

  • k8s Service workflow diagram

2. k8s Service proxy mode: IPVS

  • IPVS can be enabled in two ways, depending on how kube-proxy was deployed (a kernel-module sketch follows this list):

    • Switching to IPVS mode on a kubeadm-deployed cluster:

      # kubectl edit configmap kube-proxy -n kube-system
      ...
      mode: "ipvs"
      ...
      # kubectl delete pod kube-proxy-btz4p -n kube-system
      Notes:
      1. The kube-proxy configuration is stored as a ConfigMap.
      2. For the change to take effect on every node, the kube-proxy pod on each node must be recreated.
      
    • Switching to IPVS mode on a binary (non-kubeadm) installation:

      # vi kube-proxy-config.yml
      mode: ipvs
      ipvs:
        scheduler: "rr"
      # systemctl restart kube-proxy
      Note: the configuration file name may differ depending on which reference you follow.
      
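    • Kernel-module prerequisite (an added note, not part of the original text): kube-proxy can only run in IPVS mode if the IPVS kernel modules are available on the node; otherwise it falls back to iptables. A minimal sketch, assuming the commonly required module names:

      # Load the IPVS scheduling modules and connection tracking
      modprobe ip_vs
      modprobe ip_vs_rr
      modprobe ip_vs_wrr
      modprobe ip_vs_sh
      modprobe nf_conntrack            # nf_conntrack_ipv4 on older kernels
      # Verify that the modules are loaded
      lsmod | grep -e ip_vs -e nf_conntrack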

3. k8s Service IPVS proxy mode: a worked example

  • Modify via the kubeadm method (a verification sketch follows the edit below)

    [root@k8s-master service]# kubectl edit configmap kube-proxy -n kube-system
    ...
    mode: "ipvs"     # locate the mode field here and set it to ipvs
    ...
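
  • To confirm the edit was saved, a minimal sketch (not in the original walkthrough) is to read the mode back from the stored ConfigMap; it should print the value set above:

    kubectl get configmap kube-proxy -n kube-system -o yaml | grep "mode:"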
    
  • The change is not applied immediately after editing the ConfigMap; existing kube-proxy pods keep running with the old configuration. Deleting a kube-proxy pod lets its DaemonSet recreate it with the new settings (a sketch for refreshing all nodes at once follows the output below):

    [root@k8s-master service]# kubectl get pods -n kube-system -o wide
    NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
    calico-kube-controllers-5dc87d545c-nscfb   1/1     Running   3          7d22h   10.244.235.203   k8s-master   <none>           <none>
    calico-node-j6rhw                          1/1     Running   3          7d22h   192.168.0.201    k8s-master   <none>           <none>
    calico-node-n7d6s                          1/1     Running   3          7d22h   192.168.0.203    k8s-node2    <none>           <none>
    calico-node-x86s2                          1/1     Running   3          7d22h   192.168.0.202    k8s-node1    <none>           <none>
    coredns-6d56c8448f-hkgnk                   1/1     Running   4          8d      10.244.235.204   k8s-master   <none>           <none>
    coredns-6d56c8448f-jfbjs                   1/1     Running   3          8d      10.244.235.202   k8s-master   <none>           <none>
    etcd-k8s-master                            1/1     Running   4          8d      192.168.0.201    k8s-master   <none>           <none>
    kube-apiserver-k8s-master                  1/1     Running   9          8d      192.168.0.201    k8s-master   <none>           <none>
    kube-controller-manager-k8s-master         1/1     Running   9          8d      192.168.0.201    k8s-master   <none>           <none>
    kube-proxy-fhgbd                           1/1     Running   3          7d23h   192.168.0.202    k8s-node1    <none>           <none>
    kube-proxy-l7q4r                           1/1     Running   3          8d      192.168.0.201    k8s-master   <none>           <none>
    kube-proxy-qwpjp                           1/1     Running   3          7d23h   192.168.0.203    k8s-node2    <none>           <none>
    kube-scheduler-k8s-master                  1/1     Running   10         8d      192.168.0.201    k8s-master   <none>           <none>
    
    
    [root@k8s-master service]# kubectl delete pod kube-proxy-fhgbd -n kube-system 
    pod "kube-proxy-fhgbd" deleted
    
    [root@k8s-master service]# kubectl get pods -n kube-system -o wide
    NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
    calico-kube-controllers-5dc87d545c-nscfb   1/1     Running   3          7d22h   10.244.235.203   k8s-master   <none>           <none>
    calico-node-j6rhw                          1/1     Running   3          7d22h   192.168.0.201    k8s-master   <none>           <none>
    calico-node-n7d6s                          1/1     Running   3          7d22h   192.168.0.203    k8s-node2    <none>           <none>
    calico-node-x86s2                          1/1     Running   3          7d22h   192.168.0.202    k8s-node1    <none>           <none>
    coredns-6d56c8448f-hkgnk                   1/1     Running   4          8d      10.244.235.204   k8s-master   <none>           <none>
    coredns-6d56c8448f-jfbjs                   1/1     Running   3          8d      10.244.235.202   k8s-master   <none>           <none>
    etcd-k8s-master                            1/1     Running   4          8d      192.168.0.201    k8s-master   <none>           <none>
    kube-apiserver-k8s-master                  1/1     Running   9          8d      192.168.0.201    k8s-master   <none>           <none>
    kube-controller-manager-k8s-master         1/1     Running   9          8d      192.168.0.201    k8s-master   <none>           <none>
    kube-proxy-g5d56                           1/1     Running   0          30s     192.168.0.202    k8s-node1    <none>           <none>
    kube-proxy-l7q4r                           1/1     Running   3          8d      192.168.0.201    k8s-master   <none>           <none>
    kube-proxy-qwpjp                           1/1     Running   3          7d23h   192.168.0.203    k8s-node2    <none>           <none>
    kube-scheduler-k8s-master                  1/1     Running   10         8d      192.168.0.201    k8s-master   <none>           <none>
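
  • To refresh kube-proxy on every node in one step instead of deleting pods one at a time, either of the following works (a sketch, assuming the standard kubeadm label k8s-app=kube-proxy on the kube-proxy pods):

    # Delete all kube-proxy pods; the DaemonSet recreates them with the new ConfigMap
    kubectl -n kube-system delete pod -l k8s-app=kube-proxy
    # Or restart the DaemonSet itself
    kubectl -n kube-system rollout restart daemonset kube-proxy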
    
  • Install the IPVS management tool ipvsadm on k8s-node1:

    [root@k8s-node1 ~]# yum install ipvsadm
    
  • Check on k8s-node1 whether IPVS mode has taken effect:

    [root@k8s-node1 ~]# ipvsadm -L -n
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.0.202:30001 rr
      -> 10.244.169.155:8443          Masq    1      0          0         
    TCP  192.168.0.202:30009 rr
      -> 10.244.36.79:80              Masq    1      0          0         
      -> 10.244.36.83:80              Masq    1      0          0         
      -> 10.244.169.169:80            Masq    1      0          0         
    TCP  10.96.0.1:443 rr
      -> 192.168.0.201:6443           Masq    1      0          0         
    TCP  10.96.0.10:53 rr
      -> 10.244.235.202:53            Masq    1      0          0         
      -> 10.244.235.204:53            Masq    1      0          0         
    TCP  10.96.0.10:9153 rr
      -> 10.244.235.202:9153          Masq    1      0          0         
      -> 10.244.235.204:9153          Masq    1      0          0         
    TCP  10.97.234.249:443 rr
      -> 10.244.169.155:8443          Masq    1      0          0         
    TCP  10.100.222.42:80 rr
      -> 10.244.36.79:80              Masq    1      0          0         
      -> 10.244.36.83:80              Masq    1      0          0         
      -> 10.244.169.169:80            Masq    1      0          0         
    TCP  10.104.161.168:80 rr
    TCP  10.110.198.136:8000 rr
      -> 10.244.169.154:8000          Masq    1      0          0         
    TCP  10.244.36.64:30001 rr
      -> 10.244.169.155:8443          Masq    1      0          0         
    TCP  10.244.36.64:30009 rr
      -> 10.244.36.79:80              Masq    1      0          0         
      -> 10.244.36.83:80              Masq    1      0          0         
      -> 10.244.169.169:80            Masq    1      0          0         
    TCP  127.0.0.1:30001 rr
      -> 10.244.169.155:8443          Masq    1      0          0         
    TCP  127.0.0.1:30009 rr
      -> 10.244.36.79:80              Masq    1      0          0         
      -> 10.244.36.83:80              Masq    1      0          0         
      -> 10.244.169.169:80            Masq    1      0          0         
    TCP  172.17.0.1:30001 rr
      -> 10.244.169.155:8443          Masq    1      0          0         
    TCP  172.17.0.1:30009 rr
      -> 10.244.36.79:80              Masq    1      0          0         
      -> 10.244.36.83:80              Masq    1      0          0         
      -> 10.244.169.169:80            Masq    1      0          0         
    UDP  10.96.0.10:53 rr
      -> 10.244.235.202:53            Masq    1      0          0         
      -> 10.244.235.204:53            Masq    1      0          0         
    
    
  • Note: as the output shows, kube-proxy on k8s-node1 has created IPVS virtual servers with round-robin (rr) scheduling for the ClusterIP and NodePort addresses, so the switch to IPVS mode has taken effect. The sketch below relates these entries to the cluster's Service objects.
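
  • A minimal cross-check sketch (not part of the original text) for relating the IPVS virtual servers above to Kubernetes objects:

    # ClusterIP and NodePort addresses correspond to the virtual servers (LocalAddress:Port)
    kubectl get svc -A
    # Pod addresses behind each virtual server correspond to the Service's Endpoints
    kubectl get endpoints -A
    # Inspect a single virtual server, e.g. the DNS Service at 10.96.0.10:53 from the output above
    ipvsadm -L -n -t 10.96.0.10:53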