
Building a Highly Available Kubernetes Cluster (keepalived + haproxy) (Repost)

A Kubernetes cluster with a single master node has an obvious defect: once the master fails, the cluster can no longer be used. Following the official procedure, this article builds a Kubernetes cluster with multiple load-balanced master nodes. The official documentation offers two topologies: stacked control plane nodes and external etcd nodes. This article uses the first topology, built with keepalived + haproxy. The complete topology is shown below:

(Topology diagram: stacked control plane nodes)

(Topology diagram: external etcd nodes)

A master node runs four services: etcd, apiserver, controller-manager, and scheduler. For etcd, controller-manager, and scheduler, Kubernetes already provides high availability on its own: in a multi-master cluster every master starts all three services, but only one instance of each is active at any given time, chosen by leader election. To make the whole cluster highly available, therefore, only the apiserver needs an HA setup.
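One way to see this leader election in practice, once the cluster is up, is to look at the lock objects that controller-manager and scheduler maintain in kube-system. This is just a sanity check, assuming kubectl is already configured against the cluster; the exact lock object can vary across Kubernetes versions:

# The holderIdentity field names the master currently holding the leader lock.
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity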

keepalived is a high-performance server high-availability / hot-standby solution that prevents a single point of failure from taking a service down. It works in master/backup mode and needs at least two servers. For example, keepalived can join three servers into a cluster that exposes a single virtual IP; under normal conditions the virtual NIC carrying that IP is visible on only one of the servers. If that server fails, keepalived immediately moves the IP to one of the remaining two servers, so the IP stays usable.

haproxy is a free, fast, and reliable proxy offering high availability and load balancing for TCP (layer 4) and HTTP (layer 7) applications, with support for virtual hosts. Here haproxy load-balances the backend apiserver instances, which is what makes the apiserver service highly available.

In the keepalived + haproxy scheme used in this article, keepalived provides a stable entry point to the outside, while haproxy balances the load internally. Because haproxy runs on the master nodes, it would die together with a failing master; to avoid this, we deploy haproxy on every master node, making the haproxy service itself highly available. Since multiple masters hold leader elections, the number of master nodes should be odd, so that a tie vote cannot occur.

Environment Setup

Step 1: Environment description

192.168.1.13    master-01
192.168.1.14    master-02
192.168.1.15    master-03
192.168.1.16    node-01

Step 2: Disable the firewall, SELinux, and swap (all nodes)

# 1. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# 2. Disable SELinux
setenforce 0
vim /etc/selinux/config
# change SELINUX=enforcing to SELINUX=disabled, then save and quit

# 3. Disable the swap partition
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

# 4. iptables setting
iptables -P FORWARD ACCEPT
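A quick way to confirm these settings took effect (just a sanity check, not part of the original procedure):

systemctl is-active firewalld     # expect: inactive
getenforce                        # expect: Permissive (Disabled after a reboot)
free -m | grep -i swap            # expect: 0 total / 0 used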

Step 3: Install docker (all nodes)

# 1. Install
yum install docker -y

# 2. Start now and enable on boot
systemctl start docker && systemctl enable docker

# 3. Configure. The path given for "graph" must already exist; replace the
#    registry mirror with your personal Aliyun accelerator address.
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://<your-aliyun-id>.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "graph": "/new-path/docker"
}
EOF

# 4. Restart docker so daemon.json takes effect
systemctl restart docker
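It is worth confirming that docker actually picked up the systemd cgroup driver, since kubelet must use the same driver. A quick check:

docker info | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd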

Step 4: Configure iptables when docker starts (all nodes)

vim /etc/systemd/system/docker.service
# add the following under the [Service] section:
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT

# reload the unit file and restart docker so the change takes effect
systemctl daemon-reload && systemctl restart docker

Step 5: Set the hostnames and hosts file (all nodes)

# Run the matching command on each server:
hostnamectl set-hostname master-01
hostnamectl set-hostname master-02
hostnamectl set-hostname master-03
hostnamectl set-hostname node-01

# Run on all servers:
cat >> /etc/hosts << EOF
192.168.1.13 master-01
192.168.1.14 master-02
192.168.1.15 master-03
192.168.1.16 node-01
EOF

Step 6: Configure the yum repositories (all nodes)

cat > /etc/yum.repos.d/docker.repo << EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# load the new kernel parameters
sysctl --system

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Step 7: Install keepalived (master nodes)

# 1. Install
yum install -y keepalived

# 2. Back up the config file
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-back

# 3. Edit the config file
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33          # real NIC that the virtual IP attaches to
    virtual_router_id 51
    # priority should differ on each server, e.g. 100, 90, 80; higher priority is preferred
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111
    }
    virtual_ipaddress {
        192.168.1.200        # virtual IP exposed externally
    }
    track_script {
        check_haproxy
    }
}
EOF

# 4. Start
systemctl start keepalived && systemctl enable keepalived && systemctl status keepalived

Note: the criteria for a successful keepalived setup are: 1. the virtual IP answers ping from any server; 2. the virtual IP is visible on only one server at a time; 3. after stopping any one server, the virtual IP moves to one of the remaining servers and keeps working.
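A minimal sketch for checking these three points by hand, assuming the interface (ens33) and virtual IP (192.168.1.200) from the config above:

# 1. From any machine on the subnet, the VIP should answer:
ping -c 3 192.168.1.200

# 2. On exactly one master the VIP appears on the NIC:
ip addr show ens33 | grep 192.168.1.200

# 3. Stop keepalived on the current holder and watch the VIP move:
systemctl stop keepalived                   # on the master that currently holds the VIP
ip addr show ens33 | grep 192.168.1.200     # on the other masters; one of them now shows it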

Step 8: Install haproxy (master nodes)

# 1. Install
yum install -y haproxy

# 2. Back up the config file
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg-back

# 3. Edit the config file
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode        tcp
    bind        *:6444                   # externally exposed port; must match the kubernetes config
    option      tcplog
    default_backend kubernetes-apiserver # backend service name
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master-01 192.168.1.13:6443 check    # backend server hostname and IP
    server  master-02 192.168.1.14:6443 check
    server  master-03 192.168.1.15:6443 check
EOF

# 4. Start
systemctl start haproxy && systemctl enable haproxy && systemctl status haproxy
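Before continuing, it is worth confirming that haproxy is actually listening on the frontend port on every master (a quick check, not in the original write-up):

ss -lnt | grep 6444    # haproxy should be listening on *:6444 on each master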

Step 9: Install kubelet, kubeadm and kubectl (all nodes)

# 1. Install
yum install -y kubelet kubeadm kubectl

# 2. Enable on boot and start
systemctl enable kubelet && systemctl start kubelet
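The command above installs whatever version the repository currently offers; since the config file in the next step pins kubernetesVersion to v1.18.0, it is safer to pin the packages too. A variant, assuming the Aliyun repository still carries these versions:

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0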

Step 10: Initialize the kubeadm config file on the first master (master)

# 1. Dump the default config file
kubeadm config print init-defaults > kubeadm-config.yaml

# 2. Edit the config file so that it looks like this
cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.13
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.1.200:6444"   # IP must match keepalived; port must match haproxy
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16    # flannel network IP range
  serviceSubnet: 10.96.0.0/12
scheduler: {}

# 3. Initialize
kubeadm init --config kubeadm-config.yaml
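If the init succeeds, kubeadm prints the commands for configuring kubectl plus the two join commands used in steps 12 and 13. Run the kubectl setup on this first master right away, since the flannel step below needs a working kubectl (these are the same commands kubeadm prints):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config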

Step 11: Install the flannel network (all nodes)

# 1. Add an IP-to-hostname mapping (raw.githubusercontent.com can be hard to reach)
cat >> /etc/hosts << EOF
151.101.76.133 raw.githubusercontent.com
EOF

# 2. Download and start flannel
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
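To verify that flannel came up, watch for one kube-flannel-ds pod per node and for the nodes turning Ready (the pod name prefix is what the upstream manifest uses; adjust if your copy differs):

kubectl get pods -n kube-system -o wide | grep flannel   # one kube-flannel-ds pod per node, Running
kubectl get nodes                                        # nodes should move to Ready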

Step 12: Add the remaining master nodes (master)

# 1. Copy the certificates from the first master to the master being added.
#    Replace <new-master-ip> with that node's IP (e.g. 192.168.1.14).
#    Run this on the master being added first:
mkdir -p /etc/kubernetes/pki/etcd
#    Then run these on the first master:
scp /etc/kubernetes/admin.conf root@<new-master-ip>:/etc/kubernetes/admin.conf
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@<new-master-ip>:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@<new-master-ip>:/etc/kubernetes/pki/etcd

# 2. kubeadm init on the first master printed two join commands at the end: one for
#    masters and one for nodes. Run the master one on the node being added:
kubeadm join 192.168.1.200:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:03e40218613fedde35123d1e0c81577d2f07285f7cda01000cf887ba17b2911f \
    --control-plane

# 3. The join command in turn prints a few commands; run them as well:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 13: Add the worker nodes (node)

# kubeadm init on the first master printed two join commands: one for masters and
# one for nodes. Run the node one on each worker:
kubeadm join 192.168.1.200:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:03e40218613fedde35123d1e0c81577d2f07285f7cda01000cf887ba17b2911f
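At this point all four machines should be registered. A quick check from any configured master (the names and count follow the environment table in step 1):

kubectl get nodes    # expect master-01/02/03 and node-01, all Ready once flannel is running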

Step 14: Verify that the setup is highly available

Shut down any one of the master nodes, then check whether the cluster keeps working normally.
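One concrete way to run this test, assuming the addresses used throughout this article:

# shut down one master, e.g. master-01
poweroff             # run on master-01

# from another master: the VIP should still answer and the apiserver should still respond
ping -c 3 192.168.1.200
kubectl get nodes    # master-01 shows NotReady, but the cluster keeps serving requests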

Step 15: Synchronize the server clocks across the cluster

yum -y install ntp ntpdate    # install the ntpdate time-sync tool
ntpdate cn.pool.ntp.org       # sync the time
hwclock --systohc             # write the system time to the hardware clock
timedatectl                   # check the system time

Conclusion

The highly available Kubernetes cluster built here runs version 1.18 and the environment is still in operation. If you run into problems, you are welcome to discuss and learn together.

Original article: https://www.cnblogs.com/rzstrong/p/13345994.html