Appendix 028: Kubernetes v1.20.0 High-Availability Deployment Architecture II
阿新 · Published: 2020-12-22
[toc]
### Introduction to kubeadm
#### kubeadm overview
See [附003. Deploying Kubernetes with kubeadm](https://www.cnblogs.com/itzgr/p/11050543.html).
#### kubeadm features
See [附003. Deploying Kubernetes with kubeadm](https://www.cnblogs.com/itzgr/p/11050543.html).
#### Solution overview
- This solution deploys Kubernetes 1.20.0 with kubeadm;
- etcd is stacked on the master nodes;
- Keepalived: provides a highly available VIP;
- Nginx: runs as Pods on top of Kubernetes (the "in Kubernetes" mode), reverse-proxying to port 6443 on the three masters;
- Other major components deployed:
  - Metrics: resource metrics;
  - Dashboard: the Kubernetes web UI;
  - Helm: the Kubernetes package manager;
  - Ingress: exposes Kubernetes services;
  - Longhorn: dynamic storage for Kubernetes.
### Deployment planning
#### Node planning
Hostname|IP|Role|Services
:--:|:--:|:--:|:--:
master01|172.24.8.71|Kubernetes master|docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico
master02|172.24.8.72|Kubernetes master|docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico
master03|172.24.8.73|Kubernetes master|docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico
worker01|172.24.8.74|Kubernetes worker|docker, kubelet, kube-proxy, calico
worker02|172.24.8.75|Kubernetes worker|docker, kubelet, kube-proxy, calico
worker03|172.24.8.76|Kubernetes worker|docker, kubelet, kube-proxy, calico

Kubernetes high availability mainly means high availability of the control plane: multiple sets of master components and etcd, with the worker nodes reaching the masters through a load balancer.
![Architecture diagram](https://bed01.oss-cn-hangzhou.aliyuncs.com/study/kubernetes/f022/001.png)
Characteristics of the stacked HA topology, in which etcd runs alongside the master components:
- Requires fewer machines
- Simple to deploy and manage
- Easy to scale out
- Higher risk: if one host goes down, the cluster loses one set of both master components and etcd, so cluster redundancy takes a significant hit
***Tip: this walkthrough uses a Keepalived + Nginx architecture to achieve Kubernetes high availability.***
#### Initial preparation
```
[root@master01 ~]# hostnamectl set-hostname master01    # set the hostname on each of the other nodes accordingly
```
```
[root@master01 ~]# cat >> /etc/hosts << EOF
172.24.8.71 master01
172.24.8.72 master02
172.24.8.73 master03
172.24.8.74 worker01
172.24.8.75 worker02
172.24.8.76 worker03
EOF
[root@master01 ~]# wget http://down.linuxsb.com/k8sinit.sh
```
***Tip: this step only needs to be performed on master01.
Some features may require a newer kernel; for the upgrade procedure see "018. Upgrading the Linux Kernel". On kernels 4.19 and later, nf_conntrack_ipv4 has been renamed to nf_conntrack.***
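The kernel caveat above can be made mechanical. A bash sketch with a hypothetical helper (`conntrack_module` is illustrative, not a real command) that picks the module name for a given kernel version:

```shell
# Hypothetical helper: pick the conntrack module name for a kernel version.
# From 4.19 onward, nf_conntrack_ipv4 was merged into nf_conntrack.
conntrack_module() {
  local major minor
  major=$(echo "$1" | cut -d. -f1)
  minor=$(echo "$1" | cut -d. -f2)
  if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; }; then
    echo nf_conntrack
  else
    echo nf_conntrack_ipv4
  fi
}

# Which module name applies to this host:
conntrack_module "$(uname -r)"
```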
#### SSH trust
To make it easier to distribute files and run commands remotely, configure SSH trust from master01 to every other node.
```
[root@master01 ~]# ssh-keygen -f ~/.ssh/id_rsa -N ''
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master03
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker03
```
***Tip: this step only needs to be performed on master01.***
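The six ssh-copy-id invocations above follow directly from the node plan, so they can also be generated with the same loop idiom this post uses later. A bash sketch with the hostnames inlined:

```shell
# Generate the ssh-copy-id commands from the host list instead of
# writing them out one by one (hostnames from the node plan above).
ALL_NAMES=(master01 master02 master03 worker01 worker02 worker03)
for name in "${ALL_NAMES[@]}"; do
  echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@${name}"
done
```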
#### Other preparation
`[root@master01 ~]# vi environment.sh`
```
#!/bin/sh
#****************************************************************#
# ScriptName: environment.sh
# Author: xhy
# Create Date: 2020-05-30 16:30
# Modify Author: xhy
# Modify Date: 2020-05-30 16:30
# Version:
#***************************************************************#
# IPs of the cluster MASTER machines
export MASTER_IPS=(172.24.8.71 172.24.8.72 172.24.8.73)
# hostnames corresponding to the MASTER IPs
export MASTER_NAMES=(master01 master02 master03)
# IPs of the cluster NODE machines
export NODE_IPS=(172.24.8.74 172.24.8.75 172.24.8.76)
# hostnames corresponding to the NODE IPs
export NODE_NAMES=(worker01 worker02 worker03)
# IPs of all cluster machines
export ALL_IPS=(172.24.8.71 172.24.8.72 172.24.8.73 172.24.8.74 172.24.8.75 172.24.8.76)
# hostnames corresponding to all cluster IPs
export ALL_NAMES=(master01 master02 master03 worker01 worker02 worker03)
```
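With arrays like these, the /etc/hosts entries added earlier can be derived by pairing IPs and hostnames by index rather than typing them twice. A self-contained bash sketch (arrays repeated here so the snippet runs standalone):

```shell
# Pair each IP with its hostname by index to produce /etc/hosts entries.
ALL_IPS=(172.24.8.71 172.24.8.72 172.24.8.73 172.24.8.74 172.24.8.75 172.24.8.76)
ALL_NAMES=(master01 master02 master03 worker01 worker02 worker03)
for i in "${!ALL_IPS[@]}"; do
  printf '%s %s\n' "${ALL_IPS[$i]}" "${ALL_NAMES[$i]}"
done
```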
```
[root@master01 ~]# source environment.sh
[root@master01 ~]# chmod +x *.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp -rp /etc/hosts root@${all_ip}:/etc/hosts
scp -rp k8sinit.sh root@${all_ip}:/root/
ssh root@${all_ip} "bash /root/k8sinit.sh"
done
```
***Tip: the newest docker version compatible with Kubernetes 1.20.0 is 19.03.***
### Cluster deployment
#### Required components
The following packages must be installed on every machine:
- kubeadm: the command used to bootstrap the cluster;
- kubelet: runs on every node in the cluster and starts pods and containers;
- kubectl: the command-line tool for communicating with the cluster.
**kubeadm does not install or manage kubelet or kubectl, so you must make sure their versions satisfy the requirements of the Kubernetes control plane that kubeadm installs. A version mismatch can lead to unexpected errors or problems.
For details on installing these components, see [附001. Introduction to and Usage of kubectl](https://www.cnblogs.com/itzgr/p/10258937.html).**
***Tip: for the component versions compatible with Kubernetes 1.20, see: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md.***
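One concrete consequence of the note above is kubeadm's limited tolerance for version skew: around 1.20, kubelet could be the same minor version as kubeadm or one minor version older, never newer. A hypothetical checker for "vX.Y.Z" version strings (helper names are illustrative, not part of any tool):

```shell
# Hypothetical skew check: kubelet may match kubeadm's minor version or
# be one minor older, never newer.
minor_of() { echo "${1#v}" | cut -d. -f2; }
skew_ok() {
  local diff=$(( $(minor_of "$1") - $(minor_of "$2") ))   # kubeadm minus kubelet
  [ "$diff" -ge 0 ] && [ "$diff" -le 1 ]
}

skew_ok v1.20.0 v1.20.0 && echo "kubelet v1.20.0 is fine with kubeadm v1.20.0"
```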
#### Installation
```
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF"
ssh root@${all_ip} "yum install -y kubeadm-1.20.0-0.x86_64 kubelet-1.20.0-0.x86_64 kubectl-1.20.0-0.x86_64 --disableexcludes=kubernetes"
ssh root@${all_ip} "systemctl enable kubelet"
done
[root@master01 ~]# yum search kubelet --showduplicates    # list the available versions
```
***Tip: the above only needs to be run on master01; it automates installation on every node. Do not start kubelet at this point; it is started automatically during cluster initialization. If you start it now it will report errors, which can be ignored.***
**Note: three dependencies are installed alongside: cri-tools, kubernetes-cni, and socat.
socat: a dependency of kubelet;
cri-tools: the command-line tool for the CRI (Container Runtime Interface).**
### Deploying the HA components, part I
#### Installing Keepalived
```
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "yum -y install curl gcc gcc-c++ make libnl libnl-devel libnl3-devel libnfnetlink-devel openssl-devel"
ssh root@${master_ip} "wget http://down.linuxsb.com/software/keepalived-2.1.5.tar.gz"
ssh root@${master_ip} "tar -zxvf keepalived-2.1.5.tar.gz"
ssh root@${master_ip} "cd keepalived-2.1.5/ && LDFLAGS=\"$LDFLAGS -L /usr/local/openssl/lib/\" ./configure --sysconf=/etc --prefix=/usr/local/keepalived && make && make install"
ssh root@${master_ip} "systemctl enable keepalived && systemctl start keepalived"
done
```
***Tip: the above only needs to be run on master01; it automates installation on all master nodes. If the build fails with: undefined reference to `OPENSSL_init_ssl', pass the openssl lib path explicitly:***
***`LDFLAGS="$LDFLAGS -L /usr/local/openssl/lib/" ./configure --sysconf=/etc --prefix=/usr/local/keepalived`***
#### Generating the configuration files
```
[root@master01 ~]# wget http://down.linuxsb.com/ngkek8s.sh    # fetch the automated deployment script
[root@master01 ~]# chmod u+x ngkek8s.sh
```
```
[root@master01 ~]# vi ngkek8s.sh    # adjust the variables below; keep everything else at its defaults
#!/bin/sh
#****************************************************************#
# ScriptName: k8s_ha.sh
# Author: xhy
# Create Date: 2020-05-13 16:32
# Modify Author: xhy
# Modify Date: 2020-06-12 12:53
# Version: v2
#***************************************************************#
#######################################
# set variables below to create the config files, all files will create at ./config directory
#######################################
# master keepalived virtual ip address
export K8SHA_VIP=172.24.8.254
# master01 ip address
export K8SHA_IP1=172.24.8.71
# master02 ip address
export K8SHA_IP2=172.24.8.72
# master03 ip address
export K8SHA_IP3=172.24.8.73
# master01 hostname
export K8SHA_HOST1=master01
# master02 hostname
export K8SHA_HOST2=master02
# master03 hostname
export K8SHA_HOST3=master03
# master01 network interface name
export K8SHA_NETINF1=eth0
# master02 network interface name
export K8SHA_NETINF2=eth0
# master03 network interface name
export K8SHA_NETINF3=eth0
# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d
# kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0
# kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0
```
```[root@master01 ~]# ./ngkek8s.sh```
**Note: the above only needs to be run on master01. Running the ngkek8s.sh script generates the following configuration files:**
- **kubeadm-config.yaml: the kubeadm init configuration, in the current directory**
- **keepalived: the Keepalived configuration, in /etc/keepalived on each master node**
- **nginx-lb: the nginx-lb load-balancer configuration, in /etc/kubernetes/nginx-lb/ on each master node**
- **calico.yaml: the calico network manifest, in the config/calico/ directory**
```
[root@master01 ~]# cat kubeadm-config.yaml    # review the cluster init configuration
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.20.0.0/16"    # Service subnet
  podSubnet: "10.10.0.0/16"    # Pod subnet
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.20.0"    # version to install
controlPlaneEndpoint: "172.24.8.254:16443"    # API VIP address, matching K8SHA_VIP above
apiServer:
  certSANs:
  - master01
  - master02
  - master03
  - 127.0.0.1
  - 172.24.8.71
  - 172.24.8.72
  - 172.24.8.73
  - 172.24.8.254
  timeoutForControlPlane: 4m0s
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "k8s.gcr.io"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```
***Tip: the above only needs to be run on master01. For more on the config file format, see: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2.
For more on the kubeadm init configuration, see: https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2?tab=doc.***
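Because the KubeProxyConfiguration above selects IPVS mode, the ip_vs kernel modules must be available on every node. A quick check that reads /proc/modules directly; the module list below is the commonly required set, not an exhaustive one:

```shell
# Report whether the kernel modules kube-proxy's IPVS mode relies on are
# loaded; nf_conntrack replaces nf_conntrack_ipv4 on kernels >= 4.19.
check_ipvs_modules() {
  for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
    if grep -q "^${mod} " /proc/modules 2>/dev/null; then
      echo "${mod} loaded"
    else
      echo "${mod} missing (try: modprobe ${mod})"
    fi
  done
}
check_ipvs_modules
```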
#### Starting Keepalived
```
[root@master01 ~]# cat /etc/keepalived/keepalived.conf
[root@master01 ~]# cat /etc/keepalived/check_apiserver.sh    # verify the Keepalived configuration
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "systemctl restart keepalived.service && systemctl enable keepalived.service"
ssh root@${master_ip} "systemctl status keepalived.service | grep Active"
done
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "ping -c1 172.24.8.254"
done    # wait about 10 seconds, then run this check
```
***Tip: the above only needs to be run on master01; it starts the service on all nodes automatically.***
#### Starting Nginx
After running the ngkek8s.sh script, the nginx-lb configuration has already been copied to the /etc/kubernetes/nginx-lb directory on each master node.
```
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "cd /etc/kubernetes/nginx-lb/ && docker-compose up -d"
ssh root@${master_ip} "docker-compose ps"
done
```
***Tip: the above only needs to be run on master01; it starts the service on all master nodes automatically.***
### Initializing the cluster: masters
#### Pulling images
```
[root@master01 ~]# kubeadm --kubernetes-version=v1.20.0 config images list    # list the required images
[root@master01 ~]# cat config/loadimage.sh    # confirm the versions and pre-pull the images
#!/bin/sh
#****************************************************************#
# ScriptName: loadimage.sh
# Author: xhy
# Create Date: 2020-05-29 19:55
# Modify Author: xhy
# Modify Date: 2020-05-30 16:07
# Version: v2
#***************************************************************#
KUBE_VERSION=v1.20.0
CALICO_VERSION=v3.17.1
CALICO_URL=docker.io\/calico
KUBE_PAUSE_VERSION=3.2
ETCD_VERSION=3.4.13-0
CORE_DNS_VERSION=1.7.0
GCR_URL=k8s.gcr.io
METRICS_SERVER_VERSION=v0.4.0
INGRESS_VERSION=v0.41.2
CSI_PROVISIONER_VERSION=v1.4.0
CSI_NODE_DRIVER_VERSION=v1.2.0
CSI_ATTACHER_VERSION=v2.0.0
CSI_RESIZER_VERSION=v0.3.0
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
UCLOUD_URL=uhub.service.ucloud.cn/uxhy
QUAY_URL=quay.io
mkdir -p dockerimages/