
[K8S] Installing a Kubernetes Cluster on UTM Virtual Machines

For a binary installation, see: https://www.cnblogs.com/security-guard/p/15356262.html
Official download site: https://www.downloadkubernetes.com

1. Pre-installation Preparation

Machine: MacBook Pro 2020, 16 GB RAM, Apple M1
OS: macOS Monterey 12.1
Hypervisor: UTM
Three virtual machines (image: CentOS-7-aarch64-Minimal-2009)

Cluster plan:
3 nodes in total: node1, node2, node3
node1  master
node2  worker
node3  worker

1.1 System Initialization

# 1. Set the hostname
hostnamectl set-hostname node1

# 2. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# 3. Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# 4. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# 5. Add hosts entries on the master (node1)
cat >> /etc/hosts << EOF
192.168.64.5 node1
192.168.64.6 node2
192.168.64.7 node3
EOF

# 6. Synchronize the clock
yum install ntpdate -y
ntpdate cn.ntp.org.cn  # if this server does not respond, try another NTP server


# 7. Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Load the br_netfilter module
modprobe br_netfilter
# Verify it is loaded
# lsmod | grep br_netfilter
br_netfilter          262144  0
bridge                327680  1 br_netfilter

sysctl --system  # apply the settings

# 8. Ensure each node has a unique MAC address and product_uuid
# Check MAC address uniqueness
ifconfig -a | grep ether
# Check product_uuid uniqueness
sudo cat /sys/class/dmi/id/product_uuid
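The sed one-liner in step 3 works by commenting out any fstab line that mentions swap: `#&` re-emits the whole matched line prefixed with `#`. A quick way to convince yourself, run against a throwaway copy rather than the real /etc/fstab:

```shell
# Demo of the swap-disabling sed from step 3, on a temporary copy
# so the real /etc/fstab is untouched.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' "$fstab"
cat "$fstab"   # the swap line is now prefixed with '#'
rm -f "$fstab"
```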

1.2 Enable IPVS

In Kubernetes, a Service can be proxied in two modes: iptables and IPVS. IPVS performs better than iptables, but using it requires loading the IPVS kernel modules manually.

# Install ipset and ipvsadm
$ yum -y install ipset ipvsadm

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Make the script executable, then run it:
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

Verify the modules are loaded:
$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4     262144  0
nf_defrag_ipv4        262144  1 nf_conntrack_ipv4
nf_conntrack          327680  2 nf_conntrack_ipv4,ip_vs
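One caveat worth hedging: `nf_conntrack_ipv4` only exists on older kernels; from kernel 4.19 onward it was merged into `nf_conntrack`, so the script above would fail there. CentOS 7 ships kernel 3.10, so it is fine here, but a small helper can pick the right module name (the `conntrack_module` function is illustrative, not part of any standard tooling):

```shell
# Pick the conntrack module name from the kernel version:
# nf_conntrack_ipv4 was merged into nf_conntrack in kernel 4.19.
conntrack_module() {
  local ver=${1:-$(uname -r)} major minor
  major=${ver%%.*}
  minor=${ver#*.}; minor=${minor%%.*}
  if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; }; then
    echo nf_conntrack
  else
    echo nf_conntrack_ipv4
  fi
}
conntrack_module 3.10.0-1160.el7.aarch64   # prints nf_conntrack_ipv4
conntrack_module 5.10.0                    # prints nf_conntrack
```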

1.3 Install Docker

Kubernetes supports many container runtimes, such as Docker, containerd, and CRI-O. Since Docker is still the most widely used, we install Docker here.

1.3.1 Installation

# Add the repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Install the package
yum -y install docker-ce-19.03.9-3.el7

# Start docker and enable it at boot
systemctl enable docker && systemctl start docker

# docker version
Client: Docker Engine - Community
 Version:           20.10.13
 API version:       1.40
 Go version:        go1.16.15
 Git commit:        a224086
 Built:             Thu Mar 10 14:08:05 2022
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.9
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9d98839
  Built:            Fri May 15 00:24:27 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.5.10
  GitCommit:        2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
 runc:
  Version:          1.0.3
  GitCommit:        v1.0.3-0-gf46b6ba
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Troubleshooting

# Error during installation:
--> Finished dependency resolution
Error: Package: 3:docker-ce-19.03.9-3.el7.aarch64 (docker-ce-stable)
          Requires: container-selinux >= 2:2.74
Error: Package: containerd.io-1.5.10-3.1.el7.aarch64 (docker-ce-stable)
          Requires: container-selinux >= 2:2.74
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

 Solution:
 Install container-selinux >= 2:2.74:
yum install https://mirrors.aliyun.com/centos-altarch/7.9.2009/extras/aarch64/Packages/container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm -y

1.3.2 Configure Docker Registry Mirrors

vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://uyqa6c1l.mirror.aliyuncs.com",
    "https://hub-mirror.c.163.com",
    "https://dockerhub.azk8s.cn",
    "https://reg-mirror.qiniu.com",
    "https://registry.docker-cn.com"
  ]
}

systemctl daemon-reload
systemctl restart docker

docker info
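A mistake that is easy to make here: a stray comma or bracket in daemon.json prevents the Docker daemon from starting at all. It is worth validating the file's syntax before the restart. A minimal sketch using Python's stdlib JSON parser (assumes python3 is installed; any JSON validator works, and the `check_json` helper name is mine):

```shell
# Validate a daemon.json-style file before restarting docker.
# Demonstrated on a temporary file; point it at /etc/docker/daemon.json in practice.
check_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1 && echo "valid" || echo "INVALID"
}
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{ "registry-mirrors": ["https://hub-mirror.c.163.com"] }
EOF
check_json "$tmp"    # prints: valid
rm -f "$tmp"
```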

2. Install Kubernetes

2.1 Add the Aliyun YUM Repository

https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.32d51b11vVlb09

# Add the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF



2.2 Install kubeadm, kubelet, and kubectl (all nodes)

Pick a version: https://www.downloadkubernetes.com

# Install kubeadm, kubelet, and kubectl. Releases come out frequently, so pin a known-stable version.
$ yum install -y kubelet-1.22.7 kubectl-1.22.7 kubeadm-1.22.7


# To keep the kubelet's cgroup driver consistent with Docker's, edit "/etc/sysconfig/kubelet"
$ vim /etc/sysconfig/kubelet
# Set
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"


$ systemctl enable kubelet

# List the required images
$ kubeadm config images list
I0312 21:34:05.743622    5929 version.go:255] remote version is much newer: v1.23.4; falling back to: stable-1.22
k8s.gcr.io/kube-apiserver:v1.22.7
k8s.gcr.io/kube-controller-manager:v1.22.7
k8s.gcr.io/kube-scheduler:v1.22.7
k8s.gcr.io/kube-proxy:v1.22.7
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
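If you want to pre-pull these images before `kubeadm init` (e.g. on a slow network), the k8s.gcr.io names can be mapped onto the Aliyun mirror. A hedged sketch: the prefix rewrite below mirrors what `--image-repository registry.aliyuncs.com/google_containers` does later, and the extra coredns rule accounts for the mirror hosting coredns without the nested `coredns/coredns` path (an assumption worth verifying against the mirror; `mirror_name` is an illustrative helper, not a kubeadm command):

```shell
# Map a k8s.gcr.io image name onto the Aliyun mirror and print the
# pull/tag commands for review instead of running them.
mirror_name() {
  local img=$1
  img=${img/k8s.gcr.io/registry.aliyuncs.com/google_containers}
  img=${img/coredns\/coredns/coredns}  # the mirror hosts coredns without the nested path
  echo "$img"
}
for img in k8s.gcr.io/kube-apiserver:v1.22.7 k8s.gcr.io/pause:3.5 k8s.gcr.io/coredns/coredns:v1.8.4; do
  m=$(mirror_name "$img")
  echo "docker pull $m && docker tag $m $img"
done
```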

Troubleshooting
kubelet fails to start

# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sat 2022-03-12 21:50:13 EST; 1s ago
     Docs: https://kubernetes.io/docs/
  Process: 15364 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 15364 (code=exited, status=1/FAILURE)

Mar 12 21:50:13 node1 systemd[1]: Unit kubelet.service entered failed state.
Mar 12 21:50:13 node1 systemd[1]: kubelet.service failed.


# Inspect the logs to find the cause
$ journalctl _PID=15364 | less
3月 12 22:06:45 node1 kubelet[15364]: E0312 22:06:45.708421   15364 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""

Solution:

# The error above shows that Docker uses the cgroupfs cgroup driver while the kubelet expects systemd,
# so change Docker's cgroup driver to systemd

vim /etc/docker/daemon.json
# add this line inside the top-level object:
"exec-opts": ["native.cgroupdriver=systemd"],
systemctl restart docker

2.3 Deploy the Kubernetes Master Node (node1)

# The default registry k8s.gcr.io is unreachable from mainland China, so point kubeadm at the Aliyun mirror
kubeadm init \
  --apiserver-advertise-address=192.168.64.5 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=192.168.64.5:6443 \
  --kubernetes-version v1.22.7 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --upload-certs
# --token-ttl 0 was deliberately left out here, to observe what happens when the token expires

Parameter notes:

  • --image-repository: the image registry to pull from; defaults to gcr.io.
  • --kubernetes-version: version of the Kubernetes components; it must match the version of the installed kubelet package.
  • --control-plane-endpoint: a stable access endpoint for the control plane, either an IP address or a DNS name; it becomes the API Server address in the kubeconfig files used by administrators and cluster components. Optional for a single-control-plane deployment.
  • --pod-network-cidr: the Pod network address range in CIDR notation. Flannel defaults to 10.244.0.0/16, Project Calico to 192.168.0.0/16.
  • --service-cidr: the Service network address range in CIDR notation; defaults to 10.96.0.0/12. Usually only Flannel-style network plugins need this set explicitly.
  • --apiserver-advertise-address: the IP address the API Server advertises to other components, normally the master node's cluster-internal IP; 0.0.0.0 means all available addresses on the node.
  • --token-ttl: lifetime of the shared bootstrap token, 24 hours by default; 0 means it never expires. To limit the damage of a leaked token, setting an expiry is recommended.
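One constraint worth checking before running `kubeadm init`: the pod CIDR and the service CIDR must not overlap. A small pure-bash sanity check (the `cidr_overlap` helper is illustrative, not a kubeadm feature):

```shell
# Check whether two IPv4 CIDR blocks overlap (exit 0 = they overlap).
# Two blocks overlap exactly when their network addresses agree under
# the shorter of the two prefix masks.
cidr_overlap() {
  local n1=${1%/*} l1=${1#*/} n2=${2%/*} l2=${2#*/} o1 o2 o3 o4 a b min mask
  IFS=. read -r o1 o2 o3 o4 <<< "$n1"; a=$(( (o1<<24) | (o2<<16) | (o3<<8) | o4 ))
  IFS=. read -r o1 o2 o3 o4 <<< "$n2"; b=$(( (o1<<24) | (o2<<16) | (o3<<8) | o4 ))
  min=$(( l1 < l2 ? l1 : l2 ))
  mask=$(( (0xFFFFFFFF << (32 - min)) & 0xFFFFFFFF ))
  [ $(( a & mask )) -eq $(( b & mask )) ]
}
cidr_overlap 10.244.0.0/16 10.96.0.0/12 && echo "overlap!" || echo "ok, disjoint"   # prints: ok, disjoint
```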

The init output is shown below. This log is important, so save it:

...
Your Kubernetes control-plane has initialized successfully!

# With the following configuration in place, you can operate the cluster
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

# Use the following command to join additional control-plane (master) nodes
You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.64.5:6443 --token qv4vub.h7aov4ae3z182y99 \
	--discovery-token-ca-cert-hash sha256:f3b753e4484154f11c9105427ca614a1e07dfc8ddaa167eec86c1cfed8cbfb7e \
	--control-plane --certificate-key a6c6f91d8d2360934b884eb6a5f65d8bad3a2be25c3da0e280de7ad2225668af

# If the token expires, a new one can be generated (see the command further below)
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

# Use the following command to join worker nodes
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.64.5:6443 --token qv4vub.h7aov4ae3z182y99 \
	--discovery-token-ca-cert-hash sha256:f3b753e4484154f11c9105427ca614a1e07dfc8ddaa167eec86c1cfed8cbfb7e

Alternatively, the worker join command can be regenerated with:

# kubeadm token create --print-join-command
kubeadm join 192.168.64.5:6443 --token esl6lt.7jm2h1lc6oa077vp --discovery-token-ca-cert-hash sha256:f3b753e4484154f11c9105427ca614a1e07dfc8ddaa167eec86c1cfed8cbfb7e
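For reference, the sha256 value in `--discovery-token-ca-cert-hash` is simply a hash of the cluster CA's public key; it can be recomputed on the master from `/etc/kubernetes/pki/ca.crt` with the standard openssl pipeline from the kubeadm documentation:

```shell
# Recompute the discovery-token-ca-cert-hash from the cluster CA cert
# (run on the master; requires /etc/kubernetes/pki/ca.crt to exist).
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt 2>/dev/null \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```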

2.4 Deploy the Worker Nodes (node2, node3)

$ kubeadm join 192.168.64.5:6443 --token qv4vub.h7aov4ae3z182y99 \
	--discovery-token-ca-cert-hash sha256:f3b753e4484154f11c9105427ca614a1e07dfc8ddaa167eec86c1cfed8cbfb7e
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "node2" could not be reached
	[WARNING Hostname]: hostname "node2": lookup node2 on [fe80::b0be:83ff:fed5:ce64%eth0]:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2.5 Deploy the Flannel CNI Plugin

Kubernetes supports several network plugins, such as flannel, calico, and canal; any one of them will do. Here we use flannel.
https://github.com/flannel-io/flannel

## For Kubernetes v1.17+
# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# We install v0.17.0 here, but the URL below appears to be broken and would not open
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.17.0/Documentation/kube-flannel.yml

# So download the release first, then apply
wget https://github.com/flannel-io/flannel/archive/refs/tags/v0.17.0.tar.gz
tar zxf v0.17.0.tar.gz && cd flannel-0.17.0/Documentation
kubectl apply -f kube-flannel.yml


# kubectl get pods -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS      AGE
kube-system   coredns-7f6cbbb7b8-bsrj2        1/1     Running   0             70m
kube-system   coredns-7f6cbbb7b8-z24sw        1/1     Running   0             70m
kube-system   etcd-node1                      1/1     Running   0             70m
kube-system   kube-apiserver-node1            1/1     Running   0             70m
kube-system   kube-controller-manager-node1   1/1     Running   1 (40m ago)   70m
kube-system   kube-flannel-ds-d7rvn           1/1     Running   0             3m54s
kube-system   kube-flannel-ds-wclpk           1/1     Running   0             3m54s
kube-system   kube-flannel-ds-wmlbl           1/1     Running   0             3m54s
kube-system   kube-proxy-4v8gw                1/1     Running   0             25m
kube-system   kube-proxy-5khwr                1/1     Running   0             70m
kube-system   kube-proxy-xhgtz                1/1     Running   0             26m
kube-system   kube-scheduler-node1            1/1     Running   1 (40m ago)   70m

2.6 Configure Command Completion

# Install bash-completion
yum -y install bash-completion

# Load bash-completion
source /etc/profile.d/bash_completion.sh

# Load the environment variables
echo "export KUBECONFIG=/root/.kube/config" >> /root/.bash_profile
echo "source <(kubectl completion bash)" >> /root/.bash_profile
source /root/.bash_profile
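An optional convenience on top of this (not part of the original setup): alias kubectl to `k` and attach kubectl's completion function to the alias. `__start_kubectl` is the function name registered by the script that `kubectl completion bash` generates. The snippet uses `$HOME`, which resolves to /root here since everything runs as root:

```shell
# Add a "k" alias for kubectl and wire it into the same bash completion.
echo 'alias k=kubectl' >> "$HOME/.bash_profile"
echo 'complete -o default -F __start_kubectl k' >> "$HOME/.bash_profile"
```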