Installing Kubernetes with kubeadm
阿新 • Published 2021-06-10
1. Deployment Requirements
2. Deployment Tools
3. Preparing the Environment
3.1 Test Environment Overview
This test environment consists of master01, node01, node02, and node03, each with 4 CPU cores and 4 GB of RAM. The domain is ilinux.io.
(1) Set up NTP time synchronization.
(2) Resolve each node's hostname via DNS; with only a few hosts in a test environment, the hosts file also works.
(3) Stop the iptables and firewalld services on every node.
(4) Disable SELinux on every node.
(5) Disable swap devices on every node.
(6) To use the ipvs proxy mode, also load the relevant ipvs kernel modules on every node.
3.2 Setting Up Time Synchronization
If every node can reach the Internet directly, just start the chronyd system service and enable it at boot.
# Install on all nodes
yum -y install chrony
# Start on all nodes
systemctl start chronyd
systemctl enable chronyd
That said, a local time server is recommended, especially when there are many nodes. When one is available, edit each node's /etc/chrony.conf and point the server directive at it, in the following format:
server CHRONY-SERVER-NAME-OR-IP iburst
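A minimal client-side config might look like the sketch below. The server address 10.0.0.1 and the extra directives are assumptions for illustration; the file is written to /tmp here so the example is safe to run anywhere, whereas on a real node it belongs at /etc/chrony.conf:

```shell
# Sketch of a minimal chrony client config; 10.0.0.1 is an assumed local NTP server.
cat > /tmp/chrony.conf.example << 'EOF'
server 10.0.0.1 iburst
driftfile /var/lib/chrony/drift
# step the clock if it is off by more than 1s during the first 3 updates
makestep 1.0 3
rtcsync
EOF
grep '^server' /tmp/chrony.conf.example
```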
3.3 Hostname Resolution
To keep configuration simple, this test environment resolves node names via the hosts file:
10.0.0.11 master01 master01.ilinux.io
10.0.0.12 node01 node01.ilinux.io
10.0.0.13 node02 node02.ilinux.io
10.0.0.14 node03 node03.ilinux.io
3.4 Stopping the iptables or firewalld Service
Steps omitted.
3.5 Disabling SELinux
Steps omitted.
3.6 Disabling Swap Devices
During deployment, kubeadm checks in advance whether swap is disabled on the host and aborts the process if it is not.
Therefore, when host resources allow, disable all swap devices; otherwise the later kubeadm init and kubeadm join commands will need extra options to ignore the preflight error.
# All nodes
swapoff -a
tail -1 /etc/fstab
# disable swap at boot
#UUID=5880d9d0-d597-4f0c-b7b7-5bb40a9577ac swap swap defaults 0 0
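To make the fstab change repeatable, the swap entries can be commented out with sed. A sketch operating on a temp copy so it is safe to run anywhere (the sample root entry is hypothetical; on a real node target /etc/fstab itself):

```shell
# Sketch: comment out every active swap entry in an fstab file (sample copy here).
printf '%s\n' \
  '/dev/sda1 / xfs defaults 0 0' \
  'UUID=5880d9d0-d597-4f0c-b7b7-5bb40a9577ac swap swap defaults 0 0' \
  > /tmp/fstab.example
# prefix any non-comment line containing a swap field with '#'
sed -i -E 's|^([^#].*[[:space:]]swap[[:space:]].*)|#\1|' /tmp/fstab.example
cat /tmp/fstab.example
```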
3.7 Loading the ipvs Kernel Modules (not used in this deployment)
Create the module-loading script /etc/sysconfig/modules/ipvs.modules so that the required kernel modules are loaded automatically.
# All nodes
cat > /etc/sysconfig/modules/ipvs.modules << 'EOF'  # quote EOF so $-expansion happens when the script runs, not when the file is written
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls $ipvs_mods_dir | grep -o "^[^.]*")
do
    /sbin/modinfo -F filename $mod &> /dev/null
    if [ $? -eq 0 ]
    then
        /sbin/modprobe $mod
    fi
done
EOF
# Fix the file permissions, then load the modules into the running kernel by hand
chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
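After running the script, the loaded modules can be confirmed with lsmod. A sketch of the check, using a sample here-string standing in for real lsmod output (the module list shown is an assumption about a typical result; on a node simply run the piped awk on live lsmod output):

```shell
# Sketch: filter lsmod-style output for the ipvs modules.
# On a real node, run: lsmod | awk '$1 ~ /^ip_vs/ {print $1}'
lsmod_sample='Module                  Size  Used by
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 155648  4 ip_vs_rr,ip_vs_wrr
nf_conntrack          139264  1 ip_vs'
printf '%s\n' "$lsmod_sample" | awk '$1 ~ /^ip_vs/ {print $1}' | tee /tmp/ipvs_loaded.txt
```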
4. Deploying Kubernetes with kubeadm
4.1 Installing docker-ce
4.1.1 Installing docker-ce
# Run on all nodes
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum -y install docker-ce
4.1.2 Starting the docker Service
# Run on all nodes
## Since version 1.13, docker automatically sets the default iptables FORWARD policy to DROP, which can break the packet forwarding the Kubernetes cluster relies on. Therefore, in /usr/lib/systemd/system/docker.service, add the following line below ExecStart=/usr/bin/dockerd:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
## The final configuration looks like this
vim /usr/lib/systemd/system/docker.service
…content omitted
[Service]
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT # added line
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
…content omitted
## Start docker
systemctl daemon-reload
systemctl start docker
systemctl enable docker
## Verify the setting took effect
[root@k8s-master ~]# iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT) # the policy here should now be ACCEPT
target prot opt source destination
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
…content omitted
## Fix the warnings shown by docker info
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
[root@k8s-master ~]# sysctl -a|grep bridge
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0 # change this value to 1
net.bridge.bridge-nf-call-iptables = 0 # change this value to 1
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.docker0.stable_secret"
sysctl: reading key "net.ipv6.conf.eth0.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
## Restart docker
systemctl restart docker
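Note that edits to the packaged unit under /usr/lib are lost when docker-ce is upgraded; a systemd drop-in is a more durable home for the ExecStartPost line. A sketch (written to /tmp here so it is safe to run anywhere; on a real node the file belongs at /etc/systemd/system/docker.service.d/10-forward-accept.conf, followed by systemctl daemon-reload and a docker restart — the path and filename are illustrative):

```shell
# Sketch: a systemd drop-in carrying the FORWARD-policy fix, so the change
# survives docker-ce package upgrades. Directory name mirrors docker.service.d.
mkdir -p /tmp/docker.service.d
cat > /tmp/docker.service.d/10-forward-accept.conf << 'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
cat /tmp/docker.service.d/10-forward-accept.conf
```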
4.2 Installing the Kubernetes Packages
Run on all nodes.
Yum repo reference: https://developer.aliyun.com/mirror/?spm=a2c6h.13651104.0.d1002.2a422a7b083uEZ
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
4.3 Initializing the master Node
# If swap has not been disabled, edit the kubelet configuration file /etc/sysconfig/kubelet so an enabled swap device is not treated as an error:
KUBELET_EXTRA_ARGS="--fail-swap-on=false" # do not fail when swap is enabled
# kubeadm init supports two initialization styles: passing the key deployment settings as command-line options, or using a dedicated YAML configuration file that lets you customize every parameter. The latter is recommended.
[root@k8s-master ~]# rpm -q kubeadm
kubeadm-1.20.4-0.x86_64
[root@k8s-master ~]# rpm -q kubelet
kubelet-1.20.4-0.x86_64
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=10.0.0.11 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.4 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
…output omitted
Your Kubernetes control-plane has initialized successfully! # initialization succeeded
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.11:6443 --token 16d7wn.7z6m9a73jlalki6j \ # copy this command; it is needed shortly to join the worker nodes
--discovery-token-ca-cert-hash sha256:9cbcf118f00c8988664e5c97dbef0ec3be7989a9bbfb5e21a585dd09ca0d968a
# If you forget to copy the join command above, a fresh one can be printed on the master with "kubeadm token create --print-join-command"; see also: https://blog.csdn.net/wzy_168/article/details/106552841
[root@k8s-master ~]# mkdir .kube
[root@k8s-master ~]# cp /etc/kubernetes/admin.conf .kube/config
[root@k8s-master ~]# systemctl enable kubelet.service # kubelet is already started during initialization
4.3.1 Installing the Network Plugin
Project page: https://github.com/flannel-io/flannel
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml # this needs good network connectivity; if the download fails, resolve the IP of raw.githubusercontent.com yourself and add it to the hosts file, since the host is served from abroad.
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8s-master ~]# kubectl get po -n kube-system # the pods will not be Running immediately; allow some time for all of them to become healthy
NAME READY STATUS RESTARTS AGE
coredns-7f89b7bc75-cmtlk 1/1 Running 0 22m
coredns-7f89b7bc75-rqn2r 1/1 Running 0 22m
etcd-k8s-master 1/1 Running 0 22m
kube-apiserver-k8s-master 1/1 Running 0 22m
kube-controller-manager-k8s-master 1/1 Running 0 22m
kube-flannel-ds-qlfrg 1/1 Running 0 2m30s
kube-proxy-wxv4l 1/1 Running 0 22m
kube-scheduler-k8s-master 1/1 Running 0 22m
[root@k8s-master ~]# kubectl get no # the master node is now fully initialized
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 22m v1.20.4
4.4 Initializing the Worker Nodes
Run the same steps on every worker node.
# If swap has not been disabled, edit the kubelet configuration file /etc/sysconfig/kubelet so an enabled swap device is not treated as an error:
KUBELET_EXTRA_ARGS="--fail-swap-on=false" # do not fail when swap is enabled
## Join the cluster. This step also pulls the required images first, which takes some time.
kubeadm join 10.0.0.11:6443 --token 16d7wn.7z6m9a73jlalki6j \
--discovery-token-ca-cert-hash sha256:9cbcf118f00c8988664e5c97dbef0ec3be7989a9bbfb5e21a585dd09ca0d968a
~]# docker images # once the images finish pulling, the node successfully joins the cluster
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.20.4 c29e6c583067 41 hours ago 118MB
quay.io/coreos/flannel v0.13.1-rc2 dee1cac4dd20 2 weeks ago 64.3MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 12 months ago 683kB
~]# systemctl enable kubelet.service
[root@k8s-master ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 85m v1.20.4
k8s-node-1 Ready <none> 53m v1.20.4
k8s-node-2 Ready <none> 24m v1.20.4
node-03 Ready <none> 24m v1.20.4
5. Fixing Unhealthy Cluster Components
# After the master node is initialized, checking the component status reveals the following problem
[root@master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused # scheduler unhealthy
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused # controller-manager unhealthy
etcd-0 Healthy {"health":"true"}
# The cause: the --port=0 flag in their manifests disables the insecure ports 10251 and 10252 that this health check probes.
# The fix: edit the two files below, delete the "- --port=0" line, then restart the kubelet service.
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
# Reports online suggest this "unhealthy" status does not actually affect cluster operation.
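The edit can also be done non-interactively with sed. A sketch against a hypothetical manifest fragment (on a real master, run the sed line against the two files under /etc/kubernetes/manifests; the kubelet then recreates the static pods automatically):

```shell
# Sketch: delete the "- --port=0" line from a manifest (sample fragment here).
cat > /tmp/kube-scheduler.yaml.example << 'EOF'
    - command:
        - kube-scheduler
        - --leader-elect=true
        - --port=0
EOF
sed -i '/- --port=0/d' /tmp/kube-scheduler.yaml.example
cat /tmp/kube-scheduler.yaml.example
```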
6. Enabling Command Auto-completion
kubectl completion bash > ~/.kube/completion.bash.inc
echo source ~/.kube/completion.bash.inc >> /root/.bashrc
source ~/.kube/completion.bash.inc
Note
When joining worker nodes, watch for this warning:
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
It means the latest docker version validated by upstream Kubernetes is 19.03, while this environment runs 20.10.3.
That is acceptable for a learning environment, but production clusters should stick to a validated version.
7. Extending the k8s Certificate Lifetime
In a kubeadm-installed cluster, certificates are valid for one year by default. This can be extended by hand to 10 or even 100 years.
7.1 Checking Certificate Validity
[root@master01 ~]# cd /etc/kubernetes/pki
[root@master01 pki]# ll
total 56
-rw-r--r-- 1 root root 1265 Feb 23 10:05 apiserver.crt
-rw-r--r-- 1 root root 1135 Feb 23 10:05 apiserver-etcd-client.crt
-rw------- 1 root root 1679 Feb 23 10:05 apiserver-etcd-client.key
-rw------- 1 root root 1675 Feb 23 10:05 apiserver.key
-rw-r--r-- 1 root root 1143 Feb 23 10:05 apiserver-kubelet-client.crt
-rw------- 1 root root 1679 Feb 23 10:05 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1066 Feb 23 10:05 ca.crt
-rw------- 1 root root 1675 Feb 23 10:05 ca.key
drwxr-xr-x 2 root root 162 Feb 23 10:05 etcd
-rw-r--r-- 1 root root 1078 Feb 23 10:05 front-proxy-ca.crt
-rw------- 1 root root 1679 Feb 23 10:05 front-proxy-ca.key
-rw-r--r-- 1 root root 1103 Feb 23 10:05 front-proxy-client.crt
-rw------- 1 root root 1675 Feb 23 10:05 front-proxy-client.key
-rw------- 1 root root 1675 Feb 23 10:05 sa.key
-rw------- 1 root root 451 Feb 23 10:05 sa.pub
[root@master01 pki]# for i in $(ls *.crt); do echo "===== $i ====="; openssl x509 -in $i -text -noout | grep -A 3 'Validity' ; done
===== apiserver.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT # validity start date
Not After : Feb 23 02:05:22 2022 GMT # validity end date
Subject: CN=kube-apiserver
===== apiserver-etcd-client.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Feb 23 02:05:23 2022 GMT
Subject: O=system:masters, CN=kube-apiserver-etcd-client
===== apiserver-kubelet-client.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Feb 23 02:05:22 2022 GMT
Subject: O=system:masters, CN=kube-apiserver-kubelet-client
===== ca.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Feb 21 02:05:22 2031 GMT
Subject: CN=kubernetes
===== front-proxy-ca.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Feb 21 02:05:22 2031 GMT
Subject: CN=front-proxy-ca
===== front-proxy-client.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Feb 23 02:05:22 2022 GMT
Subject: CN=front-proxy-client
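The same openssl invocation works on any certificate. As a self-contained illustration, generate a throwaway self-signed cert and read its validity window (the CN and /tmp paths are purely illustrative):

```shell
# Sketch: create a 365-day self-signed cert and print its validity dates,
# the same check the loop above runs on the cluster certificates.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' -days 365 \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -startdate -enddate
```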
7.2 Changing the Certificate Expiration
7.2.1 Setting Up a Go Build Environment
Go package downloads: https://studygolang.com/dl
[root@master01 pki]# cd
[root@master01 ~]# wget https://studygolang.com/dl/golang/go1.16.linux-amd64.tar.gz
[root@master01 ~]# tar zxf go1.16.linux-amd64.tar.gz -C /usr/local/
[root@master01 ~]# tail -1 /etc/profile
export PATH=$PATH:/usr/local/go/bin # append this line to the end of the file
[root@master01 ~]# source /etc/profile
7.2.2 Downloading the Kubernetes Source and Changing the Certificate Policy
The source tarball downloaded must match the version currently in use.
[root@master01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:03:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
[root@master01 ~]# wget https://github.com/kubernetes/kubernetes/archive/v1.20.4.tar.gz
[root@master01 ~]# tar zxf v1.20.4.tar.gz
[root@master01 ~]# cd kubernetes-1.20.4/
[root@master01 kubernetes-1.20.4]# vim cmd/kubeadm/app/util/pkiutil/pki_helpers.go
// …content omitted
func NewSignedCert(cfg *CertConfig, key crypto.Signer, caCert *x509.Certificate, caKey crypto.Signer) (*x509.Certificate, error) {
const effectyear = time.Hour * 24 * 365 * 100 // add this line
serial, err := cryptorand.Int(cryptorand.Reader, new(big.Int).SetInt64(math.MaxInt64))
if err != nil {
return nil, err
}
// …content omitted
DNSNames: cfg.AltNames.DNSNames,
IPAddresses: cfg.AltNames.IPs,
SerialNumber: serial,
NotBefore: caCert.NotBefore,
//NotAfter: time.Now().Add(kubeadmconstants.CertificateValidity).UTC(), // comment out the original line
NotAfter: time.Now().Add(effectyear).UTC(), // add this line
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: cfg.Usages,
[root@master01 kubernetes-1.20.4]# pwd
/root/kubernetes-1.20.4
[root@master01 kubernetes-1.20.4]# make WHAT=cmd/kubeadm GOFLAGS=-v
[root@master01 kubernetes-1.20.4]# echo $? # confirm the build succeeded
0
7.2.3 Replacing kubeadm and Backing Up the Original Certificates
[root@master01 kubernetes-1.20.4]# cp -a _output/bin/kubeadm /root/kubeadm-new
[root@master01 kubernetes-1.20.4]# mv /usr/bin/kubeadm /usr/bin/kubeadm_`date +%F`
[root@master01 kubernetes-1.20.4]# mv /root/kubeadm-new /usr/bin/kubeadm
[root@master01 kubernetes-1.20.4]# chmod 755 /usr/bin/kubeadm
[root@master01 kubernetes-1.20.4]# cp -a /etc/kubernetes/pki/ /etc/kubernetes/pki_`date +%F`
7.2.4 Renewing the Certificates
[root@master01 kubernetes-1.20.4]# cd
[root@master01 ~]# kubeadm alpha certs renew all
[root@master01 ~]# \cp -f /etc/kubernetes/admin.conf ~/.kube/config
[root@master01 ~]# cd /etc/kubernetes/pki
[root@master01 pki]# for i in $(ls *.crt); do echo "===== $i ====="; openssl x509 -in $i -text -noout | grep -A 3 'Validity' ; done
===== apiserver.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Jan 30 03:24:41 2121 GMT # now valid for 100 years
Subject: CN=kube-apiserver
===== apiserver-etcd-client.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Jan 30 03:24:42 2121 GMT
Subject: O=system:masters, CN=kube-apiserver-etcd-client
===== apiserver-kubelet-client.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Jan 30 03:24:42 2121 GMT
Subject: O=system:masters, CN=kube-apiserver-kubelet-client
===== ca.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Feb 21 02:05:22 2031 GMT
Subject: CN=kubernetes
===== front-proxy-ca.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Feb 21 02:05:22 2031 GMT
Subject: CN=front-proxy-ca
===== front-proxy-client.crt =====
Validity
Not Before: Feb 23 02:05:22 2021 GMT
Not After : Jan 30 03:24:43 2121 GMT
Subject: CN=front-proxy-client
7.2.5 Restarting the Affected Services
After the certificates are renewed, kube-apiserver, kube-controller-manager, kube-scheduler, and etcd must be restarted so that they pick up the new certificates.
kubectl delete po kube-apiserver-master01 -n kube-system
kubectl delete po kube-controller-manager-master01 -n kube-system
kubectl delete po kube-scheduler-master01 -n kube-system
kubectl delete po etcd-master01 -n kube-system
# Question: why do these pods come back automatically right after being deleted?
## Explanation (with help from a colleague and some searching): the Pods we usually talk about are created and managed through Deployments, DaemonSets, StatefulSets, and so on. These four, however, are a special kind of Pod called static Pods.
### A static Pod is managed directly by the kubelet and exists only on one specific node. It cannot be managed through the API server and cannot be associated with a ReplicationController, Deployment, or DaemonSet.
### The kubelet scans staticPodPath; whenever it finds a YAML file in that directory, it creates the corresponding Pod. To delete such a Pod, remove its manifest file.
[root@master01 ~]# cat /var/lib/kubelet/config.yaml | grep staticPodPath
staticPodPath: /etc/kubernetes/manifests
[root@master01 ~]# ll /etc/kubernetes/manifests
total 16
-rw------- 1 root root 2192 Feb 23 10:05 etcd.yaml
-rw------- 1 root root 3309 Feb 23 10:05 kube-apiserver.yaml
-rw------- 1 root root 2811 Feb 23 10:36 kube-controller-manager.yaml
-rw------- 1 root root 1398 Feb 23 10:36 kube-scheduler.yaml
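A static Pod manifest is just an ordinary Pod spec dropped into staticPodPath. A minimal hedged example (the nginx image, the pod name, and the /tmp path are illustrative; on a real node the file would go under /etc/kubernetes/manifests):

```shell
# Sketch: a minimal static Pod manifest. A kubelet watching staticPodPath
# would create this Pod; deleting the file deletes the Pod.
cat > /tmp/static-web.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
EOF
cat /tmp/static-web.yaml
```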
8. Changing the Default NodePort Range
The default Kubernetes NodePort range is 30000-32767.
[root@master01 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
…content omitted
- --service-node-port-range=1000-50000 # added line
…content omitted