Chapter 6: Deploying the Node (Compute) Services
1. Deploying the kubelet
1.1 Cluster planning
Hostname    Role      IP
HDSS7-21    kubelet   10.4.7.21
HDSS7-22    kubelet   10.4.7.22
Note: the deployment below uses 10.4.7.21 as the example; the 10.4.7.22 node is set up the same way.
1.2 Issue the kubelet certificate
Certificate issuance is performed on 10.4.7.200.
[root@hdss7-200 ~]# cd /opt/certs/
Note: add the IPs of every server that might ever run a kubelet. If a node with an IP that is not in this list has to be added later, the certificate must be re-issued and rolled out to replace the old one on a schedule, so plan ahead and try to avoid adding new node IPs afterwards.
[root@hdss7-200 certs]# vim kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "10.4.7.10",
        "10.4.7.21",
        "10.4.7.22",
        "10.4.7.23",
        "10.4.7.24",
        "10.4.7.25",
        "10.4.7.26",
        "10.4.7.27",
        "10.4.7.28"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
Generate the certificate:
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
Note that the private key file's permissions are 600:
[root@hdss7-200 certs]# ll kubelet*
-rw-r--r-- 1 root root 1115 Jun 10 00:04 kubelet.csr
-rw-r--r-- 1 root root  452 Jun 10 00:03 kubelet-csr.json
-rw------- 1 root root 1675 Jun 10 00:04 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Jun 10 00:04 kubelet.pem
Distribute the certificate:
[root@hdss7-200 certs]# scp kubelet.pem kubelet-key.pem hdss7-21:/opt/kubernetes/server/bin/certs/
[root@hdss7-200 certs]# scp kubelet.pem kubelet-key.pem hdss7-22:/opt/kubernetes/server/bin/certs/
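As an optional sanity check (not part of the original steps; it assumes openssl is available on hdss7-200), confirm that the issued certificate actually contains all the planned node IPs in its Subject Alternative Name field before distributing it:
[root@hdss7-200 certs]# openssl x509 -in kubelet.pem -noout -text | grep -A 1 'Subject Alternative Name'
Every IP listed in the hosts array of kubelet-csr.json should appear in the output.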
1.3 Create the kubelet configuration
Performed on the 10.4.7.21 and 10.4.7.22 servers.
1.3.1 set-cluster: define the cluster to connect to; more than one cluster can be defined (this embeds the base64-encoded ca.pem certificate into /opt/kubernetes/conf/kubelet.kubeconfig)
Note: 10.4.7.10 is the apiserver VIP. The nginx we deployed earlier on 10.4.7.11/21 proxies the apiserver cluster on 10.4.7.21/22, and the keepalived VIP we configured there is 10.4.7.10.
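Before generating the kubeconfig you can optionally confirm from the node that the VIP and port are reachable (this check is not in the original steps):
[root@hdss7-21 ~]# curl -k https://10.4.7.10:7443/version
Depending on whether anonymous access is allowed, the response is either the version JSON or a 401/403 error; receiving any HTTP response at all shows that nginx/keepalived are forwarding traffic to the apiservers.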
[root@hdss7-21 ~]# cd /opt/kubernetes/conf
[root@hdss7-21 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
  --embed-certs=true \
  --server=https://10.4.7.10:7443 \
  --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
Cluster "myk8s" set.
[root@hdss7-21 conf]# ll /opt/kubernetes/conf/
total 8
-rw-r--r-- 1 root root 2223 Jun  8 22:00 audit.yaml
-rw------- 1 root root 1986 Jun 10 00:14 kubelet.kubeconfig
1.3.2 set-credentials: create the user credentials, i.e. the client certificate the kubelet uses to authenticate; more than one credential can be created (this embeds the base64-encoded client.pem certificate and client-key.pem private key into kubelet.kubeconfig)
[root@hdss7-21 conf]# kubectl config set-credentials k8s-node \
  --client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
  --client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
  --embed-certs=true \
  --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
User "k8s-node" set.
1.3.3 set-context: create the context, i.e. bind the user account to the cluster
[root@hdss7-21 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
Context "myk8s-context" created.
1.3.4 use-context: set which context is currently in use
[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig
Switched to context "myk8s-context".
Copy this config to 10.4.7.22 so that the four steps above do not need to be repeated on 22:
[root@hdss7-21 conf]# scp /opt/kubernetes/conf/kubelet.kubeconfig hdss7-22:/opt/kubernetes/conf/
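As an optional check (not in the original steps), inspect the generated kubeconfig and confirm the cluster, user, and current context look right; the embedded certificate data is shown as DATA+OMITTED/REDACTED:
[root@hdss7-21 conf]# kubectl config view --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig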
1.4 Authorize the k8s-node user
This step only needs to be performed on one master node (10.4.7.21).
Bind the k8s-node user to the cluster role system:node so that k8s-node has the permissions of a compute node.
[root@hdss7-21 conf]# vim /opt/kubernetes/conf/k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
[root@hdss7-21 conf]# kubectl create -f /opt/kubernetes/conf/k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
The resource has now been created (it is persisted in etcd). Check it:
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node
NAME AGE
k8s-node 51s
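Optionally (not in the original steps), you can ask the apiserver whether the binding took effect. The exact verbs granted depend on the system:node role shipped with your Kubernetes version, but reading nodes is normally allowed, so the following should print yes:
[root@hdss7-21 conf]# kubectl auth can-i get nodes --as k8s-node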
Note: check that port 7443 is reachable. This is critical: if port 7443 cannot be connected to, the nodes will not be able to register with the master.
~]# telnet 10.4.7.10 7443
Trying 10.4.7.10...
Connected to 10.4.7.10.
Escape character is '^]'.
^]
telnet> q
To delete the resource, the command is:
[root@hdss7-21 conf]# kubectl delete -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io "k8s-node" deleted
View the resource definition:
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2021-06-10T13:51:06Z"
  name: k8s-node
  resourceVersion: "12725"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/k8s-node
  uid: e70f91af-c9f2-11eb-aaf3-000c29e396b1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
1.5 Prepare the pause base image
The kubelet needs a base image when starting containers: it initializes the pod's network namespace (and the other shared namespaces), which is what allows the pod's containers to be started.
Push the pause image into the Harbor private registry. This is done only on 10.4.7.200; check beforehand that Harbor and Docker are running normally.
Download the image:
[root@hdss7-200 ~]# docker image pull kubernetes/pause
Tag the image:
[root@hdss7-200 ~]# docker image tag kubernetes/pause:latest harbor.od.com/public/pause:latest
Log in to Harbor:
[root@hdss7-200 ~]# docker login -u admin harbor.od.com
Push the pause image to the Harbor private registry:
[root@hdss7-200 ~]# docker image push harbor.od.com/public/pause:latest
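Optional check (not in the original steps; it assumes the nodes resolve harbor.od.com and trust the registry): pull the image from one of the kubelet nodes to confirm they can reach Harbor, since the kubelet will do exactly this when it creates pods:
[root@hdss7-21 ~]# docker image pull harbor.od.com/public/pause:latest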
1.6 Create the kubelet startup script
Create the startup script on each node and start the kubelet. Performed on 10.4.7.21/22; 21 is used as the example.
On 22, change the --hostname-override value (and use the matching 7-22 suffix in the supervisor program name in section 1.8).
[root@hdss7-21 ~]# vim /opt/kubernetes/server/bin/kubelet-startup.sh
#!/bin/sh
WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit
/opt/kubernetes/server/bin/kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./certs/ca.pem \
  --tls-cert-file ./certs/kubelet.pem \
  --tls-private-key-file ./certs/kubelet-key.pem \
  --hostname-override hdss7-21.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ../../conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.od.com/public/pause:latest \
  --root-dir /data/kubelet
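A common pitfall here (not called out in the original text): the --cgroup-driver value must match the cgroup driver Docker is using, otherwise the kubelet refuses to start. Check Docker's driver with:
[root@hdss7-21 ~]# docker info 2>/dev/null | grep -i 'cgroup driver'
If it reports cgroupfs instead of systemd, either set "exec-opts": ["native.cgroupdriver=systemd"] in Docker's daemon.json or change the flag above to match.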
1.7 Add execute permission and create the directories
[root@hdss7-21 ~]# chmod +x /opt/kubernetes/server/bin/kubelet-startup.sh
[root@hdss7-21 ~]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
1.8 Add the supervisor configuration
[root@hdss7-21 ~]# vim /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=5
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
1.9 Start the service and check it
[root@hdss7-21 ~]# supervisorctl update
kube-kubelet-7-21: added process group
[root@hdss7-21 ~]# supervisorctl status
etcd-server-7-21 RUNNING pid 1172, uptime 1:06:46
kube-apiserver-7-21 RUNNING pid 1183, uptime 1:06:46
kube-controller-manager-7-21 RUNNING pid 1167, uptime 1:06:46
kube-kubelet-7-21 RUNNING pid 2280, uptime 0:01:44
kube-scheduler-7-21 RUNNING pid 1169, uptime 1:06:46
[root@hdss7-21 ~]# tail -100f /data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
I0713 21:44:08.453953 2265 kubelet_node_status.go:75] Successfully registered node hdss7-21.host.com
I0713 21:44:08.509328 2265 cpu_manager.go:155] [cpumanager] starting with none policy
I0713 21:44:08.509382 2265 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0713 21:44:08.509441 2265 policy_none.go:42] [cpumanager] none policy: Start
W0713 21:44:08.644794 2265 manager.go:540] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
I0713 21:44:08.878478 2265 reconciler.go:154] Reconciler: start to sync state
Output like the above indicates the kubelet started correctly.
Check whether the nodes have joined the cluster:
[root@hdss7-21 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready <none> 13s v1.14.10
hdss7-22.host.com NotReady <none> 0s v1.14.10
Be patient; it takes a little while for the nodes to become Ready:
[root@hdss7-21 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready <none> 55m v1.14.10
hdss7-22.host.com Ready <none> 54m v1.14.10
1.10 Set the node roles
[root@hdss7-21 ~]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
node/hdss7-21.host.com labeled
[root@hdss7-21 ~]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
node/hdss7-21.host.com labeled
[root@hdss7-21 ~]# kubectl label node hdss7-22.host.com node-role.kubernetes.io/master=
node/hdss7-22.host.com labeled
[root@hdss7-21 ~]# kubectl label node hdss7-22.host.com node-role.kubernetes.io/node=
node/hdss7-22.host.com labeled
[root@hdss7-21 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready master,node 57m v1.14.10
hdss7-22.host.com Ready master,node 57m v1.14.10
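Note that these node-role.kubernetes.io/* labels only affect what kubectl shows in the ROLES column; by themselves they do not change scheduling. You can confirm the labels with:
[root@hdss7-21 ~]# kubectl get node hdss7-21.host.com --show-labels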
1.11 Deploy the other node
Perform the same steps on 10.4.7.22.
1.12 Troubleshooting
On the 10.4.7.21 master node (VIP 10.4.7.10), kubectl get node returns no resources, as shown below:
[root@hdss7-21 ~]# kubectl get node
No resources found.
Check the kubelet log:
[root@hdss7-21 ~]# tail -100f /data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
Case 1:
failed to ensure node lease exists connect: no route to host
Cause: the IP 10.4.7.10 does not exist at all, or cannot be reached. Ping it to check connectivity, bring the virtual IP back up, and then re-run step 1.2 of this chapter.
Case 2:
E0611 20:41:55.908234 1414 kubelet.go:2246] node "hdss7-22.host.com" not found
E0611 20:41:56.008667 1414 kubelet.go:2246] node "hdss7-22.host.com" not found
This error can be ignored; it shows up while the node is still registering itself with the apiserver.
Case 3:
E0611 20:41:55.838167 1414 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://10.4.7.10:7443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 10.4.7.10:7443: connect: connection refused
This means the server was found but the connection was refused: nothing is listening on that port. Test the port's connectivity and you can see it is refused:
[root@hdss7-21 ~]# telnet 10.4.7.10 7443
Trying 10.4.7.10...
telnet: connect to address 10.4.7.10: Connection refused
Check that the virtual IP is configured correctly and that step 1.2 was carried out correctly; telnet locally to confirm that port 7443 is actually listening. Once it is up, restart the kube-kubelet-7-21 and kube-kubelet-7-22 services.
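A concrete way to run those checks (a sketch, not from the original text; it assumes keepalived and nginx run on the proxy hosts described earlier and that supervisor manages the kubelet):
On the host that should hold the VIP:
~]# ip addr show | grep 10.4.7.10
~]# netstat -lntp | grep 7443
Then restart the kubelet on each node:
[root@hdss7-21 ~]# supervisorctl restart kube-kubelet-7-21
[root@hdss7-22 ~]# supervisorctl restart kube-kubelet-7-22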
A healthy log looks like this:
I0611 21:06:18.917499 9153 kubelet_node_status.go:72] Attempting to register node hdss7-22.host.com
I0611 21:06:18.947122 9153 kubelet_node_status.go:75] Successfully registered node hdss7-22.host.com
I0611 21:06:18.989477 9153 kubelet.go:1825] skipping pod synchronization - container runtime status check may not have completed yet.
I0611 21:06:19.015529 9153 cpu_manager.go:155] [cpumanager] starting with none policy
I0611 21:06:19.015565 9153 cpu_manager.go:156] [cpumanager] reconciling every 10s
2. Deploying kube-proxy