
015. Kubernetes binary deployment: kubelet on all nodes

1 Deploying kubelet

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs. On startup, kubelet automatically registers its node information with kube-apiserver, and its built-in cadvisor collects and monitors the node's resource usage. For security, this deployment disables kubelet's insecure HTTP port and authenticates and authorizes all requests, rejecting unauthorized access (such as requests from apiserver or heapster).

1.1 Installing kubelet

Tip: the k8smaster01 node has already downloaded the relevant binaries, so they can be distributed directly to the node hosts.

1.2 Distributing kubelet

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    scp kubernetes/server/bin/kubelet root@${all_ip}:/opt/k8s/bin/
    ssh root@${all_ip} "chmod +x /opt/k8s/bin/*"
  done

1.3 Creating the bootstrap kubeconfig

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"

    # create a token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${all_name} \
      --kubeconfig ~/.kube/config)

    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

    # set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig

    # set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${all_name}.kubeconfig
  done
Explanation:
  • What is written into the kubeconfig is a token; after bootstrapping completes, kube-controller-manager issues the client and server certificates for kubelet.
  • The token is valid for 1 day; once expired it can no longer be used to bootstrap a kubelet, and it will be cleaned up by kube-controller-manager's tokencleaner.
  • When kube-apiserver receives a kubelet bootstrap token, it sets the request's user to system:bootstrap:<Token ID> and the group to system:bootstrappers; a ClusterRoleBinding will later be created for this group.
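The user name kube-apiserver derives from a bootstrap token can be sketched as follows. The token value below is a made-up example for illustration only; real tokens come from `kubeadm token create` as shown above:

```shell
# Illustrative only: bootstrap tokens have the form <token-id>.<token-secret>;
# real values come from `kubeadm token create`.
BOOTSTRAP_TOKEN="abcdef.0123456789abcdef"

# kube-apiserver maps a request carrying this token to user
# system:bootstrap:<token-id> in group system:bootstrappers.
TOKEN_ID="${BOOTSTRAP_TOKEN%%.*}"
echo "user:  system:bootstrap:${TOKEN_ID}"
echo "group: system:bootstrappers"
```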
1.4 Viewing tokens

[root@k8smaster01 work]# kubeadm token list --kubeconfig ~/.kube/config		#list the tokens kubeadm created for each node
[root@k8smaster01 work]# kubectl get secrets -n kube-system | grep bootstrap-token	#list the Secret associated with each token

1.5 Distributing the bootstrap kubeconfig

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    scp kubelet-bootstrap-${all_name}.kubeconfig root@${all_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done

1.6 Creating the kubelet configuration file

Starting with v1.10, some kubelet parameters must be set in a configuration file, so creating a kubelet configuration file is recommended.
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# cat > kubelet-config.yaml.template <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##ALL_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##ALL_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
  - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

1.7 Distributing the kubelet configuration file

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_ip in ${ALL_IPS[@]}
  do
    echo ">>> ${all_ip}"
    sed -e "s/##ALL_IP##/${all_ip}/" kubelet-config.yaml.template > kubelet-config-${all_ip}.yaml.template
    scp kubelet-config-${all_ip}.yaml.template root@${all_ip}:/etc/kubernetes/kubelet-config.yaml
  done
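The per-node `##ALL_IP##` substitution done by the loop above can be checked locally. This sketch renders a two-line excerpt of the template with a sample IP:

```shell
# Minimal sketch of the ##ALL_IP## substitution from section 1.7,
# using a sample IP and a two-line excerpt of the template.
template='address: "##ALL_IP##"
healthzBindAddress: "##ALL_IP##"'
rendered=$(printf '%s\n' "$template" | sed -e "s/##ALL_IP##/172.24.8.71/")
printf '%s\n' "$rendered"
```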

1.8 Creating the kubelet systemd unit

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# cat > kubelet.service.template <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##ALL_NAME## \\
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
EOF
Explanation:
  • If --hostname-override is set, kube-proxy must be started with the same option, otherwise the Node may not be found;
  • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the user name and token in this file to send the TLS Bootstrapping request to kube-apiserver;
  • After Kubernetes approves the kubelet's CSR, it creates the certificate and private key files in the --cert-dir directory and then writes the --kubeconfig file;
  • --pod-infra-container-image avoids Red Hat's pod-infrastructure:latest image, which cannot reap containers' zombie processes.

1.9 Distributing the kubelet systemd unit

[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    sed -e "s/##ALL_NAME##/${all_name}/" kubelet.service.template > kubelet-${all_name}.service
    scp kubelet-${all_name}.service root@${all_name}:/etc/systemd/system/kubelet.service
  done

2 Startup and verification

2.1 Authorization

When kubelet starts, it checks whether the file given by --kubeconfig exists; if not, it uses the kubeconfig specified by --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver. When kube-apiserver receives the CSR, it authenticates the embedded token and, on success, sets the request's user to system:bootstrap:<Token ID> and group to system:bootstrappers; this process is called Bootstrap Token Auth. By default this user and group have no permission to create CSRs, so kubelet would fail to start. Create a clusterrolebinding as follows to bind the group system:bootstrappers to the clusterrole system:node-bootstrapper.
[root@k8smaster01 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

2.2 Starting kubelet

[root@k8smaster01 ~]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 ~]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh root@${all_name} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${all_name} "/usr/sbin/swapoff -a"
    ssh root@${all_name} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done
After starting, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver. Once the CSR is approved, kube-controller-manager creates the TLS client certificate and private key for kubelet, which are then used by the --kubeconfig file.
Note: kube-controller-manager must be configured with --cluster-signing-cert-file and --cluster-signing-key-file, otherwise it will not issue certificates and keys for TLS Bootstrap.
Tips:
  • The working directory must be created before starting the service;
  • Disable the swap partition, otherwise kubelet will fail to start.
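Because the configuration sets failSwapOn: true, a quick pre-flight check that swap is really off can look like the sketch below (it reads /proc/swaps, so it is Linux-only; the check itself is an addition, not part of the original deployment scripts):

```shell
# Pre-flight check: with failSwapOn: true, kubelet refuses to start while any
# swap is active. /proc/swaps has a header line, so entries past line 1 are
# active swap devices.
active_swaps=$(awk 'NR>1' /proc/swaps 2>/dev/null | wc -l)
echo "active swap entries: ${active_swaps}"
```

A non-zero count means `swapoff -a` still needs to be run before `systemctl restart kubelet`.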

2.3 Checking the kubelet service

[root@k8smaster01 ~]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 ~]# for all_name in ${ALL_NAMES[@]}
  do
    echo ">>> ${all_name}"
    ssh root@${all_name} "systemctl status kubelet"
  done
[root@k8snode01 ~]# kubectl get csr
[root@k8snode01 ~]# kubectl get nodes

3 Approving CSR requests

3.1 Automatically approving CSR requests

Create three ClusterRoleBindings, used respectively to automatically approve client certificates, renew client certificates, and renew server certificates.
[root@k8snode01 ~]# cd /opt/k8s/work
[root@k8snode01 work]# cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
[root@k8snode01 work]# kubectl apply -f csr-crb.yaml
Explanation:
  • auto-approve-csrs-for-group: automatically approves a node's first CSR; note that for the first CSR the requesting group is system:bootstrappers;
  • node-client-cert-renewal: automatically approves renewals of a node's expiring client certificate; the group in the issued certificate is system:nodes;
  • node-server-cert-renewal: automatically approves renewals of a node's expiring server certificate; the group in the issued certificate is system:nodes.

3.2 Checking kubelet status

[root@k8snode01 ~]# kubectl get csr | grep boot		#after a while (1-10 minutes) the CSRs of all three nodes are automatically approved
[root@k8snode01 ~]# kubectl get nodes			#all nodes are Ready
[root@k8snode01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
[root@k8snode01 ~]# ls -l /etc/kubernetes/cert/ | grep kubelet

3.3 Manually approving the server cert CSR

For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests; these must be approved manually.
[root@k8smaster01 ~]# kubectl get csr
[root@k8smaster01 ~]# kubectl certificate approve csr-2kmtj

[root@k8smaster01 ~]# ls -l /etc/kubernetes/cert/kubelet-*

4 The kubelet API

4.1 API endpoints provided by kubelet

[root@k8smaster01 ~]# sudo netstat -lnpt | grep kubelet			#check the ports kubelet listens on
Explanation:
  • 10248: the healthz HTTP service;
  • 10250: the HTTPS service; requests to this port require authentication and authorization (even requests to /healthz);
  • the read-only port 10255 is not opened;
  • since Kubernetes v1.10, the --cadvisor-port flag (default port 4194) has been removed, so the cAdvisor UI & API are no longer exposed.
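To pull just the port numbers out of that netstat output, a small awk sketch like the following works; the sample lines below are canned stand-ins for live output, with made-up PID and IP:

```shell
# Sample `netstat -lnpt` lines for kubelet (canned, not live output).
sample='tcp  0  0 172.24.8.71:10250  0.0.0.0:*  LISTEN  1234/kubelet
tcp  0  0 172.24.8.71:10248  0.0.0.0:*  LISTEN  1234/kubelet'

# Field 4 is the local address; split on ":" to isolate the port.
printf '%s\n' "$sample" | awk '{split($4, a, ":"); print a[2]}'
```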

4.2 kubelet API authentication and authorization

kubelet is configured with the following authentication parameters:
  • authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
  • authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS client-certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication.
It is also configured with the following authorization parameter: authorization.mode=Webhook, which enables RBAC authorization.
When kubelet receives a request, it authenticates the client certificate against clientCAFile, or checks whether the bearer token is valid. If neither passes, the request is rejected with Unauthorized.
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://172.24.8.71:10250/metrics
Unauthorized
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.24.8.71:10250/metrics
Unauthorized
Once a request passes authentication, kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has RBAC permission to operate on the requested resource.
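The authorization query can be pictured as a SubjectAccessReview body like the one below. The field values are written out by hand here for illustration; they mirror the Forbidden message shown in section 4.3, and kubelet constructs the real request internally:

```shell
# Illustrative SubjectAccessReview body; field values match the Forbidden
# message shown in section 4.3 but are hand-written here.
cat <<'EOF' > /tmp/sar-example.json
{
  "apiVersion": "authorization.k8s.io/v1",
  "kind": "SubjectAccessReview",
  "spec": {
    "user": "system:kube-controller-manager",
    "resourceAttributes": {
      "verb": "get",
      "resource": "nodes",
      "subresource": "metrics"
    }
  }
}
EOF
cat /tmp/sar-example.json
```

kube-apiserver answers such a review with allowed/denied according to the RBAC rules, which is exactly why the default kube-controller-manager certificate gets Forbidden below.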

4.3 Certificate authentication and authorization

[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.24.8.71:10250/metrics	#default permissions are insufficient
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.24.8.71:10250/metrics|head				#use admin, which has the highest privileges
Explanation: the values of --cacert, --cert, and --key must be file paths; for a relative path such as ./admin.pem the ./ must not be omitted, otherwise the request returns 401 Unauthorized.

4.4 Bearer token authentication and authorization

[root@k8smaster01 ~]# kubectl create sa kubelet-api-test
[root@k8smaster01 ~]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
[root@k8smaster01 ~]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8smaster01 ~]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8smaster01 ~]# echo ${TOKEN}
[root@k8smaster01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://172.24.8.71:10250/metrics|head
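The grep/awk token extraction used above can be sanity-checked offline against a canned `kubectl describe secret` output; the secret name and token value below are fake placeholders:

```shell
# Canned `kubectl describe secret` output (fake token) to show what the
# grep/awk pipeline above extracts.
describe_output='Name:  kubelet-api-test-token-abc12
Type:  kubernetes.io/service-account-token

token:      eyJhbGciOiJSUzI1NiJ9.fake-payload
ca.crt:     1346 bytes'

# Same extraction as above: take the line starting with "token", second field.
TOKEN=$(printf '%s\n' "$describe_output" | grep -E '^token' | awk '{print $2}')
echo "${TOKEN}"
```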

4.5 cadvisor and metrics

cadvisor is embedded in the kubelet binary; it collects resource usage (CPU, memory, disk, network) of the containers on its node. Browsing to https://172.24.8.71:10250/metrics and https://172.24.8.71:10250/metrics/cadvisor returns kubelet's and cadvisor's metrics respectively.
Note: kubelet-config.yaml sets authentication.anonymous.enabled to false, so anonymous access to the HTTPS service on 10250 is not allowed. To browse these endpoints, create and import the relevant certificates as described at https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/A.%E6%B5%8F%E8%A7%88%E5%99%A8%E8%AE%BF%E9%97%AEkube-apiserver%E5%AE%89%E5%85%A8%E7%AB%AF%E5%8F%A3.md, then access port 10250 as above.