
Offline Kubernetes Version Upgrade (Servers Without Internet Access); Extending kubeadm-Issued Certificates to 10 Years; Scheduled etcd Backups

I. Overview of upgrading Kubernetes

Kubernetes iterates very quickly, releasing a new version roughly every three months, and many new features land in each release. To keep pace with the community, the cluster needs to be upgraded; the community has standardized cluster upgrades on the kubeadm tool, and the procedure is straightforward.

1. Basic workflow for upgrading a Kubernetes cluster

First, the basic workflow for upgrading a Kubernetes cluster:

  • Upgrade the primary control-plane node: upgrade kube-apiserver, kube-controller-manager, kube-scheduler, etcd, and related components on the management node;
  • Upgrade the other control-plane nodes: if the control plane is deployed for high availability, all control-plane nodes must be upgraded as well;
  • Upgrade the worker nodes: upgrade the container runtime (e.g. Docker), kubelet, and kube-proxy on each worker node.

Upgrades generally fall into two categories: patch upgrades and minor-version upgrades. A patch upgrade moves between patch releases, e.g. 1.14.1 to 1.14.2, and patch releases may be skipped, e.g. 1.14.1 directly to 1.14.3. A minor-version upgrade moves between minor releases, e.g. 1.14.x to 1.15.x. This article upgrades 1.18.6 to 1.19.16 offline. Before upgrading, the following conditions must be met:

  • The current cluster version must be 1.14.x or later (such a cluster can be upgraded within 1.14.x or to 1.15.x, i.e. both patch and minor-version upgrades);
  • Swap must be disabled;
  • Back up your data: take an etcd snapshot and back up important directories such as /etc/kubernetes and /var/lib/kubelet;
  • Pods will be restarted during the upgrade, so make sure applications use the RollingUpdate strategy to avoid service disruption.

Important: you cannot skip minor versions when upgrading. For example:

  • 1.19.x → 1.20.y is allowed (where y > x)
  • 1.19.x → 1.21.y is NOT allowed (it skips a minor version)
  • 1.21.x → 1.21.y is also allowed (as long as y > x)

So if you need to move across several minor versions, you must upgrade step by step, one minor version at a time.

2. Why upgrade the cluster

  • New features
  • Bug fixes in the software
  • Security vulnerabilities

II. Preparing to upgrade Kubernetes 1.18.6 -> 1.19.16

1. Download the Kubernetes source for the target version

wget https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.19.16.tar.gz 

Extract the archive and change into the kubernetes-1.19.16 directory:

tar -zxvf v1.19.16.tar.gz && cd kubernetes-1.19.16

2. Build kubeadm

make WHAT=cmd/kubeadm GOFLAGS=-v

Check the built kubeadm:
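A minimal way to verify the build is shown below; the output path is the usual default for `make WHAT=cmd/...` builds, so adjust it if your build tree differs. The same check works for the kubelet and kubectl binaries built in the next steps.

```shell
# check_built_binary: print the version of a freshly built binary,
# or a hint if the build step has not produced it yet
check_built_binary() {
  bin="$1"
  if [ -x "$bin" ]; then
    "$bin" version -o short 2>/dev/null || "$bin" --version
  else
    echo "not built yet: $bin (run the make step first)"
  fi
}

# usual output path for `make WHAT=cmd/kubeadm`; should report v1.19.16 once built
check_built_binary _output/bin/kubeadm
```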

If you need to extend the certificate expiry, see: Extending kubeadm-Issued Certificates to 10 Years.

3. Build kubelet

GO111MODULE=on KUBE_GIT_TREE_STATE=clean KUBE_GIT_VERSION=v1.19.16 make kubelet GOFLAGS="-tags=nokmem"

Check the built kubelet:

4. Build kubectl

make WHAT=cmd/kubectl GOFLAGS=-v

Check the built kubectl:

5. Download the required images and push them to the private registry

Run kubeadm config images list to see the images required by this version of kubeadm:

[root@m-master126 kubernetes_bak]# kubeadm config images list
I0505 21:46:15.423822   22607 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.19
W0505 21:46:16.466441   22607 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.16
k8s.gcr.io/kube-controller-manager:v1.19.16
k8s.gcr.io/kube-scheduler:v1.19.16
k8s.gcr.io/kube-proxy:v1.19.16
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Note: the registry address and project were already specified when the cluster was installed, so simply push the images above to the designated project in your registry (ignore the k8s.gcr.io prefix in the output above); the detailed steps are not repeated here. If you need to change the registry address or project, edit the cluster configuration with the following command:

kubectl edit cm -n kube-system kubeadm-config
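The retag-and-push step glossed over above can be sketched as a dry run. `harbor.example.com:443/k8s` is a placeholder for your private registry and project; the script only prints the docker commands so they can be reviewed first, and for an offline cluster the pulls run on an internet-connected machine with `docker save`/`docker load` carrying the images across.

```shell
# rewrite a k8s.gcr.io image reference to point at a private registry project
rewrite_image() {
  echo "$2/${1#k8s.gcr.io/}"
}

REGISTRY="harbor.example.com:443/k8s"   # placeholder: your registry/project
IMAGES="kube-apiserver:v1.19.16 kube-controller-manager:v1.19.16 \
kube-scheduler:v1.19.16 kube-proxy:v1.19.16 pause:3.2 etcd:3.4.13-0 coredns:1.7.0"

# dry run: print the pull/tag/push commands for review
for img in $IMAGES; do
  src="k8s.gcr.io/$img"
  dst=$(rewrite_image "$src" "$REGISTRY")
  echo "docker pull $src"
  echo "docker tag $src $dst"
  echo "docker push $dst"
done
```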

6. Back up the cluster's etcd data before upgrading

For detailed steps, see: Scheduled etcd Backups.
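For reference, a minimal sketch of one such backup follows; the endpoint and certificate paths are the usual kubeadm defaults and may differ in your cluster, so the etcdctl command is printed for review rather than executed.

```shell
# build a timestamped snapshot filename
snapshot_name() {
  echo "etcd-snapshot-$(date +%Y%m%d-%H%M%S).db"
}

BACKUP_DIR=/var/backups/etcd          # assumed backup location
SNAP="$BACKUP_DIR/$(snapshot_name)"

# print the snapshot command; certificate paths are typical kubeadm defaults,
# verify them on your cluster before running
cat <<EOF
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save $SNAP
EOF
```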

III. Upgrade steps: Kubernetes 1.18.6 -> 1.19.16

1. Upgrade the primary control-plane node

1) Back up the primary control-plane node's data

Back up the /etc/kubernetes/ directory:

tar -zcvf etc_kubernetes_2022_0505.tar.gz /etc/kubernetes/

Back up the /var/lib/kubelet/ directory:

tar -zcvf var_lib_kubelet_2022_0505.tar.gz /var/lib/kubelet/

Back up the kubeadm, kubelet, and kubectl binaries:

cp /usr/local/bin/kubeadm /usr/local/bin/kubeadm.old # backup
cp /usr/local/bin/kubelet /usr/local/bin/kubelet.old
cp /usr/local/bin/kubectl /usr/local/bin/kubectl.old

2) Replace kubeadm on the primary control-plane node

Upload the newly built kubeadm binary to the /usr/local/bin/ directory.

3) View the upgrade plan. kubeadm can display the upgrade plan for the current cluster:

[root@m-master126 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.6
[upgrade/versions] kubeadm version: v1.19.16
W0505 21:07:13.750248   12476 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get "https://dl.k8s.io/release/stable.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0505 21:07:13.750948   12476 version.go:104] falling back to the local client version: v1.19.16
[upgrade/versions] Latest stable version: v1.19.16
[upgrade/versions] Latest stable version: v1.19.16
W0505 21:07:23.758371   12476 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.18.txt": Get "https://dl.k8s.io/release/stable-1.18.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0505 21:07:23.758400   12476 version.go:104] falling back to the local client version: v1.19.16
[upgrade/versions] Latest version in the v1.18 series: v1.19.16
[upgrade/versions] Latest version in the v1.18 series: v1.19.16

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     4 x v1.18.6   v1.19.16

Upgrade to the latest version in the v1.18 series:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.6   v1.19.16
kube-controller-manager   v1.18.6   v1.19.16
kube-scheduler            v1.18.6   v1.19.16
kube-proxy                v1.18.6   v1.19.16
CoreDNS                   1.6.9     1.7.0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.19.16

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

This command checks whether your cluster can be upgraded and fetches the target version. It also displays a table showing the version status of the component configs.

Note:

kubeadm upgrade also automatically renews the certificates that kubeadm manages on this node. To skip certificate renewal, pass the --certificate-renewal=false flag. For more information, see the certificate management guide.

4) Upgrade the control-plane components

kubeadm upgrade apply v1.19.16 --certificate-renewal=false --v=5

Note: this upgrade should not re-issue certificates, so the --certificate-renewal=false flag is added. Also make sure the images for the target cluster version have already been pushed to the designated project in the private registry.

[root@m-master126 kubernetes_bak]# kubeadm upgrade apply v1.19.16 --certificate-renewal=false --v=5
I0505 22:03:33.208373    8754 apply.go:113] [upgrade/apply] verifying health of cluster
I0505 22:03:33.208622    8754 apply.go:114] [upgrade/apply] retrieving configuration from cluster
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0505 22:03:33.239526    8754 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
I0505 22:03:33.312112    8754 common.go:168] running preflight checks
[preflight] Running pre-flight checks.
I0505 22:03:33.312151    8754 preflight.go:87] validating if there are any unsupported CoreDNS plugins in the Corefile
I0505 22:03:33.322765    8754 preflight.go:113] validating if migration can be done for the current CoreDNS release.
[upgrade] Running cluster health checks
I0505 22:03:33.328598    8754 health.go:158] Creating Job "upgrade-health-check" in the namespace "kube-system"
I0505 22:03:33.344471    8754 health.go:188] Job "upgrade-health-check" in the namespace "kube-system" is not yet complete, retrying
I0505 22:03:34.416940    8754 health.go:188] Job "upgrade-health-check" in the namespace "kube-system" is not yet complete, retrying
I0505 22:03:35.346623    8754 health.go:188] Job "upgrade-health-check" in the namespace "kube-system" is not yet complete, retrying
I0505 22:03:36.347622    8754 health.go:195] Job "upgrade-health-check" in the namespace "kube-system" completed
I0505 22:03:36.347655    8754 health.go:201] Deleting Job "upgrade-health-check" in the namespace "kube-system"
I0505 22:03:36.362883    8754 apply.go:121] [upgrade/apply] validating requested and actual version
I0505 22:03:36.362936    8754 apply.go:137] [upgrade/version] enforcing version skew policies
[upgrade/version] You have chosen to change the cluster version to "v1.19.16"
[upgrade/versions] Cluster version: v1.18.6
[upgrade/versions] kubeadm version: v1.19.16
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
I0505 22:03:38.077681    8754 checks.go:839] image exists: ******:443/**/kube-apiserver:v1.19.16
I0505 22:03:38.114688    8754 checks.go:839] image exists: ******:443/**/kube-controller-manager:v1.19.16
I0505 22:03:38.149041    8754 checks.go:839] image exists: ******:443/**/kube-scheduler:v1.19.16
I0505 22:03:38.182660    8754 checks.go:839] image exists: ******:443/**/kube-proxy:v1.19.16
I0505 22:03:38.217020    8754 checks.go:839] image exists: ******:443/**/pause:3.2
I0505 22:03:38.250097    8754 checks.go:839] image exists: ******:443/coredns/coredns:1.6.9
I0505 22:03:38.250131    8754 apply.go:163] [upgrade/apply] performing upgrade
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.16"...
Static pod: kube-apiserver-m-master126 hash: cf2e8b11fd6311e4b4a75188bcafcbf2
Static pod: kube-controller-manager-m-master126 hash: f21a611aa6f96a5540e12f8e074e3035
Static pod: kube-scheduler-m-master126 hash: 56a16186baf8b51953cc2402e6465243
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests248879628"
I0505 22:03:38.261365    8754 manifests.go:42] [control-plane] creating static Pod files
I0505 22:03:38.261379    8754 manifests.go:96] [control-plane] getting StaticPodSpecs
I0505 22:03:38.262157    8754 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0505 22:03:38.262169    8754 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0505 22:03:38.262174    8754 manifests.go:109] [control-plane] adding volume "etcd-certs-0" for component "kube-apiserver"
I0505 22:03:38.262178    8754 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0505 22:03:38.266240    8754 manifests.go:135] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests248879628/kube-apiserver.yaml"
I0505 22:03:38.266254    8754 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0505 22:03:38.266260    8754 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0505 22:03:38.266264    8754 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0505 22:03:38.266268    8754 manifests.go:109] [control-plane] adding volume "host-time" for component "kube-controller-manager"
I0505 22:03:38.266273    8754 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0505 22:03:38.266315    8754 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0505 22:03:38.267104    8754 manifests.go:135] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests248879628/kube-controller-manager.yaml"
I0505 22:03:38.267117    8754 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0505 22:03:38.267567    8754 manifests.go:135] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests248879628/kube-scheduler.yaml"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-05-05-22-03-38/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-m-master126 hash: cf2e8b11fd6311e4b4a75188bcafcbf2
Static pod: kube-apiserver-m-master126 hash: cf2e8b11fd6311e4b4a75188bcafcbf2
Static pod: kube-apiserver-m-master126 hash: cf2e8b11fd6311e4b4a75188bcafcbf2
Static pod: kube-apiserver-m-master126 hash: 0bec5f4dfd3442da288036305e19c787
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-05-05-22-03-38/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-m-master126 hash: f21a611aa6f96a5540e12f8e074e3035
Static pod: kube-controller-manager-m-master126 hash: 6ccd91a4d27bfea3dffe8f2c6b0c3088
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-05-05-22-03-38/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-m-master126 hash: 56a16186baf8b51953cc2402e6465243
Static pod: kube-scheduler-m-master126 hash: 2ac1915cf7a77bd5c2616a0d509132f6
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
I0505 22:03:48.816976    8754 apply.go:169] [upgrade/postupgrade] upgrading RBAC rules and addons
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0505 22:03:48.843028    8754 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "m-master126" as an annotation
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
......

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.16". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

After the upgrade, inspect the Pods under kube-system; the image versions have been replaced. kube-apiserver is used as the example below.

Also, since kubeadm was instructed not to re-issue certificates when upgrading the control-plane node, verify with openssl whether the certificate was re-issued:

cd /etc/kubernetes/pki/ && openssl x509 -text -noout -in apiserver.crt

As you can see, the certificate was not re-issued.
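To look at just the validity window, `openssl x509 -noout -dates` prints the notBefore/notAfter fields. The snippet below generates a throwaway self-signed certificate as a stand-in, since the real apiserver.crt only exists on the control-plane node; there, point -in at /etc/kubernetes/pki/apiserver.crt instead.

```shell
# generate a throwaway 10-year self-signed cert as a stand-in for apiserver.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=kube-apiserver" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# print only the validity window (notBefore / notAfter)
openssl x509 -noout -dates -in /tmp/demo.crt
```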

5) Replace kubelet on the primary control-plane node

Upload the built kubelet and kubectl binaries to the /usr/local/bin/ directory. Stop the kubelet service before replacing its binary, then start the new kubelet with the following commands:

systemctl daemon-reload
systemctl restart kubelet
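Putting the whole replacement together, here is a sketch assuming kubelet runs as a systemd service and the new binary was uploaded to /tmp/kubelet (a hypothetical location); the commands are printed for review rather than executed.

```shell
# print the kubelet replacement steps (systemd-managed kubelet assumed)
kubelet_replace_cmds() {
  new_bin="$1"
  cat <<EOF
systemctl stop kubelet
cp /usr/local/bin/kubelet /usr/local/bin/kubelet.old
cp $new_bin /usr/local/bin/kubelet
chmod +x /usr/local/bin/kubelet
systemctl daemon-reload
systemctl restart kubelet
EOF
}

kubelet_replace_cmds /tmp/kubelet   # hypothetical upload location
```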

6) Replace kubectl on the primary control-plane node

Upload the built kubectl binary to the /usr/local/bin/ directory, then verify the control-plane upgrade with the kubectl version command.

With that, the master node upgrade is complete.

Note: this cluster has a single control-plane node. If the control plane was deployed in high-availability mode, the other control-plane nodes must also be upgraded after the primary one.

2. Upgrade the worker nodes
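A hedged sketch of the per-node procedure, following the referenced kubeadm documentation; node names and binary paths are placeholders, and the commands are printed for review rather than executed.

```shell
# print the per-node worker upgrade steps (node name is a placeholder)
worker_upgrade_cmds() {
  node="$1"
  cat <<EOF
# on a control-plane node: evict workloads from the worker
kubectl drain $node --ignore-daemonsets
# on the worker itself: upgrade the kubelet config, then replace the binaries
kubeadm upgrade node
systemctl stop kubelet
cp /path/to/new/kubelet /usr/local/bin/kubelet
systemctl daemon-reload && systemctl restart kubelet
# back on a control-plane node: allow scheduling again
kubectl uncordon $node
EOF
}

worker_upgrade_cmds m-worker127   # hypothetical node name
```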

 

Reference: https://v1-20.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/