Rolling Updates in Kubernetes
Rolling updates
A rolling update replaces only a small batch of replicas at a time; once those succeed, more replicas are updated, until eventually every replica has been replaced. The benefit is zero downtime: throughout the process some replicas are always running, so service continuity is preserved.
Below we deploy a three-replica application with the initial image httpd:2.2, then update it to httpd:2.4.
The httpd:2.2 configuration file:
[root@master music]# cat httpd.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-deploy
  labels:
    run: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      run: apache
  template:
    metadata:
      labels:
        run: apache
    spec:
      containers:
      - name: httpd
        image: httpd:2.2
        ports:
        - containerPort: 80
Check the pods:
[root@master music]# kubectl get pod -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
http-deploy-849cf97446-6k8jj   1/1     Running   0          2m28s   10.244.1.54   node1   <none>           <none>
http-deploy-849cf97446-l987p   1/1     Running   0          2m28s   10.244.1.55   node1   <none>           <none>
http-deploy-849cf97446-mtsqf   1/1     Running   0          2m28s   10.244.2.42   node2   <none>           <none>
Now check the current version:
[root@master music]# kubectl get replicasets.apps -o wide
NAME                     DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES      SELECTOR
http-deploy-849cf97446   3         3         3       10m   httpd        httpd:2.2   pod-template-hash=849cf97446,run=apache
Now let's do the rolling update: change the image in httpd.yml from httpd:2.2 to httpd:2.4, then apply the file again.
Now look again:
[root@master music]# kubectl get replicasets.apps -o wide
NAME                     DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES      SELECTOR
http-deploy-77c8788b9b   3         3         3       39s   httpd        httpd:2.4   pod-template-hash=77c8788b9b,run=apache
http-deploy-849cf97446   0         0         0       13m   httpd        httpd:2.2   pod-template-hash=849cf97446,run=apache
The image changed from 2.2 to 2.4: a new ReplicaSet was created, and new pods were started from the 2.4 image.
[root@master music]# kubectl describe deployment
Name:                   http-deploy
Namespace:              default
CreationTimestamp:      Mon, 20 Jul 2020 20:08:32 +0800
Labels:                 run=apache
Annotations:            deployment.kubernetes.io/revision: 2
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"l
Selector:               run=apache
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=apache
  Containers:
   httpd:
    Image:        httpd:2.4
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   http-deploy-77c8788b9b (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  17m    deployment-controller  Scaled up replica set http-deploy-849cf974
  Normal  ScalingReplicaSet  5m9s   deployment-controller  Scaled up replica set http-deploy-77c8788b
  Normal  ScalingReplicaSet  4m52s  deployment-controller  Scaled down replica set http-deploy-849cf9
  Normal  ScalingReplicaSet  4m52s  deployment-controller  Scaled up replica set http-deploy-77c8788b
  Normal  ScalingReplicaSet  4m35s  deployment-controller  Scaled down replica set http-deploy-849cf9
  Normal  ScalingReplicaSet  4m35s  deployment-controller  Scaled up replica set http-deploy-77c8788b
  Normal  ScalingReplicaSet  4m34s  deployment-controller  Scaled down replica set http-deploy-849cf9
Only a few pods are replaced at each step, and the batch size is configurable: Kubernetes provides two parameters, maxSurge and maxUnavailable, to fine-tune how many pods are replaced at a time.
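These two parameters live under the Deployment's update strategy. A minimal sketch (the values below are illustrative; they are not what the httpd.yml above uses, which falls back to the 25%/25% defaults shown in the describe output):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 pod above the desired count during the update
      maxUnavailable: 0    # never drop below 3 ready pods
```

Both fields also accept percentages: maxSurge rounds up and maxUnavailable rounds down, so with 3 replicas the default 25%/25% works out to a surge of 1 pod and 0 unavailable pods, which matches the one-by-one replacement seen in the events above.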
Rollback
Every time you update an application with kubectl apply, Kubernetes records the current configuration and saves it as a revision, so you can later roll back to any specific revision.
All it takes is appending one flag when you run the command: --record.
Below we create three config files that differ only in image version: httpd:2.4.37, httpd:2.4.38, and httpd:2.4.39.
[root@master music]# cat httpd.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-deploy
  labels:
    run: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      run: apache
  template:
    metadata:
      labels:
        run: apache
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.37   ## the other two files are identical except for this image version
        ports:
        - containerPort: 80
Apply them:
[root@master music]# kubectl apply -f httpd.yml --record
deployment.apps/http-deploy created
[root@master music]# kubectl apply -f httpd1.yml --record
deployment.apps/http-deploy configured
[root@master music]# kubectl apply -f httpd2.yml --record
deployment.apps/http-deploy configured
Each update can be seen by inspecting the deployment:
[root@master music]# kubectl get deployments.apps -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR
http-deploy   3/3     3            3           5m14s   httpd        httpd:2.4.39   run=apache
The deployment has been updated from 2.4.37 to 2.4.39.
The --record flag saves the command that triggered each update into the revision record, so we can tell which config file each revision corresponds to. View the revision history with:

kubectl rollout history deployment
[root@master music]# kubectl rollout history deployment
deployment.apps/http-deploy
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=httpd.yml --record=true
2         kubectl apply --filename=httpd1.yml --record=true
3         kubectl apply --filename=httpd2.yml --record=true
To go back to a particular version, say the original 2.4.37, run:
[root@master music]# kubectl rollout history deployment   ## check the history first
deployment.apps/http-deploy
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=httpd.yml --record=true
2         kubectl apply --filename=httpd1.yml --record=true
3         kubectl apply --filename=httpd2.yml --record=true
[root@master music]# kubectl get deployments.apps -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
http-deploy   3/3     3            3           21m   httpd        httpd:2.4.39   run=apache
[root@master music]# kubectl rollout undo deployment --to-revision=1
deployment.apps/http-deploy rolled back
[root@master music]# kubectl get deployments.apps -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
http-deploy   3/3     3            3           22m   httpd        httpd:2.4.37   run=apache
We are back on the version we specified, the original one. The revision history changes accordingly:
[root@master music]# kubectl rollout history deployment
deployment.apps/http-deploy
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=httpd1.yml --record=true
3         kubectl apply --filename=httpd2.yml --record=true
4         kubectl apply --filename=httpd.yml --record=true
The former revision 1 has become revision 4.
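How many old revisions are kept around for rollback is itself configurable. A hedged sketch: revisionHistoryLimit is a standard Deployment spec field, and 10 shown here is its default value:

```yaml
spec:
  revisionHistoryLimit: 10   # number of old ReplicaSets retained for rollback (default 10)
```

Setting it lower saves etcd space but limits how far back kubectl rollout undo can reach; setting it to 0 disables rollback entirely.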
Health Check
Strong self-healing is an important feature of container orchestration engines like Kubernetes. The default self-healing behavior is to automatically restart containers that fail. Beyond that, users can use the liveness and readiness probe mechanisms to set up finer-grained health checks, which enable:
1. Zero-downtime deployments
2. Avoiding the rollout of broken images
3. Safer rolling updates
The default health check
Let's simulate a container failure. The pod config:
[root@master health]# cat health.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: healthcheck
  name: healthcheck
spec:
  restartPolicy: OnFailure
  containers:
  - name: healthcheck
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 1
The pod's restartPolicy is set to OnFailure; the default is Always.
sleep 10; exit 1 simulates the container failing 10 seconds after startup.
Create the pod, named healthcheck, and check it:
[root@master health]# kubectl get pods
NAME          READY   STATUS             RESTARTS   AGE
healthcheck   0/1     CrashLoopBackOff   6          7m37s
As shown, the container has already been restarted 6 times.
Liveness probes
A liveness probe lets you define a custom condition for judging whether a container is healthy. If the probe fails, Kubernetes restarts the container.
Example:
[root@master health]# cat liveness.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
After startup, the process first creates the file /tmp/healthy and deletes it 30 seconds later. The probe treats the container as healthy while the file exists; once it is gone, the container is considered faulty.
You can watch this play out in the pod's events:
kubectl describe pod liveness
[root@master health]# kubectl describe pod liveness
Name:         liveness
Namespace:    default
Priority:     0
Node:         node2/192.168.172.136
Start Time:   Mon, 20 Jul 2020 22:01:31 +0800
Labels:       test=liveness
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness"},"name":"liveness","namespace":"default"},"spec":...
Status:       Running
IP:           10.244.2.50
IPs:
  IP:  10.244.2.50
Containers:
  liveness:
    Container ID:  docker://5a535ca4965f649b90161b72521c4bc75c52097f7a6f0f816dee991a0000156e
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy;sleep 30;rm -rf /tmp/healthy;sleep 600
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Mon, 20 Jul 2020 22:10:13 +0800
      Finished:     Mon, 20 Jul 2020 22:11:27 +0800
    Ready:          False
    Restart Count:  6
    Liveness:       exec [cat /tmp/healthy] delay=10s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ptz8b (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-ptz8b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-ptz8b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  12m                   default-scheduler  Successfully assigned default/liveness to node2
  Normal   Pulled     9m43s (x3 over 12m)   kubelet, node2     Successfully pulled image "busybox"
  Normal   Created    9m43s (x3 over 12m)   kubelet, node2     Created container liveness
  Normal   Started    9m43s (x3 over 12m)   kubelet, node2     Started container liveness
  Normal   Killing    8m58s (x3 over 11m)   kubelet, node2     Container liveness failed liveness probe, will be restarted
  Normal   Pulling    8m28s (x4 over 12m)   kubelet, node2     Pulling image "busybox"
  Warning  Unhealthy  7m48s (x10 over 12m)  kubelet, node2     Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Warning  BackOff    2m50s (x4 over 3m3s)  kubelet, node2     Back-off restarting failed container
Right after creation, before the probe starts failing, the pod is still healthy:

[root@master health]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
liveness   1/1     Running   0          27s
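The readiness mechanism mentioned earlier uses the same probe syntax, but with a different consequence: a failed readiness probe does not restart the container, it removes the pod from service endpoints so no traffic reaches it. A minimal sketch reusing this example's file check (an assumed illustration, not part of the original liveness.yml):

```yaml
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 10
      periodSeconds: 5
```

Liveness and readiness probes can be used together on the same container: liveness decides when to restart, readiness decides when to serve traffic.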