
6. Kubernetes Resource Objects: Pod Controller Basics

I. Overview of Pod controllers

1. What is a Pod controller?

After a standalone (unmanaged) Pod object is bound to a target worker node by the scheduler, the kubelet on that node is responsible for monitoring the liveness of its containers: if a container's main process crashes, the kubelet automatically restarts the container. However, the kubelet has no way of detecting container failures that do not crash the main process; for those it relies on a liveness probe defined by the user on the Pod object. But what happens when a Pod object is accidentally deleted, or when the worker node itself fails? The kubelet is the Kubernetes node agent and runs one instance on every worker node, so when a worker node fails, its kubelet becomes unavailable as well; the health of the Pods on that node can then no longer be guaranteed, and the kubelet cannot restart them. In such scenarios, Pod availability generally has to be ensured by a Pod controller running outside the worker node. In fact, recovering an accidentally deleted Pod also depends on its controller.

Pod controllers are provided by the kube-controller-manager component on the master. Common controllers of this kind include ReplicationController, ReplicaSet, Deployment, DaemonSet, StatefulSet, Job and CronJob, each managing Pod objects in its own way. In practice, Pod objects are usually managed through a specific object of one of these controller types, including their creation, deletion and rescheduling.

Among the master components, the API Server is only responsible for storing resources in etcd and notifying the relevant clients (kubelet, kube-scheduler, kube-proxy, kube-controller-manager, and so on) of changes. When kube-scheduler sees a Pod object that is not yet bound to a node, it selects a suitable worker node for it. But one of the core functions of Kubernetes is to keep the current state (status) of every resource object matching the state the user desires (spec), continuously reconciling the current state toward the desired state in order to manage containerized applications, and that is the job of kube-controller-manager. kube-controller-manager is a single standalone daemon, yet it contains many controller types with different functions, each responsible for its own kind of reconciliation.

2. Common Pod controllers

A Pod controller is an abstraction in Kubernetes: a higher-level object used to deploy and manage Pods. Commonly used workload controllers:
•Deployment: stateless application deployment
•StatefulSet: stateful application deployment
•DaemonSet: ensures every Node runs one copy of the same Pod
•Job: one-off tasks
•CronJob: scheduled tasks

3. What controllers do

•Manage Pod objects
•Associate with Pods through labels (see the example right after this list)
•Implement day-to-day Pod operations such as rolling updates, scaling, replica management and maintaining Pod state.
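
For example, a controller finds the Pods it owns through a label selector; assuming a Deployment whose selector matches app=nginx (as in the manifests later in this article), the same selector can be used to list the Pods it manages:

kubectl get pods -l app=nginx   #list the Pods associated with the controller via the app=nginx label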

II. Deployment

1. Introduction to Deployment

Deployment (abbreviated deploy) is another implementation of a Kubernetes controller. It is built on top of the ReplicaSet controller and provides declarative updates for Pod and ReplicaSet resources. By comparison, Pod and ReplicaSet are lower-level resources that are rarely used directly.

The Deployment controller offers a declarative way to update Pods and ReplicaSets: you describe a desired state in the Deployment object, and the controller changes the actual state to the desired state at a controlled rate. Defining a Deployment creates a new ReplicaSet, and the ReplicaSet in turn creates the Pods; deleting the Deployment also deletes the ReplicaSet and Pod resources that belong to it.

Like other controllers, the main responsibility of a Deployment is to keep its Pods healthy and running. Most of its functionality is implemented by delegating to the ReplicaSet controller, on top of which it adds the following features:

·Event and status inspection: when needed, you can view the detailed progress and status of a Deployment upgrade.

·Rollback: if problems are found after an upgrade, the application can be rolled back to the previous version or to any user-specified version in the revision history.

·Revision history: every change to the Deployment object is recorded so that later rollbacks can use it.

·Pause and resume: every upgrade can be paused and resumed at any time.

·Multiple automatic update strategies: Recreate, which stops and deletes all old Pods before replacing them with the new version; and RollingUpdate, which gradually replaces old Pods with the new version (a minimal sketch follows this list).
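
For reference, a minimal sketch of selecting between the two strategies in a Deployment spec; strategy.type is the standard apps/v1 field, and RollingUpdate (the default) is shown with its tuning fields in section II.4 below:

spec:
  strategy:
    type: Recreate   #delete all old Pods first, then create the new ones (implies brief downtime)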

A Deployment can be used to manage blue-green style releases. It is built on top of ReplicaSets: one Deployment can manage multiple ReplicaSets, and while several ReplicaSets may exist, only one of them is actually running the workload. When you update to a new version, a new ReplicaSet is created and the old one is replaced.

Say ReplicaSet v1 controls three Pods: one is deleted and a replacement is created under ReplicaSet v2, and so on, until all Pods are controlled by v2. If v2 turns out to be broken, you can still roll back. A Deployment is built on ReplicaSets; several ReplicaSets make up one Deployment, but only one ReplicaSet is active at any time.

By default a Deployment keeps 10 historical revisions.
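
The number of retained revisions is controlled by the standard revisionHistoryLimit field (it also appears in the roll-deploy.yaml manifest below); a minimal sketch:

spec:
  revisionHistoryLimit: 5   #keep only the 5 most recent old ReplicaSets instead of the default 10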

A Deployment can be defined declaratively, and its resources can also be modified directly on the command line, i.e. by patching. Deployments provide rolling updates whose pace and logic you can control yourself. What exactly does controlling the update pace and logic mean?
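
For example, assuming the nginx-deployment created in the next section, both of the following modify the live object directly from the command line (kubectl set image and kubectl patch are standard kubectl subcommands):

kubectl set image deployment/nginx-deployment nginx=nginx:1.16.0        #change the container image
kubectl patch deployment nginx-deployment -p '{"spec":{"replicas":5}}'  #patch the replica count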

For example, suppose a ReplicaSet controls 5 Pod replicas, so the desired count is 5, but a few extra Pods are needed during an upgrade; the controller can be allowed to add Pods on top of the 5 replicas. Say it may go one over but never under: the upgrade then adds one Pod and deletes one, adds another and deletes another, always keeping 5 replicas available, with brief moments of 6. Another option: at most one over and at most one under, i.e. between 4 and 6 Pods; the first step adds one and deletes two, the next adds two and deletes two, and so on. You control the update behaviour yourself. For rolling updates like this you should configure readinessProbe and livenessProbe checks, so that an old Pod is only deleted once the application in the new Pod's containers has actually started. You can also pause right after the first batch has been updated. And if the target is 5 replicas, none may be lost, and up to 10 are allowed, you can simply add 5 at once. This is how you control the pace of an update; a sketch follows below.
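
A minimal sketch of the "one extra, never fewer" pacing described above, using the standard rollingUpdate fields, plus the standard pause/resume commands for stopping after the first batch (the deployment name is only illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        #at most one Pod above the desired count during the update
      maxUnavailable: 0  #never go below the desired count

#pause after the first batch has been replaced, inspect, then continue
kubectl rollout pause deployment/nginx-deployment
kubectl rollout resume deployment/nginx-deployment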

Typical use cases: web sites, APIs, microservices.

2. Deploying a Deployment

Write the manifest nginx-deploy.yaml

apiVersion: apps/v1
kind: Deployment #resource type is Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3  #number of replicas
  selector:    #label selector; must match the labels defined in the template
    matchLabels:
      app: nginx
  template:    #Pod template
    metadata:
      labels:
        app: nginx  #Pod label; must match the Deployment's selector
    spec: #container definition
      containers:
      - name: nginx
        image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        livenessProbe:
          initialDelaySeconds: 3
          periodSeconds: 10
          httpGet:
            port: 80
            path: /index.html
        readinessProbe:
          initialDelaySeconds: 3
          periodSeconds: 10
          httpGet:
            port: 80
            path: /index.html

        

Apply the Deployment

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f nginx-deploy.yaml 
deployment.apps/nginx-deployment created

Check the Deployment and Pod information

#1. The Deployment has been created
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get deployments -n default
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           54s

#2. The Deployment creates a ReplicaSet
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get replicasets -n default
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-657df44b4f   3         3         3       82s

#3. The ReplicaSet then creates the Pods
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod -n default
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-657df44b4f-hk8x5   1/1     Running   0          109s
nginx-deployment-657df44b4f-mcw7v   1/1     Running   0          109s
nginx-deployment-657df44b4f-vv2wh   1/1     Running   0          109s


#The Deployment is named nginx-deployment
#The ReplicaSet name is the Deployment name plus a random suffix: nginx-deployment-657df44b4f
#The Pod names are the ReplicaSet name plus another random suffix: nginx-deployment-657df44b4f-hk8x5, nginx-deployment-657df44b4f-mcw7v, nginx-deployment-657df44b4f-vv2wh
#So the creation order is deployment ---> replicaset ---> pod

3. Upgrading Pods with a Deployment

Update nginx-deploy.yaml, then run kubectl apply -f nginx-deploy.yaml again:
Upgrade the nginx image from 1.14.2 to 1.16.0
Change the replica count to 5
root@k8s-master01:/apps/k8s-yaml/deployment-case# vim nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment #resource type is Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 5  #changed replica count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.0 #changed image
        ports:
        - containerPort: 80
        
root@k8s-master01:/apps/k8s-yaml/deployment-case# cp nginx-deploy.yaml nginx-deploy.yaml.bak
#Note: when changing the manifest, keep a copy of the original file.

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f nginx-deploy.yaml 
deployment.apps/nginx-deployment configured

#The Deployment now runs 5 replicas
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get deployments -n default
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   5/5     5            5           10m

#A new ReplicaSet with 5 replicas has been created; the old one is scaled down to 0 and kept for rollbacks (up to revisionHistoryLimit revisions)
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get replicasets -n default
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-657df44b4f   0         0         0       11m
nginx-deployment-9fc7f565     5         5         5       95s

#The Pods have been upgraded and there are now 5 replicas
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod -n default
NAME                              READY   STATUS    RESTARTS   AGE
nginx-deployment-9fc7f565-8nwng   1/1     Running   0          118s
nginx-deployment-9fc7f565-cg99d   1/1     Running   0          2m28s
nginx-deployment-9fc7f565-cxcn8   1/1     Running   0          2m28s
nginx-deployment-9fc7f565-czdnm   1/1     Running   0          2m28s
nginx-deployment-9fc7f565-tgqwh   1/1     Running   0          118s

#The Pod's image has also been upgraded to nginx:1.16.0
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe pod nginx-deployment-9fc7f565-8nwng|grep "Image"
    Image:          nginx:1.16.0
    Image ID:       docker-pullable://nginx@sha256:3e373fd5b8d41baeddc24be311c5c6929425c04cabf893b874ac09b72a798010



#View the rollout history
root@k8s-master01:/apps/k8s-yaml/deployment-case#  kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
1         <none>  #the initial creation of the Deployment
2         <none>  #the first upgrade of the Deployment


Note: in versions before v1.20 it was common to add the "--record" flag when applying the Deployment; it records the command used in the CHANGE-CAUSE column so that revisions are easier to identify when rolling back (without it the column shows <none>; the flag has since been deprecated):
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f nginx-deploy.yaml --record

By default a Deployment upgrades Pods with a rolling update, which is also Kubernetes' default upgrade strategy for Pods: old Pods are gradually replaced by new ones, giving zero-downtime releases that users do not notice.
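
The progress of a rolling update can be watched with the standard kubectl rollout status subcommand; for the Deployment used in this article:

kubectl rollout status deployment/nginx-deployment   #blocks until all Pods of the new ReplicaSet are available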

4. The Deployment rolling update strategy

vim roll-deploy.yaml

apiVersion: apps/v1
kind: Deployment 
metadata:
  name: roll-deployment
  labels:
    app: nginx
spec:
  replicas: 5 
  revisionHistoryLimit: 10 # number of old ReplicaSet revisions to keep
  selector:    
    matchLabels:
      app: nginx
  #rolling update strategy
  strategy:
    rollingUpdate:
      #maxSurge: the maximum number of extra Pods during a rolling update; at most 25% more than the desired (replicas) count may be started
      maxSurge: 25% 
      #maxUnavailable: the maximum number of unavailable Pods during a rolling update; at most 25% of the Pods may be unavailable, i.e. at least 75% stay available
      maxUnavailable: 25%
    type: RollingUpdate  #strategy type: rolling update
  template:   
    metadata:
      labels:
        app: nginx  
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Perform the upgrade

#Change the nginx image to nginx:1.18.0 in roll-deploy.yaml and apply it to upgrade

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f roll-deploy.yaml 
deployment.apps/roll-deployment configured

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get deployments roll-deployment -n default
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
roll-deployment   5/5     5            5           4m53s

root@k8s-master01:/apps/k8s-yaml/deployment-case#  kubectl get replicasets -n default
NAME                         DESIRED   CURRENT   READY   AGE
roll-deployment-67dfd6c8f9   5         5         5       42s
roll-deployment-75d4475c89   0         0         0       3m38s

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod -n default
NAME                               READY   STATUS    RESTARTS   AGE
roll-deployment-67dfd6c8f9-59cv2   1/1     Running   0          112s
roll-deployment-67dfd6c8f9-cqgn8   1/1     Running   0          2m19s
roll-deployment-67dfd6c8f9-jlkbs   1/1     Running   0          112s
roll-deployment-67dfd6c8f9-vdxjh   1/1     Running   0          2m19s
roll-deployment-67dfd6c8f9-x5dhc   1/1     Running   0          2m18s



root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe deployments roll-deployment 
Name:                   roll-deployment
Namespace:              default
CreationTimestamp:      Sat, 02 Oct 2021 21:04:50 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.18.0
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   roll-deployment-67dfd6c8f9 (5/5 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  5m41s  deployment-controller  Scaled up replica set roll-deployment-75d4475c89 to 5
  Normal  ScalingReplicaSet  2m45s  deployment-controller  Scaled up replica set roll-deployment-67dfd6c8f9 to 2
  Normal  ScalingReplicaSet  2m45s  deployment-controller  Scaled down replica set roll-deployment-75d4475c89 to 4
  Normal  ScalingReplicaSet  2m44s  deployment-controller  Scaled up replica set roll-deployment-67dfd6c8f9 to 3
  Normal  ScalingReplicaSet  2m18s  deployment-controller  Scaled down replica set roll-deployment-75d4475c89 to 3
  Normal  ScalingReplicaSet  2m18s  deployment-controller  Scaled up replica set roll-deployment-67dfd6c8f9 to 4
  Normal  ScalingReplicaSet  2m18s  deployment-controller  Scaled down replica set roll-deployment-75d4475c89 to 2
  Normal  ScalingReplicaSet  2m18s  deployment-controller  Scaled up replica set roll-deployment-67dfd6c8f9 to 5
  Normal  ScalingReplicaSet  2m14s  deployment-controller  Scaled down replica set roll-deployment-75d4475c89 to 1
  Normal  ScalingReplicaSet  2m10s  deployment-controller  (combined from similar events): Scaled down replica set roll-deployment-75d4475c89 to 0

5. Scaling a Deployment horizontally

Method 1: imperative command (not recommended)
kubectl scale deployment web --replicas=10
Method 2: edit the manifest
Change the replicas value in the YAML, then run kubectl apply -f deployment.yml
#Note: keep the original deployment.yml file

The replicas field controls the number of Pod replicas.

6. Rolling back a Deployment

kubectl rollout history deployment roll-deployment                  # view the revision history
kubectl rollout undo deployment roll-deployment                     # roll back to the previous revision
kubectl rollout undo deployment roll-deployment --to-revision=2     # roll back to a specific revision
Note: a rollback redeploys the state of an earlier release, i.e. the full configuration of that revision.

Recommendation: edit the manifest and run kubectl apply -f deployment.yml,
and keep the original manifest file,
because kubectl rollout history deployment roll-deployment does not show the detailed Pod configuration of the current revision.
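
That said, the standard --revision flag of kubectl rollout history does print the Pod template recorded for a single revision, which helps identify a target before rolling back:

kubectl rollout history deployment roll-deployment --revision=2   #show the Pod template (image, labels, ports) of revision 2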

7. Deleting a Deployment

Method 1:
kubectl delete -f deployment.yml
Method 2:
kubectl delete deployments roll-deployment

8. Deployment and ReplicaSet

Purpose of the ReplicaSet controller:
•Manage the Pod replica count, continuously comparing the current number of Pods with the desired number
•Every Deployment release creates a ReplicaSet as a record, which is what makes rollbacks possible

kubectl get rs #view the ReplicaSet records
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get rs -n default
NAME                         DESIRED   CURRENT   READY   AGE
roll-deployment-67dfd6c8f9   5         5         5       10m
roll-deployment-75d4475c89   0         0         0       13m


kubectl rollout history deployment roll-deployment #revisions map to ReplicaSet records
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl rollout history deployments roll-deployment -n default
deployment.apps/roll-deployment 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>

III. DaemonSet

1. Introduction to DaemonSet

DaemonSet is yet another Pod controller implementation. It runs exactly one copy of a specified Pod on every node in the cluster; worker nodes that join the cluster later automatically get the Pod as well, and when a node is removed from the cluster, its Pod is reclaimed automatically rather than rescheduled. Administrators can also use node selectors and node labels to run the Pod only on a subset of nodes with particular characteristics.

DaemonSet is a special-purpose controller with specific use cases, typically applications that perform system-level tasks, for example:
·Cluster storage daemons, such as running glusterd or ceph on every node.
·Log collection daemons on every node, such as fluentd or logstash.
·Monitoring agents on every node, such as Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond.
Of course, since such applications need to run on every node (or a subset of nodes), in many cases they could also be run directly as system-level daemons on the worker nodes, but that loses the convenience of managing them with Kubernetes. Moreover, a DaemonSet is only necessary when Pods must run on a fixed set of nodes and need to start before other Pods; otherwise a Deployment should be used.

DaemonSet features:
•Runs one Pod on every Node
•Newly added Nodes automatically get a Pod as well
Typical use cases: network plugins (kube-proxy, calico) and other node agents.
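
A minimal sketch of restricting a DaemonSet to labeled nodes, as mentioned in the introduction above; nodeSelector is the standard Pod spec field, and the label key ssd is only an illustration:

spec:
  template:
    spec:
      nodeSelector:
        ssd: "true"        #only nodes labeled ssd=true run this Pod

#label a node so the DaemonSet schedules a Pod onto it
kubectl label node <node-name> ssd=true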

2. Deploy a log collector on every node

Write daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet #resource type is DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:
      - name: log
        image: elastic/filebeat:7.3.2

Apply daemonset.yaml

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f daemonset.yaml 
daemonset.apps/filebeat created


root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod -n kube-system -o wide|grep filebeat
filebeat-4z59k                             1/1     Running   0          3m37s   172.20.32.129    172.168.33.207   <none>           <none>
filebeat-d2hfc                             1/1     Running   0          3m37s   172.20.135.162   172.168.33.212   <none>           <none>
filebeat-jdqdl                             1/1     Running   0          3m37s   172.20.122.130   172.168.33.209   <none>           <none>
filebeat-mg6nb                             1/1     Running   0          3m37s   172.20.85.249    172.168.33.210   <none>           <none>
filebeat-vzkt9                             1/1     Running   0          3m37s   172.20.58.212    172.168.33.211   <none>           <none>
filebeat-wlxnv                             1/1     Running   0          3m37s   172.20.122.129   172.168.33.208   <none>           <none>

A filebeat Pod is automatically deployed on every node; when a new node is added, the Pod is automatically deployed on it as well, and when a node is taken offline, its Pod is automatically removed.
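
The DaemonSet object itself can be checked with the standard kubectl get daemonset command, which shows the desired and ready Pod counts across the nodes:

kubectl get daemonset filebeat -n kube-system   #columns include DESIRED, CURRENT, READY and NODE SELECTOR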

IV. Job

1. Introduction to Job

The Job controller schedules Pod objects to run one-off tasks. When the process in the container finishes normally, the container is not restarted; instead the Pod object is put into the "Completed" state. If the process terminates with an error, whether it is restarted depends on the configuration. Pods that terminate unexpectedly because their node fails before the task completes are rescheduled.

Typical use cases: offline data processing, video transcoding, and similar workloads.
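
Besides running a single Pod once, the standard completions and parallelism fields (visible in the kubectl describe output further below) control how many Pods the Job runs in total and how many run concurrently; a minimal sketch with an illustrative name and image:

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo        #illustrative name
spec:
  completions: 5          #run 5 Pods to successful completion in total
  parallelism: 2          #run at most 2 Pods at the same time
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: Never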

2. Deploying a Job

Write job.yml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

Apply job.yml

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f job.yml 
job.batch/pi created

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get job -o wide
NAME   COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES   SELECTOR
pi     1/1           52s        82s   pi           perl     controller-uid=461170f9-603e-4cdc-8af8-b206b8dbab8f

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe job/pi
Name:           pi
Namespace:      default
Selector:       controller-uid=461170f9-603e-4cdc-8af8-b206b8dbab8f
Labels:         controller-uid=461170f9-603e-4cdc-8af8-b206b8dbab8f
                job-name=pi
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Tue, 13 Apr 2021 16:43:54 +0800
Completed At:   Tue, 13 Apr 2021 16:44:46 +0800
Duration:       52s
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=461170f9-603e-4cdc-8af8-b206b8dbab8f
           job-name=pi
  Containers:
   pi:
    Image:      perl
    Port:       <none>
    Host Port:  <none>
    Command:
      perl
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  109s  job-controller  Created pod: pi-7nbgv
  Normal  Completed         57s   job-controller  Job completed

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod
NAME                                READY   STATUS      RESTARTS   AGE
pi-7nbgv                            0/1     Completed   0          2m56s
#Pod pi-7nbgv has completed successfully

[root@k8s-master01 apps]# kubectl logs pi-7nbgv
3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895。。。。。。

V. CronJob

1. Introduction to CronJob

The CronJob controller manages when Job controller resources run. A job defined by a Job controller runs as soon as the controller resource is created, whereas a CronJob controls when the job runs and how it repeats, much like the periodic task scheduler (crontab) on Linux. Specifically, it can:
·Run a job once at some point in the future.
·Run a job repeatedly at specified times.
The time format supported by CronJob objects is similar to crontab; one small difference is that in a CronJob schedule, "?" and "*" have the same meaning: both stand for any valid value.

CronJob implements scheduled tasks, just like crontab on Linux.
•Scheduled tasks
Typical use cases: notifications, backups (see the schedule examples below).
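
A few illustrative schedule values in the standard five-field cron format (minute hour day-of-month month day-of-week); the backup example matches the use case above:

schedule: "*/1 * * * *"   #every minute (used in the example below)
schedule: "0 3 * * *"     #every day at 03:00, e.g. a nightly backup
schedule: "0 0 * * 0"     #every Sunday at midnight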

2. Deploying a CronJob

Print a hello message every minute.

Write cronjob.yml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Apply the CronJob

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl apply -f cronjob.yml 
cronjob.batch/hello created

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod
NAME                                READY   STATUS      RESTARTS   AGE
hello-1618304040-mr2v5              0/1     Completed   0          49s

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl logs hello-1618304040-mr2v5
Tue Apr 13 08:54:02 UTC 2021
Hello from the Kubernetes cluster
#it runs once every minute
root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl get pod
NAME                                READY   STATUS      RESTARTS   AGE
hello-1618304040-mr2v5              0/1     Completed   0          2m40s
hello-1618304100-mzcs9              0/1     Completed   0          100s
hello-1618304160-hf56t              0/1     Completed   0          39s

root@k8s-master01:/apps/k8s-yaml/deployment-case# kubectl describe cronjob hello
。。。。。。
Events:
  Type    Reason            Age    From                Message
  ----    ------            ----   ----                -------
  Normal  SuccessfulCreate  5m17s  cronjob-controller  Created job hello-1618304040
  Normal  SawCompletedJob   5m7s   cronjob-controller  Saw completed job: hello-1618304040, status: Complete
  Normal  SuccessfulCreate  4m17s  cronjob-controller  Created job hello-1618304100
  Normal  SawCompletedJob   4m7s   cronjob-controller  Saw completed job: hello-1618304100, status: Complete
  Normal  SuccessfulCreate  3m16s  cronjob-controller  Created job hello-1618304160
  Normal  SawCompletedJob   3m6s   cronjob-controller  Saw completed job: hello-1618304160, status: Complete
  Normal  SuccessfulCreate  2m16s  cronjob-controller  Created job hello-1618304220
  Normal  SuccessfulDelete  2m6s   cronjob-controller  Deleted job hello-1618304040
  Normal  SawCompletedJob   2m6s   cronjob-controller  Saw completed job: hello-1618304220, status: Complete
  Normal  SuccessfulCreate  76s    cronjob-controller  Created job hello-1618304280
  Normal  SawCompletedJob   66s    cronjob-controller  Saw completed job: hello-1618304280, status: Complete
  Normal  SuccessfulDelete  66s    cronjob-controller  Deleted job hello-1618304100
  Normal  SuccessfulCreate  15s    cronjob-controller  Created job hello-1618304340
  Normal  SawCompletedJob   5s     cronjob-controller  Saw completed job: hello-1618304340, status: Complete
  Normal  SuccessfulDelete  5s     cronjob-controller  Deleted job hello-1618304160
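
The "Deleted job" events above are the CronJob cleaning up its own history: by default only the 3 most recent successful Jobs and the most recent failed Job are kept. These limits can be set explicitly with the standard spec fields; a minimal sketch:

spec:
  successfulJobsHistoryLimit: 3   #keep the 3 most recent successful Jobs (the default)
  failedJobsHistoryLimit: 1       #keep only the most recent failed Job (the default)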

I have a dream so I study hard!!!