
StatefulSet data persistence in Kubernetes, with automatic creation of PVs and PVCs

StatefulSet

StatefulSet exists to solve the problem of running stateful services, whereas Deployment and ReplicaSet are designed for stateless services. Its use cases include:

  1. Stable persistent storage: after a Pod is rescheduled it can still access the same persisted data, implemented with PVCs.

  2. Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP).

  3. Ordered deployment and ordered scale-up: Pods are ordered, and deployment or scaling proceeds in the defined order (from 0 to N-1; before the next Pod starts, all preceding Pods must be Running and Ready), implemented with init containers.

  4. Ordered scale-down and ordered deletion (from N-1 to 0).

Because a StatefulSet requires Pod names to be ordered, no Pod can be arbitrarily replaced: even after a Pod is rebuilt, its name stays the same. The StatefulSet assigns a name to each backend Pod.

From the use cases above, a StatefulSet consists of the following parts:

  1. A Headless Service that defines the network identity (headless-svc: a headless service; since it has no cluster IP address, it provides no load balancing).

  2. volumeClaimTemplates, which create the PersistentVolumeClaims (and, through the StorageClass, the PersistentVolumes).

  3. The StatefulSet that defines the application itself.


StatefulSet is a Pod controller. RC, RS, Deployment, and DS (DaemonSet) are controllers for stateless services.

template: Pods created from the template are identical in state (apart from their name, IP, and domain name). Put another way, any such Pod can be deleted and replaced by a newly generated one.

Stateful services need to record state from one or more previous interactions and use it in later ones, for example database services such as MySQL. (Pod names must not change arbitrarily, and the persistence directories differ: each Pod has its own dedicated persistent storage directory.)

Each Pod maps to its own PVC, and each PVC binds to its own PV.
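This mapping stays stable because StatefulSet PVC names follow a fixed convention, `<volumeClaimTemplate name>-<StatefulSet name>-<ordinal>`. A quick sketch of the names this produces, using the names from the example below:

```shell
# StatefulSet PVCs are named <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>
claim_template="test"
statefulset_name="statefulset-test"
for ordinal in 0 1 2; do
  echo "${claim_template}-${statefulset_name}-${ordinal}"
done
```

Because the name depends only on the template, the StatefulSet name, and the ordinal, a recreated Pod re-attaches to the very same PVC.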

Create a namespace named after yourself; all of the resources below run in this namespace.

Run an httpd web service with a StatefulSet resource, with 3 Pods, each serving a different home page, and each with its own dedicated persistent storage. Then delete one of the Pods and check whether the newly created Pod still has the same data as before.

1. Set up the NFS service.

[root@master ~]# yum -y install nfs-utils rpcbind  
[root@master ~]# mkdir /nfsdata  
[root@master ~]# vim /etc/exports  
/nfsdata  *(rw,sync,no_root_squash)  
[root@master ~]# systemctl start nfs-server.service  
[root@master ~]# systemctl start rpcbind  
[root@master ~]# showmount -e  
Export list for master:  
/nfsdata *  

2. Create the RBAC permissions

vim rbac-rolebind.yaml

apiVersion: v1
kind: Namespace
metadata: 
  name: lbs-test
---
apiVersion: v1
kind: ServiceAccount   # the service account the provisioner is authorized as
metadata:
  name: nfs-provisioner
  namespace: lbs-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: lbs-test   # if you did not create a namespace, use "default" here, otherwise this fails
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Apply the YAML file:

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml   
namespace/lbs-test created  
serviceaccount/nfs-provisioner created  
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created  
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created  

3. Create the Deployment resource

[root@master yaml]# vim nfs-deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: lbs-test
spec:
  replicas: 1                  # one replica
  strategy:
    type: Recreate             # recreate the pod on update
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner   # the RBAC service account created above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner   # the provisioner image used here
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes   # mount point inside the container
          env:
            - name: PROVISIONER_NAME          # environment variable read by the provisioner
              value: lbs-test                 # must match the StorageClass "provisioner" field
            - name: NFS_SERVER
              value: 192.168.1.1
            - name: NFS_PATH                  # the NFS shared directory
              value: /nfsdata
      volumes:                 # the NFS server IP and path mounted into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.1.1
            path: /nfsdata
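Note that `extensions/v1beta1` Deployments were removed in Kubernetes 1.16. On newer clusters the same Deployment should target `apps/v1`, which additionally requires an explicit `selector`. A sketch of the changed header only, with everything else as above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: lbs-test
spec:
  replicas: 1
  selector:                    # apps/v1 requires a selector matching the template labels
    matchLabels:
      app: nfs-client-provisioner
  # strategy, template, etc. unchanged from the manifest above
```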

Apply the YAML file and check the Pod:

[root@master yaml]# kubectl apply -f nfs-deployment.yaml   
deployment.extensions/nfs-client-provisioner created   
[root@master yaml]# kubectl get pod -n lbs-test   
NAME                                      READY   STATUS    RESTARTS   AGE  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          13s  

4. Create the StorageClass resource (sc):

[root@master yaml]# vim sc.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-nfs            # StorageClass is cluster-scoped, so no namespace is set
provisioner: lbs-test     # must match the PROVISIONER_NAME env value in the Deployment
reclaimPolicy: Retain     # reclaim policy

Apply the YAML file and check the SC:

[root@master yaml]# kubectl apply -f sc.yaml   
storageclass.storage.k8s.io/sc-nfs created  
[root@master yaml]# kubectl get sc -n lbs-test   
NAME     PROVISIONER   AGE  
sc-nfs   lbs-test      8s  
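The StatefulSet below selects this StorageClass through the legacy `volume.beta.kubernetes.io/storage-class` annotation; on current Kubernetes the same selection is usually written with the `storageClassName` field instead. A sketch of an equivalent standalone PVC for comparison (`test-pvc` is a hypothetical name, the rest is taken from this example):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: lbs-test
spec:
  storageClassName: sc-nfs   # modern replacement for the beta annotation
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```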

5. Create the StatefulSet resource; the PVCs are created automatically:

vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: lbs-test
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  namespace: lbs-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /mnt
          name: test
  volumeClaimTemplates:       # this field creates the PVCs automatically
  - metadata:
      name: test
      annotations:            # selects the StorageClass; the name must match
        volume.beta.kubernetes.io/storage-class: sc-nfs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

Apply the YAML file and check the Pods:

[root@master yaml]# kubectl apply -f statefulset.yaml   
service/headless-svc created
statefulset.apps/statefulset-test created  
[root@master yaml]# kubectl get pod -n lbs-test   
NAME                                      READY   STATUS    RESTARTS   AGE  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          22m  
statefulset-test-0                        1/1     Running   0          8m59s  
statefulset-test-1                        1/1     Running   0          2m30s  
statefulset-test-2                        1/1     Running   0          109s  
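Besides the stable names, the headless Service gives each Pod a stable DNS name of the form `<pod>.<service>.<namespace>.svc.<cluster domain>`. A sketch of the names for this example (`cluster.local` is the default cluster domain and may differ on your cluster):

```shell
# Stable per-pod DNS names provided by the headless Service
service="headless-svc"
namespace="lbs-test"
for ordinal in 0 1 2; do
  echo "statefulset-test-${ordinal}.${service}.${namespace}.svc.cluster.local"
done
```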

Check whether the PV and PVC were created automatically.

PV:

[root@master yaml]# kubectl get pv -n lbs-test   
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE  
pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-2   sc-nfs                  4m23s  
pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-0   sc-nfs                  11m  
pvc-99137753-ccd0-4524-bf40-f3576fc97eba   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-1   sc-nfs                  5m4s  

PVC:

[root@master yaml]# kubectl get pvc -n lbs-test   
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE  
test-statefulset-test-0   Bound    pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5   100Mi      RWO            sc-nfs         13m  
test-statefulset-test-1   Bound    pvc-99137753-ccd0-4524-bf40-f3576fc97eba   100Mi      RWO            sc-nfs         6m42s  
test-statefulset-test-2   Bound    pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5   100Mi      RWO            sc-nfs         6m1s  

Check that the persistent directories were created:

[root@master yaml]# ls /nfsdata/  
lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5  
lbs-test-test-statefulset-test-1-pvc-99137753-ccd0-4524-bf40-f3576fc97eba  
lbs-test-test-statefulset-test-2-pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5  
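These directory names come from the nfs-client-provisioner, which backs each PV with one directory under the export, named `<namespace>-<pvc name>-<pv name>`. A sketch of the convention with this example's names:

```shell
# The nfs-client-provisioner names each PV's backing directory
# <namespace>-<pvc name>-<pv name> under the NFS export
namespace="lbs-test"
pvc_name="test-statefulset-test-0"
pv_name="pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5"
echo "${namespace}-${pvc_name}-${pv_name}"
```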

6. Create data inside the Pod volumes and test access.

[root@master yaml]# cd /nfsdata/  
[root@master nfsdata]# echo 111 > lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5/index.html  
[root@master nfsdata]# echo 222 > lbs-test-test-statefulset-test-1-pvc-99137753-ccd0-4524-bf40-f3576fc97eba/index.html  
[root@master nfsdata]# echo 333 > lbs-test-test-statefulset-test-2-pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5/index.html  
[root@master nfsdata]# kubectl get pod -o wide -n lbs-test   
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          30m     10.244.2.2   node02   <none>           <none>  
statefulset-test-0                        1/1     Running   0          17m     10.244.1.2   node01   <none>           <none>  
statefulset-test-1                        1/1     Running   0          10m     10.244.2.3   node02   <none>           <none>  
statefulset-test-2                        1/1     Running   0          9m57s   10.244.1.3   node01   <none>           <none>  
[root@master nfsdata]# curl 10.244.1.2  
111  
[root@master nfsdata]# curl 10.244.2.3  
222  
[root@master nfsdata]# curl 10.244.1.3  
333  

7. Delete one of the Pods and check whether its data is recreated and still present.

[root@master ~]# kubectl get pod -n lbs-test   
NAME                                      READY   STATUS    RESTARTS   AGE  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          33m  
statefulset-test-0                        1/1     Running   0          20m  
statefulset-test-1                        1/1     Running   0          13m  
statefulset-test-2                        1/1     Running   0          13m  
[root@master ~]# kubectl delete pod -n lbs-test statefulset-test-0   
pod "statefulset-test-0" deleted  

The Pod is recreated after the deletion:

[root@master ~]# kubectl get pod -n lbs-test -o wide  
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES  
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          35m   10.244.2.2   node02   <none>           <none>  
statefulset-test-0                        1/1     Running   0          51s   10.244.1.4   node01   <none>           <none>  
statefulset-test-1                        1/1     Running   0          15m   10.244.2.3   node02   <none>           <none>  
statefulset-test-2                        1/1     Running   0          14m   10.244.1.3   node01   <none>           <none>  

The data still exists:

[root@master ~]# curl 10.244.1.4  
111  
[root@master ~]# cat /nfsdata/lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5/index.html   
111 


This completes the data-persistence test of the StatefulSet resource for stateful services. The test shows that even after a Pod is deleted and rescheduled, the previously persisted data is still accessible.