
Kubernetes (17): Dynamic Storage Provisioning with NFS


Deploying nfs-provisioner (external-storage/nfs)

  1. Create the working directory

    $ mkdir -p /opt/k8s/nfs/data
    
    
  2. Pull the nfs-provisioner image and push it to your private registry

    $ docker pull fishchen/nfs-provisioner:v2.2.2
    $ docker tag fishchen/nfs-provisioner:v2.2.2 192.168.0.107/k8s/nfs-provisioner:v2.2.2
    $ docker push 192.168.0.107/k8s/nfs-provisioner:v2.2.2
    
    
  3. Create the deploy.yml file that launches nfs-provisioner

    $ cd /opt/k8s/nfs
    $ cat > deploy.yml << EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-provisioner
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: nfs-provisioner
      labels:
        app: nfs-provisioner
    spec:
      ports:
        - name: nfs
          port: 2049
        - name: mountd
          port: 20048
        - name: rpcbind
          port: 111
        - name: rpcbind-udp
          port: 111
          protocol: UDP 
      selector:
        app: nfs-provisioner
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: nfs-provisioner
    spec:
      selector:
        matchLabels:
          app: nfs-provisioner
      replicas: 1
      strategy:
        type: Recreate 
      template:
        metadata:
          labels:
            app: nfs-provisioner
        spec:
          serviceAccountName: nfs-provisioner
          containers:
            - name: nfs-provisioner
              image: 192.168.0.107/k8s/nfs-provisioner:v2.2.2
              ports:
                - name: nfs
                  containerPort: 2049
                - name: mountd
                  containerPort: 20048
                - name: rpcbind
                  containerPort: 111
                - name: rpcbind-udp
                  containerPort: 111
                  protocol: UDP
              securityContext:
                capabilities:
                  add:
                    - DAC_READ_SEARCH
                    - SYS_RESOURCE
              args:
                - "-provisioner=myprovisioner.kubernetes.io/nfs"
              env:
                - name: POD_IP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
                - name: SERVICE_NAME
                  value: nfs-provisioner
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: export-volume
                  mountPath: /export
          volumes:
            - name: export-volume
              hostPath:
                path: /opt/k8s/nfs/data
    EOF
    
    
    • volumes.hostPath points to the data directory created earlier, which serves as the NFS export directory; it can be any Linux directory
    • args: - "-provisioner=myprovisioner.kubernetes.io/nfs" specifies the provisioner name, which must match the name used in the StorageClass created later
  4. Create the RBAC file needed for automatic PV creation

    $ cd /opt/k8s/nfs
    $ cat > rbac.yml << EOF
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
      - apiGroups: [""]
        resources: ["services", "endpoints"]
        verbs: ["get"]
      - apiGroups: ["extensions"]
        resources: ["podsecuritypolicies"]
        resourceNames: ["nfs-provisioner"]
        verbs: ["use"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-provisioner
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-provisioner
      apiGroup: rbac.authorization.k8s.io
    
    EOF
    
    
  5. Create the StorageClass file

    $ cd /opt/k8s/nfs
    $ cat > class.yml << EOF
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: example-nfs
    provisioner: myprovisioner.kubernetes.io/nfs
    mountOptions:
      - vers=4.1
    EOF
    
    
    • The provisioner value must match the one configured in deploy.yml above
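
    Optionally, a StorageClass can be marked as the cluster default, so PVCs that omit storageClassName still bind to it. A minimal sketch, assuming the standard default-class annotation (not part of the original setup):

    ```yaml
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: example-nfs
      annotations:
        # recognized by the DefaultStorageClass admission plugin
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: myprovisioner.kubernetes.io/nfs
    mountOptions:
      - vers=4.1
    ```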
  6. Launch nfs-provisioner

    $ kubectl create -f deploy.yml -f rbac.yml -f class.yml
    
    

Verifying and using nfs-provisioner

The following simple example verifies the nfs-provisioner we just deployed. It involves two applications, a busybox and a web server, which mount the same PVC: busybox writes content into the shared storage, while the web application reads that content and serves it as a page.

  1. Create the PVC file

    $ cd /opt/k8s/nfs
    $ cat > claim.yml << EOF
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nfs
    spec:
      storageClassName: example-nfs
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Mi
    EOF
      
    
    • storageClassName: specifies the name of the StorageClass created earlier
    • accessModes: ReadWriteMany allows multiple nodes to mount the volume and both read and write it
    • 100Mi of storage is requested
  2. Create the PVC and check that a corresponding PV is created automatically

    $ kubectl create -f claim.yml
    $ kubectl get pvc
    NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    nfs    Bound    pvc-10a1a98c-2d0f-4324-8617-618cf03944fe   100Mi      RWX            example-nfs    11s
    $ kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
    pvc-10a1a98c-2d0f-4324-8617-618cf03944fe   100Mi      RWX            Delete           Bound    default/nfs   example-nfs             18s
    
    
    • A PV named pvc-10a1a98c-2d0f-4324-8617-618cf03944fe was created for us automatically; its STORAGECLASS is example-nfs and its claim is default/nfs
  3. Start a busybox application that mounts the shared directory and writes data into it

    1. Create the deployment file

      $ cd /opt/k8s/nfs
      $ cat > deploy-busybox.yml << EOF
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nfs-busybox
      spec:
        replicas: 1
        selector:
          matchLabels:
            name: nfs-busybox
        template:
          metadata:
            labels:
              name: nfs-busybox
          spec:
            containers:
            - image: busybox
              command:
                - sh
                - -c
                - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep 20; done'
              imagePullPolicy: IfNotPresent
              name: busybox
              volumeMounts:
                # name must match the volume name below
                - name: nfs
                  mountPath: "/mnt"
            volumes:
            - name: nfs
              persistentVolumeClaim:
                claimName: nfs
      EOF
      
      
      • volumes.persistentVolumeClaim.claimName is set to the PVC created earlier
    2. Start busybox

      $ cd /opt/k8s/nfs
      $ kubectl create -f deploy-busybox.yml
      
      

      Check whether index.html was generated under the corresponding PV:

      $ cd /opt/k8s/nfs
      $ ls data/pvc-10a1a98c-2d0f-4324-8617-618cf03944fe/
      index.html
      $ cat data/pvc-10a1a98c-2d0f-4324-8617-618cf03944fe/index.html
      Sun Mar 1 12:51:30 UTC 2020
      nfs-busybox-6b677d655f-qcg5c


      • The file was generated under the corresponding PV, and the content was written correctly
  4. Start a web application (nginx) that reads the content of the shared mount

    1. Create the deployment file

      $ cd /opt/k8s/nfs
      $ cat >deploy-web.yml << EOF
      apiVersion: v1
      kind: Service
      metadata:
        name: nfs-web
      spec:
        type: NodePort
        selector:
          role: web-frontend
        ports:
        - name: http
          port: 80
          targetPort: 80
          nodePort: 8086
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nfs-web
      spec:
        replicas: 2
        selector:
          matchLabels:
            role: web-frontend
        template:
          metadata:
            labels:
              role: web-frontend
          spec:
            containers:
            - name: web
              image: nginx:1.9.1
              ports:
                - name: web
                  containerPort: 80
              volumeMounts:
                  # name must match the volume name below
                  - name: nfs
                    mountPath: "/usr/share/nginx/html"
            volumes:
            - name: nfs
              persistentVolumeClaim:
                claimName: nfs
      EOF
      
      
      • volumes.persistentVolumeClaim.claimName is set to the PVC created earlier
    2. Start the web application

      $ cd /opt/k8s/nfs
      $ kubectl create -f deploy-web.yml
      
      
    3. Access the page

      • Fetch the page through the NodePort, for example curl http://<node-ip>:8086/ (substitute any node's address for <node-ip>). The content is read correctly, and if you keep watching, the timestamp on the page refreshes every 20 seconds

Problems encountered

Following the steps from GitHub, no PV could be created after the PVC was submitted. The nfs-provisioner log showed this error:

error syncing claim "20eddcd8-1771-44dc-b185-b1225e060c9d": failed to provision volume with StorageClass "example-nfs": error getting NFS server IP for volume: service SERVICE_NAME=nfs-provisioner is not valid; check that it has for ports map[{111 TCP}:true {111 UDP}:true {2049 TCP}:true {20048 TCP}:true] exactly one endpoint, this pod's IP POD_IP=172.30.22.3

The cause is described in issues/1262. Keeping only the ports mentioned in the error message and removing the rest from the Service fixed the problem.
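
For reference, the port set that satisfies this check is the one already used in the Service of deploy.yml above. A sketch of the trimmed ports stanza (hedged: the upstream example ships additional ports such as statd and rquotad, which is what trips this validation):

```yaml
ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  - name: rpcbind-udp
    port: 111
    protocol: UDP
```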