
Creating a Dynamic StorageClass Backed by NFS in Kubernetes

Introduction

nfs-subdir-external-provisioner is an automatic provisioner that uses an existing NFS server to dynamically provision persistent volumes through PersistentVolumeClaims. On the NFS server, each persistent volume is provisioned as a directory named ${namespace}-${pvcName}-${pvName}.

NFS-Subdir-External-Provisioner is an extension of nfs-client-provisioner. GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

Deploying NFS

All nodes must have nfs-utils installed.

# Detailed setup is omitted; only the NFS export configuration is shown here
/xxxx/data/nfs1/       *(rw,sync,no_root_squash)

Configuring the StorageClass

  • Create the RBAC authorization
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
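
The RBAC objects above can be applied and checked as follows; this is a sketch that assumes the manifest is saved as rbac.yaml (the file name is an assumption):

```shell
# Apply the RBAC objects (the file name rbac.yaml is an assumption)
kubectl apply -f rbac.yaml

# Confirm the ServiceAccount and its bindings exist
kubectl get serviceaccount nfs-client-provisioner -n default
kubectl get clusterrolebinding run-nfs-client-provisioner
kubectl get rolebinding leader-locking-nfs-client-provisioner -n default
```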

  • Deploy NFS-Subdir-External-Provisioner
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-client # name of the provisioner; the StorageClass created later must match this
            - name: NFS_SERVER
              value: 10.10.10.21 # address of the NFS server
            - name: NFS_PATH
              value: /epailive/data/nfs1 # NFS path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.21
            path: /epailive/data/nfs1 # NFS path
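
After the Deployment is applied, the provisioner pod must reach Running before any PVC can bind. A sketch, assuming the manifest is saved as nfs-deployment.yaml (the file name is an assumption):

```shell
kubectl apply -f nfs-deployment.yaml

# The pod should reach Running; check its logs if PVCs later stay Pending
kubectl get pods -n default -l app=nfs-client-provisioner
kubectl logs -n default -l app=nfs-client-provisioner
```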
  • Create the StorageClass
# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # make this the default StorageClass
provisioner: nfs-client  # name of the dynamic provisioner; must match the PROVISIONER_NAME set above
parameters:
  archiveOnDelete: "true"  # "false": data is removed when the PVC is deleted; "true": data is archived and kept
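
The StorageClass can then be applied and verified; "(default)" next to its name confirms that the is-default-class annotation took effect:

```shell
kubectl apply -f storageclass.yaml

# nfs-storage should be listed, marked (default), with provisioner nfs-client
kubectl get storageclass
```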
  • Create a PVC to test
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-storage # must match the name of the StorageClass created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi # adjust the requested size as needed

Check the status of the PV and PVC
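
A sketch of the check; once the provisioner has done its work, the PVC should be Bound to an automatically created PV:

```shell
# Both should show STATUS Bound; the PV name is auto-generated as pvc-<uuid>
kubectl get pvc test-claim
kubectl get pv
```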

Check the data automatically created on NFS

Inside the NFS shared directory, the volume's directory has been created. The volume name is a combination of the namespace, the PVC name, and a uuid.
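
The directory name follows the ${namespace}-${pvcName}-${pvName} scheme from the introduction and can be reproduced in shell; the pvName value below is hypothetical (in a real cluster it is assigned as pvc-<uuid>):

```shell
# Hypothetical values for illustration; a real pvName looks like pvc-<uuid>
namespace=default
pvcName=test-claim
pvName=pvc-0a1b2c3d
# Directory the provisioner creates under the NFS export
echo "${namespace}-${pvcName}-${pvName}"
# → default-test-claim-pvc-0a1b2c3d
```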

Test pod

cat > test-pod.yaml <<\EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:latest
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/index.html && echo firsh>>/mnt/index.html && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

EOF
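
The pod can then be applied; since its command writes a single file and exits 0, it should finish in the Completed state:

```shell
kubectl apply -f test-pod.yaml

# STATUS should become Completed once the container exits
kubectl get pod test-pod
```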

Check the file the pod created on NFS
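
On the NFS server itself, the file can be inspected inside the PVC's directory; the glob below is a sketch, since the exact directory name includes the generated PV uuid:

```shell
# Run on the NFS server; the directory name contains the generated PV uuid
ls /epailive/data/nfs1/
cat /epailive/data/nfs1/default-test-claim-pvc-*/index.html
```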


[NOTE!!!]

Using NFS as a StorageClass on k8s v1.20 and later fails with "selfLink was empty, can't make reference"

When using NFS to create a StorageClass for dynamic storage provisioning,
the rbac, nfs-deployment, and nfs-storageclass objects were all created and ran normally,
but the PVC stayed in the Pending state.
kubectl describe pvc test-claim shows the following:

  Normal  ExternalProvisioning  13s (x2 over 25s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "nfs-client" or manually created by system administrator

Then kubectl logs nfs-client-provisioner-6df55f9474-fdnpc shows the following:

E1022 07:01:24.615869       1 controller.go:1004] provision "default/test-claim" class "nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

The selfLink field was always present in clusters before v1.20 and was removed in v1.20, so the "selfLink was empty" error only appears on v1.20 and later. To restore it, edit /etc/kubernetes/manifests/kube-apiserver.yaml and
add - --feature-gates=RemoveSelfLink=false

spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false

After this change, a cluster deployed with kubeadm reloads the pod automatically.

The apiserver installed by kubeadm runs as a static pod, so changes to its manifest take effect immediately.
The kubelet watches the file: after you modify /etc/kubernetes/manifests/kube-apiserver.yaml, it automatically terminates the existing kube-apiserver-{nodename} pod and creates a replacement that uses the new flags.
If you have multiple Kubernetes master nodes, you must make the same change on every master node and keep the flags consistent across them.
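
A quick way to confirm the static pod was recreated with the new flag (a sketch; run from any node with kubectl access):

```shell
# The apiserver pod should be recreated shortly after the manifest is saved
kubectl -n kube-system get pod -l component=kube-apiserver
# Confirm the flag is present in the running pod spec
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep RemoveSelfLink
```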

Note that if the apiserver fails to start, re-run:

kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml


After that, the PVC directory is visible on the NFS server.

This issue is described in detail on GitHub:
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25
