Deploying a Dynamic StorageClass in Kubernetes
阿新 • Published: 2022-03-07
Step 1: you need a working Kubernetes cluster; installation is omitted here.
Step 2: set up NFS storage and export the /share directory.
[root@master active_pvc]# vim /etc/exports
/share *(insecure,rw,sync,fsid=0,crossmnt,no_subtree_check,anonuid=666,anongid=666,no_root_squash)
Step 3: restart the NFS service, then verify the export.
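On a systemd-based distribution the restart and check can look like this (a sketch; the service name nfs-server is an assumption, on some distros it is nfs):

```
# Re-read /etc/exports and restart the NFS server
exportfs -ra
systemctl restart nfs-server

# Verify the /share export is actually visible
showmount -e localhost
```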
Step 4: my cluster uses NFS storage, which does not support dynamic provisioning on its own. To get dynamic provisioning you need the nfs-client-provisioner add-on.
URL: https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy
These Kubernetes manifests are needed for the deployment, so download them now. If GitHub is slow, you can search for external-storage on Gitee instead; downloads there are fast.
[root@master active_pvc]# uri="https://raw.githubusercontent.com/kubernetes-retired/external-storage/master/nfs-client/deploy/"
[root@master active_pvc]# for i in class.yaml deployment.yaml test-claim.yaml test-pod.yaml; do wget -c $uri$i; done
Step 5: modify deployment.yaml so it points at your NFS server.
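The parts to edit are the provisioner container's NFS_SERVER/NFS_PATH environment variables and the nfs volume at the bottom of the Deployment. A sketch of the relevant fragment; 192.168.1.10 is a placeholder for your NFS server's address, and /share matches the export from Step 2:

```yaml
# Fragment of deployment.yaml (not the full file)
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs     # must match the provisioner field in class.yaml
            - name: NFS_SERVER
              value: 192.168.1.10       # placeholder: your NFS server IP
            - name: NFS_PATH
              value: /share
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.10        # same placeholder IP
            path: /share
```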
Apply the deployment:
[root@master active_pvc]# kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner created
Apply the RBAC manifest. The Deployment's service account needs these permissions; without them, the pod cannot be created.
[root@master active_pvc]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
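For reference, the rbac.yaml from the repo creates the objects listed in the output above. An abridged sketch (the leader-locking Role/RoleBinding are omitted; the namespace default is an assumption, adjust if you deployed the provisioner elsewhere):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default        # assumption: where the provisioner Deployment lives
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```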
Apply the StorageClass:
[root@master active_pvc]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
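For reference, class.yaml is short. Note that provisioner must match the PROVISIONER_NAME env var in deployment.yaml, and PVCs must reference the class by its metadata.name; the Elasticsearch manifest further down uses storageClassName: es, so rename the class accordingly if you follow that example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must match PROVISIONER_NAME in deployment.yaml
parameters:
  archiveOnDelete: "false"    # "true" archives the data directory when a PVC is deleted
```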
The test-claim and test-pod manifests are optional. Instead of using them, here is a real-world Elasticsearch cluster example.
es-cluster.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: es
  namespace: bigdata
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  type: NodePort
  ports:
    - port: 9200
      nodePort: 30080
      name: rest
    - port: 9300
      nodePort: 30070
      name: inter-node
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: bigdata
spec:
  serviceName: es
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - elasticsearch
                  - key: "kubernetes.io/hostname"
                    operator: NotIn
                    values:
                      - master
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: elasticsearch
          image: elasticsearch:7.2.0
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: pvc01            # must match the volumeClaimTemplates name below
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es-cluster-0.es,es-cluster-1.es,es-cluster-2.es"
            - name: cluster.initial_master_nodes
              value: "es-cluster-0,es-cluster-1,es-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
      initContainers:
        - name: fix-permissions
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: pvc01
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: pvc01
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: es
        resources:
          requests:
            storage: 10Gi
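After applying the manifest, you can confirm that the provisioner created a PersistentVolume for each replica's claim. A sketch of the checks (`<node-ip>` is a placeholder for any node's address):

```
kubectl apply -f es-cluster.yaml
kubectl -n bigdata get pvc        # each pvc01-es-cluster-N should show Bound
kubectl -n bigdata get pods -w    # wait for es-cluster-0..2 to reach Running
curl http://<node-ip>:30080/_cluster/health?pretty
```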
Addendum
If you run into this kind of error (pods failing to mount the NFS volume), install the nfs-utils package on the affected node. It is best to install it on every node.
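On each node, the install is one command (package names differ by distro; the Debian name is an assumption based on its standard packaging):

```
# CentOS / RHEL
yum install -y nfs-utils

# Debian / Ubuntu: the NFS client package is nfs-common
apt-get install -y nfs-common
```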