Kubernetes Storage: NFS
1. Environment preparation: setting up NFS for the cluster
1.1 Server planning
| master (k8s cluster) | node1 (k8s cluster) | node2 (k8s cluster) | NFS server |
|---|---|---|---|
| 192.168.99.201 | 192.168.99.202 | 192.168.99.203 | 192.168.99.204 |
1.2 NFS server
$ yum install -y nfs-utils
$ systemctl enable nfs-server rpcbind --now                      # install nfs-utils on the NFS server node and start the services
$ mkdir -p /data/nfs-volume && chmod -R 777 /data/nfs-volume     # create the NFS shared directory and open its permissions
$ cat > /etc/exports << EOF
/data/nfs-volume 192.168.99.0/24(rw,sync,no_root_squash)
EOF
# write the export definition
$ systemctl reload nfs-server                                    # reload the NFS server so the export takes effect
# verify with the following command
$ showmount -e 192.168.99.204
Export list for 192.168.99.204:
/data/nfs-volume 192.168.99.0/24
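As an optional sanity check on the server, the active export table can also be inspected with exportfs; this is a read-only check and assumes the /etc/exports entry created above:
$ exportfs -v        # list the active exports and their effective options
$ exportfs -ra       # re-export everything in /etc/exports if the entry is missing or stale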
1.3 NFS clients
$ yum install -y nfs-utils         # install the NFS client on every k8s node that will use NFS
$ systemctl enable rpcbind --now   # start rpcbind
# the clients can also verify the export with the following command
$ showmount -e 192.168.99.204
Export list for 192.168.99.204:
/data/nfs-volume 192.168.99.0/24
$ mkdir /opt/nfs-volume
$ mount -t nfs 192.168.99.204:/data/nfs-volume /opt/nfs-volume
$ df -h | tail -1                  # test that the mount works
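To confirm that a client write really lands on the server, a quick round trip can be done; this is a minimal sketch assuming the /opt/nfs-volume mount created above:
# on the client: write a test file through the NFS mount
$ echo "hello from client" > /opt/nfs-volume/client-test.txt
# on the NFS server (192.168.99.204): the file should appear in the exported directory
$ cat /data/nfs-volume/client-test.txt
# clean up the test file afterwards
$ rm -f /opt/nfs-volume/client-test.txt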
1.4 Configure the clients to mount automatically at boot
$ cat >> /etc/fstab << EOF
192.168.99.204:/data/nfs-volume /opt/nfs-volume defaults,_netdev 0 0
EOF
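The new entry can be verified without rebooting; a minimal sketch assuming the /etc/fstab line above:
$ umount /opt/nfs-volume      # unmount the manual test mount first (skip if nothing is mounted)
$ mount -a                    # mount everything listed in /etc/fstab
$ findmnt /opt/nfs-volume     # confirm the NFS mount is active again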
2. Configuring Kubernetes to use NFS for persistent storage
2.1 Manual static provisioning
$ kubectl create ns nfs-pv-pvc
$ cat > nfs-pv.yaml << EOF
apiVersion: v1                            # API version; v1 is the stable version and is required
kind: PersistentVolume                    # resource type: PersistentVolume
metadata:                                 # metadata block: global attributes of this resource
  name: nfs-pv001                         # custom name nfs-pv001
spec:                                     # spec block
  capacity:                               # storage capacity of the PV
    storage: 5Gi                          # size of the PV; Gi/Mi are base-1024 units
  accessModes:                            # access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # reclaim policy: Retain, Recycle or Delete
  storageClassName: nfs                   # note: adjust to your environment
  nfs:                                    # NFS configuration block
    path: /data/nfs-volume                # shared directory on the NFS server
    server: 192.168.99.204                # IP address of the NFS server
EOF
$ cat > nfs-pvc.yaml << EOF
apiVersion: v1                            # API version; v1 is the stable version and is required
kind: PersistentVolumeClaim               # resource type
metadata:                                 # metadata block
  name: nfs-pvc001                        # custom PVC name
  namespace: nfs-pv-pvc                   # target namespace
spec:                                     # spec block
  accessModes:                            # access modes
    - ReadWriteMany
  storageClassName: nfs                   # note: must match the PV above
  resources:                              # resources block
    requests:                             # requests block
      storage: 5Gi                        # requested storage size
EOF
$ cat > nginx-alpine.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nfs-pv-pvc
  labels:
    app: nginx
spec:
  replicas: 1                             # note: adjust as needed
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nfs-pvc
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs-pvc
        persistentVolumeClaim:
          claimName: nfs-pvc001           # must match the PVC name
---
kind: Service
apiVersion: v1
metadata:
  name: my-svc-nginx-alpine
  namespace: nfs-pv-pvc
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
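The listing above only writes the manifests; applying them and checking that the PV and PVC bind could look like this (a minimal sketch assuming the three file names used above):
$ kubectl apply -f nfs-pv.yaml -f nfs-pvc.yaml -f nginx-alpine.yaml
$ kubectl get pv nfs-pv001                    # STATUS should become Bound
$ kubectl get pvc nfs-pvc001 -n nfs-pv-pvc    # STATUS should become Bound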
Create a file in the corresponding shared directory on the NFS server (192.168.99.204):
$ echo "2021-7-20" > /data/nfs-volume/index.html
$ kubectl get pod -n nfs-pv-pvc -o custom-columns=':metadata.name'
$ kubectl exec -it nginx-deployment-799b74d8dc-7fmnl -n nfs-pv-pvc -- cat /usr/share/nginx/html/index.html
$ kubectl get pod -n nfs-pv-pvc -owide
$ kubectl get svc -n nfs-pv-pvc -owide
Access verification
$ kubectl get po -n nfs-pv-pvc -o custom-columns=':status.podIP' |xargs curl
# access via the Pod IP
$ kubectl get svc -n nfs-pv-pvc -o custom-columns=':spec.clusterIP' |xargs curl
# access via the Service ClusterIP
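When the static example is no longer needed it can be torn down; a minimal sketch assuming the file names above. Because the PV uses the Retain reclaim policy, it only moves to the Released state and both the object and its data have to be removed by hand:
$ kubectl delete -f nginx-alpine.yaml -f nfs-pvc.yaml
$ kubectl get pv nfs-pv001          # STATUS shows Released because of the Retain policy
$ kubectl delete -f nfs-pv.yaml     # delete the PV object; the data stays in /data/nfs-volume on the server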
2.2 Dynamic NFS provisioning
Option 1: use the manifests from GitHub
# git clone https://github.com/kubernetes-retired/external-storage.git
# cd ~/external-storage/nfs-client/deploy
Option 2: create the manifests by hand
$ mkdir my-nfs-client-provisioner && cd my-nfs-client-provisioner
$ cat > rbac.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
- 1. kind: ServiceAccount: defines the service account the provisioner uses to request resources from the cluster
- 2. kind: ClusterRole: defines a cluster-wide role
- 3. kind: ClusterRoleBinding: binds the cluster role to the service account
- 4. kind: Role: defines a namespaced role
- 5. kind: RoleBinding: binds the role to the service account
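If the provisioner is going to run in a namespace other than default, every "namespace: default" line has to be changed accordingly; a hypothetical one-liner, assuming a namespace named nfs-provisioner that you create yourself (deployment.yaml, written in a later step, needs the same change):
$ kubectl create ns nfs-provisioner     # hypothetical namespace name
$ sed -i 's/namespace: default/namespace: nfs-provisioner/g' rbac.yaml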
$ cat > class.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # or choose another name; must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"
EOF
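Optionally, once class.yaml has been applied (see the deploy step below), the class can be marked as the cluster default so that PVCs without an explicit class use it; this uses the standard is-default-class annotation:
$ kubectl patch storageclass managed-nfs-storage \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
$ kubectl get storageclass      # the class should now be listed with "(default)"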
$ cat > deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.99.204        # note: change to your NFS server IP
            - name: NFS_PATH
              value: /data/nfs-volume/
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.99.204         # note: change to your NFS server IP
            path: /data/nfs-volume
EOF
- Change the NFS server IP to your own (192.168.99.204 in this example)
- Change the shared directory to your own (/data/nfs-volume in this example)
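A hypothetical one-liner for swapping in your own values, assuming your NFS server were 10.0.0.10 and your export /srv/nfs (both values are placeholders, adjust them):
$ sed -i 's/192.168.99.204/10.0.0.10/g; s#/data/nfs-volume#/srv/nfs#g' deployment.yaml    # placeholder IP and path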
$ cat > test-claim.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
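The volume.beta.kubernetes.io/storage-class annotation is the legacy way of selecting a class; on current clusters the same claim can be expressed with spec.storageClassName instead. A sketch of that variant (same claim, different field):
$ cat > test-claim.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF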
$ cat > test-pod.yaml << EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF
Deploy
$ kubectl apply -f rbac.yaml
$ kubectl apply -f class.yaml
$ kubectl apply -f deployment.yaml
$ kubectl apply -f test-claim.yaml
$ kubectl apply -f test-pod.yaml
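Once everything is applied, the result can be checked end to end; a minimal sketch assuming the default namespace and the NFS share from section 1:
$ kubectl get pods -l app=nfs-client-provisioner    # the provisioner pod should be Running
$ kubectl get pvc test-claim                        # STATUS should become Bound
$ kubectl get pv                                    # a PV has been created automatically for the claim
$ kubectl get pod test-pod                          # test-pod should end up Completed
# on the NFS server: the automatically created subdirectory for the claim should contain the SUCCESS file
$ ls /data/nfs-volume/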