k8s入坑之路 (15): Kubernetes Shared Storage and Stateful Applications (StatefulSet)
Shared Storage
Docker containers are stateless by default; stateful services need shared storage.
- Why shared storage is needed:
- 1. The most common case is a stateful service with local storage: some programs save files to a directory on the server, and those files are lost whenever the container is stopped and restarted.
- 2. Even if a volume is used to mount a host directory into the container, backup and high availability become issues: if the host fails, the data is unavailable.
Kubernetes provides shared storage through two objects:
1. PV (PersistentVolume)
2. PVC (PersistentVolumeClaim)
PV
A PV defines:
- the capacity of the volume
- the access mode:
  - ReadWriteOnce: read-write, but mountable by only a single node (so effectively replicas: 1)
  - ReadOnlyMany: mountable read-only by multiple pods (replicas can be greater than 1)
  - ReadWriteMany: mountable read-write and shared by multiple pods (replicas can be greater than 1)
- the address of the storage backend the PV connects to
Using a PV of type NFS:
### Mount the NFS export onto the node, then mount it into the pod.
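As an illustration, a minimal NFS-backed PV might look like this (the server address and export path are hypothetical placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany        # NFS can be mounted read-write by many nodes
  nfs:
    server: 10.155.20.110  # hypothetical NFS server
    path: /data/nfs        # hypothetical exported directory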
Managing PV and PVC with StorageClass
Example of a StorageClass managing GlusterFS PVs:
Kubernetes offers an API that manages shared-storage PVs automatically. As the number of pods grows, so does the demand for shared storage, hence StorageClass: it creates PVs for us automatically and removes the manual work of creating and reclaiming them.
## A PVC binds to a PV through the StorageClass name.
The architecture is as follows:
## Manual (static) provisioning: PVs are created in advance, and each PV can bind only one backend. A PVC binds one of them when it is used.
## Dynamic provisioning: each backend corresponds to a StorageClass, and a PVC has a PV of the requested size created through that StorageClass. The PVC and the pod are the user's responsibility: if the PVC a user created cannot be matched, both the pod and the PVC stay in Pending; once a match exists, Kubernetes establishes the binding automatically.
## A PVC can be used by multiple pods (given a *Many access mode), but a PVC binds exactly one PV, and a PV binds exactly one storage backend.
Creating the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.155.20.120:30001"
  restauthenabled: "false"

glusterfs-storage-class.yaml
## This specifies the backend storage address (the Heketi REST endpoint, exposed on port 30001 via the ingress-nginx tcp-services ConfigMap shown later) and the StorageClass name.
Creating a PVC that uses the StorageClass
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-pvc
spec:
  storageClassName: glusterfs-storage-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

glusterfs-pvc.yaml
### This specifies the StorageClass name, the access mode, and the requested size.
Verifying the PVC
kubectl apply -f glusterfs-pvc.yaml
kubectl get pvc
kubectl get pv

Check whether the PVC and PV are bound, and check in the YAML of each object that they reference each other (the PVC's volumeName points at the PV).
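If the binding succeeded, the PVC shows up as Bound and the VOLUME column carries the name of the auto-created PV. The output below is only illustrative; the volume name is generated by Kubernetes:

NAME            STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS
glusterfs-pvc   Bound    pvc-<random>   1Gi        RWO            glusterfs-storage-class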
Using the PVC in a pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: web-deploy
  replicas: 2
  template:
    metadata:
      labels:
        app: web-deploy
    spec:
      containers:
      - name: web-deploy
        image: hub.mooc.com/kubernetes/springboot-web:v1
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: gluster-volume
          mountPath: "/mooc-data"
          readOnly: false
      volumes:
      - name: gluster-volume
        persistentVolumeClaim:
          claimName: glusterfs-pvc

pod-pvc.yaml
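Once the deployment is up, a quick sanity check is to write through the mount from inside one of the pods (the pod name comes from kubectl get pods):

kubectl exec -it <web-deploy-pod-name> -- df -h /mooc-data
kubectl exec -it <web-deploy-pod-name> -- sh -c 'echo hello > /mooc-data/test && cat /mooc-data/test'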
Deploying GlusterFS
GlusterFS deployment requirements:
- at least 3 nodes (to keep three replicas of the data)
- each node needs one raw disk that has not been partitioned
# 1. Run on every node:
yum -y install glusterfs glusterfs-fuse
# 2. Check that the API server and kubelet allow privileged containers
#    (--allow-privileged=true is required):
ps -ef | grep apiserver | grep allow-pri
ps -ef | grep kubelet | grep allow-pri

glusterfs installation
Run GlusterFS as a DaemonSet
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs   # label the nodes that should run GlusterFS
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        # alternative for /dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/usr/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/usr/lib/modules"

glusterfs-deamonset.yaml
Label the GlusterFS nodes and deploy:
kubectl label node node-2 storagenode=glusterfs
kubectl apply -f glusterfs-deamonset.yaml
kubectl get pods -o wide
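Since at least three nodes are required, the same label goes on every storage node before the DaemonSet is applied (the node names here are examples):

kubectl label node node-1 storagenode=glusterfs
kubectl label node node-3 storagenode=glusterfs
kubectl get daemonset glusterfs   # DESIRED/READY should equal the number of labeled nodes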
To make day-to-day management easier, the Heketi service is introduced; Heketi provides a REST API for provisioning and managing GlusterFS volumes.
Deploying Heketi
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heketi-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: heketi-clusterrole
subjects:
- kind: ServiceAccount
  name: heketi-service-account
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heketi-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/status
  - pods/exec
  verbs:
  - get
  - list
  - watch
  - create

create the heketi service-account
kind: Service
apiVersion: v1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    name: heketi
  ports:
  - name: heketi
    port: 80
    targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "30001": default/heketi:80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  replicas: 1
  template:
    metadata:
      name: heketi
      labels:
        name: heketi
        glusterfs: heketi-pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: heketi/heketi:dev
        imagePullPolicy: Always
        name: heketi
        env:
        - name: HEKETI_EXECUTOR
          value: "kubernetes"
        - name: HEKETI_DB_PATH
          value: "/var/lib/heketi/heketi.db"
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: "14"
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: /var/lib/heketi
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: /hello
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: /hello
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/heketi-data"

deploy the heketi deployment
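Assuming the two manifests above are saved as heketi-service-account.yaml and heketi-deployment.yaml (the file names are illustrative), apply them and wait for the pod to become Ready:

kubectl apply -f heketi-service-account.yaml
kubectl apply -f heketi-deployment.yaml
kubectl get pods -l name=heketi -o wide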
Enter the Heketi container and set the environment variable:
export HEKETI_CLI_SERVER=http://localhost:8080
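The /hello endpoint used by the readiness and liveness probes doubles as a quick connectivity check, both inside the container and through the ingress TCP port that the StorageClass resturl points at:

curl http://localhost:8080/hello        # from inside the heketi container
curl http://10.155.20.120:30001/hello   # through the ingress-nginx TCP port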
Edit the topology file to specify the GlusterFS node IPs and the raw-disk paths:
{ "clusters": [ { { { { { "nodes": [ { "node": { "hostnames": { "manage": [ "gluster-01" ], "storage": [ "10.155.56.56" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdb", "destroydata": false } ] }, { "node": { "hostnames": { "manage": [ "gluster-02" ], "storage": [ "10.155.56.57" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdb", "destroydata": false } ] }, { "node": { "hostnames": { "manage": [ "gluster-03" ], "storage": [ "10.155.56.102" ] }, "zone": 1 }, "devices": [ { "name": "/dev/sdb", "destroydata": false } ] } ] } ] }topology.json
Load the topology file into the Heketi container:
heketi-cli topology load --json=topology.json   # Heketi initializes the GlusterFS nodes listed in the file
heketi-cli topology info                        # view the current GlusterFS cluster topology
# On one of the GlusterFS nodes, verify that the peers have joined:
gluster peer status
PVC
A PVC defines a description of the resources required, together with the access modes needed.
Binding a PV to a PVC:
1. The PV must satisfy the PVC's requirements (storage size, access mode).
2. The PV's storageClassName must match the PVC's.
3. Based on the storageClassName field, the two are bound automatically and each records the other's name (the PV's name lands in the PVC's volumeName).
# Essentially, the PV's name is written into the PVC resource object.
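The two-way binding can be inspected directly; both jsonpath queries below are standard kubectl (the PV name is whatever Kubernetes generated):

kubectl get pvc glusterfs-pvc -o jsonpath='{.spec.volumeName}'   # the bound PV's name
kubectl get pv <pv-name> -o jsonpath='{.spec.claimRef.name}'     # points back at the PVC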
Using a PVC
# Principle: with the two layers of abstraction, PV and PVC, consuming shared storage from a pod is very simple. The pod declares the name of a PVC; the PVC describes what the pod needs; the PVC binds a PV; and the PV describes the concrete storage backend, how to access it, and its specific parameters.
A short summary:
1. A PV exists independently of any pod.
2. PVs can be created dynamically or statically. Dynamic PVs do not need to be created by hand; static PVs do.
3. Access modes: ReadWriteOnce: read-write, mountable on only one node. ReadOnlyMany: mountable read-only on multiple nodes. ReadWriteMany: mountable read-write on multiple nodes.
4. Reclaim policies: a PV supports Retain, Recycle, and Delete.
   - Retain: reclaimed by an administrator (delete with kubectl delete pv pv-name, re-create with kubectl apply -f pv-name.yaml). Under Retain, after its PVC is deleted the PV moves to the Released state and cannot be reused until an administrator deletes and re-creates it. Deleting the PV does not delete the data on the backend, only the PV object, so use Retain when the data must be kept.
   - Recycle: deleting the PVC automatically scrubs the data in the PV, equivalent to running rm -rf /thevolume/*. The PV's state then goes from Bound back to Available and it can be claimed again.
   - Delete: deletes the corresponding resource on the storage backend, e.g. AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volumes. NFS does not support the Delete policy.
5. storageClassName: when the PVC's requested size and access mode match a PV's, binding is decided by storageClassName. It is mostly used when a PVC must bind a specific PV. For example, if several PVs have the same size and access mode and neither PV nor PVC sets storageClassName, the PVC matches one of them at random; once storageClassName is configured, all three conditions are used for matching. A PVC can also be pinned to a specific PV in other ways, such as labels, which the previous post covered, so it is not repeated here.
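As a concrete illustration of point 4, the policy lives in the PV spec and can also be changed on a live PV with a standard kubectl patch (the PV name is a placeholder):

# In a PV manifest:
#   spec:
#     persistentVolumeReclaimPolicy: Retain   # or Recycle / Delete
# On an existing PV:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'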