
Deploying a Ceph Cluster with Rook and Creating a PVC

Rook Overview

Introduction to Ceph
Ceph is a highly scalable distributed storage solution that provides object, file, and block storage. On each storage node you will find a file system holding Ceph storage objects and a Ceph OSD (Object Storage Daemon) process. A Ceph cluster also runs Ceph MON (monitor) daemons, which keep the cluster highly available.

Introduction to Rook
Rook is an open-source cloud-native storage orchestrator: it provides a platform, a framework, and support so that various storage solutions can integrate natively with cloud-native environments. It currently focuses on file, block, and object storage services for cloud-native environments, and implements a distributed storage service that is self-managing, self-scaling, and self-healing.
Rook supports automated deployment, bootstrapping, configuration, provisioning, scaling in and out, upgrades, migration, disaster recovery, monitoring, and resource management. To provide all of this, Rook relies on an underlying container orchestration platform such as Kubernetes or CoreOS.
Rook can currently build out Ceph, NFS, Minio Object Store, EdgeFS, Cassandra, and CockroachDB storage.
How Rook works:
Rook provides a volume plugin that extends the Kubernetes storage system, so Pods run by the kubelet can mount block devices and file systems managed by Rook.
The Rook Operator starts and monitors the entire underlying storage system (for example the Ceph Pods and Ceph OSDs) and also manages the CRDs, object stores, and file systems.
A Rook Agent runs as a Pod on every Kubernetes node. Each agent Pod is configured with a Flexvolume driver that integrates with the Kubernetes volume control framework; all node-local operations, such as attaching a storage device, mounting, formatting, and deleting storage, are performed by this agent.

Rook Architecture

(The original post includes an architecture diagram here.)

Environment
192.168.200.3 master1
192.168.200.4 master2
192.168.200.5 master3
192.168.200.6 node1
192.168.200.7 node2
192.168.200.8 node3

1. Add a disk to each of the three node machines, then run the following commands to detect the new disk (no reboot required)
echo "- - -" >/sys/class/scsi_host/host0/scan
echo "- - -" >/sys/class/scsi_host/host1/scan
echo "- - -" >/sys/class/scsi_host/host2/scan
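If a machine has more than three SCSI hosts, a loop covers them all; `lsblk` then confirms the hot-added disk is visible. This is a sketch of the same rescan, assuming the new device shows up as sdb (the name used by the `deviceFilter` later in cluster.yaml):

```shell
# Rescan every SCSI host so a newly attached disk is detected without a reboot
for scan in /sys/class/scsi_host/host*/scan; do
  echo "- - -" > "$scan"
done
# The new, empty disk (e.g. sdb) should now appear with no partitions;
# Rook's OSD prepare job skips disks that already carry a partition table
lsblk
```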

2. Clone the GitHub project
git clone https://github.com/rook/rook.git

3. Check out the desired release branch (inside the cloned rook repository)
git checkout -b release-1.1 remotes/origin/release-1.1
git branch -a   # confirm the release-1.1 branch is now active

4. Storage will live on the worker nodes. Taint all three masters so Ceph pods are not scheduled there, then label the storage nodes
kubectl taint node master1 node-role.kubernetes.io/master="":NoSchedule
kubectl taint node master2 node-role.kubernetes.io/master="":NoSchedule
kubectl taint node master3 node-role.kubernetes.io/master="":NoSchedule
kubectl label nodes {node1,node2,node3} ceph-osd=enabled
kubectl label nodes {node1,node2,node3} ceph-mon=enabled
kubectl label nodes node1 ceph-mgr=enabled
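The `{node1,node2,node3}` form above is bash brace expansion: kubectl receives three separate node names. Under a plain POSIX shell the braces would be passed literally, so a portable equivalent is a loop (same node names as the environment table):

```shell
# Label each storage node explicitly; equivalent to the brace-expanded form,
# but works in any POSIX shell
for n in node1 node2 node3; do
  kubectl label nodes "$n" ceph-osd=enabled
  kubectl label nodes "$n" ceph-mon=enabled
done
```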

5. Enter the project directory and install the operator
cd rook/cluster/examples/kubernetes/ceph
kubectl apply -f common.yaml
kubectl apply -f operator.yaml

6. Configure cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4-20190917
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  placement:
# The generic "all" placement from the upstream example stays commented out;
# the per-daemon rules below (matching the ceph-* node labels) are used instead.
#   all:
#     nodeAffinity:
#       requiredDuringSchedulingIgnoredDuringExecution:
#         nodeSelectorTerms:
#         - matchExpressions:
#           - key: role
#             operator: In
#             values:
#             - storage-node
#     podAffinity:
#     podAntiAffinity:
#     tolerations:
#     - key: storage-node
#       operator: Exists
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-mon
        operator: Exists
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-osd
        operator: Exists
    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mgr
              operator: In
              values:
              - enabled
      tolerations:
      - key: ceph-mgr
        operator: Exists
  annotations:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: false    # do not consume every node
    useAllDevices: false  # do not consume every device
    deviceFilter: sdb
    config:
      metadataDevice:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"
    nodes:
    - name: "node1"       # storage node hostname
      config:
        storeType: bluestore  # bluestore uses the raw device directly
      devices:
      - name: "sdb"       # use disk sdb
    - name: "node2"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
    - name: "node3"
      config:
        storeType: bluestore
      devices:
      - name: "sdb"
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

7. Apply cluster.yaml
kubectl apply -f cluster.yaml

8. Check the Pod status
kubectl get pod -n rook-ceph

9. Install the Toolbox
The toolbox is a Rook utility container whose commands are used to debug and test Rook; ad-hoc Ceph operations are generally run from inside this container.
kubectl apply -f toolbox.yaml
kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"

10. Test Rook
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
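Inside the toolbox, a few standard Ceph CLI checks confirm the cluster came up correctly (the expected states in the comments describe general Ceph behavior, not output captured from this deployment):

```shell
# Quick health checks from inside the rook-ceph-tools container
ceph status       # overall health; a healthy cluster reports HEALTH_OK
ceph osd status   # one OSD per storage node, each "exists,up"
ceph df           # raw and per-pool capacity
rados df          # object counts per pool
```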




Ceph Block Storage

11. Create a StorageClass
Before block storage can be provisioned, a StorageClass and a storage pool must exist. Kubernetes needs both resources to interact with Rook and allocate persistent volumes (PVs).
The configuration below creates a pool named replicapool and a StorageClass named rook-ceph-block.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete

12. Apply storageclass.yaml
kubectl apply -f storageclass.yaml
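A quick check that both objects were created; the pool listing is run from inside the toolbox container:

```shell
# The StorageClass should be registered with the rbd CSI provisioner
kubectl get storageclass rook-ceph-block
# From inside the toolbox: the backing pool should exist
ceph osd pool ls   # expect "replicapool" in the output
```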

13. Create the PVC
The PVC below sets storageClassName to rook-ceph-block, the class backed by the Rook Ceph cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi

14. Apply pvc.yaml
kubectl apply -f pvc.yaml
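`kubectl get pvc block-pvc` should report STATUS Bound once the CSI provisioner has carved out an RBD image. To confirm the volume actually mounts, a minimal test pod can reference the claim; the pod name and image here are illustrative, not from the original post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: block-test        # hypothetical name, any will do
spec:
  containers:
  - name: app
    image: nginx          # any image works; nginx is just an example
    volumeMounts:
    - name: data
      mountPath: /data    # the RBD-backed ext4 volume appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
```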

dashboard

15. The dashboard has already been created, but it is exposed only through a ClusterIP. The default manifest below, provided upstream, deploys an external NodePort service for the dashboard.

apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort

kubectl create -f dashboard-external-https.yaml

16. Check the service status
kubectl get svc -n rook-ceph

17. Retrieve the password and log in
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath='{.data.password}' | base64 --decode
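The password is stored base64-encoded in the secret, hence the `base64 --decode` above; the dashboard user is `admin`. The port for the URL is the NodePort assigned to the external service (service name taken from the manifest in step 15):

```shell
# Read the NodePort assigned to the external dashboard service
PORT=$(kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https \
  -o jsonpath='{.spec.ports[0].nodePort}')
# Any node IP works for a NodePort service, e.g. node1 from the host table
echo "https://192.168.200.6:${PORT}"
```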

https://ip:port

This concludes the deployment.