5. Ceph CephFS File Storage
I. Create the MDS (CephFS cluster)
1. Create the MDS cluster
ceph-deploy mds create ceph01 ceph02 ceph03
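Before moving on, it can help to confirm that the new MDS daemons have registered with the cluster. A minimal check, assuming it is run from a node that has the admin keyring:
# MDS status; with no file system yet, the daemons show up as standby
ceph mds stat
# Cluster summary, including the mds line
ceph -s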
2. Create the metadata pool and the data pool
ceph osd pool create cephfs_metadata 64
ceph osd pool create cephfs_data 64
ceph fs new cephfs cephfs_metadata cephfs_data
# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [ cephfs_data ]
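Mounting as client.admin works, but a dedicated client limited to this file system is usually preferable. A sketch, assuming a hypothetical client name client.fsuser (the ceph fs authorize command is available since Luminous):
# Create a client restricted to rw access on the file system root
ceph fs authorize cephfs client.fsuser / rw
# Print the generated key so it can be used for mounting
ceph auth get client.fsuser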
3. Mount CephFS with the mount command
# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQDVV+Nf6nwoJRAA9DQaKs8jcHEXuby8YInhhQ==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
# mount -t ceph ceph02:/ /media/ -o name=admin,secret=AQDVV+Nf6nwoJRAA9DQaKs8jcHEXuby8YInhhQ==
# df -h /media/
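Passing the key with secret= leaves it in the shell history; the kernel client also accepts a secretfile= option pointing at a file that contains only the key (the same file created for the fstab entry below). ceph-fuse is a user-space alternative if the kernel client is unavailable. Both lines are a sketch rather than a required step:
# Kernel mount reading the key from a file instead of the command line
mount -t ceph ceph02:6789:/ /media/ -o name=admin,secretfile=/root/.secret.key
# FUSE-based mount (requires the ceph-fuse package plus ceph.conf and a keyring on the client)
ceph-fuse -m ceph02:6789 /media/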
Mount automatically at boot
# 1. Write the key to a file
echo 'AQDVV+Nf6nwoJRAA9DQaKs8jcHEXuby8YInhhQ==' > /root/.secret.key
echo 'ceph01:6789:/ /media ceph name=admin,secretfile=/root/.secret.key,noatime,_netdev 0 0' >> /etc/fstab
mount -a
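A quick way to confirm the fstab entry is correct without rebooting:
# Unmount, then let mount -a pick the entry up from /etc/fstab again
umount /media
mount -a
df -h /media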
4. CephFS management commands
# List all file systems
ceph fs ls
# Show file system status
ceph fs status cephfs
# Change the number of active MDS daemons (multi-active)
ceph fs set cephfs max_mds 2
ceph fs set cephfs max_mds 1
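After changing max_mds, the number of active ranks can be verified; a minimal check (the grep is just illustrative):
# Show active/standby MDS daemons per rank
ceph fs status cephfs
# Show the configured max_mds value for the file system
ceph fs get cephfs | grep max_mds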
1. Set the pool replica count
# ceph osd pool get testpool size
# ceph osd pool set testpool size 3
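A related setting is min_size, the number of replicas that must be available for the pool to keep serving I/O; shown here against the same example pool testpool:
# ceph osd pool get testpool min_size
# ceph osd pool set testpool min_size 2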
2. List the pools
# ceph osd lspools
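For a more detailed per-pool view (replica size, pg_num, flags, and quotas in one listing):
# ceph osd pool ls detail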
3. Create and delete pools
Create a pool
# ceph osd pool create testPool 64
Rename a pool
# ceph osd pool rename testPool amizPool
Get the pool replica count
# ceph osd pool get amizPool size
Set the pool replica count
# ceph osd pool set amizPool size 3
Get the pool pg_num/pgp_num
# ceph osd pool get amizPool pg_num
# ceph osd pool get amizPool pgp_num
Set the pool pg_num/pgp_num
# ceph osd pool set amizPool pg_num 128
# ceph osd pool set amizPool pgp_num 128
Delete a pool
# ceph osd pool delete amizPool amizPool --yes-i-really-really-mean-it
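On recent releases the monitors also refuse pool deletion by default (an EPERM error saying pool deletion is disabled); if that happens, the option below has to be enabled first and should be switched back off afterwards. A sketch, assuming Mimic or later:
# Temporarily allow pool deletion, delete, then disable it again
ceph config set mon mon_allow_pool_delete true
ceph osd pool delete amizPool amizPool --yes-i-really-really-mean-it
ceph config set mon mon_allow_pool_delete false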
Error reported when deleting a pool
Error EBUSY: pool 'testpool' is in use by CephFS
# Remove the pool from CephFS first (older releases: ceph mds remove_data_pool testpool)
# ceph fs rm_data_pool cephfs testpool
# ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
4. Set and view pool quotas
# ceph osd pool set-quota poolroom1 max_bytes 6G
View pool quotas
# ceph osd pool get-quota poolroom1
quotas for pool 'poolroom1':
max objects: N/A
max bytes : 6144MB # pool quota of 6G
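To lift the quota again, set it back to 0 (0 means unlimited), shown here for the same example pool:
# ceph osd pool set-quota poolroom1 max_bytes 0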
II. Adding CephFS persistent storage to a Kuboard K8S cluster
See the Kuboard documentation: https://www.kuboard.cn/learning/k8s-intermediate/persistent/ceph/k8s-config.html#%E5%89%8D%E6%8F%90%E6%9D%A1%E4%BB%B6