
Ceph Operations: File Storage

1 Abstract

This article covers deploying CephFS and mounting it from a client running CentOS 8.1.

2 Environment

(1) Ceph server information

(2) Ceph client information

3 File Storage Operations

(1) Deploying CephFS

3.1.1 Deploying CephFS with ceph-deploy

Run as the cephadmin user on the deploy node:

[cephadmin@ceph001 cephcluster]$ ceph-deploy mds create ceph001 ceph002 ceph003
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mds create ceph001 ceph002 ceph003
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe39399da28>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7fe393befe60>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('ceph001', 'ceph001'), ('ceph002', 'ceph002'), ('ceph003', 'ceph003')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph001:ceph001 ceph002:ceph002 ceph003:ceph003

On each node, a successful deployment produces output like the following:

[ceph001][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph001 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph001/keyring
[ceph001][INFO  ] Running command: sudo systemctl enable ceph-mds@ceph001
[ceph001][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[ceph001][INFO  ] Running command: sudo systemctl start ceph-mds@ceph001
[ceph001][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph002][DEBUG ] connection detected need for sudo
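To confirm that each MDS daemon actually came up, you can check the systemd unit on each node, or query the cluster from the admin node (a quick check; host names as above):

# on each MDS node: is the unit running?
sudo systemctl status ceph-mds@ceph001
# from the admin node: overall MDS state
ceph mds stat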

3.1.2 Creating the CephFS pools

Check whether the OSD disks are HDDs or SSDs; if performance requirements are high, SSDs can be used for the metadata pool (see the sketch after the listing below).


[root@ceph001 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
-1       0.14639 root default
-3       0.04880     host ceph001
 0   hdd 0.04880         osd.0        up  1.00000 1.00000
-5       0.04880     host ceph002
 1   hdd 0.04880         osd.1        up  1.00000 1.00000
-7       0.04880     host ceph003
 2   hdd 0.04880         osd.2        up  1.00000 1.00000
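All three OSDs here are class hdd, so both pools will land on spinning disks. If the cluster did have SSD-class OSDs, a minimal sketch for steering the metadata pool onto them might look like the following (the rule name ssd-rule is hypothetical; adjust the root and failure domain to your CRUSH hierarchy):

# create a replicated CRUSH rule that selects only device-class ssd OSDs
ceph osd crush rule create-replicated ssd-rule default host ssd
# after creating cephfs_metadata (next step), point it at that rule
ceph osd pool set cephfs_metadata crush_rule ssd-rule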

Create the data pool and the metadata pool. I used both the root user and the cephadmin user here; it is better to stick to the cephadmin user only.

[root@ceph001 ~]# ceph osd pool create cephfs_data 64
pool 'cephfs_data' created
[root@ceph001 ~]# ceph -s
  cluster:
    id:     69002794-cf45-49fa-8849-faadae48544f
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum ceph001,ceph002,ceph003 (age 21h)
    mgr: ceph002(active, since 20h), standbys: ceph003, ceph001
    mds:  3 up:standby
    osd: 3 osds: 3 up (since 21h), 3 in (since 21h)

  data:
    pools:   2 pools, 128 pgs
    objects: 42 objects, 116 MiB
    usage:   3.4 GiB used, 147 GiB / 150 GiB avail
    pgs:     128 active+clean

[root@ceph001 ~]# su - cephadmin
Last login: Tue Dec  1 14:16:31 CST 2020 on pts/0
[cephadmin@ceph001 ~]$ ceph osd pool create cephfs_metadata 64
pool 'cephfs_metadata' created
[cephadmin@ceph001 ~]$
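Note the HEALTH_WARN above, "application not enabled on 1 pool(s)". It refers either to the freshly created cephfs_data pool, which ceph fs new will tag automatically in 3.1.3, or to the rbd pool from the previous article; in the latter case it can be cleared with:

ceph osd pool application enable rbd rbd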

3.1.3 Creating the file system

The command format is:

ceph fs new <fs_name> <metadata_pool> <data_pool>

$ ceph fs new cephfs cephfs_metadata cephfs_data  # create the file system

[cephadmin@ceph001 cephcluster]$ ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
[cephadmin@ceph001 cephcluster]$

3.1.4 Checking the result

[cephadmin@ceph001 cephcluster]$ ceph mds stat
cephfs:1 {0=ceph003=up:active} 2 up:standby
[cephadmin@ceph001 cephcluster]$ ceph osd pool ls
rbd
cephfs_data
cephfs_metadata
[cephadmin@ceph001 cephcluster]$ ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[cephadmin@ceph001 cephcluster]$

3.1.5 Creating a user

Create a user (optional, since keys were already generated during deployment, but we prefer to define a dedicated, less-privileged account).

Run the following command in /home/cephadmin/cephcluster to generate ceph.client.cephfs.keyring:

[cephadmin@ceph001 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@ceph001 cephcluster]$ ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow r, allow rw path=/' osd 'allow rw pool=cephfs_data' -o ceph.client.cephfs.keyring
[cephadmin@ceph001 cephcluster]$ ll
total 156
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-mds.keyring
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-mgr.keyring
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-osd.keyring
-rw------- 1 cephadmin cephadmin    113 Nov 30 17:17 ceph.bootstrap-rgw.keyring
-rw------- 1 cephadmin cephadmin    151 Nov 30 17:17 ceph.client.admin.keyring
-rw-rw-r-- 1 cephadmin cephadmin     64 Dec  1 15:11 ceph.client.cephfs.keyring
-rw-rw-r-- 1 cephadmin cephadmin     61 Dec  1 09:45 ceph.client.rbd.keyring
-rw-rw-r-- 1 cephadmin cephadmin    313 Nov 30 17:09 ceph.conf
-rw-rw-r-- 1 cephadmin cephadmin    247 Nov 30 17:00 ceph.conf.bak.orig
-rw-rw-r-- 1 cephadmin cephadmin 115251 Dec  1 14:16 ceph-deploy-ceph.log
-rw------- 1 cephadmin cephadmin     73 Nov 30 16:50 ceph.mon.keyring
[cephadmin@ceph001 cephcluster]$
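To double-check what was granted, the user can be printed back from the cluster. The caps above allow read-only access to the MONs, read/write under the / path of the file system, and read/write limited to the cephfs_data pool:

# show the stored key and capabilities for the new user
ceph auth get client.cephfs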

(2) Mounting CephFS

A client can mount CephFS in two ways: via the Linux kernel driver, or via FUSE (ceph-fuse).
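This article walks through the FUSE method; for reference, a kernel-driver mount would look roughly like this (a sketch, assuming the secret from ceph.client.cephfs.keyring has been saved, key value only, to /etc/ceph/cephfs.secret):

mkdir -p /mnt/cephfs
# in-kernel CephFS client; secretfile must contain only the base64 key
mount -t ceph ceph001:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfs.secret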

3.2.1 Mounting CephFS with FUSE

First install FUSE; for yum configuration and related setup, refer to the previous article in this series, Ceph Operations: Block Storage.

3.2.1.1 Downloading and installing ceph-fuse

Download:

[root@cephclient ~]# yum -y install --downloadonly --downloaddir=/root/software/ceph-fusecentos8/ ceph-fuse

Install:

[root@cephclient ceph-fusecentos8]# yum -y install  ceph-fuse
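A quick way to confirm the package and client version:

rpm -q ceph-fuse
ceph-fuse --version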

3.2.1.2 Mounting the directory

First, copy the generated key from the server to the client:

[cephadmin@ceph001 cephcluster]$ pwd
/home/cephadmin/cephcluster
[cephadmin@ceph001 cephcluster]$ scp ceph.client.cephfs.keyring [email protected]:/etc/ceph/
[email protected]'s password:
ceph.client.cephfs.keyring                                                                                   100%   64    13.1KB/s   00:00
[cephadmin@ceph001 cephcluster]$

Copy ceph.conf to /etc/ceph/ on the client as well.

It was already copied to this machine earlier, so that step is skipped here.

Run the mount on the client:

[root@cephclient ~]# mkdir /mnt/cephfs
[root@cephclient ~]# ceph-fuse --keyring /etc/ceph/ceph.client.cephfs.keyring --name client.cephfs -m ceph001:6789 /mnt/cephfs
2020-12-01 18:02:34.065 7ff1b6c121c0 -1 init, newargv = 0x55d0027051b0 newargc=9
ceph-fuse[24671]: starting ceph client
ceph-fuse[24671]: starting fuse
[root@cephclient ~]#
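A quick sanity check that the mount is live and writable (the test file name is arbitrary):

# confirm the mount point and capacity
df -h /mnt/cephfs
# write and read back a small test file
echo "hello cephfs" > /mnt/cephfs/test.txt
cat /mnt/cephfs/test.txt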

To mount automatically at boot, add the following line to /etc/fstab:

none /mnt/cephfs fuse.ceph ceph.id=cephfs,_netdev,defaults 0 0
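With that line in /etc/fstab, the entry can be tested without rebooting:

# unmount, then let mount -a re-mount it from /etc/fstab
umount /mnt/cephfs
mount -a
df -h /mnt/cephfs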