Deploying a Cluster with ceph-deploy
Quick Ceph Deploy
The cluster has two nodes (tom-1, tom-2); the whole cluster is deployed from tom-1 with ceph-deploy. Both nodes run CentOS 7.1.
PREFLIGHT CHECKLIST
1. Add Ceph repositories
The official mirrors are slow, so the Aliyun yum mirror is used here:
[root@tom-1 yum.repos.d]# cat ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
priority=1
In addition, the EPEL repository is required:
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
The steps above must be performed on both tom-1 and tom-2.
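As an optional sanity check on each node, the yum metadata can be rebuilt so the new repositories are picked up:
yum clean all
yum makecache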
2. Install NTP
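The commands for this step are not spelled out above; a minimal sketch for CentOS 7, assuming ntpd (rather than chronyd) is used, to be run on both nodes:
yum install -y ntp ntpdate
systemctl enable ntpd
systemctl start ntpd
Keeping the clocks in sync matters because Ceph monitors are sensitive to clock skew.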
3. Enable password-less ssh
When installing the cluster with ceph-deploy, installation commands are run and configuration files are pushed onto the other nodes, so password-less SSH login is required.
On tom-1, run:
ssh-keygen
Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
Copy the public key to tom-2:
ssh-copy-id tom-2
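A quick way to confirm the key landed correctly is to run a command over SSH; it should complete without a password prompt:
ssh tom-2 hostname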
4. Open required ports
Either disable the firewall or open the ports Ceph uses: 6789 (MON) and 6800:7300 (OSD).
iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
iptables -A INPUT -i {iface} -m multiport -p tcp -s {ip-address}/{netmask} --dports 6800:7300 -j ACCEPT
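On CentOS 7 the active firewall frontend is usually firewalld rather than raw iptables; an equivalent port-based rule set would look roughly like this (a sketch, adjust the zone to your environment):
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload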
5. Disable SELinux
setenforce 0
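setenforce 0 only switches SELinux to permissive mode until the next reboot. To make the change persistent, the config file should be updated as well (a sketch, assuming the stock /etc/selinux/config with SELINUX=enforcing):
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config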
CREATE A CLUSTER
On tom-1:
- Create a directory to hold the configuration files and logs that ceph-deploy generates while deploying the cluster:
[root@tom-1 ~]# mkdir ceph-cluster
[root@tom-1 ~]# cd ceph-cluster
Initialize the cluster configuration
[root@tom-1 ceph-cluster]# ceph-deploy new tom-1
This sets tom-1 as the monitor node; a cluster configuration file named ceph.conf is generated in the current directory.
[root@tom-1 ceph-cluster]# echo "osd pool default size = 2" >> ceph.conf
[root@tom-1 ceph-cluster]# echo "public network = 172.16.6.0/24" >> ceph.conf
This sets the default number of object replicas per pool (osd pool default size) to 2, matching the two OSDs in this cluster, and sets the public network Ceph serves clients on (required when a server has multiple NICs on different subnets).
[root@tom-1 ceph-cluster]# cat ceph.conf
[global]
fsid = c02c3880-2879-4ee8-93dc-af0e9dba3727
mon_initial_members = tom-1
mon_host = 172.16.6.249
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 172.16.6.0/24
Install Ceph
[root@tom-1 ceph-cluster]# ceph-deploy --username root install tom-{1,2}
If the network is slow, this command may fail; in that case the required packages can be installed manually on each node:
yum -y install yum-plugin-priorities
yum -y install ceph ceph-radosgw
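Whichever way the packages were installed, the version can be confirmed on each node before continuing (optional):
ceph --version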
Initialize the monitor node
[root@tom-1 ceph-cluster]# ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.37): /usr/bin/ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2515c68>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x24c89b0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts tom-1
[ceph_deploy.mon][DEBUG ] detecting platform for host tom-1 ...
[tom-1][DEBUG ] connected to host: tom-1
[tom-1][DEBUG ] detect platform information from remote host
[tom-1][DEBUG ] detect machine type
[tom-1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.1.1503 Core
[tom-1][DEBUG ] determining if provided host has same hostname in remote
[tom-1][DEBUG ] get remote short hostname
[tom-1][DEBUG ] deploying mon to tom-1
[tom-1][DEBUG ] get remote short hostname
[tom-1][DEBUG ] remote hostname: tom-1
[tom-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[tom-1][DEBUG ] create the mon path if it does not exist
[tom-1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-tom-1/done
[tom-1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-tom-1/done
[tom-1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-tom-1.mon.keyring
[tom-1][DEBUG ] create the monitor keyring file
[tom-1][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i tom-1 --keyring /var/lib/ceph/tmp/ceph-tom-1.mon.keyring --setuser 167 --setgroup 167
[tom-1][DEBUG ] ceph-mon: renaming mon.noname-a 172.16.6.249:6789/0 to mon.tom-1
[tom-1][DEBUG ] ceph-mon: set fsid to c02c3880-2879-4ee8-93dc-af0e9dba3727
[tom-1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-tom-1 for mon.tom-1
[tom-1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-tom-1.mon.keyring
[tom-1][DEBUG ] create a done file to avoid re-doing the mon deployment
[tom-1][DEBUG ] create the init path if it does not exist
[tom-1][INFO ] Running command: systemctl enable ceph.target
[tom-1][INFO ] Running command: systemctl enable ceph-mon@tom-1
[tom-1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[tom-1][INFO ] Running command: systemctl start ceph-mon@tom-1
[tom-1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.tom-1.asok mon_status
[tom-1][DEBUG ] ********************************************************************************
[tom-1][DEBUG ] status for monitor: mon.tom-1
[tom-1][DEBUG ] {
[tom-1][DEBUG ] "election_epoch": 3,
[tom-1][DEBUG ] "extra_probe_peers": [],
[tom-1][DEBUG ] "monmap": {
[tom-1][DEBUG ] "created": "2017-06-16 11:08:55.887144",
[tom-1][DEBUG ] "epoch": 1,
[tom-1][DEBUG ] "fsid": "c02c3880-2879-4ee8-93dc-af0e9dba3727",
[tom-1][DEBUG ] "modified": "2017-06-16 11:08:55.887144",
[tom-1][DEBUG ] "mons": [
[tom-1][DEBUG ] {
[tom-1][DEBUG ] "addr": "172.16.6.249:6789/0",
[tom-1][DEBUG ] "name": "tom-1",
[tom-1][DEBUG ] "rank": 0
[tom-1][DEBUG ] }
[tom-1][DEBUG ] ]
[tom-1][DEBUG ] },
[tom-1][DEBUG ] "name": "tom-1",
[tom-1][DEBUG ] "outside_quorum": [],
[tom-1][DEBUG ] "quorum": [
[tom-1][DEBUG ] 0
[tom-1][DEBUG ] ],
[tom-1][DEBUG ] "rank": 0,
[tom-1][DEBUG ] "state": "leader",
[tom-1][DEBUG ] "sync_provider": []
[tom-1][DEBUG ] }
[tom-1][DEBUG ] ********************************************************************************
[tom-1][INFO ] monitor: mon.tom-1 is running
[tom-1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.tom-1.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.tom-1
[tom-1][DEBUG ] connected to host: tom-1
[tom-1][DEBUG ] detect platform information from remote host
[tom-1][DEBUG ] detect machine type
[tom-1][DEBUG ] find the location of an executable
[tom-1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.tom-1.asok mon_status
[ceph_deploy.mon][INFO ] mon.tom-1 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpzD6TIM
[tom-1][DEBUG ] connected to host: tom-1
[tom-1][DEBUG ] detect platform information from remote host
[tom-1][DEBUG ] detect machine type
[tom-1][DEBUG ] get remote short hostname
[tom-1][DEBUG ] fetch remote file
[tom-1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.tom-1.asok mon_status
[tom-1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-tom-1/keyring auth get client.admin
[tom-1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-tom-1/keyring auth get client.bootstrap-mds
[tom-1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-tom-1/keyring auth get client.bootstrap-osd
[tom-1][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-tom-1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpzD6TIM
Note: The bootstrap-rgw keyring is only created during installation of clusters running Hammer or newer
Note: If this process fails with a message similar to "Unable to find /etc/ceph/ceph.client.admin.keyring", please ensure that the IP listed for the monitor node in ceph.conf is the Public IP, not the Private IP.
Add OSDs
An OSD needs two parts: a data store and a journal. The example below uses a directory on the operating system's filesystem (a block device can also be specified).
Run the following on both tom-1 and tom-2 to create the OSD data directory (the journal will be created as a file inside it):
mkdir -p /ceph/osd/0 && chown -R ceph:ceph /ceph
Then, on tom-1, run:
[root@tom-1 ceph-cluster]# ceph-deploy osd prepare tom-{1,2}:/ceph/osd/0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.37): /usr/bin/ceph-deploy osd prepare tom-1:/ceph/osd/0 tom-2:/ceph/osd/0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('tom-1', '/ceph/osd/0', None), ('tom-2', '/ceph/osd/0', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1abb2d8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x1a6b2a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks tom-1:/ceph/osd/0: tom-2:/ceph/osd/0:
[tom-1][DEBUG ] connected to host: tom-1
[tom-1][DEBUG ] detect platform information from remote host
[tom-1][DEBUG ] detect machine type
[tom-1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to tom-1
[tom-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host tom-1 disk /ceph/osd/0 journal None activate False
[tom-1][DEBUG ] find the location of an executable
[tom-1][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /ceph/osd/0
[tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[tom-1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[tom-1][WARNIN] populate_data_path: Preparing osd data dir /ceph/osd/0
[tom-1][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/ceph_fsid.26575.tmp
[tom-1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/ceph_fsid.26575.tmp
[tom-1][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/fsid.26575.tmp
[tom-1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/fsid.26575.tmp
[tom-1][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/magic.26575.tmp
[tom-1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/magic.26575.tmp
[tom-1][INFO ] checking OSD status...
[tom-1][DEBUG ] find the location of an executable
[tom-1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host tom-1 is now ready for osd use.
[tom-2][DEBUG ] connected to host: tom-2
[tom-2][DEBUG ] detect platform information from remote host
[tom-2][DEBUG ] detect machine type
[tom-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to tom-2
[tom-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[tom-2][WARNIN] osd keyring does not exist yet, creating one
[tom-2][DEBUG ] create a keyring file
[ceph_deploy.osd][DEBUG ] Preparing host tom-2 disk /ceph/osd/0 journal None activate False
[tom-2][DEBUG ] find the location of an executable
[tom-2][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /ceph/osd/0
[tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[tom-2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[tom-2][WARNIN] populate_data_path: Preparing osd data dir /ceph/osd/0
[tom-2][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/ceph_fsid.24644.tmp
[tom-2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/ceph_fsid.24644.tmp
[tom-2][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/fsid.24644.tmp
[tom-2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/fsid.24644.tmp
[tom-2][WARNIN] command: Running command: /usr/sbin/restorecon -R /ceph/osd/0/magic.24644.tmp
[tom-2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /ceph/osd/0/magic.24644.tmp
[tom-2][INFO ] checking OSD status...
[tom-2][DEBUG ] find the location of an executable
[tom-2][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host tom-2 is now ready for osd use.
Activate the OSDs
[root@tom-1 ceph-cluster]# ceph-deploy osd activate tom-{1,2}:/ceph/osd/0
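After activation the OSD daemon should be running on each node, which can be checked locally before going further (shown for osd.0 on tom-1; the id assigned to the OSD on tom-2 will differ):
systemctl status ceph-osd@0.service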
Copy the configuration file and admin key
Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
[root@tom-1 ceph-cluster]# ceph-deploy --username root admin tom-{1,2}
Then, on both tom-1 and tom-2:
[root@tom-1 ceph]# pwd
/etc/ceph
[root@tom-1 ceph]# ls -l
total 12
-rw------- 1 root root 129 Jun 16 15:58 ceph.client.admin.keyring
-rw-r--r-- 1 root root 252 Jun 16 15:58 ceph.conf
-rwxr-xr-x 1 root root 92 Sep 21 2016 rbdmap
-rw------- 1 root root 0 Jun 16 11:07 tmp9g7ZGm
-rw------- 1 root root 0 Jun 16 11:08 tmpB8roOG
[root@tom-1 ceph]# chmod +r ceph.client.admin.keyring
Make sure ceph.client.admin.keyring has read permission (the chmod +r above).
Check the cluster status
Run ceph health; if the cluster is operating normally it returns HEALTH_OK.
Note: If the OSD data directories sit on an ext4 filesystem, you will instead see something like this:
[root@tom-1 ceph-cluster]# ceph health
HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds; 64 pgs stuck inactive
[root@tom-1 ceph-cluster]# ceph -s
    cluster c02c3880-2879-4ee8-93dc-af0e9dba3727
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
     monmap e1: 1 mons at {tom-1=172.16.6.249:6789/0}
            election epoch 3, quorum 0 tom-1
     osdmap e5: 2 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v6: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
The cluster is unhealthy; /var/log/messages shows errors like the following:
Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.347940 7ff9d80dd800 -1 osd.0 0 backend (filestore) is unable to support max object name[space] len
Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.347949 7ff9d80dd800 -1 osd.0 0 osd max object name len = 2048
Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.347951 7ff9d80dd800 -1 osd.0 0 osd max object namespace len = 256
Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.347952 7ff9d80dd800 -1 osd.0 0 (36) File name too long
Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.354561 7ff9d80dd800 -1 ** ERROR: osd init failed: (36) File name too long
Jun 16 15:53:32 tom-1 ceph-osd: 2017-06-16 15:53:32.354561 7ff9d80dd800 -1 ** ERROR: osd init failed: (36) File name too long
Jun 16 15:53:32 tom-1 systemd: ceph-osd@0.service: main process exited, code=exited, status=1/FAILURE
The official documentation explains the filesystem requirement as follows:
We recommend against using ext4 due to limitations in the size of xattrs it can store, and the problems this causes with the way Ceph handles long RADOS object names. Although these issues will generally not surface with Ceph clusters using only short object names (e.g., an RBD workload that does not include long RBD image names), other users like RGW make extensive use of long object names and can break.
Starting with the Jewel release, the ceph-osd daemon will refuse to start if the configured max object name cannot be safely stored on ext4. If the cluster is only being used with short object names (e.g., RBD only), you can continue using ext4 by setting the following configuration option:
osd max object name len = 256
osd max object namespace len = 64
Note: This may result in difficult-to-diagnose errors if you try to use RGW or other librados clients that do not properly handle or politely surface any resulting ENAMETOOLONG errors.
Modify /etc/ceph/ceph.conf on tom-1 and tom-2 accordingly:
[root@tom-1 0]# cat /etc/ceph/ceph.conf
[global]
fsid = c02c3880-2879-4ee8-93dc-af0e9dba3727
mon_initial_members = tom-1
mon_host = 172.16.6.249
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 172.16.6.0/24
# for ext4
osd max object name len = 256
osd max object namespace len = 64
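Editing the file by hand on each node works fine for two hosts; alternatively, if the same two lines are added to the ceph.conf in the ceph-cluster working directory, ceph-deploy can push it out (a sketch under that assumption):
ceph-deploy --overwrite-conf config push tom-1 tom-2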
Restart the OSD service (shown here for osd.0 on tom-1; do the same for the OSD on tom-2):
[root@tom-1 ceph-cluster]# systemctl restart ceph-osd@0.service
[root@tom-1 ceph-cluster]# systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-06-16 17:22:35 CST; 22s ago
  Process: 15644 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 15695 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─15695 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Jun 16 17:22:35 tom-1 systemd[1]: Starting Ceph object storage daemon...
Jun 16 17:22:35 tom-1 ceph-osd-prestart.sh[15644]: create-or-move updated item name 'osd.0' weight 0.0279 at location {host=tom...sh map
Jun 16 17:22:35 tom-1 systemd[1]: Started Ceph object storage daemon.
Jun 16 17:22:35 tom-1 ceph-osd[15695]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jun 16 17:22:35 tom-1 ceph-osd[15695]: 2017-06-16 17:22:35.933500 7f3193a75800 -1 journal FileJournal::_open: disabling aio fo... anyway
Jun 16 17:22:35 tom-1 ceph-osd[15695]: 2017-06-16 17:22:35.985202 7f3193a75800 -1 osd.0 0 log_to_monitors {default=true}
Hint: Some lines were ellipsized, use -l to show in full.
[root@tom-1 ceph-cluster]# ceph -s
    cluster c02c3880-2879-4ee8-93dc-af0e9dba3727
     health HEALTH_OK
     monmap e1: 1 mons at {tom-1=172.16.6.249:6789/0}
            election epoch 3, quorum 0 tom-1
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise
      pgmap v22: 64 pgs, 1 pools, 0 bytes data, 0 objects
            33896 MB used, 22017 MB / 58515 MB avail
                  64 active+clean
Uninstall the cluster
If you want to redeploy the Ceph cluster from scratch, the following commands remove the Ceph packages, the data they produced, and the configuration files (run on tom-1):
[root@tom-1 ceph-cluster]# ceph-deploy purge {ceph-node} [{ceph-node}]
[root@tom-1 ceph-cluster]# ceph-deploy purgedata {ceph-node} [{ceph-node}]
[root@tom-1 ceph-cluster]# ceph-deploy forgetkeys
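For this two-node cluster, for example, the placeholders would be filled in as:
ceph-deploy purge tom-1 tom-2
ceph-deploy purgedata tom-1 tom-2
ceph-deploy forgetkeys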
CEPH FILESYSTEM
Create MDS
Run on tom-1:
ceph-deploy mds create tom-1
Check that the MDS has started:
systemctl status ceph-mds@tom-1
Create pool
Tip: The ceph fs new command was introduced in Ceph 0.84. Prior to this release, no manual steps are required to create a filesystem, and pools named data and metadata exist by default. The Ceph command line now includes commands for creating and removing filesystems, but at present only one filesystem may exist at a time.
A Ceph filesystem requires at least two RADOS pools: one for data and one for metadata. When creating these pools, keep the following in mind:
- Any data loss in the metadata pool can render the whole filesystem unusable, so give it a higher pool size (replica count).
- Back the metadata pool with fast devices such as SSDs, as this directly affects the observed latency of client filesystem operations.
Run the following commands to create the two pools (the trailing 128 is the number of placement groups, pg_num):
ceph osd pool create walker_data 128
ceph osd pool create walker_metadata 128
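To confirm the pools were created (optional):
ceph osd lspools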
Once the pools are created, enable the filesystem with fs new:
ceph fs new <fs_name> <metadata> <data>
ceph fs new walkerfs walker_metadata walker_data
Check the status of the Ceph filesystem with the following commands:
[root@tom-1 ceph-cluster]# ceph fs ls
name: walkerfs, metadata pool: walker_metadata, data pools: [walker_data ]
[root@tom-1 ceph-cluster]# ceph mds stat
e5: 1/1/1 up {0=tom-1=up:active}
Once this succeeds, the Ceph filesystem can be used in either of the two ways described below.