Ceph deployment tutorial on CentOS 7.2
System
[root@ceph-1 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
Hosts
hostname | ip | role |
---|---|---|
ceph-1 | 10.39.47.63 | deploy, mon1, osd1 |
ceph-2 | 10.39.47.64 | mon2, osd2 |
ceph-3 | 10.39.47.65 | mon3, osd3 |
Host disks
[root@ceph-1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 20G 0 disk
└─vda1 253:1 0 20G 0 part /
vdb 253:16 0 4G 0 disk [SWAP]
vdc 253:32 0 80G 0 disk
Install the wget, ntp and vim tools
yum -y install wget ntp vim
Add /etc/hosts entries
[root@ceph-1 ~]# cat /etc/hosts
...
10.39.47.63 ceph-1
10.39.47.64 ceph-2
10.39.47.65 ceph-3
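The same three entries have to exist on every node. A minimal sketch to copy the file to the other two hosts (passwordless SSH is not configured yet at this point, so you will be asked for each root password):
for h in ceph-2 ceph-3; do
    scp /etc/hosts root@$h:/etc/hosts   # push the hosts file to the remaining nodes
done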
If a previous installation failed, clean up the environment first
ps aux|grep ceph |awk '{print $2}'|xargs kill -9
ps -ef|grep ceph   # make sure every ceph process is now stopped; if not, run the commands above a few more times
umount /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/mon/*
rm -rf /var/lib/ceph/mds/*
rm -rf /var/lib/ceph/bootstrap-mds/*
rm -rf /var/lib/ceph/bootstrap-osd/*
rm -rf /var/lib/ceph/bootstrap-rgw/*
rm -rf /var/lib/ceph/tmp/*
rm -rf /etc/ceph/*
rm -rf /var/run/ceph/*
Run the following commands on every host
Change the yum repositories
yum clean all
curl http://mirrors.aliyun.com/repo/Centos-7.repo >/etc/yum.repos.d/CentOS-Base.repo
curl http://mirrors.aliyun.com/repo/epel-7.repo >/etc/yum.repos.d/epel.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
yum makecache
Add the Ceph repository
vim /etc/yum.repos.d/ceph.repo
## The contents are as follows
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
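If you prefer not to edit the file with vim, the same repository definition can be written in one step with a here-document (identical contents to the block above):
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
EOF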
Install the Ceph client packages
yum makecache
yum install ceph ceph-radosgw rdate -y
Disable SELinux & firewalld
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
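A quick check that both are really off:
getenforce                     # Permissive now, Disabled after the next reboot
systemctl is-active firewalld  # should print inactive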
Synchronize the time on every node
yum -y install rdate
rdate -s time-a.nist.gov
echo rdate -s time-a.nist.gov >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
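Since the ntp package was already installed, an alternative to the one-shot rdate sync is to keep ntpd running, which prevents the mons from drifting apart again over time (a sketch using the default CentOS NTP servers):
systemctl enable ntpd
systemctl start ntpd
ntpq -p    # confirm ntpd can reach its upstream servers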
Start the deployment
Install ceph-deploy on the deploy node (ceph-1); from here on, "deploy node" always refers to ceph-1
[root@ceph-1 ~]# yum -y install ceph-deploy
[root@ceph-1 ~]# ceph-deploy --version
1.5.39
[root@ceph-1 ~]# ceph -v
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
Set up passwordless SSH login
[root@ceph-1 cluster]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
54:f8:9b:25:56:3b:b1:ce:fc:6d:c5:61:b1:55:79:49 root@ceph-1
The key's randomart image is:
+--[ RSA 2048]----+
| .. .E=|
| .. o +o|
| .. . + =|
| . + = + |
| S. O ....|
| o + o|
| . ..|
| . o|
| . |
+-----------------+
[root@ceph-1 cluster]# ssh-copy-id 10.39.47.63
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '10.39.47.63' (ECDSA) to the list of known hosts.
root@10.39.47.63's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '10.39.47.63'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph-1 cluster]# ssh-copy-id 10.39.47.64
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '10.39.47.64' (ECDSA) to the list of known hosts.
root@10.39.47.64's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '10.39.47.64'"
and check to make sure that only the key(s) you wanted were added.
[root@ceph-1 cluster]# ssh-copy-id 10.39.47.65
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Warning: Permanently added '10.39.47.65' (ECDSA) to the list of known hosts.
root@10.39.47.65's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '10.39.47.65'"
and check to make sure that only the key(s) you wanted were added.
Verify
[root@ceph-1 cluster]# ssh 10.39.47.65
Warning: Permanently added '10.39.47.65' (ECDSA) to the list of known hosts.
Last login: Fri Nov 2 10:06:39 2018 from 10.4.95.63
[root@ceph-3 ~]#
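Back on ceph-1, a short loop confirms that passwordless login works for all three nodes (the hostnames rely on the /etc/hosts entries added earlier):
for h in ceph-1 ceph-2 ceph-3; do
    ssh root@$h hostname    # should print each hostname without asking for a password
done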
Create the deployment directory on the deploy node and start deploying
[root@ceph-1 ~]# mkdir cluster
[root@ceph-1 ~]# cd cluster/
[root@ceph-1 cluster]# ceph-deploy new ceph-1 ceph-2 ceph-3
After it finishes, the following files are generated:
[root@ceph-1 cluster]# ls -l
total 16
-rw-r--r-- 1 root root 235 Nov 2 10:40 ceph.conf
-rw-r--r-- 1 root root 4879 Nov 2 10:40 ceph-deploy-ceph.log
-rw------- 1 root root 73 Nov 2 10:40 ceph.mon.keyring
Add public_network to ceph.conf according to your own IP range, and slightly increase the allowed clock drift between mons (the default is 0.05s; here it is raised to 2s):
[root@ceph-1 cluster]# echo public_network=10.39.47.0/24 >> ceph.conf
[root@ceph-1 cluster]# echo mon_clock_drift_allowed = 2 >> ceph.conf
[root@ceph-1 cluster]# cat ceph.conf
[global]
fsid = 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
mon_initial_members = ceph-1, ceph-2, ceph-3
mon_host = 10.39.47.63,10.39.47.64,10.39.47.65
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network=10.39.47.0/24
mon_clock_drift_allowed = 2
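If ceph.conf is edited again after the cluster is running, the updated file can be pushed from this directory to every node with ceph-deploy (note that --overwrite-conf replaces the existing /etc/ceph/ceph.conf on each host):
ceph-deploy --overwrite-conf config push ceph-1 ceph-2 ceph-3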
Deploy the monitors
[root@ceph-1 cluster]# ceph-deploy mon create-initial
// After it succeeds, the following files are present:
[root@ceph-1 cluster]# ls -l
total 56
-rw------- 1 root root 113 Nov 2 10:45 ceph.bootstrap-mds.keyring
-rw------- 1 root root 71 Nov 2 10:45 ceph.bootstrap-mgr.keyring
-rw------- 1 root root 113 Nov 2 10:45 ceph.bootstrap-osd.keyring
-rw------- 1 root root 113 Nov 2 10:45 ceph.bootstrap-rgw.keyring
-rw------- 1 root root 129 Nov 2 10:45 ceph.client.admin.keyring
-rw-r--r-- 1 root root 292 Nov 2 10:43 ceph.conf
-rw-r--r-- 1 root root 27974 Nov 2 10:45 ceph-deploy-ceph.log
-rw------- 1 root root 73 Nov 2 10:40 ceph.mon.keyring
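Optionally, distribute the configuration and the admin keyring to all nodes so that ceph -s can be run from any of them, not only from the deploy directory:
ceph-deploy admin ceph-1 ceph-2 ceph-3
chmod +r /etc/ceph/ceph.client.admin.keyring   # run on each node if the keyring is not readable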
Check the cluster status (HEALTH_ERR is expected here, since no OSDs have been added yet):
[root@ceph-1 cluster]# ceph -s
cluster 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
health HEALTH_ERR
no osds
monmap e1: 3 mons at {ceph-1=10.39.47.63:6789/0,ceph-2=10.39.47.64:6789/0,ceph-3=10.39.47.65:6789/0}
election epoch 6, quorum 0,1,2 ceph-1,ceph-2,ceph-3
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise,require_jewel_osds
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating
Deploy the OSDs
ceph-deploy --overwrite-conf osd prepare ceph-1:/dev/vdc ceph-2:/dev/vdc ceph-3:/dev/vdc --zap-disk
ceph-deploy --overwrite-conf osd activate ceph-1:/dev/vdc1 ceph-2:/dev/vdc1 ceph-3:/dev/vdc1
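With this ceph-deploy release (1.5.x), prepare and activate can also be combined into a single osd create call; the line below should be equivalent for the same disks:
ceph-deploy --overwrite-conf osd create ceph-1:/dev/vdc ceph-2:/dev/vdc ceph-3:/dev/vdc --zap-disk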
After deployment, check the cluster status again
[root@ceph-1 cluster]# ceph -s
cluster 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
health HEALTH_OK
monmap e1: 3 mons at {ceph-1=10.39.47.63:6789/0,ceph-2=10.39.47.64:6789/0,ceph-3=10.39.47.65:6789/0}
election epoch 6, quorum 0,1,2 ceph-1,ceph-2,ceph-3
osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v28: 64 pgs, 1 pools, 0 bytes data, 0 objects
322 MB used, 224 GB / 224 GB avail
64 active+clean
View the OSDs
[root@ceph-1 cluster]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.21959 root default
-2 0.07320 host ceph-1
0 0.07320 osd.0 up 1.00000 1.00000
-3 0.07320 host ceph-2
1 0.07320 osd.1 up 1.00000 1.00000
-4 0.07320 host ceph-3
2 0.07320 osd.2 up 1.00000 1.00000
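To see per-OSD and per-pool space usage as well (both commands are available in Jewel):
ceph osd df    # size, used space and PG count for each OSD
ceph df        # cluster-wide and per-pool usage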
There are several ways to list pools; the rbd pool shown below is created by default:
[root@ceph-1 cluster]# rados lspools
rbd
[root@ceph-1 cluster]# ceph osd lspools
0 rbd,
Create a pool
[root@ceph-1 cluster]# ceph osd pool create testpool 64
pool 'testpool' created
[root@ceph-1 cluster]# ceph osd lspools
0 rbd,1 testpool,
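A simple way to verify the new pool is usable is to write and read back a test object with rados (the object and file names below are only illustrative):
ceph osd pool get testpool pg_num                  # confirm the pool has 64 PGs
echo hello > /tmp/test.txt
rados -p testpool put test-object /tmp/test.txt    # write an object into the pool
rados -p testpool ls                               # list objects, should show test-object
rados -p testpool get test-object /tmp/test.out    # read it back
rados -p testpool rm test-object                   # clean up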