Building a Production-Grade Ceph Cluster
By 阿新 · Published 2021-08-24
Environment Preparation
| Hostname | Public IP | Cluster IP | Disk layout (GB/count) | Role |
| --- | --- | --- | --- | --- |
| ceph-deploy | 172.16.143.110 | 172.16.138.110 | 200/1 | Administers the Ceph cluster |
| ceph-mon1 | 172.16.143.111 | 172.16.138.131 | 200/1 | Monitors cluster state, topology, etc. |
| ceph-mon2 | 172.16.143.112 | 172.16.138.132 | 200/1 | Monitors cluster state, topology, etc. |
| ceph-mon3 | 172.16.143.113 | 172.16.138.113 | 200/1 | Monitors cluster state, topology, etc. |
| ceph-mgr1 | 172.16.143.114 | 172.16.138.114 | 200/1 | Cluster management, unified interface |
| ceph-mgr2 | 172.16.143.115 | 172.16.138.115 | 200/1 | Cluster management, unified interface |
| ceph-node1 | 172.16.143.116 | 172.16.138.116 | 200/1 100/2 | Storage node |
| ceph-node2 | 172.16.143.117 | 172.16.138.117 | 200/1 100/2 | Storage node |
| ceph-node3 | 172.16.143.118 | 172.16.138.118 | 200/1 100/2 | Storage node |
| ceph-node4 | 172.16.143.119 | 172.16.138.119 | 200/1 100/2 | Storage node |
- Configure two NICs: one for public (client) traffic, one for cluster-internal traffic
- Disable the firewall
- Disable SELinux
- Configure time synchronization
- Configure /etc/hosts
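The checklist above can be scripted. A minimal sketch for Ubuntu (assumed here because the repository configured below targets bionic); note that Ubuntu ships AppArmor rather than SELinux, so the SELinux step usually does not apply on these hosts:

```shell
# Run on every host. Hostnames and IPs are taken from the table above.
sudo ufw disable                 # stop the host firewall
sudo timedatectl set-ntp true    # time sync via systemd-timesyncd
# Map every node in /etc/hosts:
cat <<'EOF' | sudo tee -a /etc/hosts
172.16.143.110 ceph-deploy
172.16.143.111 ceph-mon1
172.16.143.112 ceph-mon2
172.16.143.113 ceph-mon3
172.16.143.114 ceph-mgr1
172.16.143.115 ceph-mgr2
172.16.143.116 ceph-node1
172.16.143.117 ceph-node2
172.16.143.118 ceph-node3
172.16.143.119 ceph-node4
EOF
```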
Configure the Ceph repository
wget -q -O- 'https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc' | sudo apt-key add -
sudo apt-add-repository 'deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific/ bionic main'
sudo apt update
Configure the ceph user
groupadd -r -g 2022 ceph && useradd -r -m -s /bin/bash -u 2022 -g 2022 ceph && echo ceph:123456 | chpasswd
echo "ceph ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
Configure passwordless SSH login from the ceph-deploy node
ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub ceph@172.16.143.110
...
ssh-copy-id -i .ssh/id_rsa.pub ceph@172.16.143.119
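The elided ssh-copy-id calls follow an obvious pattern, so they can be generated in a loop; a sketch (the 110-119 host range comes from the table above):

```shell
# Emit one ssh-copy-id command per host; inspect the list,
# then pipe it to bash to actually push the key.
print_targets() {
  for last in $(seq 110 119); do
    echo "ssh-copy-id -i ~/.ssh/id_rsa.pub ceph@172.16.143.${last}"
  done
}
print_targets
# print_targets | bash   # run them for real
```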
Install the deployment tool ceph-deploy
sudo apt install ceph-deploy
Install Ceph
Initialize the mon
mkdir ceph-cluster
cd ceph-cluster
Generate the configuration files (on the ceph-deploy node)
$ ceph-deploy new --cluster-network 172.16.138.0/24 --public-network 172.16.143.0/24 ceph-mon1
$ ls
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
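`ceph-deploy new` writes the two networks and the initial monitor into the generated ceph.conf. A representative fragment (the fsid is generated per cluster, and the mon address follows the table above):

```ini
[global]
fsid = 6e278817-8019-4a06-82b3-b4d24d7dd743
public_network = 172.16.143.0/24
cluster_network = 172.16.138.0/24
mon_initial_members = ceph-mon1
mon_host = 172.16.143.111
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```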
Install ceph-mon (on the mon nodes)
$ sudo apt install ceph-mon -y
Press Enter at any interactive prompts.
Initialize the mon
ceph-deploy mon create-initial
Verify that the mon process has started
$ ps -ef | grep mon
message+   757     1  0 04:40 ?        00:00:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root       802     1  0 04:40 ?        00:00:00 /usr/lib/accountsservice/accounts-daemon
daemon     832     1  0 04:40 ?        00:00:00 /usr/sbin/atd -f
ceph      6739     1  0 05:14 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon1 --setuser ceph --setgroup ceph
root      7343  1566  0 05:16 pts/0    00:00:00 grep --color=auto mon
Install ceph-common and distribute the admin keyring
# run as root
$ apt install ceph-common -y
$ ceph-deploy admin ceph-deploy
$ sudo chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
$ ceph -s
cluster:
id: 6e278817-8019-4a06-82b3-b4d24d7dd743
health: HEALTH_WARN
mon is allowing insecure global_id reclaim
services:
mon: 1 daemons, quorum ceph-mon1 (age 60m)
mgr: no daemons active
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
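The HEALTH_WARN above stems from monitors still accepting the insecure global_id reclaim (the pre-CVE-2021-20288 behaviour). Once every client runs a patched release, the warning can be cleared from any node holding the admin keyring:

```shell
# Forbid the insecure reclaim; clears the HEALTH_WARN shown above.
ceph config set mon auth_allow_insecure_global_id_reclaim false
```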
Install ceph-mgr
Run on the mgr node
apt install ceph-mgr -y
Run on the deploy node
ceph-deploy mgr create ceph-mgr1
Install Ceph on the storage nodes
Run on the deploy node
# run once for each of nodes 1-4
ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node4
Zap the remote disks (run on every node, for both sdb and sdc)
ceph-deploy disk zap ceph-node1 /dev/sdb
Add OSDs to the hosts (run for every data disk on every node)
ceph-deploy osd create ceph-node2 --data /dev/sdb
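Zapping and creating an OSD for each of the eight data disks means sixteen ceph-deploy invocations; they can be generated in a loop. A sketch run from the deploy node (node and disk names per the table above):

```shell
# Print the zap/create command pairs for every data disk on every
# storage node; review the list, then pipe it to bash to execute.
osd_commands() {
  for node in ceph-node1 ceph-node2 ceph-node3 ceph-node4; do
    for disk in /dev/sdb /dev/sdc; do
      echo "ceph-deploy disk zap ${node} ${disk}"
      echo "ceph-deploy osd create ${node} --data ${disk}"
    done
  done
}
osd_commands
# osd_commands | bash   # run for real
```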
Check the cluster status
$ ceph -s
  cluster:
    id:     6e278817-8019-4a06-82b3-b4d24d7dd743
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum ceph-mon1 (age 3d)
    mgr: ceph-mgr1(active, since 3d)
    osd: 8 osds: 8 up (since 9s), 8 in (since 17s)
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   45 MiB used, 800 GiB / 800 GiB avail
    pgs:     1 active+clean
Scale out the mon service
Install ceph-mon on mon nodes 2 and 3
$ apt install ceph-mon
Add the mons
$ ceph-deploy mon add ceph-mon2
$ ceph-deploy mon add ceph-mon3
$ ceph -s
  cluster:
    id:     6e278817-8019-4a06-82b3-b4d24d7dd743
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 29s)
    mgr: ceph-mgr1(active, since 3d)
    osd: 8 osds: 8 up (since 2h), 8 in (since 2h)
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   45 MiB used, 800 GiB / 800 GiB avail
    pgs:     1 active+clean
Check the mon status
$ ceph quorum_status --format json-pretty
Scale out mgr
Install ceph-mgr on the mgr node
$ apt install ceph-mgr
Add the mgr
$ ceph-deploy mgr create ceph-mgr2
Check the status
$ ceph -s
  cluster:
    id:     6e278817-8019-4a06-82b3-b4d24d7dd743
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 5m)
    mgr: ceph-mgr1(active, since 3d), standbys: ceph-mgr2
    osd: 8 osds: 8 up (since 2h), 8 in (since 2h)
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   45 MiB used, 800 GiB / 800 GiB avail
    pgs:     1 active+clean