Adding a New Node to Ceph
阿新 • Published: 2018-08-14
Originally I planned to extend the earlier article "Deploying a Mimic-Release Ceph Distributed Storage System", but the layout there did not work well, so I wrote this follow-up instead. As before, all configuration and deployment below is performed on the ceph01 node; the new node being added is ceph04.
1. Configure Ansible
# cat /etc/ansible/hosts | grep -v ^# | grep -v ^$
[node]
192.168.100.117
192.168.100.118
[new-node]
192.168.100.119
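As a quick sanity check (my addition, not part of the original steps), Ansible can list the hosts it resolves for a group; the new-node group should contain only 192.168.100.119:

# ansible new-node --list-hosts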
2. Configure hosts
# cat /etc/hosts
192.168.100.116 ceph01
192.168.100.117 ceph02
192.168.100.118 ceph03
192.168.100.119 ceph04
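To verify the new entry took effect, name resolution can be checked through the hosts database (an assumed verification step, not in the original):

# getent hosts ceph04
192.168.100.119 ceph04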
3. Copy the SSH key
# ssh-copy-id -i .ssh/id_rsa.pub root@ceph04
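Before running the playbook, it is worth confirming that passwordless SSH actually works and that Ansible can reach the new node; a minimal check (my addition) using Ansible's built-in ping module:

# ssh root@ceph04 hostname
# ansible new-node -m ping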
4. Configure the node with Ansible
# vim ceph.yaml
- hosts: new-node
  remote_user: root
  tasks:
  - name: Disable SELinux
    lineinfile:
      path: /etc/selinux/config
      regexp: '^SELINUX='
      line: 'SELINUX=disabled'
  - name: Stop and disable firewalld
    service:
      name: firewalld
      state: stopped
      enabled: no

- hosts: new-node
  remote_user: root
  tasks:
  - name: Copy the hosts file
    copy: src=/etc/hosts dest=/etc/hosts
  - name: Copy the EPEL repo
    copy: src=/etc/yum.repos.d/epel.repo dest=/etc/yum.repos.d/epel.repo
  - rpm_key:
      state: present
      key: https://mirrors.aliyun.com/ceph/keys/release.asc
  - name: Copy the ceph repo
    copy: src=/etc/yum.repos.d/ceph.repo dest=/etc/yum.repos.d/ceph.repo
  - name: Clean cached yum data
    command: yum clean all
    args:
      warn: no
  - name: Build the yum metadata cache
    command: yum makecache
    args:
      warn: no

- hosts: new-node
  remote_user: root
  tasks:
  - name: Install packages
    yum:
      name: "{{ packages }}"
    vars:
      packages:
      - yum-plugin-priorities
      - snappy
      - leveldb
      - gdisk
      - python-argparse
      - gperftools-libs
      - ntp
      - ntpdate
      - ntp-doc
  - name: Start ntpdate
    service:
      name: ntpdate
      state: started
      enabled: yes
  - name: Start ntpd
    service:
      name: ntpd
      state: started
      enabled: yes

# ansible-playbook ceph.yaml
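If you want to validate the playbook before letting it loose on a real machine, ansible-playbook ships a syntax check and a dry-run mode; note that the command tasks (yum clean all, yum makecache) are simply skipped in check mode:

# ansible-playbook ceph.yaml --syntax-check
# ansible-playbook ceph.yaml --check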
5. Install Ceph on the node with ceph-deploy
# cd /etc/ceph/
# ceph-deploy install --release mimic ceph04
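To confirm the install succeeded (a verification step I have added), check the Ceph version on the new node; the exact build string varies, but it should report a 13.2.x mimic release:

# ssh ceph04 ceph --version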
6. Add a monitor to the existing cluster
# ceph-deploy mon add ceph04 --address 192.168.100.119
# cat ceph.conf
[global]
fsid = 09f5d004-5759-4b54-8c4b-e3ebb0b416b2
public_network = 192.168.100.0/24
cluster_network = 192.168.100.0/24
mon_initial_members = ceph01, ceph02, ceph03, ceph04
mon_host = 192.168.100.116,192.168.100.117,192.168.100.118,192.168.100.119
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# ceph mon stat
e2: 4 mons at {ceph01=192.168.100.116:6789/0,ceph02=192.168.100.117:6789/0,ceph03=192.168.100.118:6789/0,ceph04=192.168.100.119:6789/0}, election epoch 60, leader 0 ceph01, quorum 0,1,2,3 ceph01,ceph02,ceph03,ceph04
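Beyond ceph mon stat, quorum membership can be inspected in more detail with the standard quorum_status command (an extra check, not in the original); its quorum_names field should now list all four monitors:

# ceph quorum_status --format json-pretty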
7. Copy the admin keyring to the node
# ceph-deploy admin ceph04
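With the admin keyring in place, ceph04 can now run cluster commands itself; a quick check (my addition) is to run ceph -s from that node. If it fails with a keyring permission error, the ceph-deploy quick start recommends making the keyring readable:

# ssh ceph04 ceph -s
# ssh ceph04 chmod +r /etc/ceph/ceph.client.admin.keyring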
8. Create a Ceph manager daemon
# ceph-deploy mgr create ceph04
# ceph -s | grep mgr
    mgr: ceph01(active), standbys: ceph03, ceph02, ceph04
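For more detail than the one-line ceph -s summary, the full mgr map can be dumped as JSON; the grep filter below is only an illustrative way to pull out the active and standby daemon names:

# ceph mgr dump | grep -E '"active_name"|"standbys"'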
9. Create an OSD
# ceph-deploy osd create --data /dev/sdb ceph04
# ceph osd stat
4 osds: 4 up, 4 in; epoch: e53
# ceph -s
  cluster:
    id:     09f5d004-5759-4b54-8c4b-e3ebb0b416b2
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum ceph01,ceph02,ceph03,ceph04
    mgr: ceph01(active), standbys: ceph03, ceph02, ceph04
    osd: 4 osds: 4 up, 4 in

  data:
    pools:   1 pools, 128 pgs
    objects: 64 objects, 136 MiB
    usage:   4.3 GiB used, 76 GiB / 80 GiB avail
    pgs:     128 active+clean
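To see where the new OSD landed in the CRUSH hierarchy (an extra check, not in the original), ceph osd tree shows each host bucket with its OSDs; ceph04 should appear with the newly created OSD, most likely osd.3 since osd.0 through osd.2 already exist:

# ceph osd tree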
Note: if all you want is to add another disk (that is, another OSD) to an existing node, this step alone is sufficient; see the example below.
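For example, to add a second disk to the already-prepared ceph04, you could list its devices and then create the OSD directly; /dev/sdc is a hypothetical device name here:

# ceph-deploy disk list ceph04
# ceph-deploy osd create --data /dev/sdc ceph04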