
OpenStack: Gracefully Shutting Down a Physical Host


This is a record of gracefully shutting down a physical host today. The environment is a 3-node HA cluster of converged controller/storage/compute nodes, deployed with kolla and backed by Ceph storage; the node being shut down is controller03. The overall process: first live-migrate the VMs off the host, then set the Ceph cluster's osd noout flag so that OSD data is not rebalanced after the node goes down, avoiding heavy data churn; next disable the node in the web UI, and finally ssh in and power it off:

1. Live-migrate the VMs on this node. Log in to the web dashboard, go to "Admin" -> "Instances", select each VM on this node, choose "Live Migrate Instance", pick another node as the target, then wait for the migration to succeed and verify it (CLI equivalents of these steps are sketched after the list);

2. Set the osd noout flag. Log in to a Ceph monitor node and run: docker exec -it ceph_mon ceph osd set noout (noout is a cluster-wide osdmap flag, so setting it once from any mon node is enough);

3. Disable the node in the web UI. Log in to the web dashboard, go to "Admin" -> "Hypervisors" -> "Compute Host", select the host in question, and click "Disable Service";

4. SSH into the node and shut it down by running: shutdown -h now ;
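
For reference, the same procedure can be driven entirely from the command line instead of the dashboard. This is only a sketch: it assumes a CLI of the same era as this deployment (where openstack server migrate still takes --live with a target host; newer releases use --live-migration --host), and the instance name demo-vm and target host controller01 are placeholders:

# List the instances still running on the node about to be shut down
openstack server list --all-projects --host controller03

# Live-migrate each one to another node (demo-vm / controller01 are placeholders)
openstack server migrate --live controller01 demo-vm

# Set the cluster-wide noout flag so the node's OSDs are not rebalanced away
docker exec -it ceph_mon ceph osd set noout

# Disable nova-compute on the host; same effect as "Disable Service" in the dashboard
openstack compute service set --disable --disable-reason "planned shutdown" controller03 nova-compute

# Finally, on controller03 itself:
shutdown -h now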


While the node was shutting down, I ran ceph -w to watch in real time whether the cluster started rebalancing OSD data:

[root@control02 mariadb]# docker exec -it ceph_mon ceph -w
        cluster 33932e16-1909-4d68-b085-3c01d0432adc
         health HEALTH_WARN
                noout flag(s) set
         monmap e2: 3 mons at {192.168.1.130=192.168.1.130:6789/0,192.168.1.131=192.168.1.131:6789/0,192.168.1.132=192.168.1.132:6789/0}
                election epoch 72, quorum 0,1,2 192.168.1.130,192.168.1.131,192.168.1.132
         osdmap e466: 9 osds: 9 up, 9 in
                flags noout,sortbitwise,require_jewel_osds
          pgmap v712835: 640 pgs, 13 pools, 14902 MB data, 7300 objects
                30288 MB used, 824 GB / 854 GB avail
                     640 active+clean


Checking the status with ceph -s:

[root@control01 kolla]# docker exec -it ceph_mon ceph osd set noout
set noout
[root@control01 kolla]# docker exec -it ceph_mon ceph -s
        cluster 33932e16-1909-4d68-b085-3c01d0432adc
         health HEALTH_WARN
                412 pgs degraded
                404 pgs stuck unclean
                412 pgs undersized
                recovery 4759/14600 objects degraded (32.596%)
                3/9 in osds are down
                noout flag(s) set
                1 mons down, quorum 0,1 192.168.1.130,192.168.1.131
         monmap e2: 3 mons at {192.168.1.130=192.168.1.130:6789/0,192.168.1.131=192.168.1.131:6789/0,192.168.1.132=192.168.1.132:6789/0}
                election epoch 74, quorum 0,1 192.168.1.130,192.168.1.131
         osdmap e468: 9 osds: 6 up, 9 in; 412 remapped pgs
                flags noout,sortbitwise,require_jewel_osds
          pgmap v712931: 640 pgs, 13 pools, 14902 MB data, 7300 objects
                30285 MB used, 824 GB / 854 GB avail
                4759/14600 objects degraded (32.596%)
                     412 active+undersized+degraded
                     228 active+clean
[root@control01 kolla]#
[root@control01 kolla]#
[root@control01 kolla]# docker exec -it ceph_mon ceph -s
        cluster 33932e16-1909-4d68-b085-3c01d0432adc
         health HEALTH_WARN
                412 pgs degraded
                405 pgs stuck unclean
                412 pgs undersized
                recovery 4759/14600 objects degraded (32.596%)
                3/9 in osds are down
                noout flag(s) set
                1 mons down, quorum 0,1 192.168.1.130,192.168.1.131
         monmap e2: 3 mons at {192.168.1.130=192.168.1.130:6789/0,192.168.1.131=192.168.1.131:6789/0,192.168.1.132=192.168.1.132:6789/0}
                election epoch 74, quorum 0,1 192.168.1.130,192.168.1.131
         osdmap e468: 9 osds: 6 up, 9 in; 412 remapped pgs
                flags noout,sortbitwise,require_jewel_osds
          pgmap v712981: 640 pgs, 13 pools, 14902 MB data, 7300 objects
                30285 MB used, 824 GB / 854 GB avail
                4759/14600 objects degraded (32.596%)
                     412 active+undersized+degraded
                     228 active+clean
      client io 7559 B/s rd, 20662 B/s wr, 11 op/s rd, 1 op/s wr


Three OSDs are down but still in, and the pgmap stays at 412 active+undersized+degraded and 228 active+clean the whole time, which shows that no data rebalancing took place.

In addition, I checked all the VMs and they were running normally.
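
When controller03 is later powered back on and its mon and OSDs rejoin the cluster, remember to re-enable the compute service and clear the noout flag, otherwise the cluster will stay in HEALTH_WARN. A minimal sketch, assuming the same container layout as above:

# Re-enable the compute service that was disabled before the shutdown
openstack compute service set --enable controller03 nova-compute

# Clear the cluster-wide noout flag so normal recovery resumes
docker exec -it ceph_mon ceph osd unset noout

# Confirm the cluster goes back to HEALTH_OK with all pgs active+clean
docker exec -it ceph_mon ceph -s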

