ProxmoxVE: Cleaning Up Leftover Disks After a System Reinstall
阿新 • Published: 2018-11-09
A while back I installed PVE on a physical machine. Because the initial planning was poor, problems came up when configuring storage, so I ended up simply reinstalling. After the reinstall I found that, apart from the system disk being formatted, the partition information on all the other disks was still there, in particular a large amount of leftover LVM metadata, so manual intervention was needed. This post is a record of that cleanup.
The usual LVM teardown order is LV, then VG, then PV: first remove the LVs inside a VG, then the VG itself, and finally the PV. In practice, though, PVE creates a large number of LVs during normal use, so when cleaning up you can run vgremove on the VG directly, which removes all of the VG's LVs at the same time.
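The two removal orders can be sketched as follows. This is a dry run: the run() wrapper only prints each command, and the names (vg-sdb, lvm-sdb, /dev/sdb1) are taken from this article's own listings. Replace the echo with real execution, as root, on a live node.

```shell
#!/bin/sh
# Dry-run wrapper: prints the command instead of executing it.
# To really run the commands as root, change this to: run() { "$@"; }
run() { echo "$@"; }

# Long way: each LV first, then the VG, then the PV label.
run lvremove -y /dev/vg-sdb/lvm-sdb
run vgremove vg-sdb
run pvremove /dev/sdb1

# Short way: one forced vgremove deletes the VG and every LV it contains.
run vgremove -f vg-sdb
run pvremove /dev/sdb1
```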
After the reinstall, the PVE node creates a VG named pve for itself; every other VG is left over from before, as shown below:
# vgdisplay | more
  --- Volume group ---
  VG Name               ceph-d910d1d3-3595-4c5a-93ed-579e4a0968b4
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  17
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               30.00 GiB
  PE Size               4.00 MiB
  Total PE              7679
  Alloc PE / Size       7679 / 30.00 GiB
  Free PE / Size        0 / 0
  VG UUID               Ngu9qK-BILc-sY4K-BFLG-0ICd-k4Fp-5mVXpg

  --- Volume group ---
  VG Name               ceph-3ab6b8cb-a06c-458f-9947-eaaf40fc0525
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  17
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               30.00 GiB
  PE Size               4.00 MiB
  Total PE              7679
  Alloc PE / Size       7679 / 30.00 GiB
  Free PE / Size        0 / 0
  VG UUID               T8ReNU-YOdO-Bver-KdYe-NG1p-qoiv-T6tUO1

  --- Volume group ---
  VG Name               ceph-aa50bad9-b042-40bd-b060-438cddaf8ff2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  17
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               30.00 GiB
  PE Size               4.00 MiB
  Total PE              7679
  Alloc PE / Size       7679 / 30.00 GiB
  Free PE / Size        0 / 0
  VG UUID               Wnn3OH-X11z-TovS-VFxn-sbz7-btsH-jPbUUa

  --- Volume group ---
  VG Name               ceph-38029056-30a1-4bb8-b94f-da32dde62217
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  17
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               30.00 GiB
  PE Size               4.00 MiB
  Total PE              7679
  Alloc PE / Size       7679 / 30.00 GiB
  Free PE / Size        0 / 0
  VG UUID               IT06de-ABnj-yE07-4VFS-WqES-0cep-7j3O4I

  --- Volume group ---
  VG Name               vg-sdb
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               32.00 GiB
  PE Size               4.00 MiB
  Total PE              8191
  Alloc PE / Size       7696 / 30.06 GiB
  Free PE / Size        495 / 1.93 GiB
  VG UUID               QDHI0E-K5Vd-DZ77-o6N9-JRGC-AezH-eIai7Q

  --- Volume group ---
  VG Name               pvevg2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  493
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                23
  Open LV               7
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               556.93 GiB
  PE Size               4.00 MiB
  Total PE              142573
  Alloc PE / Size       140836 / 550.14 GiB
  Free PE / Size        1737 / 6.79 GiB
  VG UUID               0NWJ32-iGhh-jPU8-BwxZ-46Ie-5DDB-md0XFB

  --- Volume group ---
  VG Name               ceph-18ccfe46-ab1a-44e4-9b5e-910d55679b2d
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  17
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               30.00 GiB
  PE Size               4.00 MiB
  Total PE              7679
  Alloc PE / Size       7679 / 30.00 GiB
  Free PE / Size        0 / 0
  VG UUID               6A7ViB-O1UM-sbBw-iCSn-RR3D-IA2o-Uf5nMW

  --- Volume group ---
  VG Name               ceph-ca21f994-6ea2-4d97-9377-b71e1713089d
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  17
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               30.00 GiB
  PE Size               4.00 MiB
  Total PE              7679
  Alloc PE / Size       7679 / 30.00 GiB
  Free PE / Size        0 / 0
  VG UUID               bBvYWE-skn5-lD6G-A0aa-HvC1-cufv-HPh59e

  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               556.68 GiB
  PE Size               4.00 MiB
  Total PE              142509
  Alloc PE / Size       138413 / 540.68 GiB
  Free PE / Size        4096 / 16.00 GiB
  VG UUID               3rZ2nK-8oci-O0gh-c6ac-1XXX-slCA-5kh3jZ
Next, remove every VG except pve:
# vgremove ceph-d910d1d3-3595-4c5a-93ed-579e4a0968b4
Do you really want to remove volume group "ceph-d910d1d3-3595-4c5a-93ed-579e4a0968b4" containing 1 logical volumes? [y/n]: y
Do you really want to remove and DISCARD active logical volume ceph-d910d1d3-3595-4c5a-93ed-579e4a0968b4/osd-block-8b281dbd-5dac-40c7-86a9-2eadcd9d876b? [y/n]: y
  Logical volume "osd-block-8b281dbd-5dac-40c7-86a9-2eadcd9d876b" successfully removed
  Volume group "ceph-d910d1d3-3595-4c5a-93ed-579e4a0968b4" successfully removed
# vgremove ceph-3ab6b8cb-a06c-458f-9947-eaaf40fc0525
Do you really want to remove volume group "ceph-3ab6b8cb-a06c-458f-9947-eaaf40fc0525" containing 1 logical volumes? [y/n]: y
Do you really want to remove and DISCARD active logical volume ceph-3ab6b8cb-a06c-458f-9947-eaaf40fc0525/osd-block-7b89b395-2506-4997-a545-14eb1a5a029f? [y/n]: y
  Logical volume "osd-block-7b89b395-2506-4997-a545-14eb1a5a029f" successfully removed
  Volume group "ceph-3ab6b8cb-a06c-458f-9947-eaaf40fc0525" successfully removed
# vgremove -f ceph-aa50bad9-b042-40bd-b060-438cddaf8ff2
  Logical volume "osd-block-c1ccede6-d03b-409b-952d-57fd702b08fd" successfully removed
  Volume group "ceph-aa50bad9-b042-40bd-b060-438cddaf8ff2" successfully removed
# vgremove -f ceph-38029056-30a1-4bb8-b94f-da32dde62217
  Logical volume "osd-block-a2331e8c-907b-48dd-a053-071d0e1d88ca" successfully removed
  Volume group "ceph-38029056-30a1-4bb8-b94f-da32dde62217" successfully removed
# vgremove -f vg-sdb
  Logical volume "lvm-sdb" successfully removed
  Volume group "vg-sdb" successfully removed
# vgremove -f pvevg2
  Logical volume "vm-112-disk-3" successfully removed
  Logical volume "vm-112-disk-4" successfully removed
  Logical volume "vm-111-disk-5" successfully removed
  Logical volume "vm-112-disk-5" successfully removed
  Logical volume "vm-113-disk-5" successfully removed
  Logical volume "vm-112-disk-7" successfully removed
  Logical volume "snap_vm-112-disk-2_pve52init1" successfully removed
  Logical volume "snap_vm-112-disk-6_pve52init1" successfully removed
  Logical volume "vm-111-state-pve52ceph1" successfully removed
  Logical volume "vm-113-state-pve52ceph1" successfully removed
  Logical volume "vm-111-state-pve52cephok" successfully removed
  Logical volume "vm-113-state-pve52cephok" successfully removed
  Logical volume pvevg2/vm-111-disk-2 is used by another device.
# vgremove ceph-18ccfe46-ab1a-44e4-9b5e-910d55679b2d -f
  Logical volume "osd-block-7e68233c-97b2-4bd7-8d24-a48212e02943" successfully removed
  Volume group "ceph-18ccfe46-ab1a-44e4-9b5e-910d55679b2d" successfully removed
# vgremove ceph-ca21f994-6ea2-4d97-9377-b71e1713089d -f
  Logical volume "osd-block-a489f6f9-e24d-408c-ada2-850dbe876c23" successfully removed
  Volume group "ceph-ca21f994-6ea2-4d97-9377-b71e1713089d" successfully removed
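With many leftover VGs, running vgremove by hand for each one gets tedious. A minimal sketch of batching it, keeping only the installer's pve VG: the names below are a few of the VGs from the listing above, and in this dry-run form the commands are only printed. On a live node you would generate the list with `vgs --noheadings -o vg_name` and, as root, drop the echo.

```shell
#!/bin/sh
# Remove every VG except the one the PVE installer recreates ("pve").
# Dry run: commands are echoed, not executed.
KEEP="pve"
# Sample VG names copied from the article's vgdisplay output; on a real
# node use instead:  ALL_VGS=$(vgs --noheadings -o vg_name)
ALL_VGS="ceph-d910d1d3-3595-4c5a-93ed-579e4a0968b4
ceph-3ab6b8cb-a06c-458f-9947-eaaf40fc0525
vg-sdb
pvevg2
pve"

for vg in $ALL_VGS; do
  if [ "$vg" != "$KEEP" ]; then
    # vgremove -f drops the VG and all of its LVs without prompting
    echo vgremove -f "$vg"
  fi
done
```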
Check the VG and PV situation again:
# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               556.68 GiB
  PE Size               4.00 MiB
  Total PE              142509
  Alloc PE / Size       138413 / 540.68 GiB
  Free PE / Size        4096 / 16.00 GiB
  VG UUID               3rZ2nK-8oci-O0gh-c6ac-1XXX-slCA-5kh3jZ

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               556.68 GiB / not usable 1.98 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              142509
  Free PE               4096
  Allocated PE          138413
  PV UUID               6LqVrb-YLWy-doYc-RMVR-XA58-6XuQ-sSVpwD

  "/dev/sdb1" is a new physical volume of "556.93 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               556.93 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               45XFJa-Uixa-4zi7-2Bbe-LnEs-vRea-XAKjiB
Cleanup complete!
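One optional follow-up the article does not perform: the pvdisplay output above shows that /dev/sdb1 still carries an orphan LVM PV label even after its VGs are gone. If the disk should be completely blank, pvremove wipes that label, and wipefs -a clears any remaining on-disk signatures. Sketched here as a dry run that only prints the commands; both are destructive when actually executed as root.

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "$@"; }

run pvremove /dev/sdb1   # remove the leftover LVM PV label
run wipefs -a /dev/sdb1  # clear remaining filesystem/RAID signatures (destructive)
```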
One more thing worth recording: how to delete an LV that refuses to go away, where every removal attempt reports that the device is busy. I followed the steps from this page:
http://blog.roberthallam.org/2017/12/solved-logical-volume-is-used-by-another-device/comment-page-1/
The key tool is dmsetup, a lower-level device-mapper command. The main steps are:

# lvremove -v /dev/vg/lv-old
    DEGRADED MODE. Incomplete RAID LVs will be processed.
    Using logical volume(s) on command line
  Logical volume vg/lv-old is used by another device.
# dmsetup info -c | grep old
vg-lv--old  253   9 L--w  1  2  1 LVM-6O3jLvI6ZR3fg6ZpMgTlkqAudvgkfphCyPcP8AwpU2H57VjVBNmFBpLTis8ia0NE
$ ls -la /sys/dev/block/253\:9/holders
drwxr-xr-x 2 root root 0 Dec 12 01:07 .
drwxr-xr-x 8 root root 0 Dec 12 01:07 ..
lrwxrwxrwx 1 root root 0 Dec 12 01:07 dm-18 -> ../../dm-18
# dmsetup remove /dev/dm-18
# lvremove -v /dev/vgraid6/lv-old
# lvremove -v /dev/vg/lv-old DEGRADED MODE. Incomplete RAID LVs will be processed. Using logical volume(s) on command line Logical volume vg/lv-old is used by another device. # dmsetup info -c | grep old vg-lv--old 253 9 L--w 1 2 1 LVM-6O3jLvI6ZR3fg6ZpMgTlkqAudvgkfphCyPcP8AwpU2H57VjVBNmFBpL Tis8ia0NE $ ls -la /sys/dev/block/253\:9/holders drwxr-xr-x 2 root root 0 Dec 12 01:07 . drwxr-xr-x 8 root root 0 Dec 12 01:07 .. lrwxrwxrwx 1 root root 0 Dec 12 01:07 dm-18 -> ../../dm-18 # dmsetup remove /dev/dm-18 # lvremove -v /dev/vgraid6/lv-old