Ceph (Luminous) BlueStore: replacing the SSD holding the db and WAL (without changing their size)
Published: 2018-12-15
Introduction
As the workload grows, each OSD ends up holding a great deal of data, so if a db or WAL device has to be replaced, deleting the OSD and recreating it would trigger a large amount of data migration. This post describes how to replace only the db or WAL device when that becomes necessary (perhaps to move to a faster SSD, or because other partitions on the SSD have been damaged while the db and WAL partitions themselves are still intact), keeping data migration to a minimum. The db or WAL device is not allowed to grow or shrink: the replacement must be exactly the same size as the original.
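Because the whole procedure relies on the new partitions being exactly the same size as the old ones, it is worth checking that before anything else. A minimal sketch, using the device names that appear later in this post (/dev/vdf4 and /dev/vdf3 are the old db and WAL partitions, /dev/vdh4 and /dev/vdh3 the new ones):
## the byte counts printed for each old/new pair must match exactly
# blockdev --getsize64 /dev/vdf4 /dev/vdh4
# blockdev --getsize64 /dev/vdf3 /dev/vdh3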
The detailed steps are as follows:
- Set the osd noout flag and stop the corresponding OSD.
# ceph osd set noout
noout is set
# systemctl stop ceph-osd@1
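Before touching any LVM tags it is worth confirming that the flag is really set and that osd.1 (the OSD used throughout this example) has actually stopped; one quick way to check:
## the flags line of the osd map should now contain noout
# ceph osd dump | grep flags
## the ceph-osd@1 unit should report "inactive"
# systemctl is-active ceph-osd@1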
- Find the LV device that backs the OSD and modify the LV tags on the data device.
# ll /var/lib/ceph/osd/ceph-1/
total 48
-rw-r--r-- 1 ceph ceph 402 Oct 15 14:05 activate.monmap
lrwxrwxrwx 1 ceph ceph 93 Oct 15 14:05 block -> /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
lrwxrwxrwx 1 ceph ceph 9 Oct 15 14:05 block.db -> /dev/vdf4
lrwxrwxrwx 1 ceph ceph 9 Oct 15 14:05 block.wal -> /dev/vdf3
-rw-r--r-- 1 ceph ceph 2 Oct 15 14:05 bluefs
-rw-r--r-- 1 ceph ceph 37 Oct 15 14:05 ceph_fsid
-rw-r--r-- 1 ceph ceph 37 Oct 15 14:05 fsid
-rw------- 1 ceph ceph 55 Oct 15 14:05 keyring
-rw-r--r-- 1 ceph ceph 8 Oct 15 14:05 kv_backend
-rw-r--r-- 1 ceph ceph 21 Oct 15 14:05 magic
-rw-r--r-- 1 ceph ceph 4 Oct 15 14:05 mkfs_done
-rw-r--r-- 1 ceph ceph 41 Oct 15 14:05 osd_key
-rw-r--r-- 1 ceph ceph 6 Oct 15 14:05 ready
-rw-r--r-- 1 ceph ceph 10 Oct 15 14:05 type
-rw-r--r-- 1 ceph ceph 2 Oct 15 14:05 whoami
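Instead of following the symlinks under /var/lib/ceph/osd/ceph-1/ by hand, ceph-volume can print the same block/db/wal mapping for every LVM-based OSD on this host; a quick cross-check:
## lists each OSD's data LV together with its current db and wal devices
# ceph-volume lvm list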
## Check the LV tags on the device
# lvs --separator=';' -o lv_tags /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
LV Tags
ceph.block_device=/dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5,ceph.block_uuid=fvIZR9-G6Pd-o3BR-Vir2-imEH-e952-sIED0E,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=acc6dc6a-79cd-45dc-bf1f-83a576eb8039,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.db_device=/dev/vdf4,ceph.db_uuid=5fdf11bf-7a3d-4e05-bf68-a03e8360c2b8,ceph.encrypted=0,ceph.osd_fsid=a4b0d600-eed7-4dc6-b20e-6f5dab561be5,ceph.osd_id=1,ceph.type=block,ceph.vdo=0,ceph.wal_device=/dev/vdf3,ceph.wal_uuid=d82d9bb0-ffda-451b-95e1-a16b4baec697
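Only four of these tags describe the db and WAL devices (ceph.db_device, ceph.db_uuid, ceph.wal_device, ceph.wal_uuid), and they are the ones rewritten below. If the single-line output is hard to read, a small sketch for isolating them:
## print one tag per line and keep only the db/wal entries
# lvs --noheadings -o lv_tags /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 | tr ',' '\n' | grep -E 'db_|wal_'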
## Remove the ceph.db_device tag
# lvchange --deltag ceph.db_device=/dev/vdf4 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
## Remove the ceph.db_uuid tag
# lvchange --deltag ceph.db_uuid=5fdf11bf-7a3d-4e05-bf68-a03e8360c2b8 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
## Remove the ceph.wal_device tag
# lvchange --deltag ceph.wal_device=/dev/vdf3 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
## Remove the ceph.wal_uuid tag
# lvchange --deltag ceph.wal_uuid=d82d9bb0-ffda-451b-95e1-a16b4baec697 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
## Add the new db and WAL devices and their UUIDs; the UUIDs can be found under /dev/disk/by-partuuid/ (a lookup sketch follows this block)
# lvchange --addtag ceph.db_device=/dev/vdh4 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
# lvchange --addtag ceph.wal_device=/dev/vdh3 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
# lvchange --addtag ceph.wal_uuid=74b93324-49fb-426e-9fc0-9fc4d5db9286 /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
# lvchange --addtag ceph.db_uuid=d6de0e5b-f935-46d2-94b0-762b196028de /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Logical volume ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 changed.
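The UUIDs added above are the GPT partition UUIDs of the new partitions. A minimal sketch for looking them up (assuming the new partitions are /dev/vdh4 and /dev/vdh3, as in this example) and for confirming that the tags on the data LV now look right:
## each symlink under by-partuuid points back at its partition
# ls -l /dev/disk/by-partuuid/ | grep -E 'vdh4|vdh3'
## blkid prints the same value directly
# blkid -s PARTUUID -o value /dev/vdh4
# blkid -s PARTUUID -o value /dev/vdh3
## re-check the tags on the data LV
# lvs --noheadings -o lv_tags /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 | tr ',' '\n' | grep -E 'db_|wal_'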
- Copy the data from the original db and WAL devices onto the new devices.
# dd if=/dev/vdf4 of=/dev/vdh4 bs=4M
7680+0 records in
7680+0 records out
32212254720 bytes (32 GB) copied, 219.139 s, 147 MB/s
# dd if=/dev/vdf3 of=/dev/vdh3 bs=4M
7680+0 records in
7680+0 records out
32212254720 bytes (32 GB) copied, 431.513 s, 74.6 MB/s
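The copy has to be made while the OSD is stopped, otherwise the db and WAL contents would change underneath dd. As an optional sanity check (an addition of mine, not part of the original procedure), cmp can compare the old and new partitions byte for byte; it prints nothing and exits 0 when they are identical:
## silent output means the copies are byte-identical
# cmp /dev/vdf4 /dev/vdh4
# cmp /dev/vdf3 /dev/vdh3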
- Unmount the original OSD directory and re-activate the OSD.
# umount /var/lib/ceph/osd/ceph-1/
# ceph-volume lvm activate 1 a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 --path /var/lib/ceph/osd/ceph-1
Running command: ln -snf /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5 /var/lib/ceph/osd/ceph-1/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Running command: chown -R ceph:ceph /dev/dm-1
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Running command: ln -snf /dev/vdh4 /var/lib/ceph/osd/ceph-1/block.db
Running command: chown -R ceph:ceph /dev/vdh4
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block.db
Running command: chown -R ceph:ceph /dev/vdh4
Running command: ln -snf /dev/vdh3 /var/lib/ceph/osd/ceph-1/block.wal
Running command: chown -R ceph:ceph /dev/vdh3
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block.wal
Running command: chown -R ceph:ceph /dev/vdh3
Running command: systemctl enable ceph-volume@lvm-1-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
Running command: systemctl start ceph-osd@1
--> ceph-volume lvm activate successful for osd ID: 1
# ll /var/lib/ceph/osd/ceph-1/
total 24
lrwxrwxrwx 1 ceph ceph 93 Oct 15 15:59 block -> /dev/ceph-cd2b78f1-957b-4de2-8b68-f41d3b5a42fb/osd-block-a4b0d600-eed7-4dc6-b20e-6f5dab561be5
lrwxrwxrwx 1 ceph ceph 9 Oct 15 15:59 block.db -> /dev/vdh4
lrwxrwxrwx 1 ceph ceph 9 Oct 15 15:59 block.wal -> /dev/vdh3
-rw------- 1 ceph ceph 37 Oct 15 15:59 ceph_fsid
-rw------- 1 ceph ceph 37 Oct 15 15:59 fsid
-rw------- 1 ceph ceph 55 Oct 15 15:59 keyring
-rw------- 1 ceph ceph 6 Oct 15 15:59 ready
-rw------- 1 ceph ceph 10 Oct 15 15:59 type
-rw------- 1 ceph ceph 2 Oct 15 15:59 whoami
At this point the db and WAL have been replaced. To stress it once more: the replacement db and WAL devices must be exactly the same size as the original devices.
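Once the OSD is back up, it is worth confirming that BlueStore really is using the new partitions and that the cluster is healthy again, and then clearing the noout flag that was set at the start. A minimal sketch:
## inspect the BlueStore/BlueFS labels written on the new partitions
# ceph-bluestore-tool show-label --dev /dev/vdh4
# ceph-bluestore-tool show-label --dev /dev/vdh3
## osd.1 should be up and in again
# ceph osd tree
## clear the flag set at the beginning
# ceph osd unset noout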