ceph (Luminous) BlueStore: the ceph-bluestore-tool utility
阿新 • Posted: 2018-12-14
Introduction to ceph-bluestore-tool
# ceph-bluestore-tool --help
All options:
Options:
  -h [ --help ]          produce help message
  --path arg             bluestore path             // path to the OSD directory
  --out-dir arg          output directory           // target directory for exports, e.g. bluefs-export
  -l [ --log-file ] arg  log file                   // log location; many commands just call functions in BlueStore.cc, which log heavily
  --log-level arg        log level (30=most, 20=lots, 10=some, 1=little)   // log verbosity
  --dev arg              device(s)                  // may be the block, db, or wal device
  --deep arg             deep fsck (read all data)
  -k [ --key ] arg       label metadata key name
  -v [ --value ] arg     label metadata value

Positional options:
  --command arg          fsck, repair, bluefs-export, bluefs-bdev-sizes,
                         bluefs-bdev-expand, show-label, set-label-key,
                         rm-label-key, prime-osd-dir
1. fsck (the OSD must be stopped before running it)
# ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 --deep 1
fsck success
Performs a consistency check of the OSD metadata; when deep is 1, object data is checked as well. Internally this calls BlueStore::_fsck(bool deep, bool repair).
2. repair
# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0 --deep 1
repair success
Runs the consistency check and repairs the OSD where possible.

3. bluefs-export
# ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-0 --out-dir /home/osd-0/
infering bluefs devices from bluestore path
slot 0 /var/lib/ceph/osd/ceph-0/block.wal
slot 1 /var/lib/ceph/osd/ceph-0/block.db
slot 2 /var/lib/ceph/osd/ceph-0/block
db/
db/000139.sst
db/CURRENT
db/IDENTITY
db/LOCK
db/MANIFEST-000147
db/OPTIONS-000147
db/OPTIONS-000150
db.slow/
db.wal/
db.wal/000148.log
# cd /home/osd-0 && tree
.
├── db
│ ├── 000139.sst
│ ├── CURRENT
│ ├── IDENTITY
│ ├── LOCK
│ ├── MANIFEST-000147
│ ├── OPTIONS-000147
│ └── OPTIONS-000150
├── db.slow
└── db.wal
└── 000148.log
3 directories, 8 files
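The exported tree is an ordinary RocksDB directory, so standard RocksDB conventions apply: the CURRENT file names the active MANIFEST. A quick sanity-check sketch (it uses a mock directory under /tmp for illustration; on a real export, point it at <out-dir>/db such as /home/osd-0/db):

```shell
# Mock of an exported db/ directory; substitute the real <out-dir>/db.
export_db=/tmp/osd-export/db
mkdir -p "$export_db"
printf 'MANIFEST-000147\n' > "$export_db/CURRENT"
touch "$export_db/MANIFEST-000147"

# RocksDB's CURRENT file contains the name of the active MANIFEST;
# if that file is missing, the export (or the source db) is suspect.
manifest=$(cat "$export_db/CURRENT")
if [ -f "$export_db/$manifest" ]; then
    echo "consistent: $manifest"    # prints: consistent: MANIFEST-000147
else
    echo "missing: $manifest"
fi
```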
Exports the RocksDB contents in filesystem form. BlueStore itself does its low-level I/O through BlueFS acting as the RocksEnv, so the directory-structured RocksDB contents are normally not visible from outside; this tool provides a way to dump them out as a directory tree.

4. bluefs-bdev-sizes
# ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-0/
infering bluefs devices from bluestore path
slot 0 /var/lib/ceph/osd/ceph-0//block.wal
slot 1 /var/lib/ceph/osd/ceph-0//block.db
slot 2 /var/lib/ceph/osd/ceph-0//block
0 : size 0x780000000 : own 0x[1000~77ffff000]
1 : size 0x780000000 : own 0x[2000~77fffe000]
2 : size 0x4affc00000 : own 0x[23ffe00000~300000000]
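Each output line has the form `<slot> : size <hex bytes> : own 0x[<offset>~<length>]`: the device size and the extents BlueFS owns on it. As a sketch, the hex values from the slot-0 (block.wal) line above can be decoded with plain shell arithmetic; the owned extent starts at offset 0x1000, i.e. 4 KiB in, which is where the bluestore device label sits:

```shell
# Decode the slot 0 line: 0 : size 0x780000000 : own 0x[1000~77ffff000]
size_wal=$(( 0x780000000 ))   # device size in bytes
own_off=$(( 0x1000 ))         # owned extent offset (4 KiB, past the label)
own_len=$(( 0x77ffff000 ))    # owned extent length

echo "wal size: $(( size_wal >> 30 )) GiB"             # prints: wal size: 30 GiB
echo "covers rest: $(( own_off + own_len == size_wal ))"  # prints: covers rest: 1
```

So BlueFS owns the whole 30 GiB WAL device except the first 4 KiB, matching the 32212254720-byte size shown by show-label below.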
5. bluefs-bdev-expand
# ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0/
infering bluefs devices from bluestore path
slot 0 /var/lib/ceph/osd/ceph-0//block.wal
slot 1 /var/lib/ceph/osd/ceph-0//block.db
slot 2 /var/lib/ceph/osd/ceph-0//block
start:
0 : size 0x780000000 : own 0x[1000~77ffff000]
1 : size 0x780000000 : own 0x[2000~77fffe000]
2 : size 0x4affc00000 : own 0x[23ffe00000~300000000]
Lets BlueFS take ownership of newly available space after the underlying device has been enlarged; the sizes printed under "start:" are the state before any expansion.

6. show-label
# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0/
infering bluefs devices from bluestore path
{
"/var/lib/ceph/osd/ceph-0//block": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 322118352896,
"btime": "2018-10-08 10:26:39.252910",
"description": "main",
"bluefs": "1",
"ceph_fsid": "acc6dc6a-79cd-45dc-bf1f-83a576eb8039",
"kv_backend": "rocksdb",
"magic": "ceph osd volume v026",
"mkfs_done": "yes",
"osd_key": "AQBcwLpbGh89JRAAoEbi/OgMvKABkZmI9r/B8g==",
"ready": "ready",
"whoami": "0"
},
"/var/lib/ceph/osd/ceph-0//block.wal": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.285854",
"description": "bluefs wal"
},
"/var/lib/ceph/osd/ceph-0//block.db": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.255250",
"description": "bluefs db"
}
}
# ceph-bluestore-tool show-label --dev /dev/ceph-e7878472-0d23-42a4-a9be-d69edc9ed4b0/osd-block-8b0394e4-1dcc-44c1-82b7-864b2162de38
{
"/dev/ceph-e7878472-0d23-42a4-a9be-d69edc9ed4b0/osd-block-8b0394e4-1dcc-44c1-82b7-864b2162de38": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 322118352896,
"btime": "2018-10-08 10:26:39.252910",
"description": "main",
"bluefs": "1",
"ceph_fsid": "acc6dc6a-79cd-45dc-bf1f-83a576eb8039",
"kv_backend": "rocksdb",
"magic": "ceph osd volume v026",
"mkfs_done": "yes",
"osd_key": "AQBcwLpbGh89JRAAoEbi/OgMvKABkZmI9r/B8g==",
"ready": "ready",
"whoami": "0"
}
}
# ceph-bluestore-tool show-label --dev /dev/vde
vde vde1 vde2 vde3 vde4 vde5 vde6
# ceph-bluestore-tool show-label --dev /dev/vde2
{
"/dev/vde2": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.255250",
"description": "bluefs db"
}
}
# ceph-bluestore-tool show-label --dev /dev/vde1
{
"/dev/vde1": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.285854",
"description": "bluefs wal"
}
}
Displays the labels recorded on a device or an OSD path.
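When scripting against show-label, the JSON output can be parsed without extra tools. A minimal sketch using sed on the sample output above (the file /tmp/label.json and the heredoc are just for illustration; normally you would pipe the tool's output, and jq would be the nicer choice if available):

```shell
# Save a copy of the label JSON shown above for /dev/vde1.
cat > /tmp/label.json <<'EOF'
{
    "/dev/vde1": {
        "osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
        "size": 32212254720,
        "btime": "2018-10-08 10:26:39.285854",
        "description": "bluefs wal"
    }
}
EOF

# Extract single fields with sed: match the key, capture the quoted value.
sed -n 's/.*"osd_uuid": "\([^"]*\)".*/\1/p' /tmp/label.json
sed -n 's/.*"description": "\([^"]*\)".*/\1/p' /tmp/label.json
```

This prints the osd_uuid and description lines, which is usually enough to map a bare partition back to its OSD.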
7. set-label-key / rm-label-key
# ceph-bluestore-tool set-label-key -k aaa -v bbb --dev /dev/vde1
# ceph-bluestore-tool show-label --dev /dev/vde1
{
"/dev/vde1": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.285854",
"description": "bluefs wal",
"aaa": "bbb"
}
}
# ceph-bluestore-tool rm-label-key -k aaa --dev /dev/vde1
# ceph-bluestore-tool show-label --dev /dev/vde1
{
"/dev/vde1": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.285854",
"description": "bluefs wal"
}
}
Inserts or removes label key/value pairs.
So far I have not run into real-world scenarios that need fsck or repair; I will update this post when I do. show-label is handy for inspecting an OSD's basic information, especially once the OSD has already been unmounted.