Configuring and using LVM and RAID
阿新 • Published: 2019-01-12
- lvs
- raid
LVM (Logical Volume Manager) creates virtual block devices on top of physical ones: it organizes one or more underlying block devices into a single logical device. By abstracting away the underlying disks, it makes storage easier to manage and allows it to be resized dynamically.
Basic concepts:
- PV (physical volume): the bottom layer, created directly on disks or partitions
- VG (volume group): built on top of PVs; its smallest allocation unit is the PE (physical extent)
- LV (logical volume): built on top of a VG; unallocated space in the VG can be used to create new logical volumes, and an LV can be grown or shrunk dynamically after creation
Management commands:

PV tools | VG tools | LV tools |
---|---|---|
pvs: brief PV info | vgs: brief VG info | lvs: brief LV info |
pvdisplay: detailed PV info | vgdisplay: detailed VG info | lvdisplay: detailed LV info |
pvcreate /dev/device... : create a PV | vgcreate [-s #[kKmMgGtTpPeE]] VG_Name /dev/device... : create a VG; -s sets the PE size (default 4 MiB) | lvcreate -L #[mMgGtT] -n NAME VolumeGroup : -L gives the LV size, -l a number of PEs |
pvremove /dev/device... : delete a PV | vgextend VG_Name /dev/device... : grow a VG | lvextend -L [+]#[mMgGtT] /dev/VG_NAME/LV_NAME : after growing the LV, also grow the filesystem with resize2fs /dev/VG_NAME/LV_NAME |
pvmove /dev/device /dev/device : migrate data | vgreduce VG_Name /dev/device... : shrink a VG (migrate the data off the PV first) | to shrink an LV, see the shrink steps below |
 | vgremove vg_name : delete a VG | lvremove /dev/VG_NAME/LV_NAME : delete an LV |
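The grow path in the table can be sketched end to end. This is a hedged example: the names myvg and mylv match the walkthrough later in this post, and the +100m size is an arbitrary illustration.

```shell
# Sketch: grow an ext4 LV online (assumes VG "myvg" has free extents).
lvextend -L +100m /dev/myvg/mylv   # add 100 MiB to the LV
resize2fs /dev/myvg/mylv           # grow the ext4 filesystem to fill the LV
# Alternatively, one step: lvextend -r -L +100m /dev/myvg/mylv
# (-r calls the filesystem resize tool automatically)
```

Growing can be done while the filesystem is mounted; only shrinking requires the stricter procedure below.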
LV shrink steps:
# umount /dev/VG_NAME/LV_NAME               # unmount first (note: XFS cannot be shrunk, only grown)
# e2fsck -f /dev/VG_NAME/LV_NAME            # force a filesystem check
# resize2fs /dev/VG_NAME/LV_NAME #[mMgGtT]  # shrink the filesystem to the target size
# lvreduce -L [-]#[mMgGtT] /dev/VG_NAME/LV_NAME  # then shrink the LV
# mount                                     # remount
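Filled in with concrete values, the steps look like this (a sketch: the names /dev/myvg/mylv and /mnt and the 100M target are assumptions matching the walkthrough below):

```shell
# Sketch: shrink an ext4 LV from 200 MiB to 100 MiB.
umount /mnt
e2fsck -f /dev/myvg/mylv          # resize2fs refuses to shrink without a fresh check
resize2fs /dev/myvg/mylv 100M     # shrink the filesystem FIRST
lvreduce -L 100M /dev/myvg/mylv   # then shrink the LV to the same size
mount /dev/myvg/mylv /mnt
```

Order matters: shrinking the LV before the filesystem would cut off live data.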
A simple hands-on example:
[root@xt ~]# pvcreate /dev/sdb{1,2}
Physical volume "/dev/sdb1" successfully created.
Physical volume "/dev/sdb2" successfully created.
[root@xt ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 lvm2 --- 1.00g 1.00g
/dev/sdb2 lvm2 --- 1.00g 1.00g
[root@xt ~]# pvdisplay
"/dev/sdb2" is a new physical volume of "1.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb2
VG Name
PV Size 1.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 3XTOcT-uIc3-73Hc-alkn-EKWJ-yyqv-CZYDsR
"/dev/sdb1" is a new physical volume of "1.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb1
VG Name
PV Size 1.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID KZ6Mwf-GPQc-xGQf-xWwS-FCnS-yLtn-XqJofh
===================================================
[root@xt ~]# vgcreate myvg /dev/sdb1
Volume group "myvg" successfully created
[root@xt ~]# vgs
VG #PV #LV #SN Attr VSize VFree
myvg 1 0 0 wz--n- 1020.00m 1020.00m
[root@xt ~]# vgextend myvg /dev/sdb2
Volume group "myvg" successfully extended
[root@xt ~]# vgs
VG #PV #LV #SN Attr VSize VFree
myvg 2 0 0 wz--n- 1.99g 1.99g
[root@xt ~]# vgdisplay
--- Volume group ---
VG Name myvg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 1.99 GiB
PE Size 4.00 MiB
Total PE 510
Alloc PE / Size 0 / 0
Free PE / Size 510 / 1.99 GiB
VG UUID b5t7N7-xMBg-5O8w-xS5m-2OTV-IpBd-98mR5w
===================================================
[root@xt ~]# lvcreate -L 200m --name mylv myvg
Logical volume "mylv" created.
[root@xt ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
mylv myvg -wi-a----- 200.00m
[root@xt ~]# lvdisplay
--- Logical volume ---
LV Path /dev/myvg/mylv
LV Name mylv
VG Name myvg
LV UUID PelWrs-D13Q-8btE-q0RV-tFsA-6miC-FTqwns
LV Write Access read/write
LV Creation host, time xt.com, 2019-01-11 22:24:56 +0800
LV Status available
# open 0
LV Size 200.00 MiB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
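The `Current LE  50` line above follows directly from the sizes shown earlier: with the default 4 MiB PE size reported by vgdisplay, a 200 MiB LV occupies 200 / 4 = 50 extents. A quick sanity check:

```shell
pe_size_mib=4      # PE Size from vgdisplay
lv_size_mib=200    # LV Size from lvdisplay
# Integer division gives the extent count for the LV:
echo $(( lv_size_mib / pe_size_mib ))   # prints 50, matching "Current LE"
```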
=================================================
[root@xt ~]# mkfs.ext4 /dev/myvg/mylv
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33816576
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
[root@xt ~]# mount /dev/myvg/mylv /mnt
[root@xt ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 10G 4.0G 6.1G 40% /
devtmpfs 984M 0 984M 0% /dev
tmpfs 993M 0 993M 0% /dev/shm
tmpfs 993M 8.7M 985M 1% /run
tmpfs 993M 0 993M 0% /sys/fs/cgroup
/dev/sda3 256M 83M 173M 33% /boot
tmpfs 199M 0 199M 0% /run/user/0
/dev/mapper/myvg-mylv 190M 1.6M 175M 1% /mnt
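The mount above lasts only until the next reboot. To make it persistent, an /etc/fstab entry can be added; this is a sketch using the device-mapper path from the df output above:

```shell
# Append an fstab entry so the LV is mounted automatically at boot.
echo '/dev/mapper/myvg-mylv /mnt ext4 defaults 0 0' >> /etc/fstab
mount -a   # re-reads fstab; errors here mean the entry is wrong
```

Using the /dev/mapper/ (or /dev/VG/LV) path is preferable to /dev/sdbN, since LVM device names are stable across reboots.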
==========================================
Simulating data migration: the source and target PVs must be in the same VG.
[root@xt mnt]# mount /dev/myvg/mylv /mnt
[root@xt mnt]# cp -r /tmp/ /mnt/
[root@xt tmp]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 myvg lvm2 a-- 1020.00m 820.00m #the data lives mainly on sdb1; the migration target must have enough free space to hold it
/dev/sdb2 myvg lvm2 a-- 1020.00m 1020.00m
[root@xt tmp]# pvmove /dev/sdb1 /dev/sdb2
/dev/sdb1: Moved: 4.00%
/dev/sdb1: Moved: 100.00%
[root@xt tmp]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 myvg lvm2 a-- 1020.00m 1020.00m
/dev/sdb2 myvg lvm2 a-- 1020.00m 820.00m
[root@xt tmp]# vgreduce myvg /dev/sdb1
Removed "/dev/sdb1" from volume group "myvg"
[root@xt tmp]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb1 lvm2 --- 1.00g 1.00g
/dev/sdb2 myvg lvm2 a-- 1020.00m 820.00m
[root@xt tmp]# vgs
VG #PV #LV #SN Attr VSize VFree
myvg 1 1 0 wz--n- 1020.00m 820.00m
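After vgreduce, /dev/sdb1 still carries an LVM label (the pvs output above shows it as a standalone PV with no VG). If the disk is to leave LVM entirely, the label can be wiped as an optional final step:

```shell
# Wipe the LVM label from the now-unused PV; after this,
# /dev/sdb1 is an ordinary block device again.
pvremove /dev/sdb1
```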