OpenStack Storage
In nova, volumes are divided into two types: ephemeral volumes, which are destroyed when the instance is terminated, and persistent volumes, which live on afterwards. Even after the original virtual machine is gone, you can launch a new virtual machine and attach the persistent volume to it.
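As a rough illustration using the same nova CLI that appears later in this post, moving a persistent volume from one instance to a freshly booted one might look like the sketch below (the instance names, image name, and volume-id placeholder are made up for illustration):

# make sure the volume is no longer attached to the old instance
nova volume-detach vm2 <volume-id>
# boot a brand-new instance (image name and flavor are illustrative)
nova boot --flavor m1.small --image precise-server vm3
# attach the same persistent volume to the new instance
nova volume-attach vm3 <volume-id> /dev/vdc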
Ephemeral volumes are currently used in two places. The first is the root disk the instance boots from; the second is an extra scratch disk (the ephemeral disk) you can request if your workload needs additional space. Keep in mind that this disk, too, disappears once the instance is terminated.
Under Flavor we can see the default combinations OpenStack ships with. (The tiny flavor is listed as having no root disk, but it actually does have one; a root-disk size of 0 simply means the disk takes the size of the image, which here is only 2G.)
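For reference, the flavor definitions can be listed with the legacy nova client; a minimal sketch (the interesting columns are the root-disk and ephemeral-disk sizes):

# list the available flavors; the "Disk" column is the root disk size in GB,
# and the "Ephemeral" column is the size of the extra ephemeral disk (0 = none)
nova flavor-list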
The m1.tiny (1 VCPU / 0 GB disk / 512 MB RAM) case:
[email protected]:~$ sudo fdisk -l

Disk /dev/vda: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders, total 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065     4192964     2088450   83  Linux

[email protected]:~$ sudo less /proc/meminfo | grep -i total
MemTotal:         503520 kB
SwapTotal:             0 kB
VmallocTotal:   34359738367 kB
HugePages_Total:       0

[email protected]:~$ sudo less /proc/cpuinfo | grep CPU
model name      : QEMU Virtual CPU version 1.0

[email protected]:~$ sudo df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       2.0G  667M  1.3G  35% /
udev            242M   12K  242M   1% /dev
tmpfs            99M  204K   99M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            246M     0  246M   0% /run/shm
The m1.small (1 VCPU / 10 GB disk / 2048 MB RAM) case; compared with m1.tiny there is an extra /dev/vdb, 20 G in size:
[email protected]:~$ sudo fdisk -l

Disk /dev/vda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *       16065    20964824    10474380   83  Linux

Disk /dev/vdb: 21.5 GB, 21474836480 bytes
16 heads, 63 sectors/track, 41610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/vdb doesn't contain a valid partition table

[email protected]:~$ sudo less /proc/meminfo | grep -i total
MemTotal:        2051772 kB
SwapTotal:             0 kB
VmallocTotal:   34359738367 kB
HugePages_Total:       0

[email protected]:~$ sudo less /proc/cpuinfo | grep CPU
model name      : QEMU Virtual CPU version 1.0

[email protected]:~$ sudo df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       9.9G  670M  8.7G   7% /
udev            998M  8.0K  998M   1% /dev
tmpfs           401M  208K  401M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1002M     0 1002M   0% /run/shm
/dev/vdb         20G  173M   19G   1% /mnt
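The 20 G /dev/vdb does not come from the 10 GB root-disk figure; it comes from the flavor's ephemeral-disk setting, which on this deployment appears to be 20 GB. A minimal way to confirm that, assuming the legacy nova client:

# show the full definition of m1.small; the ephemeral-disk size is listed
# alongside ram, vcpus and the root disk
nova flavor-show m1.small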
Now let's go back to the compute node and take a look:
# Two virtual machines are running
[email protected]:~$ sudo virsh list
[sudo] password for wistor:
 Id    Name                 State
----------------------------------
 3     instance-00000007    running
 4     instance-00000009    running

# This one only has vda, so it is presumably the m1.tiny instance
[email protected]:~$ sudo virsh domblklist 3
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/instance-00000007/disk

# This one has both vda and vdb, so it should be the m1.small instance
[email protected]:~$ sudo virsh domblklist 4
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/instance-00000009/disk
vdb        /var/lib/nova/instances/instance-00000009/disk.local

# Look inside the tiny instance's directory first: there is a "disk" file and a libvirt.xml.
# But why is "disk" only 118M rather than 2G?
[email protected]:/var/lib/nova/instances/instance-00000007$ ll -h
total 115M
drwxrwxr-x 2 nova         nova 4.0K May 29 13:54 ./
drwxr-xr-x 6 nova         nova 4.0K May 29 13:57 ../
-rw-rw---- 1 libvirt-qemu kvm   21K May 29 13:55 console.log
-rw-r--r-- 1 libvirt-qemu kvm  118M May 29 14:07 disk
-rw-rw-r-- 1 nova         nova 1.5K May 29 13:54 libvirt.xml

# Check what "disk" actually is: it turns out to be a QCOW image whose backing file lives under _base
[email protected]:/var/lib/nova/instances/instance-00000007$ file disk
disk: QEMU QCOW Image (v2), has backing file (path /var/lib/nova/instances/_base/20938b475c7d805e707888fb2a3196550), 2147483648 bytes

# Take a look at what is under _base
[email protected]:/var/lib/nova/instances/_base$ ll -h
total 3.1G
drwxrwxr-x 2 nova         nova 4.0K May 27 23:35 ./
drwxr-xr-x 6 nova         nova 4.0K May 29 13:57 ../
-rw-r--r-- 1 libvirt-qemu kvm  2.0G May 27 23:57 20938b475c7d805e707888fb2a31965508d0bb4b
-rw-r--r-- 1 libvirt-qemu kvm   10G May 27 23:57 20938b475c7d805e707888fb2a31965508d0bb4b_10
-rw-r--r-- 1 libvirt-qemu kvm   20G May 27 23:28 ephemeral_0_20_None

# The backing file is exactly 2G and is an x86 boot sector, so it is the tiny instance's boot disk
[email protected]:/var/lib/nova/instances/_base$ file 20938b475c7d805e707888fb2a31965508d0bb4b
20938b475c7d805e707888fb2a31965508d0bb4b: x86 boot sector; partition 1: ID=0x83, active, starthead 0, startsector 16065, 4176900 sectors, code offset 0x63

# Next, the small instance's directory, which has an extra disk.local
[email protected]:/var/lib/nova/instances/instance-00000009$ ll -h
total 395M
drwxrwxr-x 2 nova         nova 4.0K May 29 13:57 ./
drwxr-xr-x 6 nova         nova 4.0K May 29 13:57 ../
-rw-rw---- 1 libvirt-qemu kvm   21K May 29 13:58 console.log
-rw-r--r-- 1 libvirt-qemu kvm  390M May 29 14:13 disk
-rw-r--r-- 1 libvirt-qemu kvm   12M May 29 13:58 disk.local
-rw-rw-r-- 1 nova         nova 1.7K May 29 13:57 libvirt.xml

# disk.local is also a qcow image, and its backing file is under _base as well
[email protected]:/var/lib/nova/instances/instance-00000009$ file disk.local
disk.local: QEMU QCOW Image (v2), has backing file (path /var/lib/nova/instances/_base/ephemeral_0_20_None), 21474836480 bytes

# This is the ephemeral disk, 20G in size
[email protected]:/var/lib/nova/instances/_base$ file ephemeral_0_20_None
ephemeral_0_20_None: Linux rev 1.0 ext3 filesystem data, UUID=e886c848-3a77-4e2c-b6d6-4d77eb51f8da, volume name "ephemeral0" (large files)
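Besides file, qemu-img reports the same copy-on-write relationship directly; a minimal sketch using the paths from the listings above:

# print format, virtual size, on-disk size and backing file of the tiny
# instance's root disk
qemu-img info /var/lib/nova/instances/instance-00000007/disk
# and of the small instance's ephemeral disk
qemu-img info /var/lib/nova/instances/instance-00000009/disk.local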
Next comes a test of a persistent volume. Through the UI or the command line, we create a 10 G volume and attach it to vm2 (the m1.small instance).
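On the command line that step would look roughly like this (the display-name flag spelling differs between nova client versions, and vm2 and the device path are simply the names used in this post):

# create a 10 GB volume named volume1
nova volume-create --display_name volume1 10
# find its ID, then attach it to vm2 as /dev/vdc
nova volume-list
nova volume-attach vm2 <volume-id> /dev/vdc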
# The newly created volume shows up in the nova command line
[email protected]:~$ sudo -i nova volume-list
+----+-----------+--------------+------+-------------+-------------+
| ID | Status    | Display Name | Size | Volume Type | Attached to |
+----+-----------+--------------+------+-------------+-------------+
| 3  | available | volume1      | 10   | None        |             |
+----+-----------+--------------+------+-------------+-------------+

# The new volume is also visible in lvm
[email protected]:~$ sudo lvscan
  ACTIVE            '/dev/nova-volumes/volume-00000003' [10.00 GiB] inherit

# tgt exports this volume as an iscsi target
[email protected]:~$ sudo tgt-admin -s
Target 1: iqn.2010-10.org.openstack:volume-00000003
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 10737 MB, Block size: 512
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/nova-volumes/volume-00000003
            Backing store flags:
    Account information:
    ACL information:
        ALL

# The open-iscsi initiator then logs in to this iscsi target
[email protected]:~$ sudo iscsiadm -m session
tcp: [5] 172.17.123.83:3260,1 iqn.2010-10.org.openstack:volume-00000003

# which produces one more device on the compute node
[email protected]:/dev/disk/by-path$ ll
lrwxrwxrwx 1 root root 9 May 29 14:23 ip-172.17.123.83:3260-iscsi-iqn.2010-10.org.openstack:volume-00000003-lun-1 -> ../../sdf

[email protected]:~$ sudo fdisk -l /dev/sdf

Disk /dev/sdf: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdf doesn't contain a valid partition table

# and vm2 now has one more device as well
[email protected]:~$ sudo virsh domblklist 4
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/instance-00000009/disk
vdb        /var/lib/nova/instances/instance-00000009/disk.local
vdc        /dev/disk/by-path/ip-172.17.123.83:3260-iscsi-iqn.2010-10.org.openstack:volume-00000003-lun-1

PS1. One issue found so far: when a VM has an extra volume attached, rebooting it runs into problems.
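Reboot caveat aside, inside vm2 the attached volume simply shows up as /dev/vdc and can be used like any other disk; a minimal sketch (the filesystem type and mount point are arbitrary choices):

# inside vm2: put a filesystem on the new volume and mount it
sudo mkfs.ext4 /dev/vdc
sudo mkdir -p /data
sudo mount /dev/vdc /data
# unlike /dev/vdb, data written here survives terminating the instance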