
Installing the OpenStack Block Storage Service (Cinder)

I. Introduction to the Block Storage service (Cinder)

1. Overview of Cinder block storage

The Block Storage service (cinder) provides block storage to instances. Storage allocation and consumption are determined by the block storage driver, or by the drivers in a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, LVM, Ceph, and so on.

The OpenStack Block Storage service (cinder) adds persistent storage to virtual machines. Block Storage provides the infrastructure for managing volumes and interacts with the Compute service to provide volumes to instances. The service also enables management of volume snapshots and volume types.

2. The Block Storage service typically consists of the following components

cinder-api: accepts API requests and routes them to cinder-volume for action. In other words, it receives and responds to external block storage requests.

cinder-volume: provides the storage space. It interacts directly with the Block Storage service and with processes such as cinder-scheduler, and it can also interact with them through a message queue. The cinder-volume service responds to read and write requests sent to the Block Storage service in order to maintain state. It can interact with a variety of storage providers through a driver architecture.

cinder-scheduler daemon: selects the optimal storage node on which to create a volume, similar in role to the nova-scheduler component. In other words, it is the scheduler that decides which cinder-volume service will provide the requested space.

cinder-backup daemon: backs up volumes. The cinder-backup service can back up volumes of any type to a backup storage provider. Like cinder-volume, it can interact with a variety of storage providers through a driver architecture.

Message queue: routes information between the Block Storage processes.

II. Installing and configuring the Block Storage service (cinder)

Install and configure the Block Storage service on the controller node.

1. Prerequisites

1) Create the database

[root@controller ~]# mysql -uroot -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 453
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.23 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

2) Source the admin credentials to gain access to admin-only CLI commands

[root@controller ~]# source admin-openrc 

3) Create the service credentials

a. Create a cinder user

[root@controller ~]# openstack user create --domain default --password 123456 cinder
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | efcc929416f1468299c302cb607305e0 |
| enabled   | True                             |
| id        | 6b0b7c05ccc34c1292058cd282515895 |
| name      | cinder                           |
+-----------+----------------------------------+

b. Add the admin role to the cinder user

[root@controller ~]# openstack role add --project service --user cinder admin

c. Create the cinder and cinderv2 service entities

[root@controller ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 5ce9956d003a42d3a845eff54166b1f8 |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 70ba60de155b4dce8a5ee18d94758c13 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+

Note: the Block Storage service requires two service entities.

4) Create the Block Storage service API endpoints

Note: each Block Storage service entity requires its own endpoints.

[root@controller ~]# openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | e28193621ca54e9faaafc88b26a6c8f9        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 5ce9956d003a42d3a845eff54166b1f8        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | c2f103b543c74187aaffcaebec902a66        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 5ce9956d003a42d3a845eff54166b1f8        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 61716c3e8ebb4715a394a98f8ad649fa        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 5ce9956d003a42d3a845eff54166b1f8        |
| service_name | cinder                                  |
| service_type | volume                                  |
| url          | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 3fd334bead1c4472babd45e603bf161a        |
| interface    | public                                  |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 70ba60de155b4dce8a5ee18d94758c13        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | 9f68f2fee6274a77929d71df65cff017        |
| interface    | internal                                |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 70ba60de155b4dce8a5ee18d94758c13        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field        | Value                                   |
+--------------+-----------------------------------------+
| enabled      | True                                    |
| id           | b59b96df0ae64e4b9a42f97ad8ceb831        |
| interface    | admin                                   |
| region       | RegionOne                               |
| region_id    | RegionOne                               |
| service_id   | 70ba60de155b4dce8a5ee18d94758c13        |
| service_name | cinderv2                                |
| service_type | volumev2                                |
| url          | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
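The six endpoint-create commands above follow one pattern (two services, three interfaces each), so they can be generated by a small loop. A sketch, not from the original walkthrough; it only prints the commands so you can review them, and piping the output to `bash` would actually execute them:

```shell
# Generate the six "openstack endpoint create" commands shown above.
cmds=$(
  for pair in "volume v1" "volumev2 v2"; do
    set -- $pair                      # $1 = service name, $2 = API version path
    for iface in public internal admin; do
      echo "openstack endpoint create --region RegionOne $1 $iface http://controller:8776/$2/%\\(tenant_id\\)s"
    done
  done
)
printf '%s\n' "$cmds"
```

Usage: `printf '%s\n' "$cmds" | bash` (after sourcing admin-openrc) runs all six creations.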

2. Installation and configuration

1) Install the packages

[root@controller ~]# yum install openstack-cinder -y

2) Edit /etc/cinder/cinder.conf and complete the following actions

[root@controller ~]# cp /etc/cinder/cinder.conf{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf
[root@controller ~]# cat /etc/cinder/cinder.conf
[DEFAULT]
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
[keystone_authtoken]
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]

a. In the [database] section, configure database access

b. In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access

c. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access

d. In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node

e. In the [oslo_concurrency] section, configure the lock path

[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   DEFAULT  rpc_backend  rabbit
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   DEFAULT  auth_strategy  keystone
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   DEFAULT  my_ip  10.0.0.11
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   database connection mysql+pymysql://cinder:123456@controller/cinder
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   auth_uri  http://controller:5000
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   auth_url  http://controller:35357
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   memcached_servers  controller:11211
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   auth_type  password
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   project_domain_name  default
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   user_domain_name  default
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   project_name  service
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   username  cinder
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   password  123456
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   oslo_concurrency  lock_path  /var/lib/cinder/tmp
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   oslo_messaging_rabbit  rabbit_host  controller
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   oslo_messaging_rabbit  rabbit_userid  openstack
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf   oslo_messaging_rabbit  rabbit_password  123456
[root@controller ~]# cat /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]

3) Populate the Block Storage database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
[root@controller ~]# mysql -uroot -p123456 cinder
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 493
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [cinder]> show tables;
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| backups                    |
| cgsnapshots                |
| consistencygroups          |
| driver_initiator_data      |
| encryption                 |
| image_volume_cache_entries |
| iscsi_targets              |
| migrate_version            |
| quality_of_service_specs   |
| quota_classes              |
| quota_usages               |
| quotas                     |
| reservations               |
| services                   |
| snapshot_metadata          |
| snapshots                  |
| transfers                  |
| volume_admin_metadata      |
| volume_attachment          |
| volume_glance_metadata     |
| volume_metadata            |
| volume_type_extra_specs    |
| volume_type_projects       |
| volume_types               |
| volumes                    |
+----------------------------+
25 rows in set (0.00 sec)

MariaDB [cinder]> exit
Bye

3. Configure the Compute service to use Block Storage

[root@controller ~]# openstack-config --set /etc/nova/nova.conf   cinder   os_region_name   RegionOne
[root@controller ~]# cat /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[conductor]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://nova:123456@controller/nova
[ephemeral_storage_encryption]
[glance]
api_servers = http://computer2:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[libvirt]
[matchmaker_redis]
[metrics]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = True
metadata_proxy_shared_secret = 123456
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = 10.0.0.11
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[workarounds]
[xenserver]

4. Start the services

1) Restart the Compute API service

[root@controller ~]#  systemctl restart openstack-nova-api.service
[root@controller ~]#  systemctl status openstack-nova-api.service
● openstack-nova-api.service - OpenStack Nova API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-11-22 21:19:34 CST; 2min 38s ago
 Main PID: 96253 (nova-api)
   CGroup: /system.slice/openstack-nova-api.service
           ├─96253 /usr/bin/python2 /usr/bin/nova-api
           ├─96410 /usr/bin/python2 /usr/bin/nova-api
           └─96445 /usr/bin/python2 /usr/bin/nova-api

Nov 22 21:18:53 controller systemd[1]: Starting OpenStack Nova API Server...
Nov 22 21:19:29 controller sudo[96413]:     nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/roo...save -c
Nov 22 21:19:32 controller sudo[96433]:     nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/roo...tore -c
Nov 22 21:19:34 controller systemd[1]: Started OpenStack Nova API Server.
Hint: Some lines were ellipsized, use -l to show in full.

2) Start the Block Storage services and configure them to start when the system boots

[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
● openstack-cinder-api.service - OpenStack Cinder API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-11-22 21:23:13 CST; 9s ago
 Main PID: 97493 (cinder-api)
   CGroup: /system.slice/openstack-cinder-api.service
           └─97493 /usr/bin/python2 /usr/bin/cinder-api --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/c...

Nov 22 21:23:13 controller systemd[1]: Started OpenStack Cinder API Server.
Nov 22 21:23:13 controller systemd[1]: Starting OpenStack Cinder API Server...

● openstack-cinder-scheduler.service - OpenStack Cinder Scheduler Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-11-22 21:23:13 CST; 8s ago
 Main PID: 97496 (cinder-schedule)
   CGroup: /system.slice/openstack-cinder-scheduler.service
           └─97496 /usr/bin/python2 /usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/ci...

Nov 22 21:23:13 controller systemd[1]: Started OpenStack Cinder Scheduler Server.
Nov 22 21:23:13 controller systemd[1]: Starting OpenStack Cinder Scheduler Server...

5. Verify

[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2020-11-22T13:30:31.000000 |        -        |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

III. Installing and configuring the storage node

Purpose: provide volumes to instances.

1. Prerequisites

Before installing and configuring the Block Storage service, you must prepare the storage devices (here, two additional disks of 30 GB and 10 GB are added).

1) Install the supporting utility packages

a. Install the LVM packages

[root@computer1 ~]# yum install lvm2 -y
[root@computer1 ~]# rpm -qa |grep lvm2
lvm2-2.02.171-8.el7.x86_64
lvm2-libs-2.02.171-8.el7.x86_64

b. Start the LVM metadata service and configure it to start when the system boots

[root@computer1 ~]# systemctl enable lvm2-lvmetad.service
Created symlink from /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.service to /usr/lib/systemd/system/lvm2-lvmetad.service.
[root@computer1 ~]# systemctl start lvm2-lvmetad.service
[root@computer1 ~]# systemctl status lvm2-lvmetad.service
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-11-18 21:29:12 CST; 4 days ago
     Docs: man:lvmetad(8)
 Main PID: 376 (lvmetad)
   CGroup: /system.slice/lvm2-lvmetad.service
           └─376 /usr/sbin/lvmetad -f

Nov 18 21:29:12 computer1 systemd[1]: Started LVM2 metadata daemon.
Nov 18 21:29:12 computer1 systemd[1]: Starting LVM2 metadata daemon...

Note: some distributions include LVM by default.

[root@computer1 ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a2c65

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048     4196351     2097152   82  Linux swap / Solaris
/dev/sda2   *     4196352   104857599    50330624   83  Linux
#Rescan the SCSI bus so the system detects the new disks
[root@computer1 ~]# echo '- - -' >/sys/class/scsi_host/host0/scan
[root@computer1 ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a2c65

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048     4196351     2097152   82  Linux swap / Solaris
/dev/sda2   *     4196352   104857599    50330624   83  Linux

Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

2) Create the LVM physical volumes

[root@computer1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@computer1 ~]# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.
[root@computer1 ~]# pvdisplay 
  "/dev/sdc" is a new physical volume of "10.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc
  VG Name               
  PV Size               10.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               9xV3kt-ZnV1-NmgN-W3gf-nqQD-lWpx-xOUhK6
   
  "/dev/sdb" is a new physical volume of "30.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb
  VG Name               
  PV Size               30.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               07cx4b-Atmh-IxBW-VMxf-PEJa-lfSw-oBNvRB
[root@computer1 ~]# pvs
  PV         VG          Fmt  Attr PSize   PFree  
  /dev/sdb   cinder-ssd  lvm2 a--  <30.00g <30.00g
  /dev/sdc   cinder-sata lvm2 a--  <10.00g <10.00g

3) Create the LVM volume groups

[root@computer1 ~]# vgcreate cinder-ssd /dev/sdb
  Volume group "cinder-ssd" successfully created
[root@computer1 ~]# vgcreate cinder-sata /dev/sdc
  Volume group "cinder-sata" successfully created
[root@computer1 ~]# vgdisplay 
  --- Volume group ---
  VG Name               cinder-ssd
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <30.00 GiB
  PE Size               4.00 MiB
  Total PE              7679
  Alloc PE / Size       0 / 0   
  Free  PE / Size       7679 / <30.00 GiB
  VG UUID               pmFOmJ-ZWcL-byUI-oyES-EMoT-EWXs-haozT7
   
  --- Volume group ---
  VG Name               cinder-sata
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <10.00 GiB
  PE Size               4.00 MiB
  Total PE              2559
  Alloc PE / Size       0 / 0   
  Free  PE / Size       2559 / <10.00 GiB
  VG UUID               xJDIWs-CEzJ-wvtx-TljX-a8Th-3W5r-Uo9XGz
[root@computer1 ~]# vgs
  VG          #PV #LV #SN Attr   VSize   VFree  
  cinder-sata   1   0   0 wz--n- <10.00g <10.00g
  cinder-ssd    1   0   0 wz--n- <30.00g <30.00g

4) Edit the /etc/lvm/lvm.conf configuration file

Note: reconfigure LVM so that it scans only the devices containing the cinder volume groups (cinder-ssd and cinder-sata here); that is, insert one filter line below line 130 of the file.

[root@computer1 ~]# cp /etc/lvm/lvm.conf{,.bak}
[root@computer1 ~]# vim /etc/lvm/lvm.conf
[root@computer1 ~]# grep 'sdb' /etc/lvm/lvm.conf
    filter = [ "a/sdb/", "a/sdc/","r/.*/"]
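Instead of editing with vim, the filter line can also be inserted with sed. A sketch against a throwaway copy, not the original procedure; on the real node the target is /etc/lvm/lvm.conf (back it up first, as above), and this assumes GNU sed and the stock file's `devices {` stanza:

```shell
# Demonstrate on a scratch file shaped like the stock lvm.conf "devices" stanza.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
devices {
    dir = "/dev"
}
EOF
# Append the filter line directly after the "devices {" opening brace.
sed -i '/^devices {/a\    filter = [ "a/sdb/", "a/sdc/", "r/.*/" ]' "$tmp"
grep 'filter' "$tmp"
```

The filter accepts (`a`) /dev/sdb and /dev/sdc and rejects (`r`) every other device.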

2. Install and configure the components

1) Install the packages

[root@computer1 ~]# yum install openstack-cinder targetcli python-keystone -y

2) Edit /etc/cinder/cinder.conf and complete the following actions

[root@computer1 ~]# cp /etc/cinder/cinder.conf{,.bak}
[root@computer1 ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf

a. In the [database] section, configure database access

b. In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access

c. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access

d. In the [DEFAULT] section, configure the my_ip option

e. In each backend section ([ssd] and [sata] below; the official guide uses a single [lvm] section with the cinder-volumes volume group), configure the LVM driver, the volume group, the iSCSI protocol, and the appropriate iSCSI service

f. In the [DEFAULT] section, enable the LVM backends

g. In the [DEFAULT] section, configure the location of the Image service API

h. In the [oslo_concurrency] section, configure the lock path

[root@computer1 ~]# vim /etc/cinder/cinder.conf
[root@computer1 ~]# cat  /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.12
glance_api_servers = http://computer2:9292
enabled_backends = ssd,sata
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
[ssd]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-ssd
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ssd
[sata]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-sata
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = sata
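The two backend stanzas above differ only in the section name and the volume group suffix, so they can be generated by a loop, which scales cleanly if more backends are added later. A sketch, not part of the original walkthrough; it writes to a scratch file, and on the real node the output would be appended to /etc/cinder/cinder.conf:

```shell
# Generate one backend stanza per volume group suffix (ssd, sata).
tmp=$(mktemp)
for be in ssd sata; do
  printf '[%s]\n' "$be"
  printf 'volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver\n'
  printf 'volume_group = cinder-%s\n' "$be"
  printf 'iscsi_protocol = iscsi\n'
  printf 'iscsi_helper = lioadm\n'
  printf 'volume_backend_name = %s\n' "$be"
done > "$tmp"
cat "$tmp"
```

Remember that each generated section name must also be listed in `enabled_backends` under [DEFAULT].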

3. Start the services

Start the Block Storage volume service and its dependencies, and configure them to start when the system boots

[root@computer1 ~]# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@computer1 ~]# systemctl start openstack-cinder-volume.service target.service
[root@computer1 ~]# systemctl status openstack-cinder-volume.service target.service
● openstack-cinder-volume.service - OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-11-22 23:33:35 CST; 1min 1s ago
 Main PID: 76797 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─76797 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinde...
           ├─76839 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinde...
           └─76853 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinde...

Nov 22 23:34:26 computer1 cinder-volume[76797]: 2020-11-22 23:34:26.055 76839 INFO cinder.volume.manager [req-083a4b09-8ab0-40...fully.
Nov 22 23:34:26 computer1 cinder-volume[76797]: 2020-11-22 23:34:26.154 76853 INFO cinder.volume.manager [req-47d770db-e031-47...fully.
Nov 22 23:34:26 computer1 cinder-volume[76797]: 2020-11-22 23:34:26.456 76839 INFO cinder.volume.manager [req-083a4b09-8ab0-40...3.0.0)
Nov 22 23:34:26 computer1 cinder-volume[76797]: 2020-11-22 23:34:26.462 76853 INFO cinder.volume.manager [req-47d770db-e031-47...3.0.0)
Nov 22 23:34:26 computer1 sudo[76904]:   cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/...der-ssd
Nov 22 23:34:26 computer1 sudo[76905]:   cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/...er-sata
Nov 22 23:34:29 computer1 sudo[76912]:   cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/...er-sata
Nov 22 23:34:29 computer1 sudo[76914]:   cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/...der-ssd
Nov 22 23:34:29 computer1 cinder-volume[76797]: 2020-11-22 23:34:29.642 76853 INFO cinder.volume.manager [req-47d770db-e031-47...fully.
Nov 22 23:34:29 computer1 cinder-volume[76797]: 2020-11-22 23:34:29.667 76839 INFO cinder.volume.manager [req-083a4b09-8ab0-40...fully.

● target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sun 2020-11-22 23:33:38 CST; 58s ago
  Process: 76798 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 76798 (code=exited, status=0/SUCCESS)

Nov 22 23:33:35 computer1 systemd[1]: Starting Restore LIO kernel target configuration...
Nov 22 23:33:38 computer1 target[76798]: No saved config file at /etc/target/saveconfig.json, ok, exiting
Nov 22 23:33:38 computer1 systemd[1]: Started Restore LIO kernel target configuration.
Hint: Some lines were ellipsized, use -l to show in full.

4. Verify

[root@controller ~]# cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |      Host      | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   controller   | nova | enabled |   up  | 2020-11-22T15:36:29.000000 |        -        |
|  cinder-volume   | computer1@sata | nova | enabled |   up  | 2020-11-22T15:36:29.000000 |        -        |
|  cinder-volume   | computer1@ssd  | nova | enabled |   up  | 2020-11-22T15:36:29.000000 |        -        |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+

IV. Verification

1. Create a volume in the web UI and attach it to an instance

[root@computer1 ~]# lvs
  LV                                          VG         Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-f2281392-712d-4532-a004-a6dbc9225b3a cinder-ssd -wi-a----- 1.00g                                                    

Leave the other options at their defaults and launch an instance.

Log in via the console and check the instance's disks.

Project -> Compute -> Volumes: Manage Attachments

2. Extend the logical volume

Detach the volume -> Extend the volume

[root@computer1 ~]# lvs
  LV                                          VG         Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-f2281392-712d-4532-a004-a6dbc9225b3a cinder-ssd -wi-a----- 2.00g                                                    
[root@computer1 ~]# 

Re-attach the volume to the instance.

Check from the instance console.

The resize succeeded!

3. Inspect the logical volume on the storage node

[root@computer1 ~]# cd /opt
[root@computer1 opt]# ls /dev/mapper/cinder--ssd-volume--f2281392--712d--4532--a004--a6dbc9225b3a 
/dev/mapper/cinder--ssd-volume--f2281392--712d--4532--a004--a6dbc9225b3a
[root@computer1 opt]# dd if=/dev/mapper/cinder--ssd-volume--f2281392--712d--4532--a004--a6dbc9225b3a of=/opt/test.raw
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 747.684 s, 2.9 MB/s
[root@computer1 ~]# ll -h /opt
total 2.3G
-rw-r--r-- 1 root root 237M Nov 18 17:01 openstack_rpm.tar.gz
drwxr-xr-x 3 root root  36K Jul 19  2017 repo
-rw-r--r-- 1 root root 2.0G Nov 23 12:51 test.raw
[root@computer1 opt]# qemu-img info test.raw 
image: test.raw
file format: raw
virtual size: 2.0G (2147483648 bytes)
disk size: 2.0G

View the data

[root@computer1 ~]# mount -o loop /opt/test.raw /srv
[root@computer1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        48G  4.8G   44G  10% /
devtmpfs        983M     0  983M   0% /dev
tmpfs           993M     0  993M   0% /dev/shm
tmpfs           993M  8.7M  984M   1% /run
tmpfs           993M     0  993M   0% /sys/fs/cgroup
/dev/sr0        4.3G  4.3G     0 100% /mnt
tmpfs           199M     0  199M   0% /run/user/0
/dev/loop0      2.0G  1.6M  1.9G   1% /srv
[root@computer1 ~]# cd /srv/
[root@computer1 srv]# ll
total 20
-rw-r--r-- 1 root root    37 Nov 23 12:25 hosts
drwx------ 2 root root 16384 Nov 23 12:23 lost+found

Note: data stored in the cloud is not inherently safe; as the dd and mount steps above show, the storage node operator can read a volume's contents directly.