Integrating Ceph with OpenStack
1. Using RBD to provide storage for the following data:
(1) image (Glance): stores the images managed by Glance;
(2) volume (Cinder): stores Cinder volumes, i.e. the disks of VMs created with "create new volume" selected;
(3) vms (Nova): stores the disks of VMs created without "create new volume" selected.
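Once everything below is in place, each of these object types can be traced back to its pool; a quick sanity check, using the pool names created in step (5) of the next section:
rbd ls images    # Glance image objects
rbd ls volumes   # Cinder volume objects
rbd ls vms       # ephemeral disks of VMs booted without a new volume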
2. Implementation steps:
(1) The client nodes also need the cent user. (For example, my OpenStack environment has 100+ nodes; there is no need to create the cent user on all of them. Create it selectively, on the nodes that run the cinder, nova, and glance services.)
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
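A minimal check that the passwordless sudo rule took effect (optional; not required by the integration itself):
su - cent -c 'sudo whoami'   # should print "root" without a password prompt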
(2) On the OpenStack nodes that will use Ceph (e.g. the compute and storage nodes), install the previously downloaded packages:
yum localinstall ./* -y
Alternatively, install the client packages on each node that needs to access the Ceph cluster:
yum install python-rbd
yum install ceph-common   # the Ceph command-line tools
If you installed the client with the localinstall method above, these two packages are already included in that RPM set.
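Either way, the client side can be verified before moving on:
rpm -q python-rbd ceph-common   # both packages should report as installed
ceph --version                  # the CLI should run and print the Ceph release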
(3) Run on the deploy node to install Ceph onto the OpenStack nodes:
ceph-deploy install controller
ceph-deploy admin controller
(4) Run on the client:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
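With the admin keyring readable, the client should now be able to reach the cluster; a quick sanity check:
ceph -s   # should print cluster status (health, mon/osd maps) rather than an authentication error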
(5) Create the pools; this only needs to be done on one Ceph node:
Create three pools in the Ceph cluster, to hold the OpenStack platform's images, VMs, and volumes respectively.
ceph osd pool create images 1024
ceph osd pool create vms 1024
ceph osd pool create volumes 1024
List the pools to confirm:
ceph osd lspools
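The 1024 in the create commands is the placement-group count (pg_num). A common rule of thumb is roughly 100 × OSD count ÷ replica size, rounded up to a power of two, so 1024 may well be oversized for a small lab cluster. To inspect what a pool actually got:
ceph osd pool get images pg_num   # prints e.g. "pg_num: 1024"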
(6) Create the glance and cinder users in the Ceph cluster; this only needs to be done on one Ceph node:
The Ceph cluster is serving the OpenStack platform's glance and cinder services.
Create the glance and cinder system users on the deploy node:
useradd glance
useradd cinder
Then grant the Ceph authorizations:
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
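You can confirm the capabilities that were granted (the key values will differ per cluster):
ceph auth get client.glance   # shows the key plus the mon/osd caps above
ceph auth get client.cinder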
The controller, compute, and storage nodes now all have these two users.
Nova reuses the cinder user, so no separate user is created for it.
(7) Generate and distribute the keyrings; this only needs to be done on one Ceph node:
ceph auth get-or-create client.glance > /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder > /etc/ceph/ceph.client.cinder.keyring
Use scp to copy them to the other nodes (the Ceph cluster nodes plus the OpenStack nodes that will use Ceph, e.g. the compute and storage nodes; this walkthrough targets an all-in-one environment, so copying to the controller node is enough):
[root@yunwei ceph]# ls
ceph.client.admin.keyring  ceph.client.cinder.keyring  ceph.client.glance.keyring  ceph.conf  rbdmap  tmpR3uL7W
[root@yunwei ceph]# scp ceph.client.glance.keyring ceph.client.cinder.keyring controller:/etc/ceph/
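Optionally, test from the controller that each service identity can actually authenticate, instead of waiting for Glance or Cinder to fail later; a minimal sketch:
ceph -s --id glance --keyring /etc/ceph/ceph.client.glance.keyring
ceph -s --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring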
(8) Change the keyring ownership (run on all client nodes):
chown glance:glance /etc/ceph/ceph.client.glance.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
(9) Configure the libvirt secret (only needed on nova-compute nodes, but do it on every compute node):
uuidgen
940f0485-e206-4b49-b878-dcd0cb9c70a4
In the /etc/ceph/ directory (the directory itself doesn't matter; /etc/ceph is just convenient for management), create the secret definition (a libvirt credential object):
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>940f0485-e206-4b49-b878-dcd0cb9c70a4</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
Copy secret.xml to all compute nodes, then run:
virsh secret-define --file secret.xml
ceph auth get-key client.cinder > ./client.cinder.key
virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)
In the end, client.cinder.key and secret.xml should be identical on every compute node. Note down the UUID generated earlier: 940f0485-e206-4b49-b878-dcd0cb9c70a4.
If you run into the following error:
[root@controller ceph]# virsh secret-define --file secret.xml
error: Failed to set attributes from secret.xml
error: internal error: a secret with UUID d448a6ee-60f3-42a3-b6fa-6ec69cab2378 is already defined for use with client.cinder secret
[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 d448a6ee-60f3-42a3-b6fa-6ec69cab2378  ceph client.cinder secret
[root@controller ~]# virsh secret-undefine d448a6ee-60f3-42a3-b6fa-6ec69cab2378
Secret d448a6ee-60f3-42a3-b6fa-6ec69cab2378 deleted
[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
[root@controller ceph]# virsh secret-define --file secret.xml
Secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 created
[root@controller ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 940f0485-e206-4b49-b878-dcd0cb9c70a4  ceph client.cinder secret
[root@controller ~]# virsh secret-set-value --secret 940f0485-e206-4b49-b878-dcd0cb9c70a4 --base64 $(cat ./client.cinder.key)
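Whether or not you hit that error, the secret can be verified on each compute node before touching any OpenStack config:
virsh secret-list                                             # the UUID above should be listed
virsh secret-get-value 940f0485-e206-4b49-b878-dcd0cb9c70a4   # should print the base64 cinder key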
(10) Configure Glance; make the following changes on all controller nodes:
vim /etc/glance/glance-api.conf
[DEFAULT]
default_store = rbd
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
After editing, restart the Glance API service on all controller nodes:
systemctl restart openstack-glance-api.service
systemctl status openstack-glance-api.service
Verify by creating an image:
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --public
[root@controller ~]# rbd ls images
9ce5055e-4217-44b4-a237-e7b577a20dac
If the image UUID shows up in the rbd output, the Glance integration is working.
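For a deeper check, the image object itself can be inspected; the Glance RBD driver also creates a protected snapshot named snap (UUID taken from the output above):
rbd info images/9ce5055e-4217-44b4-a237-e7b577a20dac
rbd snap ls images/9ce5055e-4217-44b4-a237-e7b577a20dac   # should list a snapshot called "snap"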
(11) Configure Cinder:
vim /etc/cinder/cinder.conf

[DEFAULT]
my_ip =    # the IP of this host
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = ceph
state_path = /var/lib/cinder
transport_url = rabbit://openstack:admin@controller
[backend]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[ssl]
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
volume_backend_name = ceph
Restart the Cinder services:
# On the controller nodes:
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
# On the storage nodes:
systemctl restart openstack-cinder-volume.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
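Before creating a volume, it is worth confirming that the ceph backend registered with the scheduler; with enabled_backends = ceph, the cinder-volume host normally shows up as <host>@ceph:
openstack volume service list   # the cinder-volume entry for the ceph backend should have State "up"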
Verify by creating a volume:
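The create command itself is not captured below, so as a minimal sketch (the volume name is arbitrary):
openstack volume create --size 1 test-volume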
[root@controller gfs]# rbd ls volumes
volume-43b7c31d-a773-4604-8e4a-9ed78ec18996
(12) Configure Nova:
vim /etc/nova/nova.conf

[DEFAULT]
my_ip =    # the IP of this host
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:admin@controller
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type = qemu
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 940f0485-e206-4b49-b878-dcd0cb9c70a4
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://172.16.254.63:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
Restart the Nova services:
# On the controller nodes:
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service
# On the compute nodes:
systemctl restart openstack-nova-compute.service
# On the storage nodes:
systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-compute.service
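As an end-to-end check, boot a VM without selecting "create new volume" and confirm its ephemeral disk lands in the vms pool (the image, flavor, and network below are placeholders for whatever exists in your environment):
openstack server create --image cirros --flavor m1.tiny --nic net-id=<network-id> testvm
rbd ls vms   # a <server-uuid>_disk object should appear once the instance is active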