OpenStack+Ceph Platform Integration
Building an OpenStack+Ceph Platform
Reference Documents
Official documentation
OpenStack integration with Ceph
How to integrate Ceph with OpenStack
Deployment Steps
Ceph Configuration
Create the pools:
# ceph osd pool create volumes 64
# ceph osd pool create images 64
# ceph osd pool create vms 64
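You can sanity-check that the pools exist before moving on. Note: on Ceph Luminous (12.x) and newer, new pools should also be initialized for RBD use; this initialization step is an assumption for newer releases and is not needed on the older versions this guide appears to target.
# ceph osd lspools
# rbd pool init volumes
# rbd pool init images
# rbd pool init vms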
OpenStack Configuration
Install the Ceph client packages
On the glance-api node (the controller node):
yum install python-rbd -y
On the nova-compute (compute) and cinder-volume nodes:
yum install ceph-common -y
Copy the Ceph configuration file to the relevant OpenStack nodes:
ssh controller sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
ssh compute sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
Create new users for Nova/Cinder and Glance
This step is only required when cephx authentication is enabled.
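One simple way to check whether cephx is on (an assumption: the auth settings appear in ceph.conf, as is typical for deployments of this era; cephx is the default on modern releases):
grep auth /etc/ceph/ceph.conf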
1. Create the keys, using auth get-or-create:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
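As a quick verification (not part of the original procedure), the granted capabilities can be inspected with:
ceph auth get client.glance
ceph auth get client.cinder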
2. Add keyrings for client.glance and client.cinder, and set their owner/group:
ceph auth get-or-create client.glance | ssh controller sudo tee /etc/ceph/ceph.client.glance.keyring
ssh controller sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh compute sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
3. Create a temporary key file for the nova-compute node:
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
In this example:
ceph auth get-key client.cinder | ssh compute tee client.cinder.key
4. On every compute node (this example has only one), register the new key with libvirt. Because libvirt needs to access the Ceph cluster when it creates disks, the key must be added to it.
uuidgen
536f43c1-d367-45e0-ae64-72d987417c91
# Paste the following content, replacing the UUID with the one generated by uuidgen above.
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>536f43c1-d367-45e0-ae64-72d987417c91</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
The key that follows --base64 is the one stored in client.cinder.key under /root on the compute node, i.e. the temporary key file created earlier:
virsh secret-set-value 536f43c1-d367-45e0-ae64-72d987417c91 AQCliYVYCAzsEhAAMSeU34p3XBLVcvc4r46SyA==
The key string here is the content of the temporary key file:
AQCliYVYCAzsEhAAMSeU34p3XBLVcvc4r46SyA==
It can also be replaced with:
--base64 $(cat client.cinder.key)
Then delete the temporary files:
rm -f client.cinder.key secret.xml
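The secret remains registered with libvirt even though the temporary files are gone; a minimal check with standard virsh commands:
virsh secret-list
virsh secret-get-value 536f43c1-d367-45e0-ae64-72d987417c91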
5. Modify the configuration files
glance-api.conf
[DEFAULT]
…
default_store = rbd
show_image_direct_url = True
show_multiple_locations = True
…
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
Disable Glance cache management by removing cachemanagement from the flavor:
[paste_deploy]
flavor = keystone
cinder.conf on the cinder-volume node
[DEFAULT]
# keep the existing settings and add:
enabled_backends = ceph
#glance_api_version = 2
…
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
volume_backend_name = ceph
rbd_secret_uuid = 536f43c1-d367-45e0-ae64-72d987417c91
Note: the UUID does not have to be identical across all compute nodes, but for consistency of the platform it is best to use the same one.
Note: if multiple Cinder backends are configured, glance_api_version = 2 must be added to [DEFAULT]; in this example it is commented out.
nova.conf on the compute node
[libvirt]
virt_type = qemu
hw_disk_discard = unmap
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 536f43c1-d367-45e0-ae64-72d987417c91
disk_cachemodes="network=writeback"
libvirt_inject_password = false
libvirt_inject_key = false
libvirt_inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED"
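Since disk_cachemodes enables writeback caching, the upstream Ceph-OpenStack guide also recommends turning on the RBD client cache in /etc/ceph/ceph.conf on the compute node. A minimal sketch of that optional [client] section:
[client]
rbd cache = true
rbd cache writethrough until flush = true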
6. Restart the OpenStack services
systemctl restart openstack-glance-api.service
systemctl restart openstack-nova-compute.service openstack-cinder-volume.service
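To confirm the services came back up (assuming the same systemd unit names used above, on their respective nodes):
systemctl is-active openstack-glance-api.service
systemctl is-active openstack-nova-compute.service openstack-cinder-volume.service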
Verification
Glance verification
1. Download the Cirros image and add it to Glance.
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
2. Convert the image from QCOW2 to RAW. With Ceph, images must be in RAW format.
qemu-img convert -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64-disk.raw
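qemu-img info can confirm the result really is RAW before uploading:
qemu-img info cirros-0.3.4-x86_64-disk.raw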
3. Add the image to Glance:
glance image-create --name "Cirros 0.3.4" --disk-format raw --container-format bare --visibility public --file cirros-0.3.4-x86_64-disk.raw
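The uploaded image should now exist as an RBD image in the images pool; a quick cross-check:
sudo rbd ls images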
Cinder verification
1. Create a Cinder volume:
cinder create --display-name="test" 1
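Wait for the volume status to become available before checking Ceph; the standard client shows it with:
cinder list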
2. List the Cinder volume in Ceph:
$ sudo rbd ls volumes
volume-d251bb74-5c5c-4c40-a15b-2a4a17bbed8b
$ sudo rbd info volumes/volume-d251bb74-5c5c-4c40-a15b-2a4a17bbed8b
Nova verification
1. Boot an ephemeral VM instance using the Cirros image added in the Glance step:
nova boot --flavor m1.small --nic net-id=4683d03d-30fc-4dd1-9b5f-eccd87340e70 --image='Cirros 0.3.4' cephvm
2. Wait for the VM to become ACTIVE:
nova list
3. List the images in the Ceph vms pool. The instance disk should now be stored in Ceph:
sudo rbd -p vms ls
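With the RBD images backend, each instance's root disk is typically named <instance-uuid>_disk, so the entry can be matched against the instance ID shown by nova list, e.g.:
sudo rbd -p vms ls | grep _disk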