Installing OpenStack Ocata on CentOS 7.3
1. Environment
Following the official guide, deploying OpenStack on CentOS 7.3 requires only a controller node and a compute node; the network node is installed together with the controller node. Problems encountered during installation, and their solutions, are noted inline (they were highlighted in red in the original).
(1) Network
Both the controller node and the compute node need two network interfaces: one for the management network and one for the external network. The interfaces are configured as follows:
Controller node: management network, IP address 10.0.0.11
netmask 255.255.255.0
default gateway 10.0.0.1
external network
netmask 255.255.255.0
default gateway 10.190.16.1
Compute node: management network, IP address 10.0.0.31
netmask 255.255.255.0
default gateway 10.0.0.1
external network, IP address 10.190.16.41
netmask 255.255.255.0
default gateway 10.190.16.1
(2) Network Time Protocol (NTP)
Controller node:
1. Edit /etc/chrony.conf and add, change, or remove the following key as needed for your environment:
server NTP_SERVER iburst
Replace NTP_SERVER with the hostname or IP address of a suitable NTP server. The configuration supports multiple server keys.
2. To allow other nodes to connect to the chrony daemon on the controller node, add the following key to /etc/chrony.conf:
allow 10.0.0.0/24
3. Start the NTP service and configure it to start when the system boots:
# systemctl enable chronyd.service
# systemctl start chronyd.service
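Put together, the controller-side edits to /etc/chrony.conf might look like the following fragment (the pool hostnames are illustrative placeholders, not values from this guide):

```
# /etc/chrony.conf on the controller (illustrative)
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
# let the other nodes on the management network sync from this host
allow 10.0.0.0/24
```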
Compute node:
# yum install chrony
1. Edit /etc/chrony.conf and comment out all server keys except the one for controller. Change it to reference the controller node:
server controller iburst
2. Start the NTP service and configure it to start when the system boots:
# systemctl enable chronyd.service
# systemctl start chronyd.service
Verify operation:
1. Run this command on the controller node:
chronyc sources
The Name/IP address column should show the hostname or IP address of the NTP server. The S column should show * in front of the upstream server that NTP is currently synchronized with.
2. Run the same command on all other nodes:
chronyc sources
The Name/IP address column should show the hostname of the controller node.
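To script this check, one can grep the `chronyc sources` output for the `^*` marker that chrony prints in front of the currently synchronized upstream. A minimal sketch (the `ntp_synced` helper name is ours, not part of chrony):

```shell
# ntp_synced: given the output of `chronyc sources`, report whether any
# upstream line is marked with '*' (currently synchronized).
ntp_synced() {
  if printf '%s\n' "$1" | grep -q '^\^\*'; then
    echo synced
  else
    echo "not synced"
  fi
}

# Example: ntp_synced "$(chronyc sources)"
```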
(3) OpenStack packages (all nodes)
1. Install the repository:
# yum install centos-release-openstack-ocata
2. Install the OpenStack client:
# yum install python-openstackclient
3. RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage security policies for OpenStack services:
# yum install openstack-selinux
(4) SQL database (controller node)
1. Install the packages:
# yum install mariadb mariadb-server python2-PyMySQL
2. Create and edit /etc/my.cnf.d/openstack.cnf:
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
3. Start the database service and configure it to start when the system boots:
# systemctl enable mariadb.service
# systemctl start mariadb.service
4. Secure the database service by running the mysql_secure_installation script. In particular, set a suitable password for the database root account:
# mysql_secure_installation
(5) Message queue (controller node)
1. Install the package:
# yum install rabbitmq-server
2. Start the message queue service and configure it to start when the system boots:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
3. Add the openstack user (replace RABBIT_PASS with a suitable password):
# rabbitmqctl add_user openstack RABBIT_PASS
4. Grant the openstack user configure, write, and read access:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
(6) Memcached (controller node)
1. Install the packages:
# yum install memcached python-memcached
2. Configure /etc/sysconfig/memcached.
Do not append controller to the end of the OPTIONS line here; in this deployment, appending it made the web pages later in the guide unreachable.
3. Start the Memcached service and configure it to start when the system boots:
# systemctl enable memcached.service
# systemctl start memcached.service
2. Identity service (keystone)
(1) Install and configure
1. Create the database:
mysql -u root -p
Create the keystone database:
CREATE DATABASE keystone;
Grant proper access to the keystone database (replace KEYSTONE_DBPASS with a suitable password):
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
2. Install the packages:
# yum install openstack-keystone httpd mod_wsgi
3. Configure /etc/keystone/keystone.conf (the options below can simply be added at the top of the file):
[database]
# ...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
# ...
provider = fernet
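The connection value above follows the SQLAlchemy URI pattern `mysql+pymysql://USER:PASSWORD@HOST/DATABASE`, which recurs in the [database] section of every service in this guide (glance, nova, neutron). A throwaway helper, just to make the structure explicit (`db_uri` is our name, not an OpenStack tool):

```shell
# db_uri USER PASSWORD HOST DATABASE
# Prints the SQLAlchemy connection URI used in the [database] sections.
db_uri() {
  printf 'mysql+pymysql://%s:%s@%s/%s\n' "$1" "$2" "$3" "$4"
}

db_uri keystone KEYSTONE_DBPASS controller keystone
# prints: mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
```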
4. Populate the Identity service database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
5. Initialize the Fernet key repositories:
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
6. Bootstrap the Identity service:
# keystone-manage bootstrap --bootstrap-password admin \
--bootstrap-admin-url http://controller:35357/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
The official guide uses the placeholder ADMIN_PASS here; this command sets the password literally to admin, creating an admin user whose password is admin.
7. Configure the Apache HTTP server. Edit /etc/httpd/conf/httpd.conf and set:
ServerName controller
Create a symbolic link:
# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
8. Start the Apache HTTP service and configure it to start when the system boots:
# systemctl enable httpd.service
# systemctl start httpd.service
Enter the external network IP in a browser to check that the Apache server is reachable. If it is not, the firewall may be blocking port 80; open port 80 to fix this.
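With firewalld, port 80 can be opened with `firewall-cmd --permanent --add-port=80/tcp` followed by `firewall-cmd --reload` (run as root; this assumes firewalld is the active firewall). To confirm that a port actually answers, a small probe using bash's built-in /dev/tcp can help (`port_open` is our helper name, not a system tool):

```shell
# port_open HOST PORT
# Prints "open" if a TCP connection to HOST:PORT succeeds, "closed" otherwise.
port_open() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# Example: port_open controller 80
```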
9. Set environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
OS_PASSWORD is set here to the admin password chosen in step 6 (the official guide uses the placeholder ADMIN_PASS).
(2) Create a domain, projects, users, and roles
1. Create the service project:
$ openstack project create --domain default \
  --description "Service Project" service
2. Create the demo project:
$ openstack project create --domain default \
  --description "Demo Project" demo
3. Create the demo user:
$ openstack user create --domain default \
  --password-prompt demo
4. Create the user role:
$ openstack role create user
5. Add the user role to the demo project and user:
$ openstack role add --project demo --user demo user
Later project, user, and role creation follows the same pattern and is not explained again.
(3) Verify operation
1. For security reasons, disable the temporary authentication token mechanism:
Edit /etc/keystone/keystone-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
2. Unset the OS_TOKEN and OS_URL environment variables:
$ unset OS_TOKEN OS_URL
3. As the admin user, request an authentication token:
$ openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
As the demo user, request an authentication token:
$ openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
(4) Create OpenStack client environment scripts
1. Edit the file admin-openrc and add the following content (replace ADMIN_PASS with the password set in step 6 of section (1)):
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2. Edit the file demo-openrc and add the following content (replace DEMO_PASS with the demo user's password):
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
3. Use the script:
$ . admin-openrc
3. Image service (glance)
(1) Create the database
1. mysql -u root -p
2. CREATE DATABASE glance;
3. GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
(2) Create the service credentials and Image service API endpoints
1. $ . admin-openrc
$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
$ openstack service create --name glance \
  --description "OpenStack Image" image
2. Create the Image service API endpoints:
$ openstack endpoint create --region RegionOne \
  image public http://controller:9292
$ openstack endpoint create --region RegionOne \
  image internal http://controller:9292
$ openstack endpoint create --region RegionOne \
  image admin http://controller:9292
(3) Install and configure
1. Install the package:
# yum install openstack-glance
2. Configure /etc/glance/glance-api.conf (the options below can simply be added at the top of the file):
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
# Replace GLANCE_PASS with the glance user's password
[paste_deploy]
# ...
flavor = keystone
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
3. Configure /etc/glance/glance-registry.conf:
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
# Replace GLANCE_PASS with the glance user's password
[paste_deploy]
# ...
flavor = keystone
4. Populate the Image service database:
# su -s /bin/sh -c "glance-manage db_sync" glance
5. To finish the installation, start the Image services and configure them to start when the system boots:
# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
(4) Verify operation
1. $ . admin-openrc
2. Download the source image:
$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
3. Upload the image to the Image service using the QCOW2 disk format and bare container format, with public visibility so that all projects can access it:
$ openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
4. Confirm the upload and validate the image attributes:
$ openstack image list
4. Compute service (nova)
Install the controller node first, following the official guide step by step.
(1) Create the databases
1. mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
2. Create the service credentials and Compute service API endpoints:
$ . admin-openrc
$ openstack user create --domain default --password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova --description "OpenStack Compute" compute
$ openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
$ openstack user create --domain default --password-prompt placement
$ openstack role add --project service --user placement admin
$ openstack service create --name placement --description "Placement API" placement
$ openstack endpoint create --region RegionOne placement public \
  http://controller:8778
$ openstack endpoint create --region RegionOne placement internal \
  http://controller:8778
$ openstack endpoint create --region RegionOne placement admin \
  http://controller:8778
(2) Install and configure the controller node
1. Install the packages:
# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api
2. Configure /etc/nova/nova.conf, adding the following to the existing configuration:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = placement
Also configure /etc/httpd/conf.d/00-nova-placement-api.conf to enable access to the Placement API, adding the following (this works around a packaging bug):
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Restart the httpd service so the change takes effect:
# systemctl restart httpd.service
3. Populate the nova-api database:
# su -s /bin/sh -c "nova-manage api_db sync" nova
4. Register the cell0 database:
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
5. Create the cell1 cell:
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
6. Populate the nova database:
# su -s /bin/sh -c "nova-manage db sync" nova
7. Verify that cell0 and cell1 are registered correctly:
# nova-manage cell_v2 list_cells
8. Finish the installation:
# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
(3) Install and configure the compute node
1. Install the package:
# yum install openstack-nova-compute
2. Configure /etc/nova/nova.conf:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 10.0.0.31
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://controller:6080/vnc_auto.html
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = placement
3. Determine whether the compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this returns 0, the node does not support hardware acceleration; add virt_type = qemu to the [libvirt] section of /etc/nova/nova.conf.
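The decision above can be sketched as a small function that inspects the CPU flags and picks a libvirt virt_type (`pick_virt_type` is our illustrative name, not a nova tool):

```shell
# pick_virt_type: given the contents of /proc/cpuinfo, print "kvm" when the
# vmx (Intel VT-x) or svm (AMD-V) flag is present, otherwise "qemu".
pick_virt_type() {
  if printf '%s\n' "$1" | grep -Eq '(vmx|svm)'; then
    echo kvm
  else
    echo qemu
  fi
}

# Example: pick_virt_type "$(cat /proc/cpuinfo)"
```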
4. Finish the installation:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
The service may fail to start, hanging for five or six minutes with no response; the logs showed that port 5672 was unreachable.
Fix: enable iptables on the rabbitmq server host and add a rule that opens the rabbitmq port (5672),
allowing other hosts to reach the rabbitmq server.
# service iptables save      # save the rules
# service iptables restart   # restart iptables so the rules take effect
(4) Add the compute node to the cell database (run on the controller node)
If the Placement API does not respond:
Stop firewalld: systemctl stop firewalld.service
iptables -F
iptables -L -n -v
iptables -I INPUT -p tcp --dport 8778 -j ACCEPT
iptables -I OUTPUT -p tcp --dport 8778 -j ACCEPT
iptables -L -n -v
/etc/init.d/httpd status
iptables -A OUTPUT -p tcp --sport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -L
service httpd restart (this restart is the key step)
Then restart all compute-related services.
1. Confirm that the compute node host is in the database:
$ . admin-openrc
$ openstack hypervisor list
2. Discover compute node hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
(5) Verify operation
$ . admin-openrc
5. Networking service (neutron)
(1) Create the database
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
(2) Create the neutron user, service, and API endpoints
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron \
  --description "OpenStack Networking" network
$ openstack endpoint create --region RegionOne \
  network public http://controller:9696
$ openstack endpoint create --region RegionOne \
  network internal http://controller:9696
$ openstack endpoint create --region RegionOne \
  network admin http://controller:9696
(3) Install and configure networking on the controller node
Self-service (private) networks are chosen here; instances can still connect to the public network.
1. Install the packages:
# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-openvswitch ebtables
(Open vSwitch is used here; the official guide uses Linux bridge.)
2. Configure /etc/neutron/neutron.conf:
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
3. Configure /etc/neutron/plugins/ml2/openvswitch_agent.ini:
In the [agent] section, add:
tunnel_types = vxlan
l2_population = True
In the [ovs] section, add:
local_ip = 10.0.0.11
bridge_mappings = external:br-ex
If the Open vSwitch services refuse to start afterwards, the following session resets the OVS database (conf.db):
ps aux | grep openvswitch
cd /etc/openvswitch/
ll
mv conf.db conf.db.bk
/bin/systemctl stop openvswitch.service
/bin/systemctl stop ovsdb-server
ps aux | grep openvswitch
kill -9 35506    # kill all remaining OVS-related processes (use the PIDs from ps)
mv conf.db conf.db.bk
/bin/systemctl start ovsdb-server openvswitch.service    # now starts cleanly
4. Configure /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
# ...
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
# ...
flat_networks = external
# Whether default, provider, or external is the right value here is uncertain; a later network problem seemed to be fixed by setting it to external.
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true
5. Configure /etc/neutron/l3_agent.ini:
interface_driver = openvswitch
6. Configure /etc/neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
7. Configure /etc/neutron/metadata_agent.ini:
[DEFAULT]
# ...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADAT