Detailed notes on setting up OpenStack on Ubuntu 16.04
阿新 • Published 2019-01-06
(Installation manual) (Chinese documentation) (Glossary). This article comes from the community WeChat official account 【虛擬化雲端計算】 (Virtualization and Cloud Computing) maintained by the author.
Preparing the environment

First set the host names of the controller node and the compute node to controller and compute.

Update the package sources on all nodes:
# apt install software-properties-common
# add-apt-repository cloud-archive:newton
# apt update && apt dist-upgrade

The following steps are performed on the controller node.

1. Install the OpenStack client
# apt install python-openstackclient

2. Install the SQL database
# apt install mariadb-server python-pymysql
# vi /etc/mysql/mariadb.conf.d/99-openstack.cnf
(bind-address is the controller's management IP, 192.168.5.1; the full file contents are listed at the end of this article)
# service mysql restart
# mysql_secure_installation (sets the SQL database root password)

3. Install the message queue
# apt install rabbitmq-server
# rabbitmqctl add_user openstack RABBIT_PASS (sets the RABBIT_PASS password)
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

4. Install Memcached
# apt install memcached python-memcache
# vi /etc/memcached.conf
(change the -l line to the controller's management IP, 192.168.5.1; see the end of this article)
# service memcached restart
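Before moving on to Keystone, it can save debugging time to confirm that the three base services are really up and reachable. A minimal sanity-check sketch, assuming the controller management IP 192.168.5.1 used above:
# mysql -u root -p -e "SHOW VARIABLES LIKE 'bind_address'"   (should print 192.168.5.1)
# rabbitmqctl list_users                                      (the openstack user should be listed)
# rabbitmqctl list_permissions                                (openstack should have ".*" ".*" ".*")
# ss -lnt | grep -E ':3306|:5672|:11211'                      (MariaDB, RabbitMQ and memcached listening)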
Keystone

Keystone's main roles: manage users and their permissions; manage the list of OpenStack services and provide their API endpoints; every other OpenStack component has to create a user in Keystone and register its endpoints there.

1. Create the keystone database:
$ mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
(sets the KEYSTONE_DBPASS password)

2. Install and configure keystone
# apt install keystone
# vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
provider = fernet
(the supported providers are fernet, pkiz, pki and uuid; fernet is used here)

3. Populate the keystone database
# su -s /bin/sh -c "keystone-manage db_sync" keystone

4. Initialize the Fernet keys
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

5. Bootstrap the identity service
# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:35357/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
(sets the ADMIN_PASS password)

6. Configure httpd
# vi /etc/apache2/apache2.conf
Set ServerName to controller (the controller node's host name).
# service apache2 restart

7. Create domains, projects, users and roles
# rm -f /var/lib/keystone/keystone.db
$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=default
$ export OS_PROJECT_DOMAIN_NAME=default
$ export OS_AUTH_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
$ openstack project create --domain default --description "Service Project" service
$ openstack project create --domain default --description "Demo Project" demo
$ openstack user create --domain default --password-prompt demo (sets the DEMO_PASS password)
$ openstack role create user
$ openstack role add --project demo --user demo user

8. Clean up after bootstrapping
1) For security, disable the temporary authorization token mechanism:
# vi /etc/keystone/keystone-paste.ini
Remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api] and [pipeline:api_v3] sections.
2) Clear the authentication environment variables:
$ unset OS_AUTH_URL OS_PASSWORD

9. Verify the installation
1) Request tokens as the admin and demo users:
$ openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue
$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue
2) Request tokens using the client environment scripts (the contents of admin-openrc and demo-openrc are listed at the end of this article):
$ . admin-openrc
$ openstack token issue
$ . demo-openrc
$ openstack token issue
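If the token requests succeed, it is also worth a quick look at what the bootstrap actually registered. A small optional sketch (none of these checks are required by the walkthrough):
$ . admin-openrc
$ openstack user list        (admin and demo should both exist)
$ openstack project list     (the admin, service and demo projects)
$ openstack service list     (only the identity service at this point)
$ openstack endpoint list    (the three keystone endpoints created by bootstrap)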
Glance

1. Create the glance database:
$ mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
(sets the GLANCE_DBPASS password)

2. Switch to admin credentials
$ . admin-openrc

3. Create the user, service and API endpoints
$ openstack user create --domain default --password-prompt glance (sets the GLANCE_PASS password)
$ openstack role add --project service --user glance admin
$ openstack service create --name glance --description "OpenStack Image" image
$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292

4. Install and configure glance
# apt install glance
# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
(controller is the controller node's host name)
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
(remove everything else in [keystone_authtoken])
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
# vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
(remove everything else in [keystone_authtoken])
[paste_deploy]
flavor = keystone

5. Populate the glance database
# su -s /bin/sh -c "glance-manage db_sync" glance

6. Start the glance services
# service glance-registry restart
# service glance-api restart

7. Create an image
$ . admin-openrc
$ openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
$ openstack image list
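Step 7 assumes cirros-0.3.4-x86_64-disk.img is already in the current directory. If it is not, a sketch of fetching and then double-checking the upload (the URL is the usual upstream CirrOS download location and may need adjusting):
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ openstack image show cirros      (status should be active)
# ls /var/lib/glance/images/       (the uploaded file appears under its image ID)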
Nova-Controller

1. Create the nova databases:
$ mysql -u root -p
mysql> CREATE DATABASE nova_api;
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

2. Switch to admin credentials
$ . admin-openrc

3. Create the user, service and API endpoints
$ openstack user create --domain default --password-prompt nova (sets the NOVA_PASS password)
$ openstack role add --project service --user nova admin
$ openstack service create --name nova --description "OpenStack Compute" compute
$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

4. Install and configure nova
# apt install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler
# vi /etc/nova/nova.conf
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
(remove everything else in [keystone_authtoken])
[DEFAULT]
my_ip = 192.168.5.1 (the controller node's management IP)
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp

5. Populate the nova databases
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova

6. Start the nova services
# service nova-api restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart

Nova-Compute

# apt install nova-compute
# vi /etc/nova/nova.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
(remove everything else in [keystone_authtoken])
[DEFAULT]
my_ip = 192.168.5.13 (the compute node's management IP)
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
# service nova-compute restart
(On the controller node, run $ openstack compute service list with admin credentials and check that this compute node shows up.)
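One check the walkthrough skips is whether the compute node supports hardware virtualization. A hedged sketch; the /etc/nova/nova-compute.conf path and the [libvirt] virt_type option reflect how the Ubuntu nova-compute package is usually laid out, so adjust if your packaging differs:
# egrep -c '(vmx|svm)' /proc/cpuinfo      (0 means no hardware acceleration is available)
# vi /etc/nova/nova-compute.conf          (only needed if the count above is 0)
[libvirt]
virt_type = qemu
# service nova-compute restart
Then, on the controller:
$ . admin-openrc
$ openstack compute service list          (nova-consoleauth, nova-scheduler, nova-conductor and nova-compute should all be up)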
Neutron-Controller

1. Create the neutron database:
$ mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

2. Switch to admin credentials
$ . admin-openrc

3. Create the user, service and API endpoints
$ openstack user create --domain default --password-prompt neutron (sets the NEUTRON_PASS password)
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network
$ openstack endpoint create --region RegionOne network public http://controller:9696
$ openstack endpoint create --region RegionOne network internal http://controller:9696
$ openstack endpoint create --region RegionOne network admin http://controller:9696

4. Networking option 1: provider networks (this is the only step that differs from option 2, self-service networks)
# apt install neutron-server neutron-plugin-ml2 \
  neutron-linuxbridge-agent neutron-dhcp-agent \
  neutron-metadata-agent
# vi /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins =
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
[ml2]
tenant_network_types =
[ml2]
mechanism_drivers = linuxbridge
[ml2]
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = True
# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:enp2s0 (the provider physical network interface)
[vxlan]
enable_vxlan = False
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

5. Configure the metadata agent
# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

6. Configure the compute service to use networking
# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET

7. Populate the neutron database
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

8. Start the neutron services
# service nova-api restart
# service neutron-server restart
# service neutron-linuxbridge-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
(For self-service networking, also start the following service:)
# service neutron-l3-agent restart
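A quick way to see whether the neutron server and its agents came up cleanly, as an optional check on the controller:
$ . admin-openrc
$ openstack extension list --network   (the loaded extensions; the legacy "neutron ext-list" shows the same thing)
$ openstack network agent list         (the Linux bridge, DHCP and metadata agents should all report as alive)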
Neutron-Compute

1. Install and configure the common components
# apt install neutron-linuxbridge-agent
# vi /etc/neutron/neutron.conf
[database]
(comment out connection here; compute nodes do not access the database directly)
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
(remove everything else in [keystone_authtoken])

2. Networking option 1: provider networks (this is the only step that differs from option 2, self-service networks)
# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:enp2s0 (the provider physical network interface)
[vxlan]
enable_vxlan = False
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

3. Configure the compute service to use networking
# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

4. Restart the services
# service nova-compute restart
# service neutron-linuxbridge-agent restart
(On the controller node, with admin credentials, run $ neutron ext-list to list the loaded extensions and verify that the neutron-server process started correctly.)

With Keystone, Glance, Nova and Neutron in place, your OpenStack environment now contains the core components needed to launch a basic instance. You can jump ahead to the launch-instance section or add more OpenStack services to the environment. If Ceph is used as the storage backend, install Cinder before creating instances; otherwise you can already create instances backed by the local file system.

Cinder-Controller

1. Create the cinder database:
$ mysql -u root -p
mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

2. Switch to admin credentials
$ . admin-openrc

3. Create the user, services and API endpoints
$ openstack user create --domain default --password-prompt cinder (sets the CINDER_PASS password)
$ openstack role add --project service --user cinder admin
$ openstack service create --name cinder \
  --description "OpenStack Block Storage" volume
$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
$ openstack endpoint create --region RegionOne \
  volume public http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume internal http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume admin http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

4. Install and configure cinder
# apt install cinder-api cinder-scheduler
# vi /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
(remove everything else in [keystone_authtoken])
[DEFAULT]
my_ip = 192.168.5.1 (the controller node's management IP)
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
5. Populate the cinder database
# su -s /bin/sh -c "cinder-manage db sync" cinder

6. Configure nova on the controller to use block storage
# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

7. Restart the nova and cinder services
# service nova-api restart
# service cinder-scheduler restart
# service cinder-api restart

Cinder-Storage

1. Create a volume group named cinder-volumes
(omitted here; a sketch follows at the end of this section)

2. Install and configure cinder-volume
# apt install cinder-volume
# vi /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[DEFAULT]
my_ip = 192.168.5.13 (the storage node's management IP)
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(The settings above are identical to the controller configuration except for the IP; a storage node additionally needs the following.)
[lvm1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
[DEFAULT]
enabled_backends = lvm1 (use a different name on each of the other nodes; can different nodes use the same name?)
[DEFAULT]
glance_api_servers = http://controller:9292

3. Start the cinder-volume service
# service tgt restart
# service cinder-volume restart

4. Check the volume service status
# cinder service-list (or: openstack volume service list)
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host             | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller       | nova | enabled | up    | 2016-12-26T02:14:51.000000 | -               |
| cinder-volume    | <node1>@<backend>| nova | enabled | up    | 2016-12-26T02:14:54.000000 | -               |
| cinder-volume    | <node2>@<backend>| nova | enabled | up    | 2016-12-26T02:14:50.000000 | -               |
| cinder-volume    | <node3>@<backend>| nova | enabled | up    | 2016-12-26T02:14:52.000000 | -               |
+------------------+------------------+------+---------+-------+----------------------------+-----------------+
(each cinder-volume entry is listed as <storage-node-host>@<backend-name>)

5. Notes on the configuration
1) Cinder runs one cinder-volume service per backend.
2) Setting storage_availability_zone=az1 in cinder.conf assigns the cinder-volume host to that availability zone. Users can choose an AZ when creating a volume, and together with the cinder-scheduler AvailabilityZoneFilter the volume will be created in the requested AZ. The default zone is nova.
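Step 1 of the Cinder-Storage section is left out above. On a storage node with a spare block device it usually amounts to the following sketch, assuming /dev/sdb is the unused disk (substitute your own device):
# apt install lvm2                   (if LVM is not installed yet)
# pvcreate /dev/sdb                  (create the LVM physical volume)
# vgcreate cinder-volumes /dev/sdb   (create the cinder-volumes volume group expected by the lvm1 backend)
# vgs                                (verify that the volume group exists)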
Create an instance

1. Create the virtual network (option 1, provider network; on the controller node)
$ . admin-openrc
$ openstack network create --share \
  --provider-physical-network provider \
  --provider-network-type flat provider
$ openstack subnet create --network provider \
  --allocation-pool start=192.168.5.10,end=192.168.5.100 \
  --dns-nameserver 219.146.1.66 --gateway 192.168.5.1 \
  --subnet-range 192.168.5.0/24 provider

2. Create the m1.nano flavor
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

3. Generate a key pair
$ . demo-openrc
$ ssh-keygen -q -N ""
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
$ openstack keypair list

4. Add security group rules
$ openstack security group rule create --proto icmp default (allow ICMP (ping) in the default security group)
$ openstack security group rule create --proto tcp --dst-port 22 default (allow secure shell (SSH) in the default security group)

5. Launch an instance (option-1 network)
1) Check that the environment is ready:
$ . demo-openrc
$ openstack flavor list
$ openstack image list
$ openstack network list
$ openstack security group list
2) Launch an instance from the m1.nano flavor:
$ openstack server create --flavor m1.nano --image cirros \
  --nic net-id=56708ee8-b6c7-4112-b3d1-231bd8db659f --security-group default \
  --key-name mykey instance-a
(the net-id comes from openstack network list; instance-a is whatever name you choose for the instance)
3) List the existing instances:
$ openstack server list
(When the build completes successfully, the status changes from BUILD to ACTIVE, and a matching qemu process can be found on the compute node.)

6. Access an instance
1) Get the console URL of an instance (for example the instance-a created above):
$ openstack console url show 5b08017b-00d4-4476-9380-4f5b6165c6d7
+-------+---------------------------------------------------------------------------------+
| Field | Value                                                                           |
+-------+---------------------------------------------------------------------------------+
| type  | novnc                                                                           |
| url   | http://controller:6080/vnc_auto.html?token=6643713d-f4c8-411c-ac9e-2c5b5a419935 |
+-------+---------------------------------------------------------------------------------+
(5b08017b-00d4-4476-9380-4f5b6165c6d7 is the ID of the instance, taken from openstack server list)
2) Open the instance console
(Enter http://controller:6080/vnc_auto.html?token=6643713d-f4c8-411c-ac9e-2c5b5a419935 in a browser to reach the VM, provided the controller host name resolves; otherwise replace it with the IP.)
3) Access the instance over SSH
$ ssh cirros@<instance-ip> (the instance's address on the provider network, shown by openstack server list)
$ uname -a
Linux instance-a 3.2.0-80-virtual #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015 x86_64 GNU/Linux
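If Cinder was set up as described earlier, a natural follow-up is to create a volume and attach it to this instance. A brief sketch; the volume name and size are arbitrary examples:
$ . demo-openrc
$ openstack volume create --size 1 volume1        (create a 1 GB volume on the LVM backend)
$ openstack volume list                           (wait until the status is available)
$ openstack server add volume instance-a volume1  (attach it to the instance created above)
$ openstack volume list                           (the volume should now show in-use)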
Swift-Controller

1. Create the swift database:
Not needed: Swift does not use an SQL database for its account, container and object data.

2. Switch to admin credentials
$ . admin-openrc

3. Create the user, service and API endpoint
$ openstack user create --domain default --password-prompt swift (sets the SWIFT_PASS password)
$ openstack role add --project service --user swift admin
$ openstack service create --name swift --description "OpenStack Object Storage" object-store
$ openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1

4. Install and configure the swift proxy
# apt-get install swift swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
# mkdir /etc/swift
# vi /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True
(remove everything else in [filter:authtoken])
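This Swift section stops at the proxy configuration. The upstream Newton object-storage guide also registers public and internal endpoints and points [filter:authtoken] at Keystone; a hedged sketch of those two pieces, to be checked against the official guide before relying on it:
$ openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s
$ openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
In /etc/swift/proxy-server.conf, [filter:authtoken] additionally needs:
auth_uri = http://controller:5000
auth_url = http://controller:35357
(Ring creation, swift.conf and the storage-node setup are still required before the proxy is usable.)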
Appendix: configuration files and quick commands referenced above

Contents of /etc/mysql/mariadb.conf.d/99-openstack.cnf (referenced in the SQL step):
[mysqld]
bind-address = 192.168.5.1 (the controller node's management IP)
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Contents of /etc/memcached.conf (only the changed line):
-l 192.168.5.1 (the controller node's management IP)
admin-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
demo-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create an image:
. admin-openrc
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.raw --disk-format raw --container-format bare --public
Create a flavor:
. admin-openrc
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Create networks:
Create the provider network
# . admin-openrc
# openstack network create --share --provider-physical-network provider --provider-network-type flat provider
# openstack subnet create --network provider --allocation-pool start=192.168.5.10,end=192.168.5.100 --dns-nameserver 219.146.1.66 --gateway 192.168.5.254 --subnet-range 192.168.5.0/24 provider
# neutron net-update provider --router:external
Create the self-service network
# openstack network create selfservice
# openstack subnet create --network selfservice --dns-nameserver 219.146.1.66 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice
Create a router
# openstack router create router
Add the self-service subnet to the router
# neutron router-interface-add router selfservice
Set the router's gateway to the provider network
# neutron router-gateway-set router provider
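The last three commands above use the legacy neutron client. Roughly equivalent openstack-client commands, as a sketch to verify against your client version:
# openstack network set --external provider                 (instead of neutron net-update provider --router:external)
# openstack router add subnet router selfservice            (instead of neutron router-interface-add)
# openstack router set --external-gateway provider router   (instead of neutron router-gateway-set)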
Create a VM:
nova boot --flavor m1.nano --image cirros --nic net-id=2a1132f6-d3e8-4842-a200-a17dab5be38c instance-a