
OpenStack installation and configuration — compute node setup


What mainly needs to be configured on a compute node are the nova and neutron agents; the controller node depends on the compute nodes' cooperation when scheduling and configuring resources. A compute node needs relatively little configuration, but in a real production environment the number of compute nodes can be very large, in which case an automation tool such as Ansible or Puppet becomes necessary. Without further ado, let's get straight into the configuration.
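To illustrate the automation idea, here is a minimal sketch of generating an Ansible-style inventory for many compute nodes. The `compute1..computeN` hostnames are an assumption based on the naming used in this article, and the `ansible` command shown in the comment is only an example of how such an inventory would be used.

```shell
# Sketch: generate an ansible-style inventory group for N compute nodes
# (hostnames compute1..computeN are an assumption, matching this article).
gen_compute_inventory() {
  local n=$1 i
  echo '[compute]'
  for i in $(seq 1 "$n"); do
    printf 'compute%d\n' "$i"
  done
}

# With such an inventory you could run the same step on every node, e.g.:
#   ansible -i inventory compute -m yum -a 'name=openstack-nova-compute state=present'
gen_compute_inventory 2
```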


Basic compute node setup

[root@compute1 ~]# lscpu

Architecture: x86_64

CPU op-mode(s): 32-bit, 64-bit

Byte Order: Little Endian

CPU(s): 8

On-line CPU(s) list: 0-7

Thread(s) per core: 1

Core(s) per socket: 1

Socket(s): 8

NUMA node(s): 1

Vendor ID: GenuineIntel

CPU family: 6

Model: 44

Model name: Westmere E56xx/L56xx/X56xx (Nehalem-C)

Stepping: 1

CPU MHz: 2400.084

BogoMIPS: 4800.16

Virtualization: VT-x

Hypervisor vendor: KVM

Virtualization type: full

L1d cache: 32K

L1i cache: 32K

L2 cache: 4096K

NUMA node0 CPU(s): 0-7


[root@compute1 ~]# free -h

total used free shared buff/cache available

Mem: 15G 142M 15G 8.3M 172M 15G

Swap: 0B 0B 0B

[root@compute1 ~]# lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

sr0 11:0 1 1024M 0 rom

vda 252:0 0 400G 0 disk

├─vda1 252:1 0 500M 0 part /boot

└─vda2 252:2 0 399.5G 0 part

├─centos-root 253:0 0 50G 0 lvm /

├─centos-swap 253:1 0 3.9G 0 lvm

└─centos-data 253:2 0 345.6G 0 lvm /data


[root@compute1 ~]# ifconfig

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

inet 192.168.10.31 netmask 255.255.255.0 broadcast 192.168.10.255

inet6 fe80::5054:ff:fe18:bb1b prefixlen 64 scopeid 0x20<link>

ether 52:54:00:18:bb:1b txqueuelen 1000 (Ethernet)

RX packets 16842 bytes 1460696 (1.3 MiB)

RX errors 0 dropped 1416 overruns 0 frame 0

TX packets 747 bytes 199340 (194.6 KiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

inet 10.0.0.31 netmask 255.255.0.0 broadcast 10.0.255.255

inet6 fe80::5054:ff:fe28:e0a7 prefixlen 64 scopeid 0x20<link>

ether 52:54:00:28:e0:a7 txqueuelen 1000 (Ethernet)

RX packets 16213 bytes 1360633 (1.2 MiB)

RX errors 0 dropped 1402 overruns 0 frame 0

TX packets 23 bytes 1562 (1.5 KiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

inet 111.40.215.9 netmask 255.255.255.240 broadcast 111.40.215.15

inet6 fe80::5054:ff:fe28:e07a prefixlen 64 scopeid 0x20<link>

ether 52:54:00:28:e0:7a txqueuelen 1000 (Ethernet)

RX packets 40 bytes 2895 (2.8 KiB)

RX errors 0 dropped 0 overruns 0 frame 0

TX packets 24 bytes 1900 (1.8 KiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536

inet 127.0.0.1 netmask 255.0.0.0

inet6 ::1 prefixlen 128 scopeid 0x10<host>

loop txqueuelen 0 (Local Loopback)

RX packets 841 bytes 44167 (43.1 KiB)

RX errors 0 dropped 0 overruns 0 frame 0

TX packets 841 bytes 44167 (43.1 KiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


[root@compute1 ~]# getenforce

Disabled

[root@compute1 ~]# iptables -vnL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)

pkts bytes target prot opt in out source destination


Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)

pkts bytes target prot opt in out source destination


Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)

pkts bytes target prot opt in out source destination

[root@compute1 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.10 controller

192.168.10.20 block

192.168.10.31 compute1

192.168.10.32 compute2

[root@compute1 ~]#
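Since all inter-node traffic in this setup relies on these static `/etc/hosts` entries, a quick scripted sanity check can catch typos before services start failing. The helper below (hypothetical, not part of OpenStack) simply mimics what the resolver does for a hosts-file entry.

```shell
# Sketch: look up a hostname in an /etc/hosts-style file, skipping comments.
hosts_lookup() {
  # hosts_lookup NAME FILE -> prints the IP for NAME, if found.
  awk -v h="$1" '!/^#/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' "$2"
}

hosts_lookup controller /etc/hosts   # on the node above this prints 192.168.10.10
```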


Configure the time synchronization service

[root@compute1 ~]# yum install -y chrony

[root@compute1 ~]# vim /etc/chrony.conf

[root@compute1 ~]# grep -v ^# /etc/chrony.conf | tr -s [[:space:]]

server controller iburst

stratumweight 0

driftfile /var/lib/chrony/drift

rtcsync

makestep 10 3

bindcmdaddress 127.0.0.1

bindcmdaddress ::1

keyfile /etc/chrony.keys

commandkey 1

generatecommandkey

noclientlog

logchange 0.5

logdir /var/log/chrony

[root@compute1 ~]# systemctl enable chronyd.service

[root@compute1 ~]# systemctl start chronyd.service

[root@compute1 ~]# chronyc sources

210 Number of sources = 1

MS Name/IP address Stratum Poll Reach LastRx Last sample

===============================================================================

^* controller 3 6 17 52 -15us[ -126us] +/- 138ms

[root@compute1 ~]#
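In the `chronyc sources` output above, the leading `^*` marks the source chrony has actually selected, so grepping for it makes a quick scripted health check. This is a convention of chronyc's text output, not a stable API; the helper below is just a sketch.

```shell
# Sketch: succeed when a "chronyc sources" listing on stdin contains a
# selected (^*) source, i.e. the node is synchronized to a server.
sources_synced() {
  grep -q '^\^\*'
}

# Usage: chronyc sources | sources_synced && echo "time is in sync"
```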


Install the OpenStack client

[root@compute1 ~]# yum install -y python-openstackclient


Install and configure the Nova compute service (nova-compute)

[root@compute1 ~]# yum install -y openstack-nova-compute

[root@compute1 ~]# cp /etc/nova/nova.conf{,.bak}

[root@compute1 ~]# vim /etc/nova/nova.conf

[root@compute1 ~]# grep -v ^# /etc/nova/nova.conf | tr -s [[:space:]]

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 192.168.10.31

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]

[barbican]

[cache]

[cells]

[cinder]

[conductor]

[cors]

[cors.subdomain]

[database]

[ephemeral_storage_encryption]

[glance]

api_servers = http://controller:9292

[guestfs]

[hyperv]

[image_file_url]

[ironic]

[keymgr]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = NOVA_PASS

[libvirt]

[matchmaker_redis]

[metrics]

[neutron]

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = RABBIT_PASS

[oslo_middleware]

[oslo_policy]

[rdp]

[serial_console]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vmware]

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = $my_ip

novncproxy_base_url = http://controller:6080/vnc_auto.html

[workarounds]

[xenserver]

[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo    // check whether hardware acceleration for virtual machines is supported

8

[root@compute1 ~]#

If the result here is 0, refer to the section on enabling nested virtualization for KVM guests in the OpenStack environment preparation article.
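The decision behind the check above can be sketched as follows: count `vmx` (Intel) or `svm` (AMD) flags in a cpuinfo listing, and if none are present, libvirt must fall back to plain QEMU emulation (`virt_type = qemu` in the `[libvirt]` section of nova.conf) instead of KVM.

```shell
# Sketch: pick a libvirt virt_type based on hardware acceleration support.
pick_virt_type() {
  # $1: path to a cpuinfo-style file (defaults to /proc/cpuinfo).
  local count
  count=$(grep -E -c '(vmx|svm)' "${1:-/proc/cpuinfo}" || true)
  if [ "$count" -gt 0 ]; then
    echo kvm     # hardware acceleration available
  else
    echo qemu    # no acceleration: set virt_type = qemu in [libvirt]
  fi
}

pick_virt_type   # prints kvm or qemu depending on the current host
```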


[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.

[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service    // the compute node does not open any new listening port, so check via the service status

[root@compute1 ~]# systemctl status libvirtd.service openstack-nova-compute.service

● libvirtd.service - Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)

Active: active (running) since Sun 2017-07-16 19:10:26 CST; 12min ago

Docs: man:libvirtd(8)

http://libvirt.org

Main PID: 1002 (libvirtd)

CGroup: /system.slice/libvirtd.service

└─1002 /usr/sbin/libvirtd


Jul 16 19:10:26 compute1 systemd[1]: Starting Virtualization daemon...

Jul 16 19:10:26 compute1 systemd[1]: Started Virtualization daemon.

Jul 16 19:21:06 compute1 systemd[1]: Started Virtualization daemon.


● openstack-nova-compute.service - OpenStack Nova Compute Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)

Active: active (running) since Sun 2017-07-16 19:21:11 CST; 1min 21s ago

Main PID: 1269 (nova-compute)

CGroup: /system.slice/openstack-nova-compute.service

└─1269 /usr/bin/python2 /usr/bin/nova-compute


Jul 16 19:21:06 compute1 systemd[1]: Starting OpenStack Nova Compute Server...

Jul 16 19:21:11 compute1 nova-compute[1269]: /usr/lib/python2.7/site-packages/pkg_resources/__init__.py:187: RuntimeWarning: You have...

Jul 16 19:21:11 compute1 nova-compute[1269]: stacklevel=1,

Jul 16 19:21:11 compute1 systemd[1]: Started OpenStack Nova Compute Server.

Hint: Some lines were ellipsized, use -l to show in full.

[root@compute1 ~]#


Go to the controller node to verify the compute service configuration.


Install and configure the Neutron agent

Continue with the following steps once the network configuration on the controller node is complete.

[root@compute1 ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset

[root@compute1 ~]# cp /etc/neutron/neutron.conf{,.bak}

[root@compute1 ~]# vim /etc/neutron/neutron.conf

[root@compute1 ~]# grep -v ^# /etc/neutron/neutron.conf | tr -s [[:space:]]

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

[agent]

[cors]

[cors.subdomain]

[database]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = NEUTRON_PASS

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = RABBIT_PASS

[oslo_policy]

[qos]

[quotas]

[ssl]

[root@compute1 ~]#


Configure the Linux bridge agent

[root@compute1 ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}

[root@compute1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@compute1 ~]# grep -v ^# /etc/neutron/plugins/ml2/linuxbridge_agent.ini | tr -s [[:space:]]

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:eth1

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = True

local_ip = 192.168.10.31

l2_population = True

[root@compute1 ~]#
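The `physical_interface_mappings` option maps each provider network name to a physical interface, as a comma-separated list of `network:interface` pairs. A quick sketch of validating and splitting such a value (the `external:eth2` pair below is purely illustrative; this configuration only uses `provider:eth1`):

```shell
# Sketch: split a physical_interface_mappings value ("net1:if1,net2:if2")
# into one "network -> interface" line per pair, rejecting malformed pairs.
split_mappings() {
  echo "$1" | tr ',' '\n' | while IFS=':' read -r net iface; do
    if [ -z "$net" ] || [ -z "$iface" ]; then
      echo "malformed mapping: '$net:$iface'" >&2
      return 1
    fi
    printf '%s -> %s\n' "$net" "$iface"
  done
}

split_mappings provider:eth1   # -> provider -> eth1
```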


Edit the nova configuration file again and append the network settings (they go in the [neutron] section):

[root@compute1 ~]# vim /etc/nova/nova.conf

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = NEUTRON_PASS


Restart the compute service, then enable and start the Linux bridge agent

[root@compute1 ~]# systemctl restart openstack-nova-compute.service

[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service


Go to the controller node to verify the networking service configuration.

This article originally appeared on the "愛情防火牆" blog; please keep this attribution: http://183530300.blog.51cto.com/894387/1957732
