OpenStack Environment Preparation
As everyone knows, whether for public clouds or private clouds, OpenStack is a big name. Today I will walk you through installing and configuring Mitaka, a relatively recent official OpenStack release. We will set up the commonly used service modules: the Identity service, Compute service, Image service, Networking service, Block Storage service, Dashboard, Orchestration service, and Shared File Systems service. Below is the preparatory work for this lab.
Network environment preparation
Public network: 10.0.0.0/16
Private network: 172.16.0.0/16
Management network: 192.168.10.0/24
Direct-attached segment: 111.40.215.0/28
This lab runs on three KVM virtual machines carved out of a high-spec server in a remote data center, so I use the 10.x range to simulate the public network. I also added a direct-attached interface that gives each VM a real public IP, so the machines can be reached directly for configuration. Please keep this lab's network layout clearly in mind so that it does not interfere with your understanding of OpenStack networking.
controller node:
Public IP: 10.0.0.10  Management IP: 192.168.10.10  Direct IP: 111.40.215.8
compute1 node:
Public IP: 10.0.0.20  Management IP: 192.168.10.20  Direct IP: 111.40.215.9
compute2 node:
Public IP: 10.0.0.31  Management IP: 192.168.10.31  Direct IP: 111.40.215.10
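The management-network plan above is easier to work with if each node can resolve the others by name. A minimal sketch of matching /etc/hosts entries (the host names follow the VM names cloned later in this post, which is an assumption on my part; the entries are written to a scratch file here so they can be previewed before being appended to /etc/hosts on every node):

```shell
# Hypothetical /etc/hosts entries matching the management IPs above.
# Preview in a scratch file first, then append to /etc/hosts on each node.
cat > /tmp/hosts.openstack <<'EOF'
192.168.10.10  controller
192.168.10.20  compute1
192.168.10.31  compute2
EOF
cat /tmp/hosts.openstack
```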
Because the lab VMs are themselves KVM guests, nested virtualization must be enabled on the compute nodes in advance; otherwise KVM cannot create instances inside the virtualized compute nodes.
Enabling nested virtualization on the KVM host
[root@kvm_test ~]# modinfo kvm_intel | grep nested    //check whether the KVM host supports nested virtualization
parm: nested:bool
[root@kvm_test ~]# cat /sys/module/kvm_intel/parameters/nested    //check whether nested virtualization is enabled (Y = enabled)
N
[root@kvm_test ~]# //so the host supports nested virtualization but it is not enabled; it only needs to be switched on at the system level
[root@kvm_test ~]# modprobe -r kvm_intel    //unload the kvm_intel module
[root@kvm_test ~]# echo $?
0
[root@kvm_test ~]# lsmod | grep kvm_intel
[root@kvm_test ~]# modprobe kvm_intel nested=1    //reload the module with nested virtualization enabled
[root@kvm_test ~]# lsmod | grep kvm_intel
kvm_intel 162153 0
kvm 525259 1 kvm_intel
[root@kvm_test ~]# cat /sys/module/kvm_intel/parameters/nested    //verify that nested virtualization is now enabled
Y
[root@kvm_test ~]#
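Note that the modprobe change above does not survive a reboot. On RHEL/CentOS the usual way to persist it is a modprobe.d options file; the sketch below writes that file under a PREFIX variable so it can be dry-run anywhere (set PREFIX to empty and run as root to apply it on the real host):

```shell
# Persist nested=1 for kvm_intel across reboots (RHEL/CentOS style).
# PREFIX is only for a safe dry-run; on a real host use PREFIX= and run as root.
PREFIX=${PREFIX:-$(mktemp -d)}
mkdir -p "${PREFIX}/etc/modprobe.d"
echo "options kvm_intel nested=1" > "${PREFIX}/etc/modprobe.d/kvm-nested.conf"
cat "${PREFIX}/etc/modprobe.d/kvm-nested.conf"
```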
Then edit each VM's domain XML and add content like the following inside the cpu element:
<cpu mode='custom' match='exact'>
<model fallback='allow'>Westmere</model>
<vendor>Intel</vendor>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='xtpr'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='tm2'/>
<feature policy='require' name='est'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='pbe'/>
<feature policy='require' name='tm'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='acpi'/>
<feature policy='require' name='ds'/>
</cpu>
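One easy mistake when pasting this block is smart quotes: `virsh edit` and `virsh define` will reject the XML unless the attribute quotes are plain ASCII single quotes. A quick sanity check is to parse a copy of the fragment with Python's stdlib XML parser (the temp file path and the trimmed fragment below are just for illustration):

```shell
# Write a copy of the <cpu> fragment and confirm it parses as XML.
cat > /tmp/cpu-fragment.xml <<'EOF'
<cpu mode='custom' match='exact'>
  <model fallback='allow'>Westmere</model>
  <vendor>Intel</vendor>
  <feature policy='require' name='vmx'/>
</cpu>
EOF
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/cpu-fragment.xml'); print('XML OK')"
```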
KVM host network configuration
br1 uses eth0 — OpenStack management network: 192.168.10.0/24
br2 uses eth1 — OpenStack external network: 10.0.0.0/16
br3 uses eth2 — OpenStack direct-attached segment: 111.40.215.0/28
[root@kvm_test network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br1
[root@kvm_test network-scripts]# cat ifcfg-br1
TYPE=Bridge
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br1
DEVICE=br1
ONBOOT=yes
IPADDR=192.168.10.11
PREFIX=24
[root@kvm_test network-scripts]# cat ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BRIDGE=br2
[root@kvm_test network-scripts]# cat ifcfg-br2
TYPE=Bridge
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br2
DEVICE=br2
ONBOOT=yes
IPADDR=10.0.0.11
PREFIX=16
[root@kvm_test network-scripts]# cat ifcfg-eth2
DEVICE=eth2
ONBOOT=yes
BRIDGE=br3
[root@kvm_test network-scripts]# cat ifcfg-br3
TYPE=Bridge
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br3
DEVICE=br3
ONBOOT=yes
IPADDR=111.40.215.14
NETMASK=255.255.255.240
GATEWAY=111.40.215.1
[root@kvm_test network-scripts]# cat /etc/resolv.conf
nameserver 223.5.5.5
[root@kvm_test network-scripts]#
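After restarting the network service, each slave interface's ifcfg file should point at a bridge that has its own ifcfg file. The loop below checks that pairing; it defaults to a scratch directory seeded with one sample pair so it can be dry-run anywhere (on the real host, set NETDIR=/etc/sysconfig/network-scripts instead):

```shell
# Check that every ifcfg-eth* names a BRIDGE whose ifcfg file exists.
# NETDIR defaults to a scratch dir with sample files for a safe dry-run.
NETDIR=${NETDIR:-$(mktemp -d)}
if [ ! -f "$NETDIR/ifcfg-eth0" ]; then
  printf 'DEVICE=eth0\nONBOOT=yes\nBRIDGE=br1\n' > "$NETDIR/ifcfg-eth0"
  printf 'TYPE=Bridge\nDEVICE=br1\nONBOOT=yes\n' > "$NETDIR/ifcfg-br1"
fi
for f in "$NETDIR"/ifcfg-eth*; do
  nic=$(sed -n 's/^DEVICE=//p' "$f")
  br=$(sed -n 's/^BRIDGE=//p' "$f")
  if [ -n "$br" ] && [ -f "$NETDIR/ifcfg-$br" ]; then
    echo "$nic -> $br OK"
  else
    echo "$nic -> $br MISSING"
  fi
done
```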
KVM guest preparation
[root@kvm_test ~]# virsh list --all
Id Name State
----------------------------------------------------
- base shut off
[root@kvm_test ~]# virt-clone -o base -n controller -f /kvm/images/controller.qcow2
Allocating 'controller.qcow2' | 400 GB 00:00:02
Clone 'controller' created successfully.
[root@kvm_test ~]# virt-clone -o base -n block1 -f /kvm/images/block1.qcow2
Allocating 'block1.qcow2' | 400 GB 00:00:02
Clone 'block1' created successfully.
[root@kvm_test ~]# virt-clone -o base -n compute1 -f /kvm/images/compute1.qcow2
Allocating 'compute1.qcow2' | 400 GB 00:00:02
Clone 'compute1' created successfully.
[root@kvm_test ~]# virt-clone -o base -n compute2 -f /kvm/images/compute2.qcow2
Allocating 'compute2.qcow2' | 400 GB 00:00:03
Clone 'compute2' created successfully.
[root@kvm_test ~]# virsh list --all
Id Name State
----------------------------------------------------
- base shut off
- block1 shut off
- compute1 shut off
- compute2 shut off
- controller shut off
[root@kvm_test ~]#
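Once the clones exist, a typical next step is booting them (domain names taken from the listing above). A sketch, guarded with a command check so the loop is harmless on a machine without libvirt:

```shell
# Start each cloned domain; skip gracefully if virsh is unavailable.
for vm in controller compute1 compute2 block1; do
  if command -v virsh >/dev/null 2>&1; then
    virsh start "$vm" || echo "could not start $vm"
  else
    echo "virsh not available; skipping $vm"
  fi
done
```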
At this point we have only the most basic preparation in place; the base configuration of each role is covered in that role's own configuration walkthrough.
This post originally appeared on the "愛情防火墻" blog; please retain this attribution: http://183530300.blog.51cto.com/894387/1957705