
OpenStack Newton HA Deployment


Refresh the package repositories:

yum clean all

yum makecache

Rename the network interfaces (revert to the legacy ethX names):

vi /etc/default/grub

Append net.ifnames=0 to the GRUB_CMDLINE_LINUX line

grub2-mkconfig -o /boot/grub2/grub.cfg

Then rename the corresponding network interface configuration files to match.

Install common tools:

yum install vim net-tools wget ntpdate ntp bash-completion -y

Configure /etc/hosts:

vim /etc/hosts

10.1.1.141 controller1

10.1.1.142 controller2

10.1.1.143 controller3

10.1.1.144 compute1

10.1.1.146 cinder1

Network layout:

external:10.254.15.128/27

admin mgt:10.1.1.0/24

tunnel:10.2.2.0/24

Time synchronization:

ntpdate 10.6.0.2

vim /etc/ntp.conf

Add:

server 10.6.0.2 iburst

systemctl enable ntpd

systemctl restart ntpd

systemctl status ntpd

ntpq -p

Passwordless SSH login:

ssh-keygen -t rsa

ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub -p 22 [email protected]

I. Building the MariaDB Galera Cluster

1. Introduction to MariaDB Galera Cluster

MariaDB Galera Cluster is a high-availability and scalability solution for MySQL/MariaDB.

Official site: http://galeracluster.com/products/

MariaDB Galera Cluster is an architecture built on top of the InnoDB storage engine that provides a multi-master topology with real-time data synchronization. The application layer does not need to do read/write splitting; read and write load can be distributed across the nodes according to whatever rules you define. At the data level it is fully compatible with MariaDB and MySQL.

Features:

(1) Synchronous replication.
(2) Active-active multi-master topology.
(3) Reads and writes can go to any node in the cluster.
(4) Automatic membership control: failed nodes are removed from the cluster automatically.
(5) Automatic node joining.
(6) True parallel replication, at row level.
(7) Direct client connections through the native MySQL interface.
(8) Every node holds a complete copy of the data.
(9) Data synchronization between the nodes is implemented through the wsrep interface.

Limitations:

(1) Replication currently supports only the InnoDB storage engine. Writes to tables using any other engine, including the mysql.* tables, are not replicated; DDL statements are replicated, however, so creating a user is replicated while INSERT INTO mysql.user ... is not.
(2) DELETE is not supported on tables without a primary key. Rows in such tables may be ordered differently on different nodes, so SELECT ... LIMIT ... can return different result sets.
(3) LOCK/UNLOCK TABLES is not supported in a multi-master setup, nor are the locking functions GET_LOCK(), RELEASE_LOCK() and so on.
(4) The query log cannot be written to a table; if you enable the query log it can only go to a file.
(5) The maximum transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size; very large operations, such as a huge LOAD DATA, are rejected.
(6) Because the cluster uses optimistic concurrency control, a transaction can still be aborted at commit time. If two transactions write to the same row on different nodes and commit, the losing one is aborted and the cluster returns a deadlock error (error: 1213, SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
(7) XA transactions are not supported, since they could be rolled back at commit.
(8) Write throughput of the whole cluster is limited by the weakest node: if one node becomes slow, the whole cluster is slow. For stable high performance, all nodes should use identical hardware.
(9) A cluster needs at least 3 nodes.
(10) A problematic DDL statement can break the cluster.
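Once the cluster has been built (steps below), the wsrep status variables are the quickest way to confirm that synchronous replication is actually working. A minimal check, assuming the MySQL root password used throughout this guide (gdxc1902):

# run on any node; expect wsrep_cluster_size = 3, wsrep_cluster_status = Primary,
# wsrep_local_state_comment = Synced and wsrep_ready = ON
mysql -uroot -pgdxc1902 -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment','wsrep_ready');"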

2. Install MariaDB Galera

Run the following on each of the three nodes:

yum install -y MariaDB-server MariaDB-client galera xinetd rsync ntp ntpdate bash-completion percona-xtrabackup socat gcc gcc-c++ vim

systemctl start mariadb.service

mysql_secure_installation (run this to set the MySQL root password and harden the installation)

3. Create the sst user on all three nodes; it is used by xtrabackup-v2 for state transfers

mysql -uroot -p

grant all privileges on *.* to 'sst'@'localhost' identified by 'gdxc1902';

flush privileges;

4. Configure the MariaDB cluster

On the first node, add the following:

vim /etc/my.cnf.d/client.cnf

[client]

port = 3306

socket = /var/lib/mysql/mysql.sock

vim /etc/my.cnf.d/server.cnf

Add the following:

[isamchk]

key_buffer_size = 16M
# key_buffer_size sets the size of the index block cache shared by all threads; it largely determines how fast indexes, and especially index reads, are handled.

[mysqld]

datadir=/var/lib/mysql
# absolute path of the data directory

innodb-data-home-dir = /var/lib/mysql
# home directory of the InnoDB data files

basedir = /usr
# MariaDB installation path; using the full path avoids problems caused by relative paths

binlog_format=ROW
# This parameter accepts three values: row, statement and mixed. row records in the binary log the final value of every row changed by a write, and the slaves apply that final value to their own tables. statement records the SQL statements themselves and the slaves replay them. mixed is a combination of the two: the server automatically picks whichever logging mode fits the current statement best.

character-set-server = utf8
# default server character set

collation-server = utf8_general_ci
# the collation defines how characters are compared and sorted; here we use utf8_general_ci

max_allowed_packet = 256M
# maximum packet size the server accepts; raise it so that very large SQL statements do not fail

max_connections = 10000
# maximum number of connections for the cluster; very important in an OpenStack environment

ignore-db-dirs = lost+found
# do not treat lost+found as a database directory

init-connect = SET NAMES utf8
# initial character set for new connections (only applies to non-super users)

innodb_autoinc_lock_mode = 2
# in this mode no insert takes the auto-inc table lock; it gives the best performance, but auto-increment values generated within a single statement may have gaps

innodb_buffer_pool_size = 2000M
# size of the buffer pool, the memory area where InnoDB caches table and index data. The larger it is, the less disk I/O is needed when the same data is accessed repeatedly; on a dedicated database server it can be set as high as 80% of physical RAM. In practice, endlessly increasing it does not bring proportional gains and only adds CPU pressure, so pick a sensible value. See: https://my.oschina.net/realfighter/blog/368225

innodb_doublewrite = 0
# 0 disables the doublewrite buffer; if you do not care about data consistency (for example on RAID0), or the filesystem guarantees there are no partial writes, you can disable doublewrite by setting innodb_doublewrite to 0

innodb_file_format = Barracuda # barracuda is the file format introduced with the innodb-plugin; it still supports the older antelope format but compresses data better, and it is meant to be used together with innodb_file_per_table = 1 below

innodb_file_per_table = 1
# use a separate tablespace per InnoDB table, so for example dropping a table reclaims its space

innodb_flush_log_at_trx_commit = 2
# the default, 1, flushes the log to disk on every transaction commit (and for statements outside transactions), which is expensive, especially with a battery backed up cache. 2 writes to the OS cache instead and the log is still flushed to disk about once per second, so you normally lose no more than 1-2 seconds of updates. 0 is slightly faster but less safe: a MySQL crash can lose transaction data. With 2 you only risk losing data if the whole operating system goes down, so 2 is the safer choice for an OpenStack environment.

innodb_flush_method = O_DIRECT
# how data files are opened and flushed. With O_DIRECT, InnoDB opens the data files with O_DIRECT and uses fsync() to flush the logs and data files. O_DIRECT minimizes the effect of caching on I/O: the I/O is done directly against the user-space buffer and is synchronous, so both read() and write() are guaranteed to hit the disk. Purely in terms of write speed O_DIRECT is the slowest mode, but in an OpenStack environment it keeps the OS VFS cache from eating too much memory and fighting with InnoDB's own buffer pool, and it takes some pressure off the operating system.

innodb_io_capacity = 500
# controls InnoDB's I/O capacity at checkpoint time. As a rule of thumb, count about 200 per 15000 rpm SAS disk, so a 6-disk SAS RAID10 can be set to around 600; an ordinary SATA disk counts for about 100. Note that a plain mechanical disk tops out around 300 random IOPS, so setting innodb_io_capacity too high just makes disk I/O uneven; on SSDs it can be raised to 2000 or more. This OpenStack environment uses ordinary mechanical disks to keep costs down, so tune the value to your own hardware.

innodb_locks_unsafe_for_binlog = 1
# relax the transaction locking used for binary logging and force MySQL to use multi-version consistent reads.

innodb_log_file_size = 2000M
# with heavy writes to InnoDB tables, a suitable innodb_log_file_size matters a lot for performance. Set it too large, though, and recovery takes longer, so after a crash or sudden power loss MySQL needs a long time to come back. The environment maintained by the author has 12 compute nodes and around 500 VMs.

innodb_read_io_threads = 8
# number of threads reading files from disk, for concurrency; set according to CPU cores and read/write load

innodb_write_io_threads = 8
# number of threads writing files to disk, for concurrency; set according to CPU cores and read/write load

key_buffer_size = 64M
# size of the index block cache shared by all threads; it largely determines how fast indexes, and especially index reads, are handled. To judge whether key_buffer_size is reasonable, compare the status values key_read_requests and key_reads: the ratio key_reads/key_read_requests should be as low as possible, e.g. 1:100, 1:1000 or 1:10000. Check it with show status like 'key_read%';

myisam-recover-options = BACKUP
# how MyISAM tables are repaired automatically; backup mode repairs them and, if the data file is changed during recovery, saves tbl_name.MYD as tbl_name-datetime.BAK

myisam_sort_buffer_size = 64M
# buffer used when MyISAM tables are re-sorted after changes; 64M is normally enough

open_files_limit = 102400
# maximum number of open files; set it with the OS ulimit and max_connections in mind

performance_schema = on
# enable the performance_schema database that collects performance metrics

query_cache_limit = 1M
# maximum result size a single query may use in the cache

query_cache_size = 0
# disable the query cache

query_cache_type = 0
# disable the query cache type

skip-external-locking
# skip external locking; when external locking is active, every process must wait for the previous one to finish and release its lock before accessing a table, and all that waiting hurts MySQL performance, so we skip it.

skip-name-resolve
# do not resolve client host names via DNS

socket = /var/lib/mysql/mysql.sock
# absolute path of mysql.sock

table_open_cache = 10000
# size of the table descriptor cache; reduces the number of file open/close operations

thread_cache_size = 8
# check the current value with show status like 'open%tables'; and adjust accordingly

thread_stack = 256K
# memory reserved for each thread's bookkeeping (thread id, runtime environment and so on); thread_stack sets how much memory each thread gets

tmpdir = /tmp
# directory for MySQL temporary files

user = mysql
# system account MySQL runs as

wait_timeout = 1800
# idle timeout for a single connection; MySQL's default is 8 hours, here we use 1800 seconds, so a connection that stays idle for more than 30 minutes is released

[galera]

wsrep_on=ON

wsrep_provider=/usr/lib64/galera/libgalera_smm.so

wsrep_cluster_address="gcomm://10.1.1.141,10.1.1.142,10.1.1.143"
# gcomm:// is a special address, used only when the Galera cluster is first bootstrapped

wsrep_cluster_name = openstack
# name of the MySQL cluster

wsrep_node_name=controller1

wsrep_node_address=10.1.1.141
# IP address of this node

wsrep_sst_method=xtrabackup-v2
# choose between the xtrabackup and rsync methods; newer versions also support xtrabackup-v2. rsync is the fastest for data synchronization (SST/IST), but it locks the donor node, which then cannot serve requests; xtrabackup only locks the donor briefly and barely affects access. SST (state snapshot transfer) initializes a node with a full copy of the data; IST (incremental state transfer) is possible when the joining node's GUID matches the cluster and the missing data can still be found in the donor's writeset cache, otherwise a full SST is required. Based on our research, xtrabackup-v2 is currently the best SST method.

wsrep_sst_auth=sst:gdxc1902
# MySQL user and password used for SST; since we chose xtrabackup-v2 above, that method authenticates between nodes as the sst MySQL user, with gdxc1902 as the password

wsrep_slave_threads=4
# number of replication (applier) threads; roughly 4 per core is recommended, and the value is strongly affected by I/O capability (on the author's OpenStack cluster, with 48-core CPUs and 1.8T 10000 rpm disks, it is set to 12)

default_storage_engine=InnoDB
# default storage engine is InnoDB

bind-address=10.1.1.141
# IP the MySQL service binds to

[mysqld_safe]

nice = 0
# use the system nice command to set the process priority; ordinary Linux users can only use 0-19, and since mysql is an ordinary user, 0 gives the mysqld process the highest priority it can get.

socket = /var/lib/mysql/mysql.sock

syslog

vim /etc/my.cnf.d/mysql-clients.cnf

Add the following:

[mysqldump]

max_allowed_packet = 16M
# the server limits the size of packets it accepts; large inserts and updates can hit the max_allowed_packet limit and fail

quick
# force mysqldump to stream rows straight from the server instead of buffering the whole result set in memory

quote-names
# quote table and column names (with backticks); enabled by default, disable with --skip-quote-names

Note: see the official documentation for the full list of parameters: http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html

The my.cnf configuration for the second and third nodes is as follows; remember to change the relevant IPs and node names:

vim /etc/my.cnf.d/client.cnf

[client]

port = 3306

socket = /var/lib/mysql/mysql.sock

vim /etc/my.cnf.d/server.cnf

[isamchk]

key_buffer_size = 16M

[mysqld]

datadir=/var/lib/mysql

innodb-data-home-dir = /var/lib/mysql

basedir = /usr

binlog_format=ROW

character-set-server = utf8

collation-server = utf8_general_ci

max_allowed_packet = 256M

max_connections = 10000

ignore-db-dirs = lost+found

init-connect = SET NAMES utf8

innodb_autoinc_lock_mode = 2

innodb_buffer_pool_size = 2000M

innodb_doublewrite = 0

innodb_file_format = Barracuda

innodb_file_per_table = 1

innodb_flush_log_at_trx_commit = 2

innodb_flush_method = O_DIRECT

innodb_io_capacity = 500

innodb_locks_unsafe_for_binlog = 1

innodb_log_file_size = 2000M

innodb_read_io_threads = 8

innodb_write_io_threads = 8

key_buffer_size = 64M

myisam-recover-options = BACKUP

myisam_sort_buffer_size = 64M

open_files_limit = 102400

performance_schema = on

query_cache_limit = 1M

query_cache_size = 0

query_cache_type = 0

skip-external-locking

skip-name-resolve

socket = /var/lib/mysql/mysql.sock

table_open_cache = 10000

thread_cache_size = 8

thread_stack = 256K

tmpdir = /tmp

user = mysql

wait_timeout = 1800

[galera]

wsrep_on=ON

wsrep_provider=/usr/lib64/galera/libgalera_smm.so

wsrep_cluster_address="gcomm://10.1.1.141,10.1.1.142,10.1.1.143"

wsrep_cluster_name = openstack

wsrep_node_name=controller2

wsrep_node_address=10.1.1.142

wsrep_sst_method=xtrabackup-v2

wsrep_sst_auth=sst:gdxc1902

wsrep_slave_threads=4

default_storage_engine=InnoDB

bind-address=10.1.1.142

[mysqld_safe]

nice = 0

socket = /var/lib/mysql/mysql.sock

syslog

vim /etc/my.cnf.d/mysql-clients.cnf

[mysqldump]

max_allowed_packet = 16M

quick

quote-names

5. Set the MySQL maximum connection count

After editing server.cnf, also edit the mariadb.service unit file so the database really supports 10000 connections (this matters: in OpenStack environments with many VMs and high load, running out of database connections is a common cause of dashboard pages failing to render).

vim /usr/lib/systemd/system/mariadb.service

Add the following two lines under [Service]:

LimitNOFILE = 10000

LimitNPROC = 10000

After all nodes have been modified, run:

systemctl daemon-reload

Once the mariadb service is running you can check the effective limit with show variables like 'max_connections';
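To confirm that systemd actually picked up the new limits, you can query the unit and the running database directly (a quick check; the values should match what was configured above):

systemctl show mariadb.service -p LimitNOFILE -p LimitNPROC
# once the service is running again:
mysql -uroot -pgdxc1902 -e "show variables like 'max_connections';"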

6. Start order of the MySQL services

After the my.cnf on all three nodes is configured, run on all of them:

systemctl stop mariadb.service

Then bootstrap the MariaDB cluster on the first node with:

/usr/sbin/mysqld --wsrep-new-cluster --user=root &

Start mariadb on the other two nodes:

systemctl start mariadb.service

systemctl status mariadb.service

If the nodes cannot join with the above, try the following command:

/usr/sbin/mysqld --wsrep-cluster-address="gcomm://10.1.1.141:4567"

Finally, once the other two nodes have started successfully, go back to the first node and run:

pkill -9 mysql

pkill -9 mysql

systemctl start mariadb.service

systemctl status mariadb.service

Note: if the service fails to start, look at the actual error. If it is [ERROR] Can't init tc log, it can be fixed as follows:

cd /var/lib/mysql

chown mysql:mysql *

Then restart the service. The cause is that /var/lib/mysql/tc.log is not owned by the mysql user and group; fixing the ownership is enough.

7. Check the MariaDB cluster status

mysql -uroot -p

show status like 'wsrep_cluster_size%';

show variables like 'wsrep_sst_meth%';

Log in to MySQL on these nodes and you will see the cluster size is now 3, meaning the cluster has three members and has been built successfully.

8. Testing

Now for a test: create a table on ctr3, insert a record, and check whether it can be queried on ctr1 and ctr2.

On ctr3, create the test database:

CREATE DATABASE test;

On ctr2, switch to the test database created on ctr3 and create a table named example:

USE test;

CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));

On ctr1, insert a test row:

INSERT INTO test.example VALUES (1,'TEST1');

SELECT * FROM test.example;

Query the row on ctr1, ctr2 and ctr3; it shows up on all of them:

SELECT * FROM test.example;

II. Installing the RabbitMQ Cluster

1. Install Erlang on every node

yum install -y erlang

2. Install RabbitMQ on every node

yum install -y rabbitmq-server

3. Start RabbitMQ on every node and enable it at boot

systemctl enable rabbitmq-server.service

systemctl restart rabbitmq-server.service

systemctl status rabbitmq-server.service

systemctl list-unit-files |grep rabbitmq-server.service

4. Create the openstack user; replace the password with one of your own

rabbitmqctl add_user openstack gdxc1902

5. Grant the openstack user permissions

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

rabbitmqctl set_user_tags openstack administrator

rabbitmqctl list_users

6. Check the listening port; RabbitMQ uses 5672

netstat -ntlp |grep 5672

7. List the RabbitMQ plugins

/usr/lib/rabbitmq/bin/rabbitmq-plugins list

8. Enable the required RabbitMQ plugins on every node

/usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management mochiweb webmachine rabbitmq_web_dispatch amqp_client rabbitmq_management_agent

After enabling the plugins, restart the RabbitMQ service:

systemctl restart rabbitmq-server

Open http://10.254.15.141:15672 in a browser; the default username/password is guest/guest.

This web UI gives a clear view of RabbitMQ's health and load.

Of course we do not have to use guest; we can add another user instead, for example mqadmin:

rabbitmqctl add_user mqadmin mqadmin

rabbitmqctl set_user_tags mqadmin administrator

rabbitmqctl set_permissions -p / mqadmin ".*" ".*" ".*"

We can also change a password from the command line, for example changing the guest user's password to passw0rd:

rabbitmqctl change_password guest passw0rd

9. Check the RabbitMQ status

rabbitmqctl status

10. Cluster configuration

Run cat /var/lib/rabbitmq/.erlang.cookie on every node to inspect the Erlang cookie.

On controller1:

scp /var/lib/rabbitmq/.erlang.cookie controller2:/var/lib/rabbitmq/.erlang.cookie

scp /var/lib/rabbitmq/.erlang.cookie controller3:/var/lib/rabbitmq/.erlang.cookie

On controller2:

systemctl restart rabbitmq-server

rabbitmqctl stop_app

rabbitmqctl join_cluster --ram rabbit@controller1

rabbitmqctl start_app

On controller3:

systemctl restart rabbitmq-server

rabbitmqctl stop_app

rabbitmqctl join_cluster --ram rabbit@controller1

rabbitmqctl start_app

Check the cluster status:

rabbitmqctl cluster_status

11. Cluster management

If you run into a RabbitMQ split-brain, rebuild the cluster as follows:

Log in to the node that is not part of the cluster:

rabbitmqctl stop_app

rabbitmqctl reset

rabbitmqctl start_app

Finally, re-run the join_cluster steps above.

If a node has extra files under /var/lib/rabbitmq, delete them all.

Normally the only thing there is the mnesia directory.

If there is a pile of other files, the node is in a bad state.

12. RabbitMQ tuning

There is not much to tune in RabbitMQ. Drawing on material found online and an official Mirantis blog post, the following points are worth adopting:

a. Deploy RabbitMQ on dedicated servers whenever possible

On a dedicated node the RabbitMQ service gets the CPU entirely to itself, which gives better performance.

b. Run RabbitMQ in HiPE mode

RabbitMQ is written in Erlang, and enabling HiPE precompiles the Erlang code, which can improve performance by more than 30% (for test results see: https://github.com/binarin/rabbit-simple-benchmark/blob/master/report.md).

The downside is that with HiPE enabled the first startup of RabbitMQ is very slow, around 2 minutes, and debugging can become difficult because HiPE can mangle error backtraces and make them unreadable.

To enable HiPE:

vim /etc/rabbitmq/rabbitmq.config

Uncomment the {hipe_compile, true} line (including the trailing comma) and restart the RabbitMQ service (you will notice the startup is slow).

scp -p /etc/rabbitmq/rabbitmq.config controller2:/etc/rabbitmq/rabbitmq.config

scp -p /etc/rabbitmq/rabbitmq.config controller3:/etc/rabbitmq/rabbitmq.config

c. Do not mirror the RPC queues

Research shows that enabling queue mirroring on a 3-node cluster halves message throughput. On the other hand, RPC messages are short-lived: if one is lost, only the operation currently in progress fails, so leaving the RPC queues unmirrored is a good trade-off. That does not mean no queue should ever be mirrored: the ceilometer queues can be mirrored, because ceilometer messages must be preserved. If your environment includes ceilometer, though, it is best to give ceilometer its own dedicated RabbitMQ cluster: under normal conditions ceilometer does not generate many messages, but if ceilometer gets stuck its queues can overflow and crash the RabbitMQ cluster, which inevitably interrupts the other OpenStack services.

d. Reduce the number or frequency of metrics sent

Another best practice for RabbitMQ under OpenStack is to reduce the number and/or frequency of metrics being sent. Fewer metrics mean fewer messages piling up in RabbitMQ, so it can spend more of its resources on the important OpenStack service queues, which indirectly improves RabbitMQ performance. The ceilometer and MongoDB message traffic is usually the first thing to move elsewhere.

e. Increase the maximum number of open sockets for RabbitMQ

vim /etc/sysctl.conf

Append at the bottom: fs.file-max = 1000000

Run sysctl -p to apply.

scp -p /etc/sysctl.conf controller2:/etc/sysctl.conf

scp -p /etc/sysctl.conf controller3:/etc/sysctl.conf

Set the ulimit maximum number of open files:

vim /etc/security/limits.conf

Add two lines:

* soft nofile 655350

* hard nofile 655350

scp -p /etc/security/limits.conf controller2:/etc/security/limits.conf

scp -p /etc/security/limits.conf controller3:/etc/security/limits.conf

Set the maximum number of open files for systemd-managed services to 1024000:

vim /etc/systemd/system.conf

Add two lines:

DefaultLimitNOFILE=1024000

DefaultLimitNPROC=1024000

scp -p /etc/systemd/system.conf controller2:/etc/systemd/system.conf

scp -p /etc/systemd/system.conf controller3:/etc/systemd/system.conf

Reboot the servers after these changes, then verify the new limits with ulimit -Hn.

After the change, the RabbitMQ web UI shows the larger file-descriptor and socket limits; the defaults are 1024 open files and 829 sockets.

References: https://www.mirantis.com/blog/best-practices-rabbitmq-openstack/

https://www.qcloud.com/community/article/135

Bug where a pcs restart loses the RabbitMQ users: https://access.redhat.com/solutions/2374351

III. Installing Pacemaker

The three controller nodes need the following packages:

pacemaker

pcs(centos or rhel) or crmsh

corosync

fence-agent (centos or rhel) or cluster-glue

resource-agents

yum install -y lvm2 cifs-utils quota psmisc

yum install -y pcs pacemaker corosync fence-agents-all resource-agents

yum install -y crmsh

1. Enable the pcsd service at boot on all three controller nodes

systemctl enable pcsd

systemctl enable corosync

systemctl start pcsd

systemctl status pcsd

2. Set the hacluster user's password; this must be done on every node

passwd hacluster

3. Write the corosync.conf configuration

vim /etc/corosync/corosync.conf

Add the following:

totem {
    version: 2
    secauth: off
    cluster_name: openstack_cluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: controller1
        nodeid: 1
    }
    node {
        ring0_addr: controller2
        nodeid: 2
    }
    node {
        ring0_addr: controller3
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

scp -p /etc/corosync/corosync.conf controller2:/etc/corosync/corosync.conf

scp -p /etc/corosync/corosync.conf controller3:/etc/corosync/corosync.conf

Start the corosync service on each of the three nodes:

systemctl enable corosync

systemctl restart corosync

systemctl status corosync

4. Set up mutual authentication between the cluster nodes; run this on controller1 only

pcs cluster auth controller1 controller2 controller3 -u hacluster -p gdxc1902 --force

5. On controller1, create and start a cluster named openstack_cluster with controller1, controller2 and controller3 as members

pcs cluster setup --force --name openstack_cluster controller1 controller2 controller3

6. Enable the cluster at boot

pcs cluster enable --all

7. Start the cluster

pcs cluster start --all

8. View and set cluster properties

pcs cluster status

9. Check the pacemaker processes

ps aux |grep pacemaker

10. Verify the corosync installation and its current state

corosync-cfgtool -s

corosync-cmapctl |grep members

pcs status corosync

11. Check that the configuration is valid (no output means it is correct)

crm_verify -L -V

If you want to ignore the errors it reports, do the following.

Disable STONITH:

pcs property set stonith-enabled=false

When quorum cannot be reached, ignore it:

pcs property set no-quorum-policy=ignore

12. Other pcs commands

List the resource agent providers known to pcs:

pcs resource providers

13. Configure the VIPs through crm

crm

configure

crm(live)configure# primitive vip_public ocf:heartbeat:IPaddr2 params ip="10.254.15.140" cidr_netmask="27" nic=eth0 op monitor interval="30s"

crm(live)configure# primitive vip_management ocf:heartbeat:IPaddr2 params ip="10.1.1.140" cidr_netmask="24" nic=eth1 op monitor interval="30s"

commit

Or the equivalent with pcs (note that pcs resource create takes name=value options directly, without the crm params keyword):

pcs resource create vip_public ocf:heartbeat:IPaddr2 ip=10.254.15.140 cidr_netmask=27 nic=eth0 op monitor interval=30s

pcs resource create vip_management ocf:heartbeat:IPaddr2 ip=10.1.1.140 cidr_netmask=24 nic=eth1 op monitor interval=30s
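After committing, it is worth confirming that both VIPs exist and are running on one of the controllers (assuming the resource names and interfaces used above):

pcs status resources
ip addr show eth0 | grep 10.254.15.140     # public VIP, on whichever node currently holds it
ip addr show eth1 | grep 10.1.1.140        # management VIP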

IV. Installing HAProxy

1. Install HAProxy

Install HAProxy on each of the three nodes:

yum install -y haproxy

systemctl enable haproxy.service

2. Configure rsyslog to collect the HAProxy logs; do this on all three nodes

cd /etc/rsyslog.d/

vim haproxy.conf

Add:

$ModLoad imudp

$UDPServerRun 514

$template Haproxy,"%rawmsg% \n"

local0.=info -/var/log/haproxy.log;Haproxy

local0.notice -/var/log/haproxy-status.log;Haproxy

local0.* ~

scp -p /etc/rsyslog.d/haproxy.conf controller2:/etc/rsyslog.d/haproxy.conf

scp -p /etc/rsyslog.d/haproxy.conf controller3:/etc/rsyslog.d/haproxy.conf

systemctl restart rsyslog.service

systemctl status rsyslog.service

3. Configure haproxy.cfg on all three nodes

cd /etc/haproxy

mv haproxy.cfg haproxy.cfg.orig

vim haproxy.cfg

Add the following:

global

log 127.0.0.1 local0

log 127.0.0.1 local1 notice

maxconn 16000

chroot /usr/share/haproxy

user haproxy

group haproxy

daemon

defaults

log global

mode http

option tcplog

option dontlognull

retries 3

option redispatch

maxconn 10000

contimeout 5000

clitimeout 50000

srvtimeout 50000

frontend stats-front

bind *:8088

mode http

default_backend stats-back

backend stats-back

mode http

balance source

stats uri /haproxy/stats

stats auth admin:gdxc1902

listen RabbitMQ-Server-Cluster

bind 10.1.1.140:56720

mode tcp

balance roundrobin

option tcpka

server controller1 controller1:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller2 controller2:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller3 controller3:5672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen RabbitMQ-Web

bind 10.254.15.140:15673

mode tcp

balance roundrobin

option tcpka

server controller1 controller1:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller2 controller2:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller3 controller3:15672 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen Galera-Cluster

bind 10.1.1.140:3306

balance leastconn

mode tcp

option tcplog

option httpchk

server controller1 controller1:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3

server controller2 controller2:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3

server controller3 controller3:3306 check port 9200 inter 20s fastinter 2s downinter 2s rise 3 fall 3

listen keystone_admin_cluster

bind 10.1.1.140:35357

balance source

option httpchk

option httplog

option httpclose

server controller1 controller1:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

server controller2 controller2:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

server controller3 controller3:35358 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen keystone_public_internal_cluster

bind 10.1.1.140:5000

balance source

option httpchk

option httplog

option httpclose

server controller1 controller1:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

server controller2 controller2:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

server controller3 controller3:5002 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen Memcache_Servers

bind 10.1.1.140:22122

balance roundrobin

mode tcp

option tcpka

server controller1 controller1:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

server controller2 controller2:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

server controller3 controller3:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

listen dashboard_cluster

bind 10.254.15.140:80

balance source

option httpchk

option httplog

option httpclose

server controller1 controller1:8080 check inter 2000 fall 3

server controller2 controller2:8080 check inter 2000 fall 3

server controller3 controller3:8080 check inter 2000 fall 3

listen glance_api_cluster

bind 10.1.1.140:9292

balance source

option httpchk

option httplog

option httpclose

server controller1 controller1:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller2 controller2:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller3 controller3:9393 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen glance_registry_cluster

bind 10.1.1.140:9090

balance roundrobin

mode tcp

option tcpka

server controller1 controller1:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller2 controller2:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller3 controller3:9191 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova_compute_api_cluster

bind 10.1.1.140:8774

balance source

option httpchk

option httplog

option httpclose

server controller1 controller1:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller2 controller2:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller3 controller3:9774 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova-metadata-api_cluster

bind 10.1.1.140:8775

balance source

option httpchk

option httplog

option httpclose

server controller1 controller1:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller2 controller2:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller3 controller3:9775 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen nova_vncproxy_cluster

bind 10.1.1.140:6080

balance source

option tcpka

option tcplog

server controller1 controller1:6080 check inter 2000 rise 2 fall 5

server controller2 controller2:6080 check inter 2000 rise 2 fall 5

server controller3 controller3:6080 check inter 2000 rise 2 fall 5

listen neutron_api_cluster

bind 10.1.1.140:9696

balance source

option httpchk

option httplog

option httpclose

server controller1 controller1:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller2 controller2:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller3 controller3:9797 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

listen cinder_api_cluster

bind 10.1.1.140:8776

balance source

option httpchk

option httplog

option httpclose

server controller1 controller1:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller2 controller2:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

server controller3 controller3:8778 check inter 10s fastinter 2s downinter 3s rise 3 fall 3

scp -p /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/haproxy.cfg

scp -p /etc/haproxy/haproxy.cfg controller3:/etc/haproxy/haproxy.cfg

systemctl restart haproxy.service

systemctl status haproxy.service

Parameter notes (see the shared HAProxy reference manual for the full details):

inter <delay>: interval between health checks, in milliseconds, 2000 by default; fastinter and downinter can be used to optimize this delay depending on the server's state.

rise <count>: number of consecutive successful checks an offline server needs before it is considered up again.

fall <count>: number of consecutive failed checks before a server is considered unavailable.
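Besides the web page, the stats frontend defined above can also be queried from the command line. A quick sketch using the CSV export of the stats page (the URL and credentials are the ones configured in this haproxy.cfg):

curl -s -u admin:gdxc1902 "http://10.254.15.140:8088/haproxy/stats;csv" | cut -d, -f1,2,18 | column -s, -t
# column 1: backend, column 2: server, column 18: status (UP/DOWN)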

4. Configure HAProxy to monitor the Galera cluster

On controller1, log in to MySQL and create the clustercheck user:

grant process on *.* to 'clustercheckuser'@'localhost' identified by 'gdxc1902';

flush privileges;

On each of the three nodes, create the clustercheck file holding the clustercheckuser credentials:

vim /etc/sysconfig/clustercheck

Add:

MYSQL_USERNAME=clustercheckuser

MYSQL_PASSWORD=gdxc1902

MYSQL_HOST=localhost

MYSQL_PORT=3306

scp -p /etc/sysconfig/clustercheck controller2:/etc/sysconfig/clustercheck

scp -p /etc/sysconfig/clustercheck controller3:/etc/sysconfig/clustercheck

Make sure /usr/bin/clustercheck exists; if it does not, download a copy, put it in /usr/bin, and remember to make it executable with chmod +x /usr/bin/clustercheck.

This script is what allows HAProxy to monitor the state of the Galera cluster.
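If you cannot find a copy, keep in mind that clustercheck is only a thin wrapper that turns the local wsrep state into an HTTP response for HAProxy's httpchk. A simplified sketch of the idea (the real clustercheck script from the Percona/Galera tooling handles more corner cases and should be preferred):

#!/bin/bash
# /usr/bin/clustercheck (simplified sketch)
# Returns HTTP 200 when the local Galera node is Synced, HTTP 503 otherwise.
source /etc/sysconfig/clustercheck
STATE=$(mysql -u"$MYSQL_USERNAME" -p"$MYSQL_PASSWORD" -h"$MYSQL_HOST" -P"$MYSQL_PORT" \
    -nNE -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>/dev/null | tail -1)
if [ "$STATE" = "4" ]; then   # 4 = Synced
    echo -en "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nGalera node is synced.\r\n"
else
    echo -en "HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nGalera node is not synced.\r\n"
fi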

scp -p /usr/bin/clustercheck controller2:/usr/bin/clustercheck

scp -p /usr/bin/clustercheck controller3:/usr/bin/clustercheck

On controller1, check the health-check output:

clustercheck (if /usr/bin/clustercheck is in place you can run the clustercheck command directly)

Hook the check up to xinetd so it can be queried over the network (install xinetd on all three nodes):

yum install -y xinetd

vim /etc/xinetd.d/mysqlchk

Add the following:

# default: on

# description: mysqlchk

service mysqlchk

{

# this is a config for xinetd, place it in /etc/xinetd.d/

disable = no

flags = REUSE

socket_type = stream

port = 9200

wait = no

user = nobody

server = /usr/bin/clustercheck

log_on_failure = USERID

only_from = 0.0.0.0/0

# recommended to put the IPs that need

# to connect exclusively (security purposes)

per_source = UNLIMITED

}

scp -p /etc/xinetd.d/mysqlchk controller2:/etc/xinetd.d/mysqlchk

scp -p /etc/xinetd.d/mysqlchk controller3:/etc/xinetd.d/mysqlchk

vim /etc/services

Append as the last line: mysqlchk 9200/tcp # mysqlchk

scp -p /etc/services controller2:/etc/services

scp -p /etc/services controller3:/etc/services

Restart the xinetd service:

systemctl restart xinetd.service

systemctl status xinetd.service
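xinetd now answers on port 9200 with the output of clustercheck, which is exactly what the check port 9200 lines in the Galera-Cluster section of haproxy.cfg rely on. Verify from any controller:

curl -i http://controller1:9200/
# a synced node answers HTTP/1.1 200 OK; a node that is not synced answers HTTP/1.1 503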

5. Adjust kernel parameters on all three nodes

echo 'net.ipv4.ip_nonlocal_bind = 1'>>/etc/sysctl.conf

echo 'net.ipv4.ip_forward=1'>>/etc/sysctl.conf

sysctl -p

The first parameter lets HAProxy bind to addresses that do not belong to a local interface.

The second controls whether the kernel forwards packets; it is disabled by default and we enable it here.

Note: if these two parameters are not set, the HAProxy service on the second and third controller nodes will not start.

6. Start the HAProxy service on all three nodes

systemctl restart haproxy.service

systemctl status haproxy.service

7. Access the HAProxy web frontend

http://10.254.15.140:8088/haproxy/stats admin/gdxc1902

V. Installing and Configuring Keystone

1. Create the keystone database on controller1

CREATE DATABASE keystone;

2. Create the database user and grant privileges on controller1

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'gdxc1902';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'gdxc1902';

Replace the password with your own database password.

3. Install Keystone and memcached on all three nodes

yum install -y openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached openstack-utils

4. Tune the memcached configuration

vim /etc/sysconfig/memcached

PORT="11211"

# listening port

USER="memcached"

# user that runs memcached

MAXCONN="8192"

# maximum number of connections

CACHESIZE="1024"

# maximum memory usage (MB)

OPTIONS="-l 127.0.0.1,::1,10.1.1.141 -t 4 -I 10m"

# -l sets the bind addresses, -t the number of threads, -I the slab page size

Note: change 10.1.1.141 in OPTIONS to each node's own IP.

scp -p /etc/sysconfig/memcached controller2:/etc/sysconfig/memcached

scp -p /etc/sysconfig/memcached controller3:/etc/sysconfig/memcached

5. Start the memcached service on all three nodes and enable it at boot

systemctl enable memcached.service

systemctl restart memcached.service

systemctl status memcached.service
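To confirm that each memcached instance is up with the tuned limits, you can query its stats over TCP. A quick check, assuming nc (from the nmap-ncat package) is available:

printf 'stats\nquit\n' | nc controller1 11211 | egrep 'max_connections|threads|limit_maxbytes'
# expect max_connections 8192, threads 4 and limit_maxbytes matching the 1024 MB cache size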

6. Configure /etc/keystone/keystone.conf

cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak

>/etc/keystone/keystone.conf

openstack-config --set /etc/keystone/keystone.conf DEFAULT debug false

openstack-config --set /etc/keystone/keystone.conf DEFAULT verbose true

openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_endpoint http://10.1.1.140:35357

openstack-config --set /etc/keystone/keystone.conf DEFAULT public_endpoint http://10.1.1.140:5000

openstack-config --set /etc/keystone/keystone.conf eventlet_server public_bind_host 10.1.1.141

openstack-config --set /etc/keystone/keystone.conf eventlet_server admin_bind_host 10.1.1.141

openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool

openstack-config --set /etc/keystone/keystone.conf cache enabled true

openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/keystone/keystone.conf cache memcache_dead_retry 60

openstack-config --set /etc/keystone/keystone.conf cache memcache_socket_timeout 1

openstack-config --set /etc/keystone/keystone.conf cache memcache_pool_maxsize 1000

openstack-config --set /etc/keystone/keystone.conf cache memcache_pool_unused_timeout 60

openstack-config --set /etc/keystone/keystone.conf catalog template_file /etc/keystone/default_catalog.templates

openstack-config --set /etc/keystone/keystone.conf catalog driver sql

openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:[email protected]/keystone

openstack-config --set /etc/keystone/keystone.conf database idle_timeout 3600

openstack-config --set /etc/keystone/keystone.conf database max_pool_size 30

openstack-config --set /etc/keystone/keystone.conf database max_retries -1

openstack-config --set /etc/keystone/keystone.conf database retry_interval 2

openstack-config --set /etc/keystone/keystone.conf database max_overflow 60

openstack-config --set /etc/keystone/keystone.conf identity driver sql

openstack-config --set /etc/keystone/keystone.conf identity caching false

openstack-config --set /etc/keystone/keystone.conf fernet_tokens key_repository /etc/keystone/fernet-keys/

openstack-config --set /etc/keystone/keystone.conf fernet_tokens max_active_keys 3

openstack-config --set /etc/keystone/keystone.conf memcache servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/keystone/keystone.conf memcache dead_retry 60

openstack-config --set /etc/keystone/keystone.conf memcache socket_timeout 1

openstack-config --set /etc/keystone/keystone.conf memcache pool_maxsize 1000

openstack-config --set /etc/keystone/keystone.conf memcache pool_unused_timeout 60

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_password gdxc1902

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_use_ssl false

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_ha_queues true

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/keystone/keystone.conf token expiration 3600

openstack-config --set /etc/keystone/keystone.conf token caching False

openstack-config --set /etc/keystone/keystone.conf token provider fernet

scp -p /etc/keystone/keystone.conf controller2:/etc/keystone/keystone.conf

scp -p /etc/keystone/keystone.conf controller3:/etc/keystone/keystone.conf

7. Configure httpd.conf

vim /etc/httpd/conf/httpd.conf

Set ServerName to controller1 (write controller2 on controller2, and so on).

Set Listen to 8080 (80 -> 8080; HAProxy already uses port 80, so httpd will not start unless this is changed).

sed -i "s/#ServerName www.example.com:80/ServerName controller1/" /etc/httpd/conf/httpd.conf

sed -i "s/Listen 80/Listen 8080/" /etc/httpd/conf/httpd.conf

8. Hook Keystone up to httpd

vim /etc/httpd/conf.d/wsgi-keystone.conf

Listen 5002

Listen 35358

<VirtualHost *:5002>

WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-public

WSGIScriptAlias / /usr/bin/keystone-wsgi-public

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

ErrorLogFormat "%{cu}t %M"

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone_access.log combined

<Directory /usr/bin>

Require all granted

</Directory>

</VirtualHost>

<VirtualHost *:35358>

WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-admin

WSGIScriptAlias / /usr/bin/keystone-wsgi-admin

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

ErrorLogFormat "%{cu}t %M"

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone_access.log combined

<Directory /usr/bin>

Require all granted

</Directory>

</VirtualHost>

Copy this file to the other two nodes:

scp -p /etc/httpd/conf.d/wsgi-keystone.conf controller2:/etc/httpd/conf.d/wsgi-keystone.conf

scp -p /etc/httpd/conf.d/wsgi-keystone.conf controller3:/etc/httpd/conf.d/wsgi-keystone.conf

9. Sync the Keystone database on controller1

su -s /bin/sh -c "keystone-manage db_sync" keystone

10. Initialize fernet keys on all three nodes

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

11. Sync the fernet keys to the other nodes; run on controller1

scp -p /etc/keystone/fernet-keys/* controller2:/etc/keystone/fernet-keys/

scp -p /etc/keystone/fernet-keys/* controller3:/etc/keystone/fernet-keys/

scp -p /etc/keystone/credential-keys/* controller2:/etc/keystone/credential-keys/

scp -p /etc/keystone/credential-keys/* controller3:/etc/keystone/credential-keys/

12. Start httpd on all three nodes and enable it at boot

systemctl enable httpd.service

systemctl restart httpd.service

systemctl status httpd.service

systemctl list-unit-files |grep httpd.service

13. Bootstrap the admin user and role on controller1

keystone-manage bootstrap \

--bootstrap-password gdxc1902 \

--bootstrap-role-name admin \

--bootstrap-service-name keystone \

--bootstrap-admin-url http://10.1.1.140:35357/v3/ \

--bootstrap-internal-url http://10.1.1.140:35357/v3/ \

--bootstrap-public-url http://10.1.1.140:5000/v3/ \

--bootstrap-region-id RegionOne

With this in place the admin account can be used from the OpenStack command line.

Verify that the configuration is sane:

openstack project list --os-username admin --os-project-name admin --os-user-domain-id default --os-project-domain-id default --os-identity-api-version 3 --os-auth-url http://10.1.1.140:5000 --os-password gdxc1902

14. On controller1, create the admin environment file /root/admin-openrc with the following content:

vim /root/admin-openrc

Add the following:

export OS_USER_DOMAIN_ID=default

export OS_PROJECT_DOMAIN_ID=default

export OS_USERNAME=admin

export OS_PROJECT_NAME=admin

export OS_PASSWORD=gdxc1902

export OS_IDENTITY_API_VERSION=3

export OS_AUTH_URL=http://10.1.1.140:35357/v3

15. Create the service project on controller1

source /root/admin-openrc

openstack project create --domain default --description "Service Project" service

16. Create the demo project on controller1

openstack project create --domain default --description "Demo Project" demo

17. Create the demo user on controller1

openstack user create --domain default demo --password gdxc1902

Note: gdxc1902 is the demo user's password.

18. Create the user role on controller1 and assign it to the demo user

openstack role create user

openstack role add --project demo --user demo user

19. Verify Keystone on controller1

unset OS_TOKEN OS_URL

openstack --os-auth-url http://10.1.1.140:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue --os-password gdxc1902

openstack --os-auth-url http://10.1.1.140:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue --os-password gdxc1902

20. On controller1, create the demo environment file /root/demo-openrc with the following content:

vim /root/demo-openrc

Add:

export OS_USER_DOMAIN_ID=default

export OS_PROJECT_DOMAIN_ID=default

export OS_USERNAME=demo

export OS_PROJECT_NAME=demo

export OS_PASSWORD=gdxc1902

export OS_IDENTITY_API_VERSION=3

export OS_AUTH_URL=http://10.1.1.140:35357/v3

scp -p /root/admin-openrc controller2:/root/admin-openrc

scp -p /root/admin-openrc controller3:/root/admin-openrc

scp -p /root/demo-openrc controller2:/root/demo-openrc

scp -p /root/demo-openrc controller3:/root/demo-openrc

VI. Installing and Configuring Glance

1. Create the glance database on controller1

CREATE DATABASE glance;

2. Create the database user and grant privileges on controller1

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'gdxc1902';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'gdxc1902';

3. Create the glance user on controller1 and grant it the admin role

source /root/admin-openrc

openstack user create --domain default glance --password gdxc1902

openstack role add --project service --user glance admin

4. Create the image service on controller1

openstack service create --name glance --description "OpenStack Image service" image

5. Create the glance endpoints on controller1

openstack endpoint create --region RegionOne image public http://10.1.1.140:9292

openstack endpoint create --region RegionOne image internal http://10.1.1.140:9292

openstack endpoint create --region RegionOne image admin http://10.1.1.140:9292

6. Install the glance packages on all three nodes

yum install -y openstack-glance

7. Edit the glance configuration file /etc/glance/glance-api.conf on all three nodes

Set the passwords to your own.

cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak

>/etc/glance/glance-api.conf

openstack-config --set /etc/glance/glance-api.conf DEFAULT debug False

openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True

openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host controller1

openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_port 9393

openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host controller1

openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_port 9191

openstack-config --set /etc/glance/glance-api.conf DEFAULT auth_region RegionOne

openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_client_protocol http

openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url False

openstack-config --set /etc/glance/glance-api.conf DEFAULT workers 4

openstack-config --set /etc/glance/glance-api.conf DEFAULT backlog 4096

openstack-config --set /etc/glance/glance-api.conf DEFAULT image_cache_dir /var/lib/glance/image-cache

openstack-config --set /etc/glance/glance-api.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/glance/glance-api.conf DEFAULT scrub_time 43200

openstack-config --set /etc/glance/glance-api.conf DEFAULT delayed_delete False

openstack-config --set /etc/glance/glance-api.conf DEFAULT enable_v1_api False

openstack-config --set /etc/glance/glance-api.conf DEFAULT enable_v2_api True

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password gdxc1902

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_use_ssl False

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_ha_queues True

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit amqp_durable_queues False

openstack-config --set /etc/glance/glance-api.conf oslo_concurrency lock_path /var/lock/glance

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:[email protected]/glance

openstack-config --set /etc/glance/glance-api.conf database idle_timeout 3600

openstack-config --set /etc/glance/glance-api.conf database max_pool_size 30

openstack-config --set /etc/glance/glance-api.conf database max_retries -1

openstack-config --set /etc/glance/glance-api.conf database retry_interval 2

openstack-config --set /etc/glance/glance-api.conf database max_overflow 60

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://10.1.1.140:5000

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://10.1.1.140:35357

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password gdxc1902

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken token_cache_time -1

openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http

openstack-config --set /etc/glance/glance-api.conf glance_store default_store file

openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

scp -p /etc/glance/glance-api.conf controller2:/etc/glance/glance-api.conf

scp -p /etc/glance/glance-api.conf controller3:/etc/glance/glance-api.conf

8. Edit the glance configuration file /etc/glance/glance-registry.conf on all three nodes

cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak

>/etc/glance/glance-registry.conf

openstack-config --set /etc/glance/glance-registry.conf DEFAULT debug False

openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True

openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host controller1

openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_port 9191

openstack-config --set /etc/glance/glance-registry.conf DEFAULT workers 4

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password gdxc1902

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_use_ssl False

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_ha_queues True

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit amqp_durable_queues False

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:[email protected]/glance

openstack-config --set /etc/glance/glance-registry.conf database idle_timeout 3600

openstack-config --set /etc/glance/glance-registry.conf database max_pool_size 30

openstack-config --set /etc/glance/glance-registry.conf database max_retries -1

openstack-config --set /etc/glance/glance-registry.conf database retry_interval 2

openstack-config --set /etc/glance/glance-registry.conf database max_overflow 60

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://10.1.1.140:5000

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://10.1.1.140:35357

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password gdxc1902

openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-registry.conf glance_store filesystem_store_datadir /var/lib/glance/images/

openstack-config --set /etc/glance/glance-registry.conf glance_store os_region_name RegionOne

scp -p /etc/glance/glance-registry.conf controller2:/etc/glance/glance-registry.conf

scp -p /etc/glance/glance-registry.conf controller3:/etc/glance/glance-registry.conf

9. Sync the glance database on controller1

su -s /bin/sh -c "glance-manage db_sync" glance

10. Start the glance services on all three nodes and enable them at boot

systemctl enable openstack-glance-api.service openstack-glance-registry.service

systemctl restart openstack-glance-api.service openstack-glance-registry.service

systemctl status openstack-glance-api.service openstack-glance-registry.service

11. On all three nodes, write the glance API version into the openrc environment files

echo " " >> /root/admin-openrc && \

echo " " >> /root/demo-openrc && \

echo "export OS_IMAGE_API_VERSION=2"|tee -a /root/admin-openrc /root/demo-openrc

12. Build the glance backend storage

Because this is an HA environment, the three controller nodes must share a backend store: a request may land on any controller's glance service, and without a shared image pool you will hit "image not found" errors when creating VMs.

Here we use NFS for the glance backend. In real production environments Ceph or GlusterFS is the usual choice, but NFS is enough to illustrate how the shared backend is built.

First prepare a physical or virtual machine with plenty of disk space, ideally on a 10 Gb network.

Here we use the VM at 10.1.1.125.

First install the glance components on this machine:

yum install -y openstack-glance python-glance python-glanceclient python-openstackclient openstack-nova-compute

Then install the NFS services:

yum install -y nfs-utils rpcbind

Create the glance images directory and give the glance user ownership:

mkdir -p /var/lib/glance/images

chown -R glance:glance /var/lib/glance/images

Configure NFS to export the /var/lib/glance directory:

vim /etc/exports

/var/lib/glance *(rw,sync,no_root_squash)

Start the services and enable NFS at boot:

systemctl enable rpcbind

systemctl enable nfs-server.service

systemctl restart rpcbind

systemctl restart nfs-server.service

systemctl status nfs-server.service

nfs共享目錄生效

showmount -e

Then on the three controller nodes:

mount -t nfs 10.1.1.125:/var/lib/glance/images /var/lib/glance/images

echo "/usr/bin/mount -t nfs 10.1.1.125:/var/lib/glance/ /var/lib/glance/" >> /etc/rc.d/rc.local

chmod +x /etc/rc.d/rc.local

df -h
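Before uploading real images it is worth proving that the share really is common to all three controllers (the file name below is just a throwaway test):

# on controller1
touch /var/lib/glance/images/nfs_write_test
# on controller2 and controller3 the same file should now be visible
ls -l /var/lib/glance/images/nfs_write_test
# clean up afterwards
rm -f /var/lib/glance/images/nfs_write_test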

13. Download a test image on controller1

wget http://10.254.15.138/images/cirros-0.3.4-x86_64-disk.img

14. Upload the image to glance on controller1

source /root/admin-openrc

glance image-create --name "cirros-0.3.4-x86_64" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress

If you have built, say, a CentOS 6.7 image, it can be uploaded the same way, for example:

glance image-create --name "CentOS6.7-x86_64" --file CentOS6.7.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

List the images:

glance image-list

VII. Installing and Configuring Nova

1. Create the nova databases on controller1

CREATE DATABASE nova;

CREATE DATABASE nova_api;

2. Create the database user and grant privileges on controller1

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'gdxc1902';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'gdxc1902';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'gdxc1902';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'gdxc1902';

3. Create the nova user on controller1 and grant it the admin role

source /root/admin-openrc

openstack user create --domain default nova --password gdxc1902

openstack role add --project service --user nova admin

4. Create the compute service on controller1

openstack service create --name nova --description "OpenStack Compute" compute

5. Create the nova endpoints on controller1

openstack endpoint create --region RegionOne compute public http://10.1.1.140:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute internal http://10.1.1.140:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute admin http://10.1.1.140:8774/v2.1/%\(tenant_id\)s

6. Install the nova components on the three controller nodes

yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-cert openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler

7. Configure /etc/nova/nova.conf on the three controller nodes

cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

>/etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT debug False

openstack-config --set /etc/nova/nova.conf DEFAULT verbose True

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen_port 9774

openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 9775

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.141

openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True

openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_use_baremetal_filters False

openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_weight_classes nova.scheduler.weights.all_weighers

openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_host_subset_size 30

openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_driver nova.scheduler.filter_scheduler.FilterScheduler

openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_max_attempts 3

openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_available_filters nova.scheduler.filters.all_filters

openstack-config --set /etc/nova/nova.conf DEFAULT ram_allocation_ratio 3.0

openstack-config --set /etc/nova/nova.conf DEFAULT disk_allocation_ratio 1.0

openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 16.0

openstack-config --set /etc/nova/nova.conf DEFAULT service_down_time 180

openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_workers 4

openstack-config --set /etc/nova/nova.conf DEFAULT metadata_workers 4

openstack-config --set /etc/nova/nova.conf DEFAULT rootwrap_config /etc/nova/rootwrap.conf

openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state

openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host True

openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_host 10.1.1.141

openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_port 6080

openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:[email protected]/nova

openstack-config --set /etc/nova/nova.conf database idle_timeout 3600

openstack-config --set /etc/nova/nova.conf database max_pool_size 30

openstack-config --set /etc/nova/nova.conf database retry_interval 2

openstack-config --set /etc/nova/nova.conf database max_retries -1

openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:[email protected]/nova_api

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password gdxc1902

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_use_ssl False

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues True

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit amqp_durable_queues False

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://10.1.1.140:5000

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://10.1.1.140:35357

openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password

openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service

openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova

openstack-config --set /etc/nova/nova.conf keystone_authtoken password gdxc1902

openstack-config --set /etc/nova/nova.conf glance api_servers http://10.1.1.140:9292

openstack-config --set /etc/nova/nova.conf conductor use_local False

openstack-config --set /etc/nova/nova.conf conductor workers 4

openstack-config --set /etc/nova/nova.conf vnc enabled True

openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0

openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.141

openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://10.1.1.140:6080/vnc_auto.html

Note: remember to change the IPs and passwords on the other nodes.

scp -p /etc/nova/nova.conf controller2:/etc/nova/nova.conf

scp -p /etc/nova/nova.conf controller3:/etc/nova/nova.conf

8. Sync the nova databases on controller1

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage db sync" nova

9. Enable the services at boot on controller1

systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Enable the services at boot on controller2 and controller3 (note: unlike controller1, openstack-nova-consoleauth.service is left out):

systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Start the nova services on controller1:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Start the nova services on controller2 and controller3:

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl status openstack-nova-api.service openstack-nova-cert.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl list-unit-files |grep openstack-nova-*

10. Verify the nova services from any node

unset OS_TOKEN OS_URL

echo "export OS_REGION_NAME=RegionOne" >> admin-openrc

source /root/admin-openrc

nova service-list

openstack endpoint list

VIII. Installing and Configuring Neutron

1. Create the neutron database on controller1

CREATE DATABASE neutron;

2. Create the database user and grant privileges on controller1

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'gdxc1902';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'gdxc1902';

3. Create the neutron user on controller1 and grant it the admin role

source /root/admin-openrc

openstack user create --domain default neutron --password gdxc1902

openstack role add --project service --user neutron admin

4. Create the network service entry on controller1

openstack service create --name neutron --description "OpenStack Networking" network

5. Create the neutron endpoints on controller1

openstack endpoint create --region RegionOne network public http://10.1.1.140:9696

openstack endpoint create --region RegionOne network internal http://10.1.1.140:9696

openstack endpoint create --region RegionOne network admin http://10.1.1.140:9696

6. Install the neutron packages on all three controller nodes

yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

7. Configure /etc/neutron/neutron.conf on all three controller nodes

cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

>/etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT debug False

openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose true

openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host controller1

openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_port 9797

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin neutron.plugins.ml2.plugin.Ml2Plugin

openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True

openstack-config --set /etc/neutron/neutron.conf DEFAULT advertise_mtu True

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180

openstack-config --set /etc/neutron/neutron.conf DEFAULT mac_generation_retries 32

openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_lease_duration 600

openstack-config --set /etc/neutron/neutron.conf DEFAULT global_physnet_mtu 1500

openstack-config --set /etc/neutron/neutron.conf DEFAULT control_exchange neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT api_workers 4

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_workers 4

openstack-config --set /etc/neutron/neutron.conf DEFAULT agent_down_time 75

openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2

openstack-config --set /etc/neutron/neutron.conf DEFAULT router_distributed False

openstack-config --set /etc/neutron/neutron.conf DEFAULT router_scheduler_driver neutron.scheduler.l3_agent_scheduler.ChanceScheduler

openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_automatic_l3agent_failover True

openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True

openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 0

openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2

openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:[email protected]/neutron

openstack-config --set /etc/neutron/neutron.conf database idle_timeout 3600

openstack-config --set /etc/neutron/neutron.conf database max_pool_size 30

openstack-config --set /etc/neutron/neutron.conf database max_retries -1

openstack-config --set /etc/neutron/neutron.conf database retry_interval 2

openstack-config --set /etc/neutron/neutron.conf database max_overflow 60

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password gdxc1902

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues True

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit amqp_durable_queues False

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://10.1.1.140:5000

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://10.1.1.140:35357

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password gdxc1902

openstack-config --set /etc/neutron/neutron.conf nova auth_url http://10.1.1.140:35357

openstack-config --set /etc/neutron/neutron.conf nova auth_type password

openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default

openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default

openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne

openstack-config --set /etc/neutron/neutron.conf nova project_name service

openstack-config --set /etc/neutron/neutron.conf nova username nova

openstack-config --set /etc/neutron/neutron.conf nova password gdxc1902

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

openstack-config --set /etc/neutron/neutron.conf agent report_interval 30

openstack-config --set /etc/neutron/neutron.conf agent root_helper sudo\ neutron-rootwrap\ /etc/neutron/rootwrap.conf

scp -p /etc/neutron/neutron.conf controller2:/etc/neutron/neutron.conf

scp -p /etc/neutron/neutron.conf controller3:/etc/neutron/neutron.conf
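
Since bind_host was set to controller1 above, a hedged sketch of the per-node fix after copying the file: on controller2 run

openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host controller2

and set it to controller3 on controller3.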

8. Configure /etc/neutron/plugins/ml2/ml2_conf.ini on all three controller nodes

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 path_mtu 1500

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True

scp -p /etc/neutron/plugins/ml2/ml2_conf.ini controller2:/etc/neutron/plugins/ml2/ml2_conf.ini

scp -p /etc/neutron/plugins/ml2/ml2_conf.ini controller3:/etc/neutron/plugins/ml2/ml2_conf.ini

9. Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini on all three controller nodes

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini DEFAULT debug false

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.141

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

scp -p /etc/neutron/plugins/ml2/linuxbridge_agent.ini controller2:/etc/neutron/plugins/ml2/linuxbridge_agent.ini

scp -p /etc/neutron/plugins/ml2/linuxbridge_agent.ini controller3:/etc/neutron/plugins/ml2/linuxbridge_agent.ini
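
local_ip must also be node-specific; assuming controller2 and controller3 sit at 10.2.2.142 and 10.2.2.143 on the tunnel network, a sketch for controller2:

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.142

(use 10.2.2.143 on controller3).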

Note: eth0 here is the public NIC. The interface you map must be the one with external connectivity; if you map an internal-only NIC, the VMs will be cut off from the outside world.

10. Configure /etc/neutron/l3_agent.ini on all three controller nodes

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT debug false

scp -p /etc/neutron/l3_agent.ini controller2:/etc/neutron/l3_agent.ini

scp -p /etc/neutron/l3_agent.ini controller3:/etc/neutron/l3_agent.ini

11. Configure /etc/neutron/dhcp_agent.ini on all three controller nodes

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT verbose True

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT debug false

scp -p /etc/neutron/dhcp_agent.ini controller2:/etc/neutron/dhcp_agent.ini

scp -p /etc/neutron/dhcp_agent.ini controller3:/etc/neutron/dhcp_agent.ini

12. Reconfigure /etc/nova/nova.conf on all three controller nodes; the purpose of this step is to let nova (and therefore the compute nodes) use neutron networking

openstack-config --set /etc/nova/nova.conf neutron url http://10.1.1.140:9696

openstack-config --set /etc/nova/nova.conf neutron auth_url http://10.1.1.140:35357

openstack-config --set /etc/nova/nova.conf neutron auth_plugin password

openstack-config --set /etc/nova/nova.conf neutron project_domain_id default

openstack-config --set /etc/nova/nova.conf neutron user_domain_id default

openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne

openstack-config --set /etc/nova/nova.conf neutron project_name service

openstack-config --set /etc/nova/nova.conf neutron username neutron

openstack-config --set /etc/nova/nova.conf neutron password gdxc1902

openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True

openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret gdxc1902

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

13. On all three controller nodes, write dhcp-option-force=26,1450 into /etc/neutron/dnsmasq-neutron.conf

echo "dhcp-option-force=26,1450" > /etc/neutron/dnsmasq-neutron.conf

14. Configure /etc/neutron/metadata_agent.ini on all three controller nodes

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 10.1.1.140

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret gdxc1902

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_workers 4

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT debug false

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_protocol http

scp -p /etc/neutron/metadata_agent.ini controller2:/etc/neutron/metadata_agent.ini

scp -p /etc/neutron/metadata_agent.ini controller3:/etc/neutron/metadata_agent.ini

15. Create the plugin symlink on all three controller nodes

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

16. Sync the neutron database on controller1

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

17. Restart the nova services on all three controller nodes, since nova.conf was just changed

systemctl restart openstack-nova-api.service

systemctl status openstack-nova-api.service

18. Restart the neutron services on all three controller nodes and enable them at boot

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

19. Start neutron-l3-agent.service on all three controller nodes and enable it at boot

systemctl enable neutron-l3-agent.service

systemctl restart neutron-l3-agent.service

systemctl status neutron-l3-agent.service

20. Verify from any node

source /root/admin-openrc

neutron agent-list

21. Create the vxlan-mode networks so that virtual machines can reach the outside world

a. First source the environment variables

source /root/admin-openrc

b. Create the flat-mode public network; note that this is the external (egress) network and it must be flat

neutron --debug net-create --shared provider --router:external True --provider:network_type flat --provider:physical_network provider

After running this command, go to the dashboard and mark the public network as shared and external, then confirm it was created as expected.

c. Create the public subnet, named provider-sub, on 10.254.15.160/27 with an allocation range of .162-.190 (these are typically the floating IPs handed out to VMs), DNS set to 218.30.26.68 and gateway 10.254.15.161

neutron subnet-create provider 10.254.15.160/27 --name provider-sub --allocation-pool start=10.254.15.162,end=10.254.15.190 --dns-nameserver 218.30.26.68 --gateway 10.254.15.161

d. Create a private network named private-test, using the vxlan network type

neutron net-create private-test --provider:network_type vxlan --router:external False --shared

e. Create its subnet, named private-subnet, on 192.168.1.0/24; this is the range from which virtual machines obtain their IP addresses

neutron subnet-create private-test --name private-subnet --gateway 192.168.1.1 192.168.1.0/24

For example, if your company's private cloud serves different business units (administration, sales, technology, and so on), you can create three separately named private networks:

neutron net-create private-office --provider:network_type vxlan --router:external False --shared

neutron subnet-create private-office --name office-subnet --gateway 192.168.2.1 192.168.2.0/24

neutron net-create private-sale --provider:network_type vxlan --router:external False --shared

neutron subnet-create private-sale --name sale-subnet --gateway 192.168.3.1 192.168.3.0/24

neutron net-create private-technology --provider:network_type vxlan --router:external False --shared

neutron subnet-create private-technology --name technology-subnet --gateway 192.168.4.1 192.168.4.0/24

22. Check the network services

neutron agent-list

IX. Install the dashboard

1. Install the dashboard packages

yum install openstack-dashboard -y

2. Modify the configuration file /etc/openstack-dashboard/local_settings

wget http://10.254.15.147/local_settings

Adjust the downloaded file as needed, then copy it into place and distribute it:

cp local_settings /etc/openstack-dashboard/

scp -p /etc/openstack-dashboard/local_settings controller2:/etc/openstack-dashboard/local_settings

scp -p /etc/openstack-dashboard/local_settings controller3:/etc/openstack-dashboard/local_settings

3. Start the dashboard services and enable them at boot

systemctl enable httpd.service memcached.service

systemctl restart httpd.service memcached.service

systemctl status httpd.service memcached.service
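
A quick smoke test, assuming the default /dashboard URL path of the CentOS horizon package and the 10.1.1.140 VIP:

curl -sI http://10.1.1.140/dashboard | head -n 1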

X. Install and configure cinder

1. Create the cinder database and user on controller1 and grant privileges

CREATE DATABASE cinder;

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'gdxc1902';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'gdxc1902';

2. Create the cinder user on controller1 and grant it the admin role

source /root/admin-openrc

openstack user create --domain default cinder --password gdxc1902

openstack role add --project service --user cinder admin

3. Create the Block Storage service entries on controller1

openstack service create --name cinder --description "OpenStack Block Storage" volume

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

4. Create the cinder endpoints on controller1

openstack endpoint create --region RegionOne volume public http://10.1.1.140:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volume internal http://10.1.1.140:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volume admin http://10.1.1.140:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 public http://10.1.1.140:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 internal http://10.1.1.140:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 admin http://10.1.1.140:8776/v2/%\(tenant_id\)s

5. Install the cinder packages on all three controller nodes

yum install -y openstack-cinder

6. Configure /etc/cinder/cinder.conf on all three controller nodes

cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

>/etc/cinder/cinder.conf

openstack-config --set /etc/cinder/cinder.conf DEFAULT debug False

openstack-config --set /etc/cinder/cinder.conf DEFAULT verbose True

openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.141

openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen_port 8778

openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v1_api True

openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v2_api True

openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v3_api True

openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.1.1.140:9292

openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2

openstack-config --set /etc/cinder/cinder.conf DEFAULT storage_availability_zone nova

openstack-config --set /etc/cinder/cinder.conf DEFAULT default_availability_zone nova

openstack-config --set /etc/cinder/cinder.conf DEFAULT allow_availability_zone_fallback True

openstack-config --set /etc/cinder/cinder.conf DEFAULT service_down_time 180

openstack-config --set /etc/cinder/cinder.conf DEFAULT report_interval 10

openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_workers 4

openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_force_upload True

openstack-config --set /etc/cinder/cinder.conf DEFAULT rootwrap_config /etc/cinder/rootwrap.conf

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:[email protected]/cinder

openstack-config --set /etc/cinder/cinder.conf database idle_timeout 3600

openstack-config --set /etc/cinder/cinder.conf database max_pool_size 30

openstack-config --set /etc/cinder/cinder.conf database max_retries -1

openstack-config --set /etc/cinder/cinder.conf database retry_interval 2

openstack-config --set /etc/cinder/cinder.conf database max_overflow 60

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password gdxc1902

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues True

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_use_ssl False

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit amqp_durable_queues False

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://10.1.1.140:5000

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://10.1.1.140:35357

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password gdxc1902

openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

scp -p /etc/cinder/cinder.conf controller2:/etc/cinder/cinder.conf

scp -p /etc/cinder/cinder.conf controller3:/etc/cinder/cinder.conf
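
As with nova, my_ip is node-specific; a hedged sketch assuming the management addresses above: on controller2 run

openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.142

and use 10.1.1.143 on controller3.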

7. Sync the cinder database on controller1

su -s /bin/sh -c "cinder-manage db sync" cinder

8. Start the cinder services on all three controller nodes and enable them at boot

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

9. Set up the cinder storage node; it needs an extra disk (/dev/sdb) for the volume service. (Note: this step is performed on the cinder node.)

yum install lvm2 -y

10. Start the service and enable it at boot (Note: this step is performed on the cinder node)

systemctl enable lvm2-lvmetad.service

systemctl start lvm2-lvmetad.service

systemctl status lvm2-lvmetad.service

11. Create the LVM physical volume and volume group; /dev/sdb here is the extra disk added earlier (Note: this step is performed on the cinder node)

fdisk -l

pvcreate /dev/sdb

vgcreate cinder-volumes /dev/sdb

12. Edit lvm.conf on the storage node

vim /etc/lvm/lvm.conf

In the devices section (around line 129) add: filter = ["a/sda/","a/sdb/","r/.*/"]

Then restart the lvm2 service:

systemctl restart lvm2-lvmetad.service

systemctl status lvm2-lvmetad.service
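
A quick sanity check with the standard LVM tools to confirm the new filter still exposes the cinder disk and volume group:

pvs

vgs cinder-volumes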

13. Install openstack-cinder and targetcli (Note: this step is performed on the cinder node)

yum install openstack-cinder openstack-utils python-keystone scsi-target-utils targetcli ntpdate -y

14. Configure the cinder configuration file (Note: this step is performed on the cinder node)

cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

>/etc/cinder/cinder.conf

openstack-config --set /etc/cinder/cinder.conf DEFAULT debug False

openstack-config --set /etc/cinder/cinder.conf DEFAULT verbose True

openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.1.146

openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm

openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.1.1.140:9292

openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2

openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v1_api True

openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v2_api True

openstack-config --set /etc/cinder/cinder.conf DEFAULT enable_v3_api True

openstack-config --set /etc/cinder/cinder.conf DEFAULT storage_availability_zone nova

openstack-config --set /etc/cinder/cinder.conf DEFAULT default_availability_zone nova

openstack-config --set /etc/cinder/cinder.conf DEFAULT service_down_time 180

openstack-config --set /etc/cinder/cinder.conf DEFAULT report_interval 10

openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_workers 4

openstack-config --set /etc/cinder/cinder.conf DEFAULT os_region_name RegionOne

openstack-config --set /etc/cinder/cinder.conf DEFAULT api_paste_config /etc/cinder/api-paste.ini

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password gdxc1902

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues True

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_use_ssl False

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit amqp_durable_queues False

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:[email protected]/cinder

openstack-config --set /etc/cinder/cinder.conf database idle_timeout 3600

openstack-config --set /etc/cinder/cinder.conf database max_pool_size 30

openstack-config --set /etc/cinder/cinder.conf database max_retries -1

openstack-config --set /etc/cinder/cinder.conf database retry_interval 2

openstack-config --set /etc/cinder/cinder.conf database max_overflow 60

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://10.1.1.140:5000

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://10.1.1.140:35357

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password gdxc1902

openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver

openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes

openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi

openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm

openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

15. Start openstack-cinder-volume and target and enable them at boot (Note: this step is performed on the cinder node)

systemctl enable openstack-cinder-volume.service target.service

systemctl restart openstack-cinder-volume.service target.service

systemctl status openstack-cinder-volume.service target.service

16. Verify from any node that the cinder services are healthy

source /root/admin-openrc

cinder service-list

netstat -ntlp|grep 3260

17. Useful commands

cinder service-list //list the cinder services

cinder-manage service remove cinder-volume controller1 //remove a stale cinder service entry

http://blog.csdn.net/qq806692341/article/details/52397440 //summary of cinder commands

XI. Compute node deployment

1. Install the required packages

yum install -y openstack-selinux python-openstackclient yum-plugin-priorities openstack-nova-compute openstack-utils ntpdate

2. Configure nova.conf

cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

>/etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT debug False

openstack-config --set /etc/nova/nova.conf DEFAULT verbose True

openstack-config --set /etc/nova/nova.conf DEFAULT force_raw_images True

openstack-config --set /etc/nova/nova.conf DEFAULT remove_unused_original_minimum_age_seconds 86400

openstack-config --set /etc/nova/nova.conf DEFAULT image_service nova.image.glance.GlanceImageService

openstack-config --set /etc/nova/nova.conf DEFAULT use_cow_images True

openstack-config --set /etc/nova/nova.conf DEFAULT heal_instance_info_cache_interval 60

openstack-config --set /etc/nova/nova.conf DEFAULT notify_on_state_change vm_and_task_state

openstack-config --set /etc/nova/nova.conf DEFAULT rootwrap_config /etc/nova/rootwrap.conf

openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host True

openstack-config --set /etc/nova/nova.conf DEFAULT connection_type libvirt

openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit True

openstack-config --set /etc/nova/nova.conf DEFAULT instance_usage_audit_period hour

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.144

openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True

openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal False

openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 30

openstack-config --set /etc/nova/nova.conf DEFAULT resume_guests_state_on_host_boot True

openstack-config --set /etc/nova/nova.conf DEFAULT api_rate_limit False

openstack-config --set /etc/nova/nova.conf DEFAULT block_device_allocate_retries_interval 3

openstack-config --set /etc/nova/nova.conf DEFAULT network_device_mtu 1500

openstack-config --set /etc/nova/nova.conf DEFAULT report_interval 60

openstack-config --set /etc/nova/nova.conf DEFAULT remove_unused_base_images False

openstack-config --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 512

openstack-config --set /etc/nova/nova.conf DEFAULT service_down_time 180

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password gdxc1902

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_use_ssl False

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues True

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit amqp_durable_queues False

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://10.1.1.140:5000

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://10.1.1.140:35357

openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password

openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service

openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova

openstack-config --set /etc/nova/nova.conf keystone_authtoken password gdxc1902

openstack-config --set /etc/nova/nova.conf vnc enabled True

openstack-config --set /etc/nova/nova.conf vnc keymap en-us

openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0

openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.144

openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://10.1.1.140:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance api_servers http://10.1.1.140:9292

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm

openstack-config --set /etc/nova/nova.conf libvirt cpu_mode host-model

Note: if you are running on physical hardware, make sure virt_type is set to kvm.

Live migration prerequisites (a quick check follows this list):

The source and destination nodes must have the same CPU type.

The source and destination nodes must run the same libvirt version.

The source and destination nodes must be able to resolve each other's hostnames, for example by adding them to /etc/hosts.
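
A minimal way to check these prerequisites on each node, assuming compute2 is the (hypothetical) name of the peer node:

libvirtd --version

lscpu | grep 'Model name'

getent hosts compute2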

vim /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf libvirt block_migration_flag VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC

openstack-config --set /etc/nova/nova.conf libvirt live_migration_flag VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST

Note: if the CPU models differ (one node has an older CPU, the other a newer one), VMs can be live- or cold-migrated from the older CPU to the newer one, but not the other way around. To migrate from the newer CPU to the older one, add the following settings:

vim /etc/nova/nova.conf

[libvirt]組額外新增下面兩引數:

openstack-config --set /etc/nova/nova.conf libvirt libvirt_cpu_mode custom

openstack-config --set /etc/nova/nova.conf libvirt libvirt_cpu_model kvm64

Modify /etc/sysconfig/libvirtd and /etc/libvirt/libvirtd.conf:

sed -i 's/#listen_tls = 0/listen_tls = 0/g' /etc/libvirt/libvirtd.conf

sed -i 's/#listen_tcp = 1/listen_tcp = 1/g' /etc/libvirt/libvirtd.conf

sed -i 's/#auth_tcp = "sasl"/auth_tcp = "none"/g' /etc/libvirt/libvirtd.conf

sed -i 's/#LIBVIRTD_ARGS="--listen"/LIBVIRTD_ARGS="--listen"/g' /etc/sysconfig/libvirtd
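
After these edits libvirtd has to be restarted to pick up the TCP listener (port 16509 by default); a quick check, assuming net-tools is installed:

systemctl restart libvirtd

netstat -ntlp | grep libvirtd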

On the nfs-backend node:

mkdir -p /var/lib/nova/instances

mkdir -p /var/lib/glance/imagecache

Then add the following to /etc/exports:

/var/lib/nova/instances *(rw,sync,no_root_squash)

/var/lib/glance/imagecache *(rw,sync,no_root_squash)

Restart the NFS services:

systemctl restart rpcbind

systemctl restart nfs-server

Check that the NFS exports are active:

showmount -e

Mount the shared directories on the compute node:

mount -t nfs 10.1.1.125:/var/lib/nova/instances /var/lib/nova/instances

mount -t nfs 10.1.1.125:/var/lib/glance/imagecache /var/lib/nova/instances/_base

echo "/usr/bin/mount -t nfs 10.1.1.125:/var/lib/nova/instances /var/lib/nova/instances" >> /etc/rc.d/rc.local

echo "/usr/bin/mount -t nfs 10.1.1.125:/var/lib/glance/imagecache /var/lib/nova/instances/_base" >> /etc/rc.d/rc.local

cd /var/lib/nova

chown -R nova:nova instances/

chown -R nova:nova instances/_base

chmod +x /etc/rc.d/rc.local

cat /etc/rc.d/rc.local

df -h

nova-manage vm list

nova live-migration ID compute2

nova-manage vm list

3. Enable libvirtd.service and openstack-nova-compute.service at boot

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl restart libvirtd.service openstack-nova-compute.service

systemctl status libvirtd.service openstack-nova-compute.service

4. Add the environment variable files

cat <<END >/root/admin-openrc

cat <<END >/root/demo-openrc

5. Verify

source /root/admin-openrc

openstack compute service list

6. Install the neutron packages

yum install -y openstack-neutron-linuxbridge ebtables ipset

7. Configure neutron.conf

cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

>/etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT debug False

openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180

openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host compute1

openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_lease_duration 600

openstack-config --set /etc/neutron/neutron.conf DEFAULT global_physnet_mtu 1500

openstack-config --set /etc/neutron/neutron.conf DEFAULT advertise_mtu True

openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2

openstack-config --set /etc/neutron/neutron.conf DEFAULT control_exchange neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://10.1.1.140:8774/v2

openstack-config --set /etc/neutron/neutron.conf agent root_helper sudo

openstack-config --set /etc/neutron/neutron.conf agent report_interval 10

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672,controller2:5672,controller3:5672

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password gdxc1902

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_use_ssl False

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues True

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0

openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit amqp_durable_queues False

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://10.1.1.140:5000

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://10.1.1.140:35357

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password gdxc1902

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

8. Configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

>/etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth1

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.144

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

9. Configure nova.conf so that nova works with neutron

openstack-config --set /etc/nova/nova.conf neutron url http://10.1.1.140:9696

openstack-config --set /etc/nova/nova.conf neutron auth_url http://10.1.1.140:35357

openstack-config --set /etc/nova/nova.conf neutron auth_type password

openstack-config --set /etc/nova/nova.conf neutron project_domain_name default

openstack-config --set /etc/nova/nova.conf neutron user_domain_name default

openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne

openstack-config --set /etc/nova/nova.conf neutron project_name service

openstack-config --set /etc/nova/nova.conf neutron username neutron

openstack-config --set /etc/nova/nova.conf neutron password gdxc1902

10. Restart and enable the related services

systemctl restart libvirtd.service openstack-nova-compute.service

systemctl enable neutron-linuxbridge-agent.service && systemctl restart neutron-linuxbridge-agent.service

systemctl status openstack-nova-compute.service neutron-linuxbridge-agent.service

11. If the compute node is to use cinder, configure nova.conf accordingly (Note: this step is performed on the compute node)

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

systemctl restart libvirtd.service openstack-nova-compute.service

systemctl status libvirtd.service openstack-nova-compute.service

12. Then restart the nova API service on the three controller nodes

systemctl restart openstack-nova-api.service

systemctl status openstack-nova-api.service

13. Verify

source /root/admin-openrc

neutron ext-list

neutron agent-list

At this point the compute node deployment is complete; run nova host-list to see the newly added compute1 node.

To add another compute node, simply repeat the steps above, remembering to change the hostname and IP addresses (see the sketch below).
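
As a hedged illustration, the node-specific values that would change for a hypothetical compute2 at 10.1.1.145 (management) / 10.2.2.145 (tunnel):

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.1.145

openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 10.1.1.145

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.2.2.145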

Appendix:

Commands to create flavors:

openstack flavor create m1.tiny --id 1 --ram 512 --disk 1 --vcpus 1

openstack flavor create m1.small --id 2 --ram 2048 --disk 20 --vcpus 1

openstack flavor create m1.medium --id 3 --ram 4096 --disk 40 --vcpus 2

openstack flavor create m1.large --id 4 --ram 8192 --disk 80 --vcpus 4

openstack flavor create m1.xlarge --id 5 --ram 16384 --disk 160 --vcpus 8

openstack flavor list

https://github.com/gaelL/openstack-log-colorizer/ //a tool to colorize log files

wget -O /usr/local/bin/openstack_log_colorizer https://raw.githubusercontent.com/gaelL/openstack-log-colorizer/master/openstack_log_colorizer

chmod +x /usr/local/bin/openstack_log_colorizer

cat log | openstack_log_colorizer --level warning

cat log | openstack_log_colorizer --include error TRACE

cat log | openstack_log_colorizer --exclude INFO warning

Scheduled cron jobs:

crontab -e //edit the cron jobs

* * * * * source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py

* * * * * sleep 10; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py

* * * * * sleep 20; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py

* * * * * sleep 30; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py

* * * * * sleep 40; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py

* * * * * sleep 50; source /root/admin-openrc && /usr/bin/python /root/ln_all_images.py

crontab -l //list the cron jobs

systemctl restart crond.service

tail -f <log file> //follow a log file

nova reset-state --active ID //reset the VM state; it returns to normal after a reboot

Automatic hard-link script:

vim ln_all_image.py

import os
import logging
import logging.handlers
import hashlib
import commands

# Log setup: rotate ln_all_image.log at 1 MB, keep 5 backups
LOG_FILE = 'ln_all_image.log'
handler = logging.handlers.RotatingFileHandler(LOG_FILE, maxBytes=1024*1024, backupCount=5)
fmt = '%(asctime)s - %(filename)s:%(lineno)s - %(name)s - %(message)s'
formatter = logging.Formatter(fmt)
handler.setFormatter(formatter)
logger = logging.getLogger('images')
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

#image_list = commands.getoutput("ls -l -tr /var/lib/glance/images | awk 'NR>1{ print $NF }'").strip().split('\n')
image_list = commands.getoutput("""glance image-list |awk -F"|" '{print $2}'|grep -v -E '(ID|^$)'""").strip().split()
status = commands.getoutput("""openstack image list |awk 'NR>2{print $6}'|grep -v -E '(ID|^$)'""").strip().split()
queued = "queued"
saving = "saving"
#print status
#print type(status)

if queued in status or saving in status:
    # A snapshot is still being uploaded: skip the newest entries and hard-link the rest
    image_list_1 = commands.getoutput("ls -l -tr /var/lib/glance/images | awk 'NR>1{l[NR]=$0} END {for (i=1;i<=NR-3;i++)print l[i]}' | awk '{print $9}' |grep -v ^$").strip().split()
    logger.info('new snapshoot is create now...')
    for ida in image_list_1:
        image_id = ida.strip()
        image_id_hash = hashlib.sha1()
        image_id_hash.update(ida)
        newid1 = image_id_hash.hexdigest()
        commands.getoutput('ln /var/lib/glance/images/{0} /var/lib/glance/imagecache/{1}'.format(ida, newid1))
        commands.getoutput('chown qemu:qemu /var/lib/glance/imagecache/{0}'.format(newid1))
        commands.getoutput('chmod 644 /var/lib/glance/imagecache/{0}'.format(newid1))
else:
    # No snapshot in progress: hard-link every image into the cache directory
    image_list_2 = commands.getoutput("ls -l -tr /var/lib/glance/images | awk 'NR>1{ print $NF }'").strip().split()
    logger.info('no image take snapshoot,ln all images...')
    for idb in image_list_2:
        image_id = idb.strip()
        image_id_hash = hashlib.sha1()
        image_id_hash.update(idb)
        newid2 = image_id_hash.hexdigest()
        commands.getoutput('ln /var/lib/glance/images/{0} /var/lib/glance/imagecache/{1}'.format(idb, newid2))
        commands.getoutput('chown qemu:qemu /var/lib/glance/imagecache/{0}'.format(newid2))
        commands.getoutput('chmod 644 /var/lib/glance/imagecache/{0}'.format(newid2))

XII. Add the services and resources to pacemaker

0. Pacemaker parameter reference

The primitive syntax:

primitive <unique-id> <agent-class>:<agent-provider>:<agent-name>

params attr_list

meta attr_list

op op_type [<attribute>=<value>...]

Primitive parameter notes:

Resource agent class: lsb, ocf, stonith, service

Resource agent provider: heartbeat, pacemaker

Resource agent name: the resource agent itself, e.g. IPaddr2, httpd, mysql

params: instance attributes, parameters specific to the resource class; they determine how the resource class behaves and which service instance it controls.

meta: meta attributes, options that can be added to a resource; they tell the CRM how to treat that particular resource.

op: operations. By default the cluster does not keep verifying that a resource stays healthy; to make it do so, add a monitor operation to the resource definition. A monitor can be added for any class or resource agent.

op_type: one of start, stop or monitor.

interval: how often to run the operation, in seconds.

timeout: how long to wait before declaring the operation failed.

requires: what must be true before this operation may run. Allowed values: nothing, quorum, fencing. The default depends on whether fencing is enabled and on whether the resource class is stonith; for STONITH resources the default is nothing.

on-fail: the action to take when this operation fails. Allowed values:

ignore: pretend the resource did not fail.

block: perform no further operations on the resource.

stop: stop the resource and do not start it anywhere else.

restart: stop the resource and restart it (possibly on a different node).

fence: fence (power off) the node on which the resource failed (STONITH).

standby: move all resources away from the node on which the resource failed.

enabled: if false, treat the operation as if it did not exist. Allowed values: true, false.

Example:

primitive r0 ocf:linbit:drbd \

params drbd_resource=r0 \

op monitor role=Master interval=60s \

op monitor role=Slave interval=300s

Meta attribute notes:

priority: if not all resources can be active, the cluster stops lower-priority resources in order to keep higher-priority ones running.

target-role: the state the cluster should keep this resource in, Started or Stopped.

is-managed: whether the cluster may start and stop the resource; true or false.

migration-threshold: the number of failures tolerated for the resource. Suppose a location constraint prefers a particular node; when the resource fails there, the cluster compares the fail count against migration-threshold, and once failcount >= migration-threshold the resource is moved to the next preferred node.

By default, once the threshold is reached, the failed resource may run on that node again only after the administrator manually resets its fail count (after fixing the cause of the failure).

The fail count can, however, be made to expire via the resource's failure-timeout option. With migration-threshold=2 and failure-timeout=60s, the resource migrates to a new node after two failures and may be allowed back after one minute (depending on stickiness and constraint scores).

There are two exceptions to the migration-threshold concept, for start failures and stop failures. A start failure sets the fail count to INFINITY and therefore always causes an immediate migration. A stop failure triggers fencing (the default when stonith-enabled is true); if no STONITH resource is defined (or stonith-enabled is false), the resource is not migrated at all.

failure-timeout: how many seconds to wait before behaving as if the failure had never happened (and allowing the resource to return to the node where it failed); default 0 (disabled).

resource-stickiness: how strongly the resource prefers to stay where it currently is, i.e. stickiness; default 0.

multiple-active: what the cluster should do if the resource is found active on more than one node:

block (mark the resource unmanaged), stop_only (stop all active instances), stop_start (the default: stop all active instances and start the resource on one node)

requires: the conditions under which the resource can be started. The default is fencing, except in the following cases:

*nothing - the cluster can always start the resource;

*quorum - the cluster can start the resource only when a majority of nodes is online; this is the default when stonith-enabled is false or when the resource class is stonith;

*fencing - the cluster can start the resource only when a majority of nodes is online and any failed or unknown nodes have been powered off;

*unfencing - the cluster can start the resource only when a majority of nodes is online, any failed or unknown nodes have been powered off, and only on nodes that have been unfenced; this is the default when a fencing device sets the stonith meta parameter provides=unfencing.
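
To illustrate how these meta attributes are used, a minimal sketch with a throwaway ocf:heartbeat:Dummy resource (p_dummy is purely illustrative and not part of this deployment):

crm configure primitive p_dummy ocf:heartbeat:Dummy \

op monitor interval=30s timeout=20s \

meta migration-threshold=2 failure-timeout=60s resource-stickiness=100

With these values the resource is moved away after two failures, and its fail count expires after 60 seconds, subject to stickiness and constraint scores.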

1. Add the rabbitmq service to pcs

rabbitmqpcs資源在/usr/lib/ocf/resource.d/rabbitmq

On every controller node:

systemctl disable rabbitmq-server

primitive rabbitmq-server systemd:rabbitmq-server \

op start interval=0s timeout=30 \

op stop interval=0s timeout=30 \

op monitor interval=30 timeout=30 \

meta priority=100 target-role=Started

clone rabbitmq-server-clone rabbitmq-server meta target-role=Started

commit

controller1上操作:

cat /var/lib/rabbitmq/.erlang.cookie //view the local rabbitmq cookie

crm configure

primitive p_rabbitmq-server ocf:rabbitmq:rabbitmq-server-ha \

params erlang_cookie=NGVCLPABVAERDMWKMGYT node_port=5672 \

op monitor interval=30 timeout=60 \

op monitor interval=27 role=Master timeout=60 \

op start interval=0s timeout=360 \

op stop interval=0s timeout=120 \

op promote interval=0 timeout=120 \

op demote interval=0 timeout=120 \

op notify interval=0 timeout=180 \

meta migration-threshold=10 failure-timeout=30s resource-stickiness=100

ms p_rabbitmq-server-master p_rabbitmq-server \

meta interleave=true master-max=1 master-node-max=1 notify=true ordered=false requires=nothing target-role=Started

commit

It takes roughly 4-5 minutes after adding the resources before the services are taken over:

crm status
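
One way to confirm that the master/slave set has come up (assuming the resource names used above):

crm_mon -1 | grep -A 3 p_rabbitmq-server-master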

2. Add haproxy to pcs

On every controller node:

systemctl disable haproxy

controller1上操作:

crm configure

primitive haproxy systemd:haproxy \

op start interval=0s timeout=20 \

op stop interval=0s timeout=20 \

op monitor interval=20s timeout=30s \

meta priority=100 target-role=Started

Bind the haproxy service to the VIPs:

colocation haproxy-with-vip_management inf: vip_management:Started haproxy:Started

colocation haproxy-with-vip_public inf: vip_public:Started haproxy:Started

verify

commit

3. Add httpd and memcached to pcs

On every controller node:

systemctl disable httpd

systemctl disable memcached

congtroller1上操作:

primitive httpd systemd:httpd \

op start interval=0s timeout=30s \

op stop interval=0s timeout=30s \

op monitor interval=30s timeout=30s \

meta priority=100 target-role=Started

primitive memcached systemd:memcached \

op start interval=0s timeout=30s \

op stop interval=0s timeout=30s \

op monitor interval=30s timeout=30s \

meta priority=100 target-role=Started

clone openstack-dashboard-clone httpd meta target-role=Started

clone openstack-memcached-clone memcached meta target-role=Started

commit

4. Add the glance services to pcs

On every controller node:

systemctl disable openstack-glance-api openstack-glance-registry

On controller1:

primitive openstack-glance-api systemd:openstack-glance-api \

op start interval=0s timeout=30s \

op stop interval=0s timeout=30s \

op monitor interval=30s timeout=30s \

meta priority=100 target-role=Started

primitive openstack-glance-registry systemd:openstack-glance-registry \

op start interval=0s timeout=30s \

op stop interval=0s timeout=30s \

op monitor interval=30s timeout=30s \

meta priority=100 target-role=Started

clone openstack-glance-api-clone openstack-glance-api \

meta target-role=Started

clone openstack-glance-registry-clone openstack-glance-registry \

meta target-role=Started

commit

5. Add the nova services to pcs

On every controller node:

systemctl disable openstack-nova-api openstack-nova-cert openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy

On controller1:

primitive openstack-nova-api systemd:openstack-nova-api \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-nova-cert systemd:openstack-nova-cert \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-nova-consoleauth systemd:openstack-nova-consoleauth \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-nova-scheduler systemd:openstack-nova-scheduler \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-nova-conductor systemd:openstack-nova-conductor \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-nova-novncproxy systemd:openstack-nova-novncproxy \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

clone openstack-nova-api-clone openstack-nova-api \

meta target-role=Started

clone openstack-nova-cert-clone openstack-nova-cert \

meta target-role=Started

clone openstack-nova-scheduler-clone openstack-nova-scheduler \

meta target-role=Started

clone openstack-nova-conductor-clone openstack-nova-conductor \

meta target-role=Started

clone openstack-nova-novncproxy-clone openstack-nova-novncproxy \

meta target-role=Started

commit

6. Add the cinder services to pcs

On every controller node:

systemctl disable openstack-cinder-api openstack-cinder-scheduler

primitive openstack-cinder-api systemd:openstack-cinder-api \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-cinder-scheduler systemd:openstack-cinder-scheduler \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

clone openstack-cinder-api-clone openstack-cinder-api \

meta target-role=Started

clone openstack-cinder-scheduler-clone openstack-cinder-scheduler \

meta target-role=Started

commit

7. Add the neutron services to pcs

On every controller node:

systemctl disable neutron-server neutron-l3-agent neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

primitive openstack-neutron-server systemd:neutron-server \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-neutron-l3-agent systemd:neutron-l3-agent \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-neutron-linuxbridge-agent systemd:neutron-linuxbridge-agent \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-neutron-dhcp-agent systemd:neutron-dhcp-agent \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

primitive openstack-neutron-metadata-agent systemd:neutron-metadata-agent \

op start interval=0s timeout=45 \

op stop interval=0s timeout=45 \

op monitor interval=30s timeout=30 \

meta priority=100 target-role=Started

clone openstack-neutron-server-clone openstack-neutron-server \

meta target-role=Started

clone openstack-neutron-l3-agent-clone openstack-neutron-l3-agent \

meta target-role=Started

clone openstack-neutron-linuxbridge-agent-clone openstack-neutron-linuxbridge-agent \

meta target-role=Started

clone openstack-neutron-dhcp-agent-clone openstack-neutron-dhcp-agent \

meta target-role=Started

clone openstack-neutron-metadata-agent-clone openstack-neutron-metadata-agent \

meta target-role=Started

commit

8. Troubleshooting

crm resource cleanup rabbitmq-server-clone

crm resource cleanup openstack-dashboard-clone

crm resource cleanup openstack-memcached-clone

crm resource cleanup openstack-glance-api-clone

crm resource cleanup openstack-glance-registry-clone

crm resource cleanup openstack-nova-api-clone

crm resource cleanup openstack-nova-cert-clone

crm resource cleanup openstack-nova-scheduler-clone

crm resource cleanup openstack-nova-conductor-clone

crm resource cleanup openstack-nova-novncproxy-clone

crm resource cleanup openstack-cinder-api-clone

crm resource cleanup openstack-cinder-scheduler-clone

crm resource cleanup openstack-neutron-server-clone

crm resource cleanup openstack-neutron-l3-agent-clone

crm resource cleanup openstack-neutron-linuxbridge-agent-clone

crm resource cleanup openstack-neutron-dhcp-agent-clone

crm resource cleanup openstack-neutron-metadata-agent-clone
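
To see which clones are failing, and their fail counts, before cleaning them up, a one-shot status that includes fail counts can help:

crm_mon -1 -r -f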

Nova configuration for a Windows Server 2016 compute node:

[libvirt]

cpu_mode=host-passthrough

To download a whole directory with sftp, cd into that directory first and then use get -r ./. to fetch everything inside it.