5. Deploying PCS
5.1 Install pcs + pacemaker + corosync (controller1, controller2, and controller3)
Install pcs, pacemaker, and corosync on all control nodes. Pacemaker is the resource manager; corosync provides the heartbeat mechanism.
[root@controller1:/root]# yum install -y lvm2 cifs-utils quota psmisc pcs pacemaker corosync fence-agents-all resource-agents crmsh
[root@controller2:/root]# yum install -y lvm2 cifs-utils quota psmisc pcs pacemaker corosync fence-agents-all resource-agents crmsh
[root@controller3:/root]# yum install -y lvm2 cifs-utils quota psmisc pcs pacemaker corosync fence-agents-all resource-agents crmsh
[root@controller1:/root]# systemctl enable pcsd corosync
[root@controller2:/root]# systemctl enable pcsd corosync
[root@controller3:/root]# systemctl enable pcsd corosync
[root@controller1:/root]# systemctl start pcsd && systemctl status pcsd
[root@controller2:/root]# systemctl start pcsd && systemctl status pcsd
[root@controller3:/root]# systemctl start pcsd && systemctl status pcsd
5.2 Set the cluster password; it must be identical on all three nodes: pcs123456
[root@controller1:/root]# echo "pcs123456" | passwd --stdin hacluster
[root@controller2:/root]# echo "pcs123456" | passwd --stdin hacluster
[root@controller3:/root]# echo "pcs123456" | passwd --stdin hacluster
5.3 Create the corosync.conf configuration file on the control nodes
[root@controller2:/root]# cat <<EOF >/etc/corosync/corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: openstack-cluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: controller1
        nodeid: 1
    }
    node {
        ring0_addr: controller2
        nodeid: 2
    }
    node {
        ring0_addr: controller3
        nodeid: 3
    }
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
    debug: off
}
quorum {
    provider: corosync_votequorum
}
EOF
[root@controller2:/root]# scp /etc/corosync/corosync.conf controller1:/etc/corosync/
[root@controller2:/root]# scp /etc/corosync/corosync.conf controller3:/etc/corosync/
5.4 Configure the cluster: set up mutual authentication between the nodes
ssh-keygen
ssh-copy-id controller1
ssh-copy-id controller2
ssh-copy-id controller3
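The same key distribution can be scripted per node; a minimal sketch, assuming controller1-3 resolve via /etc/hosts and the root password is entered interactively once per node:

# Generate a key pair non-interactively, then push the public key to every node.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for node in controller1 controller2 controller3; do
    ssh-copy-id root@${node}
done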
5.5 Configure node authentication
[root@controller2:/root]# pcs cluster auth controller1 controller2 controller3 -u hacluster -p "pcs123456"
controller2: Authorized
controller3: Authorized
controller1: Authorized

The general form of the command is:

pcs cluster auth controller1 controller2 -u hacluster -p {password}

where {password} is the hacluster password set in step 5.2.
5.6 Create the cluster
[root@controller2:/root]# pcs cluster setup --force --name openstack-cluster controller1 controller2 controller3
Destroying cluster on nodes: controller1, controller2, controller3...
controller2: Stopping Cluster (pacemaker)...
controller3: Stopping Cluster (pacemaker)...
controller1: Stopping Cluster (pacemaker)...
controller1: Successfully destroyed cluster
controller2: Successfully destroyed cluster
controller3: Successfully destroyed cluster
Sending 'pacemaker_remote authkey' to 'controller1', 'controller2', 'controller3'
controller1: successful distribution of the file 'pacemaker_remote authkey'
controller3: successful distribution of the file 'pacemaker_remote authkey'
controller2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
controller1: Succeeded
controller2: Succeeded
controller3: Succeeded
Synchronizing pcsd certificates on nodes controller1, controller2, controller3...
controller2: Success
controller3: Success
controller1: Success
Restarting pcsd on the nodes in order to reload the certificates...
controller2: Success
controller3: Success
controller1: Success
5.7 Start the cluster and check its status
[root@controller2:/root]# pcs cluster enable --all
controller1: Cluster Enabled
controller2: Cluster Enabled
controller3: Cluster Enabled
[root@controller2:/root]# pcs cluster start --all
controller1: Starting Cluster (corosync)...
controller2: Starting Cluster (corosync)...
controller3: Starting Cluster (corosync)...
controller1: Starting Cluster (pacemaker)...
controller3: Starting Cluster (pacemaker)...
controller2: Starting Cluster (pacemaker)...
[root@controller2:/root]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: controller3 (version 1.1.20-5.el7_7.2-3c4c782f70) - partition with quorum
 Last updated: Wed Aug  5 15:21:16 2020
 Last change: Wed Aug  5 15:20:59 2020 by hacluster via crmd on controller3
 3 nodes configured
 0 resources configured
PCSD Status:
  controller2: Online
  controller3: Online
  controller1: Online
[root@controller2:/root]# ps aux | grep pacemaker
root     15586  0.0  0.0 132972  8700 ?  Ss  15:20  0:00 /usr/sbin/pacemakerd -f
haclust+ 15587  0.1  0.0 136244 14620 ?  Ss  15:20  0:00 /usr/libexec/pacemaker/cib
root     15588  0.0  0.0 136064  7664 ?  Ss  15:20  0:00 /usr/libexec/pacemaker/stonithd
root     15589  0.0  0.0  98836  4372 ?  Ss  15:20  0:00 /usr/libexec/pacemaker/lrmd
haclust+ 15590  0.0  0.0 128068  6620 ?  Ss  15:20  0:00 /usr/libexec/pacemaker/attrd
haclust+ 15591  0.0  0.0  80508  3500 ?  Ss  15:20  0:00 /usr/libexec/pacemaker/pengine
haclust+ 15592  0.0  0.0 140380  8260 ?  Ss  15:20  0:00 /usr/libexec/pacemaker/crmd
root     15632  0.0  0.0 112712   960 pts/0  S+  15:21  0:00 grep --color=auto pacemaker
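Beyond pcs cluster status, corosync's own tools can confirm ring health and quorum; a quick check on any node (output omitted):

# Verify the corosync ring and the current vote/quorum state.
corosync-cfgtool -s        # shows the ring0 address and "no faults" per ring
corosync-quorumtool -s     # shows total votes, the quorum threshold, and the member list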
5.8 Configure the cluster
All three nodes are online.
The default quorum rules recommend an odd number of cluster nodes, no fewer than 3. With only 2 nodes, once 1 node fails the survivor cannot reach quorum, resources do not fail over, and the cluster as a whole becomes unusable. Setting no-quorum-policy="ignore" works around this two-node problem, but it must not be used in production. In other words, production environments still need at least 3 nodes.
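For a deliberate two-node deployment, corosync's votequorum also has a dedicated two-node mode, which is a cleaner alternative to ignoring quorum. A minimal sketch of the quorum block (an illustration only, not part of this three-node deployment):

quorum {
    provider: corosync_votequorum
    two_node: 1    # lets a 2-node cluster remain quorate when one node is lost
}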
pe-warn-series-max, pe-input-series-max, and pe-error-series-max control how many Policy Engine warning, input, and error files are kept, i.e. the log depth.
cluster-recheck-interval is how often the cluster state is re-checked.
[root@controller1 ~]# pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000 cluster-recheck-interval=5min
Disable STONITH:
STONITH relies on a physical device that can power nodes off on command. This environment has no such device, and if the option is left enabled, every pcs command prints related error messages.
[root@controller1 ~]# pcs property set stonith-enabled=false
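For reference, if IPMI-capable management hardware were available, fencing could stay enabled and be backed by a stonith resource instead. A hypothetical sketch (the IP, login, and password below are placeholders, not values from this deployment; fence_ipmilan ships with fence-agents-all):

pcs stonith create fence-controller1 fence_ipmilan \
    pcmk_host_list="controller1" ipaddr="192.168.110.201" \
    login="admin" passwd="changeme" lanplus=1 \
    op monitor interval=60s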
With only two nodes, ignore the quorum check:
[root@controller1 ~]# pcs property set no-quorum-policy=ignore
Validate the cluster configuration:
[root@controller1 ~]# crm_verify -L -V
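crm_verify -L -V should print nothing once STONITH is disabled. The properties set above can also be confirmed directly (output omitted):

# List all explicitly set cluster properties and confirm the values took effect.
pcs property list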
Configure a virtual IP for the cluster:
[root@controller1 ~]# pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
    ip="192.168.110.120" cidr_netmask=32 nic=ens160 op monitor interval=30s
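Once the resource starts, the address should appear on ens160 of whichever node is hosting it; a quick check, run on the hosting node:

# Confirm the VIP resource configuration and that the address is plumbed on ens160.
pcs resource show VirtualIP
ip addr show ens160 | grep 192.168.110.120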
At this point, Pacemaker + Corosync exists to serve HAProxy. Add the haproxy resource to the Pacemaker cluster:
[root@controller1 ~]# pcs resource create lb-haproxy systemd:haproxy --clone
Note: this creates a clone resource; a cloned resource starts on every node, so here haproxy starts automatically on all three nodes.
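The clone's per-node state can be confirmed from systemd as well as from pcs; a quick check, assuming the SSH keys distributed in 5.4:

# Confirm haproxy is active on every node (clone instances run as systemd services).
for node in controller1 controller2 controller3; do
    ssh ${node} "systemctl is-active haproxy"
done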
Check the Pacemaker resource status:
[root@controller1 ~]# pcs resource
 VirtualIP (ocf::heartbeat:IPaddr2): Started controller1    # the virtual-IP (heartbeat) resource
 Clone Set: lb-haproxy-clone [lb-haproxy]                   # the haproxy clone resource
     Started: [ controller1 controller2 controller3 ]
Note: the resource binding below is mandatory; otherwise haproxy keeps running on every node while the VIP sits on one, causing chaotic access.
Bind the two resources to the same node:
[root@controller1 ~]# pcs constraint colocation add lb-haproxy-clone VirtualIP INFINITY
The binding succeeded:
[root@controller1 ~]# pcs resource
 VirtualIP (ocf::heartbeat:IPaddr2): Started controller3
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Started: [ controller1 ]
     Stopped: [ controller2 controller3 ]
Configure the resource start order: the VIP starts first and haproxy starts after it, because haproxy listens on the VIP:
[root@controller1 ~]# pcs constraint order VirtualIP then lb-haproxy-clone
pcs resource create haproxy systemd:haproxy op monitor interval="5s"
pcs constraint colocation add VirtualIP haproxy INFINITY
pcs constraint order VirtualIP then haproxy
pcs resource restart haproxy
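All of the colocation, ordering, and location constraints defined so far can be reviewed in one place (output omitted):

# Review every configured constraint, including constraint IDs.
pcs constraint show --full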
Manually pin the resources to a preferred node; because the two resources are bound together, moving one automatically moves the other.
[root@controller1 ~]# pcs constraint location VirtualIP prefers controller1
[root@controller1 ~]# pcs resource
 VirtualIP (ocf::heartbeat:IPaddr2): Started controller1
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Started: [ controller1 ]
     Stopped: [ controller2 controller3 ]
[root@controller1 ~]# pcs resource defaults resource-stickiness=100
# Set resource stickiness so resources do not automatically fail back and destabilize the cluster.
The VIP is now bound to the controller1 node:
[root@controller1 ~]# ip a | grep global
    inet 192.168.110.121/24 brd 192.168.0.255 scope global ens160
    inet 192.168.110.120/32 brd 192.168.0.255 scope global ens160
    inet 192.168.114.121/24 brd 192.168.114.255 scope global ens192
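A simple way to exercise the failover path is to put the hosting node in standby, watch the VIP (and, via the colocation, haproxy) move, then bring the node back. A sketch, run from any other controller (pcs 0.9 syntax, matching this CentOS 7 deployment):

# Drain controller1; VirtualIP and the colocated haproxy should move to a surviving node.
pcs cluster standby controller1
pcs resource        # VirtualIP should now show Started on controller2 or controller3
pcs cluster unstandby controller1
pcs resource        # the location preference should pull the VIP back to controller1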