keepalived: a High-Availability Load Balancer
阿新 • Published: 2018-07-12
I. Cluster Concepts in Brief
HA is short for High Availability. An HA cluster is an effective way to guarantee business continuity: it has two or more nodes, divided into active and standby roles, so the service survives the failure of a node.
1. Cluster types:
- LB: load-balancing clusters
- lvs load balancing
- nginx reverse proxy
- HAProxy
- HA: high-availability clusters
- heartbeat
- keepalived
- redhat5: cman + rgmanager, conga (WebGUI) --> RHCS (Cluster Suite)
- redhat6: cman + rgmanager, corosync + pacemaker
- redhat7: corosync + pacemaker
- HP: high-performance clusters
2. Computing system availability
A = MTBF / (MTBF + MTTR)
- A: availability; common targets: 95%, 99%, 99.5%, ..., 99.999%, 99.9999%
- MTBF: mean time between failures
- MTTR: mean time to repair
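For instance, availability for a pair of hypothetical figures (MTBF = 10000 h, MTTR = 1 h, chosen only for illustration) can be computed in one line:

```shell
#!/bin/sh
# A = MTBF / (MTBF + MTTR); example figures are assumptions, not measurements
mtbf=10000   # mean time between failures, hours
mttr=1       # mean time to repair, hours
awk -v b="$mtbf" -v r="$mttr" 'BEGIN { printf "A = %.4f%%\n", b / (b + r) * 100 }'
# -> A = 99.9900%
```

Roughly "four nines": about 53 minutes of downtime per year.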
II. keepalived Concepts
- vrrp protocol: Virtual Router Redundancy Protocol
- Virtual Router: the virtual router itself
- VRID (1-255): virtual router identifier
- master: the device currently doing the work
- backup: the standby device(s)
- priority: the higher the priority, the more preferred the node; exact behavior depends on the working mode
- VIP: the virtual IP address that actually serves clients
- VMAC: virtual MAC address (00-00-5e-00-01-VRID)
- preemptive mode: when a node with a higher priority comes online, it takes over as master
- non-preemptive mode: a higher-priority node coming online does not preempt while the current master is healthy; only after the master fails is a new master elected by priority
- heartbeat: the master advertises its heartbeat to all hosts in the cluster to prove it is alive
- security and authentication:
- no authentication: any host can join the cluster; strongly discouraged
- simple string authentication: a plain password
- AH authentication
- sync group: synchronization group, so the VIP and DIP are configured on (and move with) the same physical server
- MULTICAST: multicast
- Failover: switching service away from a failed master
- Failback: switching back when the failed node comes online again
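As an illustration of the VMAC format, the last octet is simply the VRID in hexadecimal; for VRID 51 (the ID used in the configurations later in this article):

```shell
#!/bin/sh
# VRRP virtual MAC: 00-00-5e-00-01-XX, where XX is the VRID in hex
vrid=51
printf '00-00-5e-00-01-%02x\n' "$vrid"
# -> 00-00-5e-00-01-33
```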
III. keepalived
keepalived is a software implementation of the vrrp protocol, originally designed to make the ipvs service highly available. It:
- floats addresses between nodes via the vrrp protocol;
- generates ipvs rules on the node holding the VIP (predefined in the configuration file);
- health-checks each RS in the ipvs cluster;
- exposes a script-call interface, so executing external scripts can influence cluster behavior.
Components:
1. Installation
# yum install keepalived
Main configuration file: /etc/keepalived/keepalived.conf
Main program: /usr/sbin/keepalived
Start the service: systemctl start keepalived
Environment file referenced by the unit file: /etc/sysconfig/keepalived
2. Configuration parameters in detail
Global configuration section:
global_defs {
notification_email { #notification email recipients
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 192.168.200.1 #mail server address
smtp_connect_timeout 30 #connection timeout
router_id LVS_DEVEL #router identifier
vrrp_skip_check_adv_addr #skip address checking of vrrp advertisements
vrrp_strict #strict vrrp compliance
vrrp_garp_interval 0 #gratuitous ARP interval
vrrp_gna_interval 0 #gratuitous NA interval (IPv6)
}
Virtual router instance section:
vrrp_instance <STRING> {
state MASTER|BACKUP: #initial state of this node in this virtual router; only one node may be MASTER, all others should be BACKUP;
interface IFACE_NAME: #physical interface bound to this virtual router;
virtual_router_id VRID: #unique identifier of this virtual router, range 1-255;
priority 100: #this node's priority within the virtual router; range 1-254;
advert_int 1: #interval between vrrp advertisements;
authentication {
auth_type AH|PASS #PASS is simple password authentication
auth_pass <PASSWORD> #authentication password, up to 8 characters
}
virtual_ipaddress { #VIP configuration
<IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>
192.168.200.17/24 dev eth1
192.168.200.18/24 dev eth2 label eth2:1
}
track_interface { #network interfaces to monitor; if one fails, the node enters the FAULT state;
eth0
eth1
...
}
nopreempt: run in non-preemptive mode;
preempt_delay 300: in preemptive mode, delay before a newly online node triggers a new election;
notify_master <STRING>|<QUOTED-STRING>: script run when this node becomes master;
notify_backup <STRING>|<QUOTED-STRING>: script run when this node becomes backup;
notify_fault <STRING>|<QUOTED-STRING>: script run when this node enters the FAULT state;
notify <STRING>|<QUOTED-STRING>: generic notification hook; a single script can handle all three state transitions;
}
Virtual server configuration:
virtual_server IP port | virtual_server fwmark int {
delay_loop <INT>: interval between service polls;
lb_algo rr|wrr|lc|wlc|lblc|sh|dh: scheduling method;
lb_kind NAT|DR|TUN: cluster type;
persistence_timeout <INT>: persistent-connection timeout;
protocol TCP: service protocol; only TCP is supported;
sorry_server <IPADDR> <PORT>: backup server address, used when all RSs are down;
real_server {
weight <INT>
notify_up <STRING>|<QUOTED-STRING>
notify_down <STRING>|<QUOTED-STRING>
HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }: health-check method for this host;
}
HTTP_GET|SSL_GET: application-layer checks
HTTP_GET|SSL_GET {
url {
path <URL_PATH>: URL to monitor;
status_code <INT>: response code treated as healthy;
digest <STRING>: checksum of the response content treated as healthy;
}
nb_get_retry <INT>: number of retries;
delay_before_retry <INT>: delay before each retry;
connect_ip <IP ADDRESS>: which IP address of the RS to send health checks to
connect_port <PORT>: which port of the RS to send health checks to
bindto <IP ADDRESS>: source address used for health-check requests;
bind_port <PORT>: source port used for health-check requests;
connect_timeout <INTEGER>: connection timeout;
}
TCP_CHECK {
connect_ip <IP ADDRESS>: which IP address of the RS to send health checks to
connect_port <PORT>: which port of the RS to send health checks to
bindto <IP ADDRESS>: source address used for health-check requests;
bind_port <PORT>: source port used for health-check requests;
connect_timeout <INTEGER>: connection timeout;
}
}
Script definition:
vrrp_script <SCRIPT_NAME> {
script "" #script to execute
interval INT #how often to run the check
weight -INT #if the script returns false, subtract N from the priority
rise 2 #two consecutive successful checks bring the node up
fall 3 #three consecutive failed checks take it down
}
vrrp_instance VI_1 {
track_script { #invoke the script from the virtual router instance
SCRIPT_NAME_1
SCRIPT_NAME_2
...
}
}
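To see what a negative weight does: while the tracked script is failing, keepalived adds the (negative) weight to the instance priority. Supposing a master at priority 100, a backup at 98, and weight -5 (the figures used later in this article), the arithmetic works out as:

```shell
#!/bin/sh
# Effective priority while a tracked check fails = priority + weight
master_prio=100
backup_prio=98
weight=-5
effective=$((master_prio + weight))
echo "master effective priority: $effective"
if [ "$effective" -lt "$backup_prio" ]; then
  echo "backup now wins the election: VIP fails over"
fi
```

This is why the weight must be chosen so that priority + weight dips below the backup's priority; -1 would not be enough here.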
IV. Single-Master lvs + keepalived
Environment:
- time must be synchronized across all nodes;
- iptables and selinux must be configured correctly;
- nodes should be able to reach each other by hostname (not strictly required by KA); /etc/hosts is recommended;
- the cluster-facing interface on each node must support MULTICAST (class D addresses: 224-239);
ip link set dev eth0 multicast off | on
node1 configuration:
[root@node1 ~]# yum install keepalived
[root@node1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost #recipient
}
notification_email_from keepalived@localhost #sender
smtp_server 127.0.0.1 #mail server IP
smtp_connect_timeout 30 #connection timeout
router_id node1
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 224.0.111.111 #multicast group address
vrrp_iptables #keep keepalived from adding iptables rules
}
vrrp_instance VI_1 { #virtual router instance
state MASTER #starts as the master node
interface eth0 #interface the VIP lives on
virtual_router_id 51 #virtual router ID
priority 100 #priority
advert_int 1 #advertise every 1 second
authentication { #authentication
auth_type PASS #simple password authentication
auth_pass fd57721a #password
}
virtual_ipaddress { #VIP and the interface it binds to
192.168.0.2/24 dev eth0
}
}
virtual_server 192.168.0.2 80 { #ipvs rule definition
delay_loop 2 #health check every 2 seconds
lb_algo rr #scheduling algorithm: round robin
lb_kind DR #lvs model: DR
protocol TCP #tcp protocol
real_server 192.168.0.10 80 { #real-server configuration
weight 1 #weight 1
HTTP_GET { #HTTP-level health check
url {
path / #check the home page
status_code 200 #a 200 response means healthy
}
connect_timeout 2 #timeout
nb_get_retry 3 #retries
delay_before_retry 1 #retry interval
}
}
real_server 192.168.0.11 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.12 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
[root@node1 ~]# systemctl start keepalived.service
node2 configuration:
[root@node2 ~]# yum install keepalived
[root@node2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node2
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 224.0.111.111
vrrp_iptables
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass fd57721a
}
virtual_ipaddress {
192.168.0.2/24 dev eth0
}
preempt_delay 300
}
virtual_server 192.168.0.2 80 {
delay_loop 2
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.0.10 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.11 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.12 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
[root@node2 ~]# systemctl start keepalived.service
web1/web2/web3: RS configuration script
#!/bin/bash
#
vip="192.168.0.2/32"
iface="lo"
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ip addr add $vip label $iface:0 broadcast ${vip%/*} dev $iface
ip route add $vip dev $iface
;;
stop)
ip addr flush dev $iface
ip route flush dev $iface
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "Usage: `basename $0` start | stop" 1>&2
;;
esac
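The `${vip%/*}` expansion in the script strips the shortest suffix matching `/*`, turning `192.168.0.2/32` into the bare address passed as the broadcast argument; a quick sanity check:

```shell
#!/bin/sh
# '%/*' removes the shortest trailing match of '/*' (the prefix length)
vip="192.168.0.2/32"
echo "${vip%/*}"
# -> 192.168.0.2
```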
V. Dual-Master lvs + keepalived
node1 configuration:
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 224.0.111.111
vrrp_iptables
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass fd57721a
}
virtual_ipaddress {
192.168.0.2/24 dev eth0
}
}
vrrp_instance VI_2 {
state BACKUP
interface eth0
virtual_router_id 52
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass 4a9a407a
}
virtual_ipaddress {
192.168.0.3/24 dev eth0
}
}
virtual_server 192.168.0.2 80 {
delay_loop 2
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.0.10 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.11 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.12 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
virtual_server 192.168.0.3 80 {
delay_loop 2
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.0.10 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.11 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.12 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
node2 configuration:
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node2
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 224.0.111.111
vrrp_iptables
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass fd57721a
}
virtual_ipaddress {
192.168.0.2/24 dev eth0
}
}
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 52
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 4a9a407a
}
virtual_ipaddress {
192.168.0.3/24 dev eth0
}
}
virtual_server 192.168.0.2 80 {
delay_loop 2
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.0.10 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.11 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.12 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
virtual_server 192.168.0.3 80 {
delay_loop 2
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.0.10 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.11 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.0.12 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
web1/web2/web3: RS configuration script
#!/bin/bash
#
vip="192.168.0.2/32"
vip2="192.168.0.3/32"
iface="lo"
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ip addr add $vip label $iface:0 broadcast ${vip%/*} dev $iface
ip addr add $vip2 label $iface:1 broadcast ${vip2%/*} dev $iface
ip route add $vip dev $iface
ip route add $vip2 dev $iface
;;
stop)
ip addr flush dev $iface
ip route flush dev $iface
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "Usage: `basename $0` start | stop" 1>&2
;;
esac
VI. Implementing the Notification Script
[root@node1 ~]# vim /etc/keepalived/notify.sh
#!/bin/bash
#
contact='root@localhost'
notify() {
local mailsubject="$(hostname) to be $1, vip floating"
local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
echo "$mailbody" | mail -s "$mailsubject" $contact
}
case $1 in
master) notify master;;
backup) notify backup;;
fault) notify fault;;
*) echo "Usage: $(basename $0) {master|backup|fault}"; exit 1;;
esac
[root@node1 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass fd57721a
}
virtual_ipaddress {
192.168.0.2/24 dev eth0
}
notify_master "/etc/keepalived/notify.sh master" #call the script to send a notification email when this node becomes master
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
VII. keepalived + haproxy: a Highly Available Director
Configure haproxy for load balancing:
[root@node1 ~]# vim /etc/haproxy/haproxy.cfg
frontend web *:80
default_backend websrvs
backend websrvs
balance roundrobin
server srv1 192.168.0.10:80 check
server srv2 192.168.0.11:80 check
server srv3 192.168.0.12:80 check
Configure keepalived for high availability:
[root@node1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 224.0.111.111
vrrp_iptables
}
vrrp_script chk_haproxy {
script "killall -0 haproxy" #check that the haproxy process is alive
interval 1
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass fd57721a
}
virtual_ipaddress {
192.168.0.2/24 dev eth0
}
track_script { #invoke the check script
chk_haproxy
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
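`killall -0 haproxy` sends no actual signal: signal 0 only asks the kernel whether such a process exists, and the exit status carries the answer, which is exactly what vrrp_script consumes. The same mechanism can be demonstrated with `kill -0` and the current shell's own PID:

```shell
#!/bin/sh
# Signal 0 performs only the existence/permission check; nothing is delivered
if kill -0 "$$" 2>/dev/null; then
  echo "process exists"   # exit status 0: the check passes
else
  echo "process missing"  # nonzero status: keepalived applies the weight
fi
# -> process exists
```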
VIII. A keepalived Maintenance Mode via the Script Hook
vrrp_script chk_down {
script "/bin/bash -c '[[ -f /etc/keepalived/down ]]' && exit 1 || exit 0" #in keepalived, the test must be explicitly passed as an argument to bash
interval 1
weight -10
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass fd57721a
}
virtual_ipaddress {
192.168.0.2/24 dev eth0
}
track_script {
chk_down #invoke the script
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
Test: creating the down file lowers the priority, so the VIP floats to node2 and node1 is effectively in maintenance mode:
[root@node1 ~]# touch /etc/keepalived/down
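What `chk_down` evaluates each second is just a bash file test; the sketch below reproduces it against a temporary file (a stand-in path, the real one is /etc/keepalived/down):

```shell
#!/bin/sh
# Reproduce the chk_down test; mktemp stands in for /etc/keepalived/down
down=$(mktemp)
/bin/bash -c "[[ -f $down ]]" && echo "down file present: priority drops by 10"
rm -f "$down"
# -> down file present: priority drops by 10
```

Removing the file makes the next check succeed, the priority recovers, and (in preemptive mode) the VIP floats back.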
IX. Ansible Roles to Batch-Deploy keepalived + nginx: a Highly Available Dual-Master Reverse Proxy
All of the following commands are run on the ansible host.
1. Key-based SSH access:
[root@ansible ~]# vim cpkey.sh
#!/bin/bash
rpm -q expect &>/dev/null || yum -q -y install expect
[ ! -e ~/.ssh/id_rsa ] && ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa &>/dev/null
read -p "Host_ip_list: " ip_list_file
read -p "Username: " username
read -s -p "Password: " password
[ ! -e "$ip_list_file" ] && echo "$ip_list_file not exist." && exit
[ -z "$ip_list_file" -o -z "$username" -o -z "$password" ] && echo "input error!" && exit
localhost_ip=`hostname -I |cut -d' ' -f1`
expect <<EOF
set timeout 10
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub $localhost_ip
expect {
"yes/no" { send "yes\n"; exp_continue}
"password" { send "$password\n"}
}
expect eof
EOF
while read ipaddr1; do
expect <<EOF
set timeout 10
spawn ssh ${username}@${ipaddr1} ':'
expect {
"yes/no" { send "yes\n"; exp_continue}
"password" { send "$password\n"}
}
expect eof
EOF
done < "$ip_list_file"
while read ipaddr2; do
expect <<EOF
set timeout 10
spawn scp -pr .ssh/ ${username}@${ipaddr2}:
expect {
"yes/no" { send "yes\n"; exp_continue}
"password" { send "$password\n"}
}
expect eof
EOF
done < "$ip_list_file"
[root@ansible ~]# vim iplist.txt
192.168.0.8
192.168.0.9
192.168.0.11
192.168.0.12
192.168.0.13
[root@ansible ~]# ./cpkey.sh
Host_ip_list: iplist.txt #the file listing the target IP addresses
Username: root
Password: ******
2. Hostname-based communication between the internal hosts:
[root@ansible ~]# vim /etc/hosts
192.168.0.8 node1
192.168.0.9 node2
192.168.0.10 dns
192.168.0.11 web1
192.168.0.12 web2
192.168.0.13 web3
192.168.0.13 ansible
[root@ansible ~]# yum install ansible -y (from the epel repo)
[root@ansible ~]# vim /etc/ansible/hosts
[node]
192.168.0.8
192.168.0.9
[web]
192.168.0.11
192.168.0.12
192.168.0.13
[dns]
192.168.0.10
[root@ansible ~]# ansible all -m copy -a 'src=/etc/hosts dest=/etc/hosts backup=yes'
3. Write a role to deploy the web service
[root@ansible ~]# mkdir -p ansible/roles/web/{tasks,templates,files,handlers}
[root@ansible ~]# cd ansible/
[root@ansible ansible]# vim roles/web/tasks/install.yml
- name: install httpd
yum: name=httpd state=present
[root@ansible ansible]# vim roles/web/tasks/copy.yml
- name: copy config file
template: src=httpd.conf.j2 dest=/etc/httpd/conf/httpd.conf
notify: restart service
- name: copy index.html
template: src=index.html.j2 dest=/var/www/html/index.html owner=apache
notify: restart service
[root@ansible ansible]# vim roles/web/tasks/start.yml
- name: start httpd
service: name=httpd state=started
[root@ansible ansible]# vim roles/web/tasks/main.yml
- include: install.yml
- include: copy.yml
- include: start.yml
[root@ansible ansible]# yum install httpd -y
[root@ansible ansible]# cp /etc/httpd/conf/httpd.conf roles/web/templates/httpd.conf.j2
[root@ansible ansible]# vim roles/web/templates/httpd.conf.j2
ServerName {{ ansible_fqdn }}
[root@ansible ansible]# vim roles/web/templates/index.html.j2
{{ ansible_fqdn }} test page.
[root@ansible ansible]# vim roles/web/handlers/main.yml
- name: restart service
service: name=httpd state=restarted
[root@ansible ansible]# vim web.yml
---
- hosts: web
remote_user: root
roles:
- web
...
[root@ansible ansible]# ansible-playbook web.yml
4. Write a role to deploy the nginx reverse proxy
[root@ansible ansible]# mkdir -p roles/nginx_proxy/{files,handlers,tasks,templates}
[root@ansible ansible]# vim roles/nginx_proxy/tasks/install.yml
- name: install nginx
yum: name=nginx state=present
[root@ansible ansible]# vim roles/nginx_proxy/tasks/copy.yml
- name: copy config file
template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
notify: restart service
[root@ansible ansible]# vim roles/nginx_proxy/tasks/start.yml
- name: start nginx
service: name=nginx state=started
[root@ansible ansible]# vim roles/nginx_proxy/tasks/main.yml
- include: install.yml
- include: copy.yml
- include: start.yml
[root@ansible ansible]# yum install nginx -y
[root@ansible ansible]# cp /etc/nginx/nginx.conf roles/nginx_proxy/templates/nginx.conf.j2
[root@ansible ansible]# vim roles/nginx_proxy/templates/nginx.conf.j2
http {
upstream websrvs { #backend web server addresses
server 192.168.0.11;
server 192.168.0.12;
server 192.168.0.13;
}
server {
listen 80 default_server;
server_name _;
root /usr/share/nginx/html;
location / {
proxy_pass http://websrvs;
}
}
}
[root@ansible ansible]# vim roles/nginx_proxy/handlers/main.yml
- name: restart service
service: name=nginx state=restarted
[root@ansible ansible]# vim nginx_proxy.yml
---
- hosts: node
remote_user: root
roles:
- nginx_proxy
...
[root@ansible ansible]# ansible-playbook nginx_proxy.yml
5. Write a role to make the nginx reverse proxy highly available with keepalived
[root@ansible ansible]# ansible 192.168.0.8 -m hostname -a 'name=node1'
[root@ansible ansible]# ansible 192.168.0.9 -m hostname -a 'name=node2'
[root@ansible ansible]# mkdir -p roles/keepalived/{files,handlers,tasks,templates,vars}
[root@ansible ansible]# vim roles/keepalived/tasks/install.yml #installation task
- name: install keepalived
yum: name=keepalived state=present
[root@ansible ansible]# vim roles/keepalived/tasks/copy.yml #task to copy the configuration files
- name: copy configure file
template: src=keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf
notify: restart service
when: ansible_fqdn == "node1" #conditional copy: the first configuration file goes to node1
- name: copy configure file2
template: src=keepalived.conf2.j2 dest=/etc/keepalived/keepalived.conf
notify: restart service
when: ansible_fqdn == "node2" #the second configuration file goes to node2
[root@ansible ansible]# vim roles/keepalived/tasks/start.yml #start the service
- name: start keepalived
service: name=keepalived state=started
[root@ansible ansible]# vim roles/keepalived/tasks/main.yml
- include: install.yml
- include: copy.yml
- include: start.yml
[root@ansible ansible]# vim roles/keepalived/vars/main.yml #custom variables
kepd_vrrp_mcast_group4: "224.0.111.222" #multicast group address
kepd_interface_1: "eth0"
kepd_virtual_router_id_1: "51" #virtual router ID
kepd_priority_1: "100" #priority
kepd_auth_pass_1: "fd57721a" #simple auth password, up to 8 characters
kepd_virtual_ipaddress_1: "192.168.0.2/24" #VIP; in production this would be a public address
kepd_interface_2: "eth0"
kepd_virtual_router_id_2: "52"
kepd_priority_2: "98"
kepd_auth_pass_2: "41af6acc"
kepd_virtual_ipaddress_2: "192.168.0.3/24"
[root@ansible ansible]# yum install keepalived -y
[root@ansible ansible]# cp /etc/keepalived/keepalived.conf roles/keepalived/templates/keepalived.conf.j2
[root@ansible ansible]# vim roles/keepalived/templates/keepalived.conf.j2 #edit the configuration template
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 {{ kepd_vrrp_mcast_group4 }}
vrrp_iptables
}
vrrp_script chk_nginx {
script "killall -0 nginx"
interval 1
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface {{ kepd_interface_1 }}
virtual_router_id {{ kepd_virtual_router_id_1 }}
priority {{ kepd_priority_1 }}
advert_int 1
authentication {
auth_type PASS
auth_pass {{ kepd_auth_pass_1 }}
}
virtual_ipaddress {
{{ kepd_virtual_ipaddress_1 }}
}
track_script {
chk_nginx
}
}
vrrp_instance VI_2 {
state BACKUP
interface {{ kepd_interface_2 }}
virtual_router_id {{ kepd_virtual_router_id_2 }}
priority {{ kepd_priority_2 }}
advert_int 1
authentication {
auth_type PASS
auth_pass {{ kepd_auth_pass_2 }}
}
virtual_ipaddress {
{{ kepd_virtual_ipaddress_2 }}
}
track_script {
chk_nginx
}
}
[root@ansible ansible]# cp roles/keepalived/templates/keepalived.conf.j2 roles/keepalived/templates/keepalived.conf2.j2
[root@ansible ansible]# vim roles/keepalived/templates/keepalived.conf2.j2 #write the second template; only the state and priority parameters differ from the first
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node2
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
vrrp_mcast_group4 {{ kepd_vrrp_mcast_group4 }}
vrrp_iptables
}
vrrp_script chk_nginx {
script "killall -0 nginx"
interval 1
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface {{ kepd_interface_1 }}
virtual_router_id {{ kepd_virtual_router_id_1 }}
priority {{ kepd_priority_2 }}
advert_int 1
authentication {
auth_type PASS
auth_pass {{ kepd_auth_pass_1 }}
}
virtual_ipaddress {
{{ kepd_virtual_ipaddress_1 }}
}
track_script {
chk_nginx
}
}
vrrp_instance VI_2 {
state MASTER
interface {{ kepd_interface_2 }}
virtual_router_id {{ kepd_virtual_router_id_2 }}
priority {{ kepd_priority_1 }}
advert_int 1
authentication {
auth_type PASS
auth_pass {{ kepd_auth_pass_2 }}
}
virtual_ipaddress {
{{ kepd_virtual_ipaddress_2 }}
}
track_script {
chk_nginx
}
}
[root@ansible ansible]# vim roles/keepalived/handlers/main.yml #restart the service whenever the configuration file changes
- name: restart service
service: name=keepalived state=restarted
[root@ansible ansible]# vim keepalived.yml
---
- hosts: node
remote_user: root
roles:
- keepalived
...
[root@ansible ansible]# ansible-playbook keepalived.yml
6. Configure DNS
[root@dns ~]# yum install bind -y
[root@dns ~]# vim /etc/named.conf #comment out the following parameters
//listen-on port 53 { 127.0.0.1; };
//allow-query { localhost; };
[root@dns ~]# vim /etc/named.rfc1912.zones
zone "dongfei.tech" {
type master;
file "dongfei.tech.zone";
};
[root@dns ~]# vim /var/named/dongfei.tech.zone
$TTL 1D
@ IN SOA dns1.dongfei.tech. admin.dongfei.tech. ( 1 1D 1H 1W 3H )
NS dns1
dns1 A 192.168.0.10
www A 192.168.0.2
www A 192.168.0.3
[root@dns ~]# named-checkconf
[root@dns ~]# named-checkzone "dongfei.tech" /var/named/dongfei.tech.zone
OK
[root@dns ~]# systemctl start named
[root@dns ~]# dig www.dongfei.tech @192.168.0.10
;; QUESTION SECTION:
;www.dongfei.tech. IN A
;; ANSWER SECTION:
www.dongfei.tech. 86400 IN A 192.168.0.3
www.dongfei.tech. 86400 IN A 192.168.0.2
;; AUTHORITY SECTION:
dongfei.tech. 86400 IN NS dns1.dongfei.tech.
;; ADDITIONAL SECTION:
dns1.dongfei.tech. 86400 IN A 192.168.0.10
;; SERVER: 192.168.0.10#53(192.168.0.10)
7. Simulated client test
[root@client ~]# vim /etc/resolv.conf
nameserver 192.168.0.10
[root@client ~]# for i in {1..3}; do curl www.dongfei.tech; done
web2 test page.
web2 test page.
web3 test page.
Stop node1 and test again:
[root@client ~]# for i in {1..3}; do curl www.dongfei.tech; done
web2 test page.
web3 test page.
web1 test page.
The load balancer itself is now highly available!
Ansible roles available for download: https://files.cnblogs.com/files/L-dongf/web_lb_cluster.tar.gz
Thanks for reading~