LVS Load Balancing (7) -- High Availability with LVS + keepalived
Table of Contents
1. High Availability with LVS + keepalived
LVS provides load balancing but has no health-check mechanism of its own: if an RS (real server) node fails, LVS will still schedule requests to the failed node. Keepalived solves this in two ways:
- Keepalived adds a health-check mechanism to LVS: a failed RS node is automatically removed from the cluster, and automatically re-added once it recovers.
- Keepalived also eliminates the LVS single point of failure, making the LVS layer itself highly available.
1.1 Lab Environment
The lab topology is as follows, using the LVS DR model:
- Client: hostname xuzhichao; address eth1: 192.168.20.17
- Router: hostname router; addresses eth1: 192.168.20.50 and eth2: 192.168.50.50
- LVS load balancers:
  - hostname lvs-01; address eth2: 192.168.50.31
  - hostname lvs-02; address eth2: 192.168.50.32
  - VIP addresses: 192.168.50.100 and 192.168.50.101
- Web servers, running nginx 1.20.1:
  - hostname nginx02; address eth2: 192.168.50.22
  - hostname nginx03; address eth2: 192.168.50.23
1.2 Router Configuration
- The router's IP addresses and routing table are as follows:

```
[root@router ~]# ip add
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4f:a9:ca brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.50/24 brd 192.168.20.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4f:a9:d4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.50/24 brd 192.168.50.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever

# No static routes need to be configured in this scenario
[root@router ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.20.0    0.0.0.0         255.255.255.0   U     101    0        0 eth1
192.168.50.0    0.0.0.0         255.255.255.0   U     104    0        0 eth2
```
- Enable ip_forward on the router:

```
[root@router ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@router ~]# sysctl -p
net.ipv4.ip_forward = 1
```
- Map ports 80 and 443 of the LVS VIP to ports 80 and 443 of the router's external address; a full address mapping can be used instead:

```
# Port mapping:
[root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -p tcp --dport 80 -j DNAT --to 192.168.50.100:80
[root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -p tcp --dport 443 -j DNAT --to 192.168.50.100:443

# Address mapping:
[root@router ~]# iptables -t nat -A PREROUTING -d 192.168.20.50 -j DNAT --to 192.168.50.100

# Source NAT, so that internal hosts can reach the outside:
[root@router ~]# iptables -t nat -A POSTROUTING -s 192.168.50.0/24 -j SNAT --to 192.168.20.50

# Verify the NAT configuration:
[root@router ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            192.168.20.50        tcp dpt:80 to:192.168.50.100:80
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            192.168.20.50        tcp dpt:443 to:192.168.50.100:443

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 SNAT       all  --  *      *       192.168.50.0/24      0.0.0.0/0            to:192.168.20.50
```
1.3 Web Server (nginx) Configuration
- The network configuration of the nginx02 host:

```
# 1. Configure the VIP on the lo interface:
[root@nginx02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
BOOTPROTO=none
IPADDR=192.168.50.100
NETMASK=255.255.255.255   <== Note: this mask must not be the same as the RIP's mask, otherwise other hosts cannot learn the RIP's ARP entry and the RIP's connected route is affected; nor may the mask be so short that the VIP and CIP compute to the same network. A 32-bit mask is recommended.
ONBOOT=yes
NAME=loopback

# 2. Restart the interface for the change to take effect:
[root@nginx02 ~]# ifdown lo:0 && ifup lo:0
[root@nginx02 ~]# ifconfig lo:0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.50.100  netmask 255.255.255.255
        loop  txqueuelen 1000  (Local Loopback)

# 3. The eth2 interface address:
[root@nginx02 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:d9:f9:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.22/24 brd 192.168.50.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever

# 4. Routing: the default gateway points to the router, 192.168.50.50
[root@nginx02 ~]# ip route add default via 192.168.50.50 dev eth2   <== The default route must specify both the next hop and the outgoing interface; otherwise traffic may leave via lo:0 and fail.
[root@nginx02 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.50.50   0.0.0.0         UG    0      0        0 eth2
192.168.50.0    0.0.0.0         255.255.255.0   U     103    0        0 eth2
```
- Configure ARP so that the host neither announces its VIP nor responds to ARP requests from other nodes for the VIP:

```
[root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@nginx02 ~]# echo 1 > /proc/sys/net/ipv4/conf/default/arp_ignore
[root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@nginx02 ~]# echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce
```
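The echo commands above take effect immediately but do not survive a reboot. A sketch of making them persistent via a sysctl drop-in file; the filename `lvs-dr.conf` is an arbitrary choice, any `*.conf` under `/etc/sysctl.d/` works:

```
# /etc/sysctl.d/lvs-dr.conf (hypothetical filename)
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.default.arp_announce = 2
```

Load it without rebooting with `sysctl --system` (or `sysctl -p /etc/sysctl.d/lvs-dr.conf`).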
- The network configuration of the nginx03 host:

```
# 1. Configure the VIP on the lo interface:
[root@nginx03 ~]# cat /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
BOOTPROTO=none
IPADDR=192.168.50.100
NETMASK=255.255.255.255   <== Note: this mask must not be the same as the RIP's mask, otherwise other hosts cannot learn the RIP's ARP entry and the RIP's connected route is affected; nor may the mask be so short that the VIP and CIP compute to the same network. A 32-bit mask is recommended.
ONBOOT=yes
NAME=loopback

# 2. Restart the interface for the change to take effect:
[root@nginx03 ~]# ifdown lo:0 && ifup lo:0
[root@nginx03 ~]# ifconfig lo:0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.50.100  netmask 255.255.255.255
        loop  txqueuelen 1000  (Local Loopback)

# 3. The eth2 interface address:
[root@nginx03 ~]# ip add show eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:0a:bf:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.23/24 brd 192.168.50.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever

# 4. Routing: the default gateway points to the router, 192.168.50.50
[root@nginx03 ~]# ip route add default via 192.168.50.50 dev eth2   <== The default route must specify both the next hop and the outgoing interface; otherwise traffic may leave via lo:0 and fail.
[root@nginx03 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.50.50   0.0.0.0         UG    0      0        0 eth2
192.168.50.0    0.0.0.0         255.255.255.0   U     103    0        0 eth2
```
- Configure ARP so that the host neither announces its VIP nor responds to ARP requests from other nodes for the VIP:

```
[root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@nginx03 ~]# echo 1 > /proc/sys/net/ipv4/conf/default/arp_ignore
[root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@nginx03 ~]# echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce
```
- The nginx configuration file is identical on both web servers:

```
[root@nginx03 ~]# cat /etc/nginx/conf.d/xuzhichao.conf
server {
    listen 80 default_server;
    listen 443 ssl;
    server_name www.xuzhichao.com;
    access_log /var/log/nginx/access_xuzhichao.log access_json;
    charset utf-8,gbk;

    # SSL configuration
    ssl_certificate_key /apps/nginx/certs/www.xuzhichao.com.key;
    ssl_certificate /apps/nginx/certs/www.xuzhichao.com.crt;
    ssl_session_cache shared:ssl_cache:20m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    keepalive_timeout 65;

    # Hotlink protection
    valid_referers none blocked server_names *.b.com b.* ~\.baidu\. ~\.google\.;
    if ( $invalid_referer ) {
        return 403;
    }

    client_max_body_size 10m;

    # Favicon
    location = /favicon.ico {
        root /data/nginx/xuzhichao;
    }

    location / {
        root /data/nginx/xuzhichao;
        index index.html index.php;

        # Redirect HTTP to HTTPS
        if ($scheme = http) {
            rewrite ^/(.*)$ https://www.xuzhichao.com/$1;
        }
    }
}

# Reload the nginx service:
[root@nginx03 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@nginx03 ~]# systemctl reload nginx.service
```
- The home page of the nginx02 host:

```
[root@nginx02 certs]# cat /data/nginx/xuzhichao/index.html
node1.xuzhichao.com page
```
- The home page of the nginx03 host:

```
[root@nginx03 ~]# cat /data/nginx/xuzhichao/index.html
node2.xuzhichao.com page
```
- Test access:

```
[root@lvs-01 ~]# curl -Hhost:www.xuzhichao.com -k https://192.168.50.23
node2.xuzhichao.com page
[root@lvs-01 ~]# curl -Hhost:www.xuzhichao.com -k https://192.168.50.22
node1.xuzhichao.com page
```
1.4 LVS + keepalived Configuration
1.4.1 keepalived Health-Check Syntax for Backend Servers
Virtual server configuration syntax:
```
virtual_server IP port |
virtual_server fwmark int
{
    ...
    real_server {
        ...
    }
    ...
}
```
Common parameters:

```
delay_loop <INT>: interval between health-check polls
lb_algo rr|wrr|lc|wlc|lblc|sh|dh: scheduling algorithm
lb_kind NAT|DR|TUN: cluster forwarding type
persistence_timeout <INT>: persistent-connection duration
protocol TCP: service protocol
sorry_server <IPADDR> <PORT>: backup server address, used when all RS are down
```
```
real_server <IPADDR> <PORT>
{
    weight <INT>: RS weight
    notify_up <STRING>|<QUOTED-STRING>: script run when the RS comes online
    notify_down <STRING>|<QUOTED-STRING>: script run when the RS goes offline or fails
    HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }: health-check method for this RS
}
```
HTTP_GET|SSL_GET: application-layer check

```
HTTP_GET|SSL_GET {
    url {
        path <URL_PATH>: URL to monitor
        status_code <INT>: response status code that counts as healthy
        digest <STRING>: checksum of a healthy response body
    }
    nb_get_retry <INT>: number of retries
    delay_before_retry <INT>: delay before each retry
    connect_ip <IP ADDRESS>: RS address to probe; defaults to the address defined in real_server
    connect_port <PORT>: RS port to probe; defaults to the port defined in real_server
    bindto <IP ADDRESS>: source address of the probe; defaults to the outgoing interface address
    bind_port <PORT>: source port of the probe
    connect_timeout <INTEGER>: connection timeout
}
```
Transport-layer check:

```
TCP_CHECK {
    connect_ip <IP ADDRESS>: RS address to probe
    connect_port <PORT>: RS port to probe
    bindto <IP ADDRESS>: source address of the probe
    bind_port <PORT>: source port of the probe
    connect_timeout <INTEGER>: connection timeout
}
```
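Besides the built-in checkers above, MISC_CHECK runs an arbitrary script and judges the RS by its exit code: 0 means healthy, non-zero means failed. A minimal sketch of such a script; the path and the maintenance-flag convention are my own illustration, not part of the original setup:

```shell
#!/bin/bash
# Hypothetical MISC_CHECK helper: report the RS as failed while a
# maintenance flag file exists, so a node can be drained from the
# pool without stopping its services.
check_rs() {
    local flag="${1:-/tmp/rs-maintenance.flag}"
    if [ -e "$flag" ]; then
        return 1    # non-zero exit => keepalived removes the RS from the pool
    fi
    return 0        # zero exit => the RS stays in the pool
}

check_rs "$@"
```

It would be referenced from a real_server block as `MISC_CHECK { misc_path "/etc/keepalived/check_rs.sh" misc_timeout 3 }`.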
1.4.2 keepalived Configuration Example
- Install the keepalived package:

```
[root@lvs-01 ~]# yum install keepalived -y
```
- The keepalived configuration on the lvs01 node:

```
# 1. The keepalived configuration file:
[root@lvs-01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS01
    script_user root
    enable_script_security
}

vrrp_instance VI_1 {
    state MASTER
    interface eth2
    virtual_router_id 51
    priority 120
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.100/32 dev eth2
    }
    track_interface {
        eth2
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.50.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 192.168.20.24 443

    real_server 192.168.50.22 443 {
        weight 1
        SSL_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.50.23 443 {
        weight 1
        SSL_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 192.168.50.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.50.22 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.50.23 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

# 2. The keepalived notify.sh script:
[root@lvs-01 keepalived]# cat notify.sh
#!/bin/bash
contact='root@localhost'
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
case $1 in
master)
    notify master
    ;;
backup)
    notify backup
    ;;
fault)
    notify fault
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac

# Make it executable:
[root@lvs-01 keepalived]# chmod +x notify.sh

# 3. Add a default route pointing at the router gateway:
[root@lvs-01 ~]# ip route add default via 192.168.50.50 dev eth2
[root@lvs-01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.50.50   0.0.0.0         UG    0      0        0 eth2
192.168.50.0    0.0.0.0         255.255.255.0   U     102    0        0 eth2

# 4. Start the keepalived service:
[root@lvs-01 ~]# systemctl start keepalived.service

# 5. Check the automatically generated ipvs rules:
[root@lvs-01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.50.100:80 rr
  -> 192.168.50.22:80             Route   1      0          0
  -> 192.168.50.23:80             Route   1      0          0
TCP  192.168.50.100:443 rr
  -> 192.168.50.22:443            Route   1      0          0
  -> 192.168.50.23:443            Route   1      0          0

# 6. Check which host holds the VIP:
[root@lvs-01 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet 192.168.50.100/32 scope global eth2
       valid_lft forever preferred_lft forever
```
- The keepalived configuration on the lvs02 node:

```
# 1. The keepalived configuration file:
[root@lvs-02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS02
    script_user root
    enable_script_security
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth2
    virtual_router_id 51
    priority 100
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.100/32 dev eth2
    }
    track_interface {
        eth2
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.50.100 443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 192.168.20.24 443

    real_server 192.168.50.22 443 {
        weight 1
        SSL_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.50.23 443 {
        weight 1
        SSL_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 192.168.50.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.50.22 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.50.23 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

# 2. The keepalived notify.sh script (same as on lvs01):
[root@lvs-02 keepalived]# cat notify.sh
#!/bin/bash
contact='root@localhost'
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
case $1 in
master)
    notify master
    ;;
backup)
    notify backup
    ;;
fault)
    notify fault
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac

# Make it executable:
[root@lvs-02 keepalived]# chmod +x notify.sh

# 3. Add a default route pointing at the router gateway:
[root@lvs-02 ~]# ip route add default via 192.168.50.50 dev eth2
[root@lvs-02 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.50.50   0.0.0.0         UG    0      0        0 eth2
192.168.50.0    0.0.0.0         255.255.255.0   U     102    0        0 eth2

# 4. Start the keepalived service:
[root@lvs-02 ~]# systemctl start keepalived.service

# 5. Check the automatically generated ipvs rules:
[root@lvs-02 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.50.100:80 rr
  -> 192.168.50.22:80             Route   1      0          0
  -> 192.168.50.23:80             Route   1      0          0
TCP  192.168.50.100:443 rr
  -> 192.168.50.22:443            Route   1      0          0
  -> 192.168.50.23:443            Route   1      0          0

# 6. Check the VIP; it is not on this host:
[root@lvs-02 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
```
- Test from the client.
- The client network configuration is as follows:

```
[root@xuzhichao ~]# ip add
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2f:d0:da brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.17/24 brd 192.168.20.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
[root@xuzhichao ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.20.0    0.0.0.0         255.255.255.0   U     101    0        0 eth1
```
- Test access:

```
# 1. HTTP access, which is redirected to HTTPS:
[root@xuzhichao ~]# for i in {1..10} ;do curl -k -L -Hhost:www.xuzhichao.com http://192.168.20.50; done
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page

# 2. Direct HTTPS access:
[root@xuzhichao ~]# for i in {1..10} ;do curl -k -Hhost:www.xuzhichao.com https://192.168.20.50; done
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
```
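Eyeballing interleaved responses gets tedious as the loop grows. A tiny helper (the function name is mine) that tallies how many requests each RS answered, assuming one response line per request as in the output above:

```shell
#!/bin/bash
# count_rs_hits: tally identical response lines, most frequent first.
# Feed it the output of the curl loop, one response body line per request.
count_rs_hits() {
    sort | uniq -c | sort -rn
}

# Hypothetical usage against the VIP:
# for i in {1..20}; do curl -s -k -L -Hhost:www.xuzhichao.com http://192.168.20.50; done | count_rs_hits
```

With the rr scheduler and both RS healthy, the two counts should be equal; a skewed tally is a quick hint that a health check is flapping.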
1.5 RS Failure Scenario Test
- Stop the nginx service on the nginx02 node:

```
[root@nginx02 ~]# systemctl stop nginx.service
```
- Check the logs and the ipvs rule changes on the two LVS nodes:

```
# 1. The log shows the backend health check failing and the RS being removed from the cluster:
[root@lvs-01 ~]# tail -f /var/log/keepalived.log
Jul 13 20:00:57 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 failed.
Jul 13 20:00:59 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 failed.
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Check on service [192.168.50.22]:80 failed after 1 retry.
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:80 from VS [192.168.50.100]:80
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
Jul 13 20:01:00 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
Jul 13 20:01:02 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
Jul 13 20:01:05 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Error connecting server [192.168.50.22]:443.
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Check on service [192.168.50.22]:443 failed after 3 retry.
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:443 from VS [192.168.50.100]:443
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
Jul 13 20:01:08 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.

# 2. The ipvs rules show that host 192.168.50.22 has been removed from the cluster:
[root@lvs-01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.50.100:80 rr
  -> 192.168.50.23:80             Route   1      0          0
TCP  192.168.50.100:443 rr
  -> 192.168.50.23:443            Route   1      0          0
```
- Client test: all requests now go to the nginx03 node:

```
[root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
node2.xuzhichao.com page
```
- Recover the nginx02 node and check the logs and ipvs rules on the two LVS nodes:

```
# 1. Start the nginx service on the nginx02 node:
[root@nginx02 ~]# systemctl start nginx.service

# 2. The lvs01 keepalived log shows the nginx02 check succeeding and the RS being re-added to the backend pool:
[root@lvs-01 ~]# tail -f /var/log/keepalived.log
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: HTTP status code success to [192.168.50.22]:443 url(1).
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Remote Web server [192.168.50.22]:443 succeed on service.
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Adding service [192.168.50.22]:443 to VS [192.168.50.100]:443
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
Jul 13 20:06:44 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.
Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: TCP connection to [192.168.50.22]:80 success.
Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: Adding service [192.168.50.22]:80 to VS [192.168.50.100]:80
Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: Remote SMTP server [127.0.0.1]:25 connected.
Jul 13 20:06:49 lvs-01 Keepalived_healthcheckers[13466]: SMTP alert successfully sent.

# 3. Check the ipvs rules:
[root@lvs-01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.50.100:80 rr
  -> 192.168.50.22:80             Route   1      0          0
  -> 192.168.50.23:80             Route   1      0          0
TCP  192.168.50.100:443 rr
  -> 192.168.50.22:443            Route   1      0          0
  -> 192.168.50.23:443            Route   1      0          0
```
- Testing from the client again, both nginx nodes serve traffic normally:

```
[root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
```
1.6 LVS Node Failure Scenario Test
- Stop the keepalived service on the lvs-01 node to simulate an lvs-01 failure, and observe the load-balancing cluster:

```
# 1. Stop the keepalived service on lvs-01:
[root@lvs-01 ~]# systemctl stop keepalived.service

# 2. Check the keepalived logs:
[root@lvs-01 ~]# tail -f /var/log/keepalived.log
Jul 13 20:11:08 lvs-01 Keepalived[13465]: Stopping
Jul 13 20:11:08 lvs-01 Keepalived_vrrp[13467]: VRRP_Instance(VI_1) sent 0 priority
Jul 13 20:11:08 lvs-01 Keepalived_vrrp[13467]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.22]:80 from VS [192.168.50.100]:80
Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Removing service [192.168.50.23]:80 from VS [192.168.50.100]:80
Jul 13 20:11:08 lvs-01 Keepalived_healthcheckers[13466]: Stopped
Jul 13 20:11:09 lvs-01 Keepalived_vrrp[13467]: Stopped
Jul 13 20:11:09 lvs-01 Keepalived[13465]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2

[root@lvs-02 ~]# tail -f /var/log/keepalived.log
Jul 13 20:11:09 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth2 for 192.168.50.100
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100
Jul 13 20:11:12 lvs-02 Keepalived_vrrp[2247]: Sending gratuitous ARP on eth2 for 192.168.50.100

# 3. Check the VIP; it has moved to the lvs-02 node:
[root@lvs-02 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet 192.168.50.100/32 scope global eth2
       valid_lft forever preferred_lft forever
[root@lvs-01 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever

# 4. Client access still works:
[root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
```
- Recover the lvs-01 node and observe the load-balancing cluster:

```
# 1. Start the keepalived service on lvs-01:
[root@lvs-01 ~]# systemctl start keepalived.service

# 2. Check the keepalived logs:
[root@lvs-01 ~]# tail -f /var/log/keepalived.log
Jul 13 20:15:36 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: Sending gratuitous ARP on eth2 for 192.168.50.100
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth2 for 192.168.50.100
Jul 13 20:15:39 lvs-01 Keepalived_vrrp[13724]: Sending gratuitous ARP on eth2 for 192.168.50.100

[root@lvs-02 ~]# tail -f /var/log/keepalived.log
Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Received advert with higher priority 120, ours 100
Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 13 20:15:36 lvs-02 Keepalived_vrrp[2247]: Opening script file /etc/keepalived/notify.sh

# 3. Check the VIP; it is back on lvs-01, which has the higher priority (120 vs 100) and preempts by default:
[root@lvs-01 ~]# ip add
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:21:84:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.31/24 brd 192.168.50.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet 192.168.50.100/32 scope global eth2
       valid_lft forever preferred_lft forever
[root@lvs-02 ~]# ip add
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e4:cf:0d brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e4:cf:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.32/24 brd 192.168.50.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever

# 4. Client access works normally:
[root@xuzhichao ~]# for i in {1..10} ;do curl -L -k -Hhost:www.xuzhichao.com http://192.168.20.50 ;done
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
node2.xuzhichao.com page
node1.xuzhichao.com page
```