Implementing LVS + Keepalived on Linux
1. A brief overview of the four LVS cluster types and their use cases
LVS clusters come in four types: NAT, DR, TUN, and FULLNAT.
In terms of how they work, NAT and FULLNAT both rewrite the request packet: NAT rewrites the destination IP and destination port, while FULLNAT rewrites both the source and destination IPs and, if needed, the source and destination ports (rewriting the source port is generally not recommended). In both types, request and response traffic must pass through the director (scheduler). In a NAT cluster, the back-end real servers normally sit on the same network segment as the director, use private addresses, and use the director as their gateway. In a FULLNAT cluster the real servers need not share an IP network with the director; they only need to be able to reach it. Of the two, NAT is far more common; FULLNAT is a non-standard feature, so using it requires patching the Linux kernel first. NAT suits clusters whose request volume is not too large, where the director and the real servers share an IP network, and it is typically used to hide the real addresses of the back-end hosts. FULLNAT suits internal, cross-subnet clusters where the real servers and the director are on different IP networks but can still route to each other.
DR and TUN leave the original request packet untouched; each simply prepends a new header to it, a MAC header for DR and an IP header for TUN. Both types share this trait: request packets pass through the director, while each real server sends its response directly back to the client, which means every real server must carry the VIP. In a DR cluster the director and the real servers must be on the same physical network, that is, with no router between them, because DR works by re-encapsulating the request frame: the source MAC is that of the DIP interface and the destination MAC is that of the RS chosen by the scheduler. DR is typically used where traffic is very heavy, often as the inbound traffic collector in a multi-tier scheduling setup: LVS takes the front-end traffic and hands it to a second-tier scheduler that can match on richer criteria (for example, scheduling by request URL). TUN is similar to DR in that requests go through the director while each real server responds to the client directly, bypassing the director; but here the DIP and the real servers are not in the same server room or LAN. Typically each real server sits on the public network with its own egress address. TUN works by wrapping the original request in an outer IP header whose source is the DIP and destination is the RIP, so every real server in such a cluster must be able to recognize tunnelled (double-IP-header) packets. TUN suits deployments where the real servers have different public IPs and are far apart (different server rooms, cities, and so on).
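On the command line, the forwarding mode is selected per real server with a single ipvsadm flag. A minimal sketch of the mapping, with a hypothetical VIP and RIP; the commands are only echoed here, so the sketch needs neither root privileges nor the ipvsadm package:

```shell
#!/bin/bash
# Hypothetical addresses for illustration; echo instead of execute.
vip=192.168.0.222
rip=192.168.0.20

echo "ipvsadm -A -t $vip:80 -s rr"        # define the cluster service
echo "ipvsadm -a -t $vip:80 -r $rip -m"   # -m: NAT (masquerading)
echo "ipvsadm -a -t $vip:80 -r $rip -g"   # -g: DR (gatewaying, the default)
echo "ipvsadm -a -t $vip:80 -r $rip -i"   # -i: TUN (ipip tunnelling)
```

FULLNAT has no standard flag here, which matches its status as a non-standard, kernel-patch feature.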
2. Describe how LVS-DR works, and configure a working example.
As the diagram above shows, an LVS-DR cluster works like this. The client sends a request to the VIP. When the packet reaches the LVS server, it inspects the packet, sees that the destination IP is the VIP and that it holds the VIP itself, and passes the packet to the INPUT chain for rule matching. There the rules we wrote on the LVS server, which define the cluster, are consulted; when the packet matches a cluster service, the original request is left completely untouched and a new MAC header is wrapped around it. The source MAC of that header is the MAC of the interface holding the DIP; the destination MAC belongs to the RIP chosen by the scheduling algorithm (the director obtains that interface's MAC via an ARP broadcast). The re-encapsulated frame is then forwarded straight out of the DIP interface, and the switch delivers it by destination MAC to the interface holding that RIP. The RS receives the frame, sees the destination MAC is its own, strips the MAC header, finds the client's request packet underneath, sees the destination IP is also its own, strips the IP header, and processes the client's request. When the RS builds its response, it uses the VIP as the source IP and the client's IP as the destination, and sends it out of the interface holding the VIP (because that is the interface on which the request arrived, and the response leaves by the same interface). The response is then routed hop by hop to the client, which sees its own IP as the destination, strips the IP header, and obtains the server's reply.
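The key point of the flow above is that only the outer MAC header changes; the IP header the client built (source CIP, destination VIP) travels to the RS untouched. A toy sketch, with hypothetical addresses and MACs, that prints the headers before and after the director's rewrite:

```shell
#!/bin/bash
# Hypothetical addresses/MACs for illustration only.
cip=192.168.0.99            # client IP
vip=192.168.0.222           # virtual IP
dip_mac=00:0c:29:f2:82:0c   # MAC of the DIP interface
rip_mac=00:0c:29:96:23:23   # MAC of the RS picked by the scheduler

# Frame as it arrives at the director:
echo "in:  ip src=$cip dst=$vip"
# Frame as the director re-emits it: same IP header, new MAC header.
echo "out: mac src=$dip_mac dst=$rip_mac | ip src=$cip dst=$vip"
```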
That is how an LVS-DR cluster processes packets. Next we build a lab environment modelled on the topology diagram above.
Environment:
Client: 192.168.0.99. LVS server: DIP 192.168.0.10, VIP 192.168.0.222. Back-end RS1: 192.168.0.20; RS2: 192.168.0.30 (both also carry the VIP 192.168.0.222).
1) Prepare two RSs with a working web service (the two RSs, currently 192.168.0.20 and 192.168.0.30, each serve a test page; the pages are deliberately different so we can see which RS a request was scheduled to).
[root@dr ~]# curl http://192.168.0.20/test.html
<h1>RS1,192.168.0.20</h1>
[root@dr ~]# curl http://192.168.0.30/test.html
<h1>RS2,192.168.0.30</h1>
[root@dr ~]#
Note: with the web services configured, the dr host can reach both test pages.
2) Tune kernel parameters so that neither RS announces the VIP via ARP or answers ARP requests for it on the LAN; add a host route so packets destined for the VIP are forwarded to the VIP's interface; and configure the VIP on each RS.
To make this easy to repeat, we put it in a script:
[root@rs1 ~]# cat setparam.sh
#!/bin/bash
vip='192.168.0.222'
mask='255.255.255.255'
interface='lo:0'
case $1 in
start)
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    ifconfig $interface $vip netmask $mask broadcast $vip up
    route add -host $vip dev $interface
    ;;
stop)
    ifconfig $interface down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    ;;
*)
    echo "Usage: bash $0 start|stop"
    exit 1
    ;;
esac
[root@rs1 ~]#
Note: the script sets the kernel parameters, binds the VIP to lo:0, and adds the host route.
[root@rs1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.20  netmask 255.255.255.0  broadcast 192.168.0.255
        ether 00:0c:29:96:23:23  txqueuelen 1000  (Ethernet)
        RX packets 31990  bytes 42260814 (40.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23112  bytes 1983590 (1.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 259  bytes 21752 (21.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 259  bytes 21752 (21.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@rs1 ~]# bash -x setparam.sh start
+ vip=192.168.0.222
+ mask=255.255.255.255
+ interface=lo:0
+ case $1 in
+ echo 2
+ echo 2
+ echo 1
+ echo 1
+ ifconfig lo:0 192.168.0.222 netmask 255.255.255.255 broadcast 192.168.0.222 up
+ route add -host 192.168.0.222 dev lo:0
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_announce
2
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
1
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/lo/arp_announce
2
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/lo/arp_ignore
1
[root@rs1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 ens33
192.168.0.0     0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.0.222   0.0.0.0         255.255.255.255 UH    0      0        0 lo
[root@rs1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.20  netmask 255.255.255.0  broadcast 192.168.0.255
        ether 00:0c:29:96:23:23  txqueuelen 1000  (Ethernet)
        RX packets 32198  bytes 42279504 (40.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23266  bytes 2001218 (1.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 259  bytes 21752 (21.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 259  bytes 21752 (21.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.0.222  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@rs1 ~]#
Note: after running the script, the kernel parameters are set and the VIP and its host route are in place. RS2 only needs the same script run on it.
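The same RS-side setup can be expressed with the modern sysctl and ip(8) tools. This is a sketch only: the commands are echoed rather than executed, since they require root, and the values mirror the script above:

```shell
#!/bin/bash
vip=192.168.0.222

# Echo instead of execute: these commands need root on a real RS.
echo "sysctl -w net.ipv4.conf.all.arp_announce=2"
echo "sysctl -w net.ipv4.conf.lo.arp_announce=2"
echo "sysctl -w net.ipv4.conf.all.arp_ignore=1"
echo "sysctl -w net.ipv4.conf.lo.arp_ignore=1"
echo "ip addr add $vip/32 dev lo"   # /32 mask, same as 255.255.255.255
echo "ip route add $vip dev lo"     # host route to the VIP's interface
```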
3) Configure the VIP on the LVS server and define the cluster service
3.1) First bind the VIP to the director
[root@dr ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.10  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fef2:820c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f2:82:0c  txqueuelen 1000  (Ethernet)
        RX packets 11135  bytes 9240712 (8.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7705  bytes 754318 (736.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 70  bytes 5804 (5.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 70  bytes 5804 (5.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@dr ~]# ifconfig ens33:0 192.168.0.222 netmask 255.255.255.255 broadcast 192.168.0.222 up
[root@dr ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.10  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fef2:820c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f2:82:0c  txqueuelen 1000  (Ethernet)
        RX packets 11277  bytes 9253418 (8.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7800  bytes 765238 (747.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.222  netmask 255.255.255.255  broadcast 192.168.0.222
        ether 00:0c:29:f2:82:0c  txqueuelen 1000  (Ethernet)
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 70  bytes 5804 (5.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 70  bytes 5804 (5.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@dr ~]#
3.2) Add the cluster service
[root@dr ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@dr ~]# ipvsadm -A -t 192.168.0.222:80 -s rr
[root@dr ~]# ipvsadm -a -t 192.168.0.222:80 -r 192.168.0.20 -g
[root@dr ~]# ipvsadm -a -t 192.168.0.222:80 -r 192.168.0.30 -g
[root@dr ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          0
  -> 192.168.0.30:80              Route   1      0          0
[root@dr ~]#
Note: the rules above define a cluster service 192.168.0.222:80 with the rr (round-robin) scheduler, and add two real servers, 192.168.0.20 and 192.168.0.30, in DR mode.
4) Test
Access the VIP from the client, 192.168.0.99.
Note: the client can reach the service, and requests are distributed to the back-end servers in round-robin order. Let's switch the scheduling algorithm and try again.
[root@dr ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          5
  -> 192.168.0.30:80              Route   1      0          5
[root@dr ~]# ipvsadm -E -t 192.168.0.222:80 -s sh
[root@dr ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 sh
  -> 192.168.0.20:80              Route   1      0          0
  -> 192.168.0.30:80              Route   1      0          0
[root@dr ~]#
Note: the command above changes the scheduler of cluster service 192.168.0.222:80 to sh (source hashing).
Note: the new algorithm takes effect immediately. That completes the LVS-DR cluster. One caveat: if the VIP and DIP are not on the same subnet, you must think about how the back-end real servers will get their response packets out.
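The difference between rr and sh can be sketched in plain bash: rr walks the RS list in turn, while sh hashes the client IP so the same client keeps landing on the same RS. This is a toy model of the idea only, not the kernel's actual implementation, which also ages entries and honours weights:

```shell
#!/bin/bash
rs_list=(192.168.0.20 192.168.0.30)

# rr: the i-th request goes to RS (i mod N)
rr_pick() { echo "${rs_list[$(( $1 % ${#rs_list[@]} ))]}"; }

# sh: hash the client IP (toy hash: sum of octets), then mod N --
# the same client always maps to the same RS
sh_pick() {
    local sum=0 oct
    for oct in ${1//./ }; do sum=$(( sum + oct )); done
    echo "${rs_list[$(( sum % ${#rs_list[@]} ))]}"
}

rr_pick 0               # first request  -> 192.168.0.20
rr_pick 1               # second request -> 192.168.0.30
sh_pick 192.168.0.99    # a given client...
sh_pick 192.168.0.99    # ...always gets the same RS
```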
3. Implement a highly available LVS with Keepalived.
First, an explanation of the diagram above. While the keepalived master node is healthy, packets flow exactly as in the plain LVS cluster: the red and green solid lines are request paths, and the red and green dashed lines are response paths. The backup node uses the heartbeat messages to judge whether the master is alive; if it detects no master within the configured interval, it immediately takes over the VIP and starts serving. New requests are then handled by the backup node, giving us a highly available service with no single point of failure. The blue dashed lines in the diagram show the request and response paths after the master has gone down.
Following the diagram, we add one more server to the LVS cluster and install and configure keepalived on both directors, as shown above.
1) Install keepalived on both directors
[root@dr1 ~]# yum install -y keepalived
Loaded plugins: fastestmirror
epel                                                   | 5.4 kB  00:00:00
my_base                                                | 3.6 kB  00:00:00
(1/2): epel/x86_64/updateinfo                          | 1.0 MB  00:00:00
(2/2): epel/x86_64/primary_db                          | 6.7 MB  00:00:01
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package keepalived.x86_64 0:1.3.5-1.el7 will be installed
--> Processing Dependency: libnetsnmpmibs.so.31()(64bit) for package: keepalived-1.3.5-1.el7.x86_64
--> Processing Dependency: libnetsnmpagent.so.31()(64bit) for package: keepalived-1.3.5-1.el7.x86_64
--> Processing Dependency: libnetsnmp.so.31()(64bit) for package: keepalived-1.3.5-1.el7.x86_64
--> Running transaction check
---> Package net-snmp-agent-libs.x86_64 1:5.7.2-28.el7 will be installed
--> Processing Dependency: libsensors.so.4()(64bit) for package: 1:net-snmp-agent-libs-5.7.2-28.el7.x86_64
---> Package net-snmp-libs.x86_64 1:5.7.2-28.el7 will be installed
--> Running transaction check
---> Package lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
==================================================================================================
 Package              Arch    Version                          Repository  Size
==================================================================================================
Installing:
 keepalived           x86_64  1.3.5-1.el7                      my_base     327 k
Installing for dependencies:
 lm_sensors-libs      x86_64  3.4.0-4.20160601gitf9185e5.el7   my_base      41 k
 net-snmp-agent-libs  x86_64  1:5.7.2-28.el7                   my_base     704 k
 net-snmp-libs        x86_64  1:5.7.2-28.el7                   my_base     748 k

Transaction Summary
==================================================================================================
Install  1 Package (+3 Dependent packages)

Total download size: 1.8 M
Installed size: 6.0 M
Downloading packages:
(1/4): lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64.rpm  |  41 kB  00:00:00
(2/4): keepalived-1.3.5-1.el7.x86_64.rpm                          | 327 kB  00:00:00
(3/4): net-snmp-agent-libs-5.7.2-28.el7.x86_64.rpm                | 704 kB  00:00:00
(4/4): net-snmp-libs-5.7.2-28.el7.x86_64.rpm                      | 748 kB  00:00:00
--------------------------------------------------------------------------------------------------
Total                                                 1.9 MB/s | 1.8 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 1:net-snmp-libs-5.7.2-28.el7.x86_64                        1/4
  Installing : lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64      2/4
  Installing : 1:net-snmp-agent-libs-5.7.2-28.el7.x86_64                  3/4
  Installing : keepalived-1.3.5-1.el7.x86_64                              4/4
  Verifying  : 1:net-snmp-libs-5.7.2-28.el7.x86_64                        1/4
  Verifying  : 1:net-snmp-agent-libs-5.7.2-28.el7.x86_64                  2/4
  Verifying  : lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64      3/4
  Verifying  : keepalived-1.3.5-1.el7.x86_64                              4/4

Installed:
  keepalived.x86_64 0:1.3.5-1.el7

Dependency Installed:
  lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7
  net-snmp-agent-libs.x86_64 1:5.7.2-28.el7
  net-snmp-libs.x86_64 1:5.7.2-28.el7

Complete!
[root@dr1 ~]#
Note: the keepalived package comes from the base repository, so no extra EPEL configuration is needed. Install the keepalived package on DR2 the same way.
2) Write the mail notification script
[root@dr1 ~]# cat /etc/keepalived/notify.sh
#!/bin/bash
#
contact='root@localhost'
notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}
case $1 in
master)
    notify master
    ;;
backup)
    notify backup
    ;;
fault)
    notify fault
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
[root@dr1 ~]# chmod +x /etc/keepalived/notify.sh
[root@dr1 ~]# ll /etc/keepalived/notify.sh
-rwxr-xr-x 1 root root 405 Feb 21 19:52 /etc/keepalived/notify.sh
[root@dr1 ~]#
Note: the script's idea is simple: depending on the state name passed as its argument, it sends a correspondingly worded notification mail.
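The message construction in notify.sh can be exercised without a local MTA by swapping the mail step for a plain echo. A test sketch only; the function name build_msg is ours, not part of the original script:

```shell
#!/bin/bash
# Build the same subject/body strings as notify.sh, but just print them
# instead of piping them to mail(1).
build_msg() {
    local state=$1
    local subject="$(hostname) to be $state, vip floating"
    local body="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $state"
    echo "Subject: $subject"
    echo "Body: $body"
}

build_msg master
build_msg backup
```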
3) Install a sorry_server on each DR
[root@dr1 ~]# yum install -y nginx
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 1:1.16.1-1.el7 will be installed
--> Processing Dependency: nginx-all-modules = 1:1.16.1-1.el7 for package: 1:nginx-1.16.1-1.el7.x86_64
--> Processing Dependency: nginx-filesystem = 1:1.16.1-1.el7 for package: 1:nginx-1.16.1-1.el7.x86_64
--> Processing Dependency: nginx-filesystem for package: 1:nginx-1.16.1-1.el7.x86_64
…… (some output omitted)
Installed:
  nginx.x86_64 1:1.16.1-1.el7

Dependency Installed:
  centos-indexhtml.noarch 0:7-9.el7.centos       fontconfig.x86_64 0:2.10.95-11.el7
  fontpackages-filesystem.noarch 0:1.44-8.el7    gd.x86_64 0:2.0.35-26.el7
  gperftools-libs.x86_64 0:2.4-8.el7             libX11.x86_64 0:1.6.5-1.el7
  libX11-common.noarch 0:1.6.5-1.el7             libXau.x86_64 0:1.0.8-2.1.el7
  libXpm.x86_64 0:3.5.12-1.el7                   libjpeg-turbo.x86_64 0:1.2.90-5.el7
  libpng.x86_64 2:1.5.13-7.el7_2                 libunwind.x86_64 2:1.2-2.el7
  libxcb.x86_64 0:1.12-1.el7                     libxslt.x86_64 0:1.1.28-5.el7
  lyx-fonts.noarch 0:2.2.3-1.el7                 nginx-all-modules.noarch 1:1.16.1-1.el7
  nginx-filesystem.noarch 1:1.16.1-1.el7         nginx-mod-http-image-filter.x86_64 1:1.16.1-1.el7
  nginx-mod-http-perl.x86_64 1:1.16.1-1.el7      nginx-mod-http-xslt-filter.x86_64 1:1.16.1-1.el7
  nginx-mod-mail.x86_64 1:1.16.1-1.el7           nginx-mod-stream.x86_64 1:1.16.1-1.el7

Complete!
[root@dr1 ~]#
Note: do the same on DR2.
Give each sorry server a test page
[root@dr1 ~]# cat /usr/share/nginx/html/index.html
<h1>sorry server 192.168.0.10</h1>
[root@dr1 ~]#
[root@dr2 ~]# cat /usr/share/nginx/html/index.html
<h1>sorry server 192.168.0.11<h1>
[root@dr2 ~]#
Note: the two pages could be identical; we deliberately make them different to tell them apart.
Start the services
[root@dr1 ~]# systemctl start nginx
[root@dr1 ~]# curl http://127.0.0.1
<h1>sorry server 192.168.0.10</h1>
[root@dr1 ~]#
[root@dr2 ~]# systemctl start nginx
[root@dr2 ~]# curl http://127.0.0.1
<h1>sorry server 192.168.0.11<h1>
[root@dr2 ~]#
Note: both DRs' sorry servers are up and reachable.
4) Configure keepalived on the master node
1) Before configuring, make sure the servers' clocks are in sync. Usually every host in a cluster is pointed at a single time server. For how to set one up, see https://www.cnblogs.com/qiuhom-1874/p/12079927.html.
[root@dr1 ~]# grep "^server" /etc/chrony.conf
server 192.168.0.99 iburst
[root@dr1 ~]#
[root@dr2 ~]# grep "^server" /etc/chrony.conf
server 192.168.0.99 iburst
[root@dr2 ~]#
[root@rs1 ~]# grep "^server" /etc/chrony.conf
server 192.168.0.99 iburst
[root@rs1 ~]#
[root@rs2 ~]# grep "^server" /etc/chrony.conf
server 192.168.0.99 iburst
[root@rs2 ~]#
Note: point every host at the same time server and restart the service; the clocks will then synchronize.
2) Make sure iptables and SELinux will not get in the way;
[root@dr1 ~]# getenforce
Disabled
[root@dr1 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@dr1 ~]#
Note: we can simply disable SELinux and iptables. On CentOS 7 there may be many firewall rules; either add explicit iptables rules to allow the traffic, or flush the rules and set the default policy to ACCEPT.
3) Let the nodes reach each other by hostname (not strictly required by keepalived); using the /etc/hosts file is recommended;
[root@dr1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.10 dr1.ilinux.io dr1
192.168.0.11 dr2.ilinux.io dr2
192.168.0.20 rs1.ilinux.io rs1
192.168.0.30 rs2.ilinux.io rs2
[root@dr1 ~]# scp /etc/hosts 192.168.0.11:/etc/
The authenticity of host '192.168.0.11 (192.168.0.11)' can't be established.
ECDSA key fingerprint is SHA256:EG9nua4JJuUeofheXlgQeL9hX5H53JynOqf2vf53mII.
ECDSA key fingerprint is MD5:57:83:e6:46:2c:4b:bb:33:13:56:17:f7:fd:76:71:cc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.11' (ECDSA) to the list of known hosts.
[email protected]'s password:
hosts                                          100%  282    74.2KB/s   00:00
[root@dr1 ~]# scp /etc/hosts 192.168.0.20:/etc/
[email protected]'s password:
hosts                                          100%  282   144.9KB/s   00:00
[root@dr1 ~]# scp /etc/hosts 192.168.0.30:/etc/
[email protected]'s password:
hosts                                          100%  282    85.8KB/s   00:00
[root@dr1 ~]# ping dr1
PING dr1.ilinux.io (192.168.0.10) 56(84) bytes of data.
64 bytes from dr1.ilinux.io (192.168.0.10): icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from dr1.ilinux.io (192.168.0.10): icmp_seq=2 ttl=64 time=0.046 ms
^C
--- dr1.ilinux.io ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.031/0.038/0.046/0.009 ms
[root@dr1 ~]# ping dr2
PING dr2.ilinux.io (192.168.0.11) 56(84) bytes of data.
64 bytes from dr2.ilinux.io (192.168.0.11): icmp_seq=1 ttl=64 time=1.36 ms
64 bytes from dr2.ilinux.io (192.168.0.11): icmp_seq=2 ttl=64 time=0.599 ms
64 bytes from dr2.ilinux.io (192.168.0.11): icmp_seq=3 ttl=64 time=0.631 ms
^C
--- dr2.ilinux.io ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.599/0.865/1.366/0.355 ms
[root@dr1 ~]# ping rs1
PING rs1.ilinux.io (192.168.0.20) 56(84) bytes of data.
64 bytes from rs1.ilinux.io (192.168.0.20): icmp_seq=1 ttl=64 time=0.614 ms
64 bytes from rs1.ilinux.io (192.168.0.20): icmp_seq=2 ttl=64 time=0.628 ms
^C
--- rs1.ilinux.io ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.614/0.621/0.628/0.007 ms
[root@dr1 ~]# ping rs2
PING rs2.ilinux.io (192.168.0.30) 56(84) bytes of data.
64 bytes from rs2.ilinux.io (192.168.0.30): icmp_seq=1 ttl=64 time=0.561 ms
64 bytes from rs2.ilinux.io (192.168.0.30): icmp_seq=2 ttl=64 time=0.611 ms
64 bytes from rs2.ilinux.io (192.168.0.30): icmp_seq=3 ttl=64 time=0.653 ms
^C
--- rs2.ilinux.io ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.561/0.608/0.653/0.042 ms
[root@dr1 ~]#
Note: once the hosts file is ready, scp it to every node.
4) Make sure the interface used for cluster traffic on each node supports MULTICAST;
With those four points covered, we can configure keepalived:
[root@dr1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DR1
   vrrp_mcast_group4 224.10.10.222
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Yc15tnWa
    }
    virtual_ipaddress {
        192.168.0.222/24 dev ens33 label ens33:0
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.0.222 80 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.0.20 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.0.30 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@dr1 ~]#
Note: the multicast address carries the VRRP heartbeats; it must be configured as a class D address.
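If the network between the two directors filters multicast, keepalived can also exchange VRRP adverts over unicast. A hedged fragment only, using this lab's addresses; check that your keepalived version supports these keywords before relying on it. Inside vrrp_instance, vrrp_mcast_group4 would be replaced by something like:

vrrp_instance VI_1 {
    ...
    unicast_src_ip 192.168.0.10     ! this node's real address
    unicast_peer {
        192.168.0.11                ! the other director
    }
    ...
}

The multicast setup in this article remains the simpler default when the LAN allows it.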
5) Copy the mail script over and configure keepalived on the backup node
[root@dr1 ~]# scp /etc/keepalived/notify.sh 192.168.0.11:/etc/keepalived/
[email protected]'s password:
notify.sh                                      100%  405   116.6KB/s   00:00
[root@dr1 ~]# scp /etc/keepalived/keepalived.conf 192.168.0.11:/etc/keepalived/keepalived.conf.bak
[email protected]'s password:
keepalived.conf                                100% 1162   506.4KB/s   00:00
[root@dr1 ~]#
Note: we can just copy the master's configuration file to the backup node and tweak it.
[root@dr2 ~]# ls /etc/keepalived/
keepalived.conf  keepalived.conf.bak  notify.sh
[root@dr2 ~]# cp /etc/keepalived/keepalived.conf{,.backup}
[root@dr2 ~]# ls /etc/keepalived/
keepalived.conf  keepalived.conf.backup  keepalived.conf.bak  notify.sh
[root@dr2 ~]# mv /etc/keepalived/keepalived.conf.bak /etc/keepalived/keepalived.conf
mv: overwrite ‘/etc/keepalived/keepalived.conf’? y
[root@dr2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DR2
   vrrp_mcast_group4 224.10.10.222
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Yc15tnWa
    }
    virtual_ipaddress {
        192.168.0.222/24 dev ens33 label ens33:0
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.0.222 80 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.0.20 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.0.30 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
"/etc/keepalived/keepalived.conf" 60L, 1161C written
[root@dr2 ~]#
Note: when the configuration file is copied from the master to the backup node, only three things change: router_id in global_defs, and, in vrrp_instance, state becomes BACKUP and priority becomes 99. Priority is a relative weight: the lower the number, the lower the priority.
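The three edits described above can be scripted. This sketch derives a backup-node config from a master config with sed, shown on a trimmed stand-in config rather than the real file:

```shell
#!/bin/bash
# Stand-in for the master's config: only the lines sed must touch.
cat > /tmp/ka_master.conf <<'EOF'
   router_id LVS_DR1
    state MASTER
    priority 100
EOF

# The three substitutions that turn a master config into a backup config.
sed -e 's/router_id LVS_DR1/router_id LVS_DR2/' \
    -e 's/state MASTER/state BACKUP/' \
    -e 's/priority 100/priority 99/' \
    /tmp/ka_master.conf > /tmp/ka_backup.conf

cat /tmp/ka_backup.conf
```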
6) Start both nodes and check that the VIP is configured and the LVS rules are generated
[root@dr1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.10  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fef2:820c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f2:82:0c  txqueuelen 1000  (Ethernet)
        RX packets 16914  bytes 14760959 (14.0 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 12058  bytes 1375703 (1.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 15  bytes 1304 (1.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15  bytes 1304 (1.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@dr1 ~]# systemctl start keepalived
[root@dr1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.10  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fef2:820c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f2:82:0c  txqueuelen 1000  (Ethernet)
        RX packets 17003  bytes 14768581 (14.0 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 12150  bytes 1388509 (1.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.222  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:f2:82:0c  txqueuelen 1000  (Ethernet)
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 15  bytes 1304 (1.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15  bytes 1304 (1.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          0
  -> 192.168.0.30:80              Route   1      0          0
[root@dr1 ~]#
Note: once keepalived starts, the VIP and the LVS rules are created automatically. Next we capture packets on the backup node to confirm the master is sending heartbeats to the multicast group.
Note: the master is advertising its heartbeats to the multicast group.
Start the backup node
[root@dr2 ~]# systemctl start keepalived
[root@dr2 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.11  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe50:13f1  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:50:13:f1  txqueuelen 1000  (Ethernet)
        RX packets 12542  bytes 14907658 (14.2 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 7843  bytes 701839 (685.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 10  bytes 879 (879.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 879 (879.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@dr2 ~]# tcpdump -i ens33 -nn host 224.10.10.222
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
20:59:33.620661 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
20:59:34.622645 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
20:59:35.624590 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
20:59:36.626588 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
20:59:37.628675 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
20:59:38.630562 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
20:59:39.632673 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
20:59:40.634658 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
20:59:41.636699 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
^C
9 packets captured
9 packets received by filter
0 packets dropped by kernel
[root@dr2 ~]#
Note: after the backup node starts, it does not take the VIP, because the master has the higher priority and is still advertising its heartbeats to the multicast group.
Access the cluster service from the client, 192.168.0.99
[qiuhom@test ~]$ ip a s enp2s0
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:30:18:51:af:3c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.99/24 brd 192.168.0.255 scope global noprefixroute enp2s0
       valid_lft forever preferred_lft forever
    inet 172.16.1.2/16 brd 172.16.255.255 scope global noprefixroute enp2s0:0
       valid_lft forever preferred_lft forever
    inet6 fe80::230:18ff:fe51:af3c/64 scope link
       valid_lft forever preferred_lft forever
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS1,192.168.0.20</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$
Note: with the master healthy, the cluster service works normally.
Stop the master and see whether the cluster service stays reachable
[root@dr1 ~]# systemctl stop keepalived
[root@dr1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.10  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fef2:820c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f2:82:0c  txqueuelen 1000  (Ethernet)
        RX packets 18001  bytes 15406859 (14.6 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 14407  bytes 1548635 (1.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 15  bytes 1304 (1.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15  bytes 1304 (1.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@dr1 ~]#
Note: after stopping the master, its VIP and LVS rules are removed automatically. Now let's hit the cluster service from the client again: is it still reachable?
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS1,192.168.0.20</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS1,192.168.0.20</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$
Note: with the master down, the cluster service is unaffected, because the backup node has taken over, applying the VIP and the LVS rules on itself.
Now look at the backup node's IP addresses and LVS rules
[root@dr2 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.11  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe50:13f1  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:50:13:f1  txqueuelen 1000  (Ethernet)
        RX packets 13545  bytes 15227354 (14.5 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 9644  bytes 828542 (809.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.222  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:50:13:f1  txqueuelen 1000  (Ethernet)
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 10  bytes 879 (879.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 879 (879.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@dr2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          0
  -> 192.168.0.30:80              Route   1      0          0
[root@dr2 ~]#
We restore the master; will the backup node give up the VIP and remove the LVS rules?
[root@dr1 ~]# systemctl start keepalived
[root@dr1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.10  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fef2:820c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f2:82:0c  txqueuelen 1000  (Ethernet)
        RX packets 18533  bytes 15699933 (14.9 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 14808  bytes 1589148 (1.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.222  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 00:0c:29:f2:82:0c  txqueuelen 1000  (Ethernet)
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 17  bytes 1402 (1.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17  bytes 1402 (1.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          0
  -> 192.168.0.30:80              Route   1      0          0
[root@dr1 ~]#
Note: once keepalived starts on the master, the VIP and LVS rules are recreated there automatically. Now check whether the VIP and LVS rules still exist on the backup node.
[root@dr2 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.11  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe50:13f1  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:50:13:f1  txqueuelen 1000  (Ethernet)
        RX packets 13773  bytes 15243276 (14.5 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 10049  bytes 857748 (837.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 12  bytes 977 (977.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 977 (977.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
You have mail in /var/spool/mail/root
[root@dr2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          0
  -> 192.168.0.30:80              Route   1      0          0
[root@dr2 ~]#
Note: with the master back, the backup no longer holds the VIP, but its LVS rules remain. Does the backup node have any mail?
[root@dr2 ~]# mail
Heirloom Mail version 12.5 7/5/10.  Type ? for help.
"/var/spool/mail/root": 1 message 1 new
>N  1 root                  Fri Feb 21 08:13  18/673   "dr2.ilinux.io to be backup, vip floatin"
& 1
Message  1:
From [email protected]  Fri Feb 21 08:13:00 2020
Return-Path: <[email protected]>
X-Original-To: root@localhost
Delivered-To: [email protected]
Date: Fri, 21 Feb 2020 08:13:00 -0500
To: [email protected]
Subject: dr2.ilinux.io to be backup, vip floating
User-Agent: Heirloom mailx 12.5 7/5/10
Content-Type: text/plain; charset=us-ascii
From: [email protected] (root)
Status: R

2020-02-21 08:13:00: vrrp transition, dr2.ilinux.io changed to be backup

&
Note: there is one mail, telling us DR2 has switched to the backup state. The master should have a mail too, so let's check there as well.
[root@dr1 ~]# mail
Heirloom Mail version 12.5 7/5/10.  Type ? for help.
"/var/spool/mail/root": 1 message 1 new
>N  1 root                  Fri Feb 21 08:13  18/673   "dr1.ilinux.io to be master, vip floatin"
& 1
Message  1:
From [email protected]  Fri Feb 21 08:13:01 2020
Return-Path: <[email protected]>
X-Original-To: root@localhost
Delivered-To: [email protected]
Date: Fri, 21 Feb 2020 08:13:01 -0500
To: [email protected]
Subject: dr1.ilinux.io to be master, vip floating
User-Agent: Heirloom mailx 12.5 7/5/10
Content-Type: text/plain; charset=us-ascii
From: [email protected] (root)
Status: R

2020-02-21 08:13:01: vrrp transition, dr1.ilinux.io changed to be master

&
Note: the master also received a mail, saying dr1 has switched to the master state.
The LVS+keepalived failover test checks out. Next we test whether, when a real server goes down, the DR's LVS takes the corresponding RS out of service promptly.
[root@rs1 ~]# systemctl stop nginx
[root@rs1 ~]# ss -ntl
State      Recv-Q Send-Q   Local Address:Port   Peer Address:Port
LISTEN     0      128                  *:22                *:*
LISTEN     0      100          127.0.0.1:25                *:*
LISTEN     0      128                 :::22               :::*
LISTEN     0      100                ::1:25               :::*
[root@rs1 ~]#
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.30:80              Route   1      0          0
[root@dr1 ~]#
Note: when RS1 fails, the DR immediately takes it out of the cluster service.
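The removal happens because keepalived's health checkers (the TCP_CHECK and HTTP_GET blocks in the config above) stop getting answers. Roughly, they behave like the probes below; the commands are echoed rather than run, since the lab hosts are not reachable here, and keepalived implements the checks natively rather than via curl:

```shell
#!/bin/bash
rip=192.168.0.20

# TCP_CHECK: essentially a connect() that must succeed within connect_timeout.
echo "timeout 3 bash -c 'exec 3<>/dev/tcp/$rip/80'"
# HTTP_GET: fetch the configured path and require the configured status_code.
echo "curl -s -o /dev/null -w '%{http_code}' --max-time 3 http://$rip/test.html"
```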
Stop RS2 as well, and see whether the sorry server is added to the cluster service
[root@rs2 ~]# systemctl stop nginx
[root@rs2 ~]# ss -ntl
State      Recv-Q Send-Q   Local Address:Port   Peer Address:Port
LISTEN     0      128                  *:22                *:*
LISTEN     0      100          127.0.0.1:25                *:*
LISTEN     0      128                 :::22               :::*
LISTEN     0      100                ::1:25               :::*
[root@rs2 ~]#
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 127.0.0.1:80                 Route   1      0          0
[root@dr1 ~]#
Note: with both back-end RSs down, the sorry server joins the cluster at once; any client that now accesses the cluster service gets the sorry server's page.
[qiuhom@test ~]$ curl http://192.168.0.222/
<h1>sorry server 192.168.0.10</h1>
[qiuhom@test ~]$
Note: this page exists to tell users the site is under maintenance and the like; it is there purely to say "sorry" to users, hence the name sorry server. You could make it serve the same page as the cluster service, but that is generally not recommended.
Start RS1 again; will the cluster take the sorry server out and add RS1 back?
[root@rs1 ~]# systemctl start nginx
[root@rs1 ~]# ss -ntl
State      Recv-Q Send-Q   Local Address:Port   Peer Address:Port
LISTEN     0      128                  *:80                *:*
LISTEN     0      128                  *:22                *:*
LISTEN     0      100          127.0.0.1:25                *:*
LISTEN     0      128                 :::80               :::*
LISTEN     0      128                 :::22               :::*
LISTEN     0      100                ::1:25               :::*
[root@rs1 ~]#
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          0
You have new mail in /var/spool/mail/root
[root@dr1 ~]#
Note: once a back-end real server is healthy again, the sorry server leaves the cluster service and the real server resumes serving.
That completes the setup and testing of the LVS cluster with keepalived high availability!