nfs+DRBD+corosync+pacemaker: a highly available (HA) NFS cluster (CentOS 7)

Abstract:

1. Environment overview

2. Installing and configuring corosync and pacemaker with pcs (pcs is only a management tool)

3. DRBD installation and configuration -- see the earlier post "DRBD-MYSQL distributed block device for high availability": http://legehappy.blog.51cto.com/13251607/1975804

4. NFS installation and configuration

5. crmsh installation and resource management

6. Testing

1. Environment overview:

Building on the previous post, "Corosync+pacemaker+DRBD+mysql(mariadb) HA mysql cluster (centos7)" (http://legehappy.blog.51cto.com/13251607/1976251), it occurred to me that NFS can ride on the same architecture to remove its single point of failure: nfs+DRBD+corosync+pacemaker gives a highly available NFS cluster.

System versions:

[root@cml1 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@cml2 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

Host mapping:

node1:cml1:192.168.5.101

node2:cml2:192.168.5.102

client:cml3:192.168.5.104


Prerequisites for the cluster:

(1) Time synchronization

[root@cml1 ~]# ntpdate cn.pool.ntp.org
[root@cml2 ~]# ntpdate cn.pool.ntp.org

(2) Mutual hostname resolution

[root@cml1 ~]# ssh-keygen
[root@cml1 ~]# ssh-copy-id cml2
[root@cml1 ~]# hostname
cml1
[root@cml1 ~]# cat /etc/hosts
192.168.5.101 cml1 www.cml1.com
192.168.5.102 cml2 www.cml2.com
192.168.5.104 cml3 www.cml3.com
192.168.5.105 cml4 www.cml4.com


(3) Whether to use a quorum device: not needed on CentOS 7.
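Once both hosts files are in place, the name-resolution prerequisite can be checked mechanically. A minimal sketch, assuming an /etc/hosts-style file; `hosts_ok` is a hypothetical helper, not a standard tool:

```shell
#!/bin/sh
# hosts_ok FILE NAME... -> prints "ok" if every NAME appears in FILE
# (word match), otherwise "missing NAME" for the first name not found.
hosts_ok() {
    file=$1; shift
    for name in "$@"; do
        if ! grep -qw "$name" "$file"; then
            echo "missing $name"
            return 1
        fi
    done
    echo "ok"
}
```

On either node this would be run as `hosts_ok /etc/hosts cml1 cml2 cml3`.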

2. Installing and configuring corosync and pacemaker with pcs (pcs is only a management tool)

1. On both nodes, install the packages:

[root@cml1 ~]# yum install -y pacemaker pcs psmisc policycoreutils-python

2. On both nodes, start pcsd and enable it at boot:

[root@cml1 ~]# systemctl start pcsd.service
[root@cml1 ~]# systemctl enable pcsd.service

3. On both nodes, set the password of the hacluster user (the user name is fixed and cannot be changed):

[root@cml1 ~]# echo redhat | passwd --stdin hacluster

4. Authorize the cluster hosts with pcs (it authenticates with the hacluster user and the password set above):

[root@cml1 corosync]# pcs cluster auth cml1 cml2   ## choose which cluster nodes to authorize
cml1: Already authorized
cml2: Already authorized

5. Set up the two-node cluster:

[root@cml1 corosync]# pcs cluster setup --name mycluster cml1 cml2 --force   ## create the cluster

6. A corosync configuration file has now been generated on the node:

[root@cml1 corosync]# ls
corosync.conf  corosync.conf.example  corosync.conf.example.udpu  corosync.xml.example  uidgid.d

## The corosync.conf configuration file has been generated.

7. Inspect the generated file:

[root@cml1 corosync]# cat corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: webcluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: cml1
        nodeid: 1
    }

    node {
        ring0_addr: cml2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
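For a quick sanity check, the node list can be pulled back out of the generated file with awk. A sketch; `corosync_nodes` is a made-up helper name, and it assumes the `key: value` layout pcs generates:

```shell
#!/bin/sh
# corosync_nodes FILE -> one "nodeid ring0_addr" pair per node {} block.
corosync_nodes() {
    awk -F': *' '
        $1 ~ /ring0_addr/ { addr = $2 }       # remember the address...
        $1 ~ /nodeid/     { print $2, addr }  # ...and emit it with the nodeid
    ' "$1"
}
```

Usage: `corosync_nodes /etc/corosync/corosync.conf`; for the file above it prints `1 cml1` and `2 cml2`.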

8. Start the cluster:

[root@cml1 corosync]# pcs cluster start --all
cml1: Starting Cluster...
cml2: Starting Cluster...
## This effectively starts pacemaker and corosync:
[root@cml1 corosync]# ps -ef | grep corosync
root      57490      1  1 21:47 ?      00:00:52 corosync
root      75893  51813  0 23:12 pts/0  00:00:00 grep --color=auto corosync
[root@cml1 corosync]# ps -ef | grep pacemaker
root      57502      1  0 21:47 ?      00:00:00 /usr/sbin/pacemakerd -f
haclust+  57503  57502  0 21:47 ?      00:00:03 /usr/libexec/pacemaker/cib
root      57504  57502  0 21:47 ?      00:00:00 /usr/libexec/pacemaker/stonithd
root      57505  57502  0 21:47 ?      00:00:01 /usr/libexec/pacemaker/lrmd
haclust+  57506  57502  0 21:47 ?      00:00:01 /usr/libexec/pacemaker/attrd
haclust+  57507  57502  0 21:47 ?      00:00:00 /usr/libexec/pacemaker/pengine
haclust+  57508  57502  0 21:47 ?      00:00:01 /usr/libexec/pacemaker/crmd
root      75938  51813  0 23:12 pts/0  00:00:00 grep --color=auto pacemaker

9. Check the cluster status ("no faults" means the ring is healthy):

[root@cml1 corosync]# corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
        id      = 192.168.5.101
        status  = ring 0 active with no faults
[root@cml1 corosync]# ssh cml2 corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
        id      = 192.168.5.102
        status  = ring 0 active with no faults
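The same "no faults" check can be scripted so it works unattended. A sketch that only inspects the text `corosync-cfgtool -s` prints; `ring_healthy` is a hypothetical helper:

```shell
#!/bin/sh
# ring_healthy: read `corosync-cfgtool -s` output on stdin;
# print "ok" when every status line reports "no faults", else "FAULT".
ring_healthy() {
    if grep '^[[:space:]]*status' | grep -qv 'no faults'; then
        echo "FAULT"
    else
        echo "ok"
    fi
}
```

Usage: `corosync-cfgtool -s | ring_healthy`.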

10. Check the cluster configuration for errors:

[root@cml1 corosync]# crm_verify -L -V
   error: unpack_resources:  Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources:  Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources:  NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid

## We have not configured a STONITH device, so we disable STONITH below.

11. Disable STONITH:

[root@cml1 corosync]# pcs property set stonith-enabled=false
[root@cml1 corosync]# crm_verify -L -V
[root@cml1 corosync]# pcs property list
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: mycluster
 dc-version: 1.1.16-12.el7_4.2-94ff4df
 have-watchdog: false
 stonith-enabled: false

3. DRBD installation and configuration

See the earlier post "DRBD-MYSQL distributed block device for high availability": http://legehappy.blog.51cto.com/13251607/1975804

[root@cml1 drbd.d]# cat nfs.res
resource nfs {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on cml1 {
        disk /dev/sdb1;
        address 192.168.5.101:7789;
    }
    on cml2 {
        disk /dev/sdb1;
        address 192.168.5.102:7789;
    }
}
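The peer addresses DRBD will use can be read back out of the resource file, which is convenient for writing firewall rules (TCP 7789 here). A sketch, assuming a standard whitespace-separated `.res` layout; `drbd_peers` is a made-up name:

```shell
#!/bin/sh
# drbd_peers FILE -> "host address:port" for each "on <host> { ... }" section.
drbd_peers() {
    awk '
        $1 == "on"      { host = $2 }                      # "on cml1 {"
        $1 == "address" { addr = $2; sub(/;$/, "", addr)   # strip trailing ";"
                          print host, addr }
    ' "$1"
}
```

Usage: `drbd_peers /etc/drbd.d/nfs.res`.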

4. NFS installation and configuration

## Configure the NFS service on node1 and node2:

[root@cml1 ~]# yum install nfs-utils -y
[root@cml1 ~]# systemctl enable nfs-server
[root@cml1 ~]# systemctl start nfs-server
[root@cml1 ~]# systemctl start rpcbind
[root@cml1 ~]# systemctl enable rpcbind

## Create the export directory (on both nodes):

[root@cml1 ~]# cat /etc/exports
/nfs_data 192.168.5.0/24(rw,sync)
[root@cml1 ~]# mkdir /nfs_data
[root@cml2 ~]# cat /etc/exports
/nfs_data 192.168.5.0/24(rw,sync)
[root@cml2 ~]# mkdir /nfs_data
[root@cml1 ~]# systemctl restart nfs-server
[root@cml2 ~]# systemctl restart nfs-server
## Verify the exported directories:
[root@cml1 ~]# showmount -e 192.168.5.101
Export list for 192.168.5.101:
/nfs_data 192.168.5.0/24
[root@cml1 ~]# showmount -e 192.168.5.102
Export list for 192.168.5.102:
/nfs_data 192.168.5.0/24
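Each exports entry follows the exports(5) `directory client(options)` shape; generating the line instead of hand-editing avoids the classic mistake of a space before the parenthesis (which would apply the options to the whole world instead of the client). A sketch; `exports_line` is a hypothetical helper:

```shell
#!/bin/sh
# exports_line DIR CLIENT OPTIONS -> a single /etc/exports line.
exports_line() {
    printf '%s %s(%s)\n' "$1" "$2" "$3"
}
```

Usage: `exports_line /nfs_data 192.168.5.0/24 rw,sync >> /etc/exports`, then re-export with `exportfs -r` or restart nfs-server as above.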

5. crmsh installation and resource management

1. Install crmsh:

We can manage the cluster with crmsh (download it from GitHub, unpack, and install directly). Installing it on one node is enough, but installing on both nodes makes testing more convenient.

[root@cml1 ~]# cd /usr/local/src/
You have new mail in /var/spool/mail/root
[root@cml1 src]# ls
crmsh-2.3.2.tar  nginx-1.12.0  nginx-1.12.0.tar.gz  php-5.5.38.tar.gz  zabbix-3.2.7.tar.gz
[root@cml1 src]# tar -xf crmsh-2.3.2.tar
[root@cml1 crmsh-2.3.2]# python setup.py install


2. Managing with crmsh:

[root@cml1 ~]# crm help

Help overview for crmsh

Available topics:

        Overview                 Help overview for crmsh
        Topics                   Available topics
        Description              Program description
        CommandLine              Command line options
        Introduction             Introduction
        Interface                User interface
        Completion               Tab completion
        Shorthand                Shorthand syntax
        Features                 Features
        Shadows                  Shadow CIB usage
        Checks                   Configuration semantic checks
        Templates                Configuration templates
        Testing                  Resource testing
        Security                 Access Control Lists (ACL)
        Resourcesets             Syntax: Resource sets
        AttributeListReferences  Syntax: Attribute list references
        AttributeReferences      Syntax: Attribute references
        RuleExpressions          Syntax: Rule expressions
        Lifetime                 Lifetime parameter format
        Reference                Command reference

3. Configuring the DRBD+nfs+corosync+pacemaker HA cluster with crm:

## First stop the nfs and drbd services:

[root@cml1 ~]# systemctl stop nfs-server
[root@cml1 ~]# systemctl stop drbd
[root@cml2 ~]# systemctl stop nfs-server
[root@cml2 ~]# systemctl stop drbd

[root@cml1 ~]# crm
crm(live)# status
Stack: corosync
Current DC: cml1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Thu Oct 26 08:52:49 2017
Last change: Thu Oct 26 08:51:45 2017 by root via cibadmin on cml1

2 nodes configured
0 resources configured

Online: [ cml1 cml2 ]

No resources
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# property migration-limit=1   ### if a node fails to start a service once, the other node takes it over
crm(live)configure# primitive nfsdrbd ocf:linbit:drbd params drbd_resource=nfs op start timeout=240 op stop timeout=100 op monitor role=Master interval=20
crm(live)configure# ms ms_nfsdrbd nfsdrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify

4. Add the filesystem (mount) resource:

crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd1 directory=/nfs_data fstype=ext4 op start timeout=60 op stop timeout=60
crm(live)configure# colocation mystore_with_ms_nfsdrbd inf: mystore ms_nfsdrbd:Master
crm(live)configure# order ms_nfsdrbd_befor_mystore Mandatory: ms_nfsdrbd mystore
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Stack: corosync
Current DC: cml1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Thu Oct 26 21:08:41 2017
Last change: Thu Oct 26 21:08:38 2017 by root via cibadmin on cml1

2 nodes configured
3 resources configured

Online: [ cml1 cml2 ]

Full list of resources:

 Master/Slave Set: ms_nfsdrbd [nfsdrbd]
     Masters: [ cml2 ]
     Slaves: [ cml1 ]
 mystore        (ocf::heartbeat:Filesystem):    Started cml2
[root@cml2 ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.7G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M  278M  234M  55% /dev/shm
tmpfs                   tmpfs     512M   27M  486M   6% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  161M  361M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4       11G   69M  9.9G   1% /nfs_data

5. Add the nfs_server resource:

crm(live)configure# primitive nfs_server systemd:nfs-server op start timeout=100 interval=0 op stop timeout=100 interval=0
crm(live)configure# verify
crm(live)configure# colocation nfs_server_with_mystore inf: nfs_server mystore
crm(live)configure# order mystore_befor_nfs Mandatory: mystore nfs_server
crm(live)configure# show
node 1: cml1 \
        attributes standby=off
node 2: cml2 \
        attributes standby=off
primitive mystore Filesystem \
        params device="/dev/drbd1" directory="/nfs_data" fstype=ext4 \
        op start timeout=60 interval=0 \
        op stop timeout=60 interval=0
primitive nfs_server systemd:nfs-server \
        op start timeout=100 interval=0 \
        op stop timeout=100 interval=0
primitive nfsdrbd ocf:linbit:drbd \
        params drbd_resource=nfs \
        op start timeout=240 interval=0 \
        op stop timeout=100 interval=0 \
        op monitor role=Master interval=20 timeout=30 \
        op monitor role=Slave interval=30 timeout=30
ms ms_nfsdrbd nfsdrbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
order ms_nfsdrbd_befor_mystore Mandatory: ms_nfsdrbd mystore
order mystore_befor_nfs Mandatory: mystore nfs_server
colocation mystore_with_ms_nfsdrbd inf: mystore ms_nfsdrbd:Master
colocation nfs_server_with_mystore inf: nfs_server mystore
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.16-12.el7_4.4-94ff4df \
        cluster-infrastructure=corosync \
        cluster-name=webcluster \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        migration-limit=1
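The two `order` lines in the `show` output encode the start sequence: promote the DRBD master, then mount the filesystem, then start the NFS server. A small sketch that reads that chain back out of a `crm configure show` dump; `start_chain` is a made-up helper:

```shell
#!/bin/sh
# start_chain: read `crm configure show` text on stdin and print each
# Mandatory order constraint as "first -> then".
start_chain() {
    awk '$1 == "order" { print $4, "->", $5 }'
}
```

Usage: `crm configure show | start_chain`; for the configuration above this yields `ms_nfsdrbd -> mystore` and `mystore -> nfs_server`.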

6. Add the virtual IP (VIP):

crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.5.200 op monitor interval=20 timeout=20 on-fail=restart
crm(live)configure# verify
crm(live)configure# colocation vip_with_nfs inf: vip nfs_server
crm(live)configure# verify
crm(live)configure# show
node 1: cml1 \
        attributes standby=off
node 2: cml2 \
        attributes standby=off
primitive mystore Filesystem \
        params device="/dev/drbd1" directory="/nfs_data" fstype=ext4 \
        op start timeout=60 interval=0 \
        op stop timeout=60 interval=0
primitive nfs_server systemd:nfs-server \
        op start timeout=100 interval=0 \
        op stop timeout=100 interval=0
primitive nfsdrbd ocf:linbit:drbd \
        params drbd_resource=nfs \
        op start timeout=240 interval=0 \
        op stop timeout=100 interval=0 \
        op monitor role=Master interval=20 timeout=30 \
        op monitor role=Slave interval=30 timeout=30
primitive vip IPaddr \
        params ip=192.168.5.200 \
        op monitor interval=20 timeout=20 on-fail=restart
ms ms_nfsdrbd nfsdrbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
order ms_nfsdrbd_befor_mystore Mandatory: ms_nfsdrbd mystore
order mystore_befor_nfs Mandatory: mystore nfs_server
colocation mystore_with_ms_nfsdrbd inf: mystore ms_nfsdrbd:Master
colocation nfs_server_with_mystore inf: nfs_server mystore
colocation vip_with_nfs inf: vip nfs_server
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.16-12.el7_4.4-94ff4df \
        cluster-infrastructure=corosync \
        cluster-name=webcluster \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        migration-limit=1
crm(live)configure# commit

7. Check the node status:

crm(live)# status
Stack: corosync
Current DC: cml1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Thu Oct 26 21:14:37 2017
Last change: Thu Oct 26 21:14:22 2017 by root via cibadmin on cml1

2 nodes configured
5 resources configured

Online: [ cml1 cml2 ]

Full list of resources:

 Master/Slave Set: ms_nfsdrbd [nfsdrbd]
     Masters: [ cml2 ]
     Slaves: [ cml1 ]
 mystore        (ocf::heartbeat:Filesystem):    Started cml2
 nfs_server     (systemd:nfs-server):   Started cml2
 vip    (ocf::heartbeat:IPaddr):        Started cml2
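When testing failover it helps to know programmatically which node currently runs a resource. A sketch that parses `crm status` text like the above; `resource_owner` is a hypothetical helper:

```shell
#!/bin/sh
# resource_owner NAME: read `crm status` text on stdin and print the
# node on which resource NAME is started.
resource_owner() {
    awk -v res="$1" '$1 == res && /Started/ { print $NF }'
}
```

Usage: `crm status | resource_owner vip`; after putting the active node in standby (`crm node standby cml2`), the same command should report the other node.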

6. Testing:

[root@cml2 ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.7G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M  278M  234M  55% /dev/shm
tmpfs                   tmpfs     512M   27M  486M   6% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  161M  361M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4       11G   69M  9.9G   1% /nfs_data

[root@cml2 ~]# ip addr
2: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:5a:c5:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.102/24 brd 192.168.5.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet 192.168.5.200/24 brd 192.168.5.255 scope global secondary ens34
       valid_lft forever preferred_lft forever

### The VIP is now on host cml2.

[root@cml1 ~]# showmount -e 192.168.5.200
Export list for 192.168.5.200:
/nfs_data 192.168.5.0/24
[root@cml2 ~]# showmount -e 192.168.5.200
Export list for 192.168.5.200:
/nfs_data 192.168.5.0/24

[root@cml3 ~]# mkdir /nfs
[root@cml3 ~]# mount -t nfs 192.168.5.200:/nfs_data/ /nfs
[root@cml3 ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.6G   13G  35% /
devtmpfs                devtmpfs  503M     0  503M   0% /dev
tmpfs                   tmpfs     513M     0  513M   0% /dev/shm
tmpfs                   tmpfs     513M   14M  500M   3% /run
tmpfs                   tmpfs     513M     0  513M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  131M  391M  25% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
192.168.5.200:/nfs_data nfs4       11G   69M  9.9G   1% /nfs

### The mounted share is also 11G, confirming the client is using the DRBD-backed space exported from /nfs_data.
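The client-side mount check can likewise be automated from the `df -T` output. A sketch; `nfs_mounted` is a made-up helper:

```shell
#!/bin/sh
# nfs_mounted MOUNTPOINT: read `df -T` output on stdin; succeed (exit 0)
# only if MOUNTPOINT is mounted with an nfs* filesystem type.
nfs_mounted() {
    awk -v mp="$1" '$NF == mp && $2 ~ /^nfs/ { found = 1 } END { exit !found }'
}
```

Usage on the client: `df -T | nfs_mounted /nfs && echo "share mounted"`.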


Reposted from: https://blog.51cto.com/legehappy/1976565