
Configuring DRBD on CentOS 7 to Build an HA NFS Cluster

Environment

CentOS 7

DRBDADM_API_VERSION=2
DRBD_KERNEL_VERSION=9.0.14
DRBDADM_VERSION_CODE=0x090301
DRBDADM_VERSION=9.3.1

Corosync Cluster Engine, version '2.4.3'

Pacemaker 1.1.18-11.el7_5.3

crm 3.0.0
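
For reference, the values above correspond to the usual version commands on the nodes (a sketch; the exact output format varies by release):

drbdadm --version
corosync -v
pacemakerd --version
crm --version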

Network Topology

(Topology: two cluster nodes, drbd-node1 and drbd-node3, replicate a DRBD volume and serve NFS on the 10.10.200.0/24 network through the floating IP 10.10.200.235; an NFS client mounts the export via that IP.)


Installation and Configuration Steps

Install DRBD/Corosync/Pacemaker/crmsh
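
The original post does not show the installation commands themselves, so the following is only a minimal sketch, assuming the ELRepo repository for the DRBD 9 packages and the network:ha-clustering repository for crmsh; package names and repositories may differ in your environment. The two cluster properties at the end are an additional assumption suitable for a two-node lab cluster without fencing and are not part of the original.

# Run on both nodes (hypothetical package sources)
yum -y install corosync pacemaker
yum -y install https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum -y install kmod-drbd90 drbd90-utils
yum -y install crmsh            # typically from the network:ha-clustering:Stable repo
systemctl enable corosync pacemaker
systemctl start corosync pacemaker

# Assumption: lab setup without STONITH; configure proper fencing in production.
crm configure property stonith-enabled=false
crm configure property no-quorum-policy=ignore

Once Corosync and Pacemaker are running on both nodes, crm_mon shows two online nodes and no resources configured yet: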

[[email protected] ~]# crm_mon -rf -n1
Stack: corosync
Current DC: drbd-node3 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Jul  7 18:06:54 2018
Last change: Sat Jul  7 17:54:52 2018 by root via cibadmin on drbd-node1

2 nodes configured
0 resources configured

Node drbd-node1: online
Node drbd-node3: online

No inactive resources

In addition, the filesystem has to be prepared: format the DRBD device on the primary node and create the corresponding mount directory.
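
Before formatting, the DRBD resource should be Primary and UpToDate on this node. A minimal check, assuming the resource is named scsivol as in the crm configuration later in this article:

drbdadm up scsivol        # bring the resource up if it is not already
drbdadm primary scsivol   # promote this node (--force may be needed for the very first promotion)
drbdadm status scsivol    # expect role:Primary and disk:UpToDate before running mkfs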

[[email protected] ~]# mkfs.xfs /dev/drbd0 -f
meta-data=/dev/drbd0             isize=512    agcount=4, agsize=32766998 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=131067991, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=63998, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[[email protected] ~]# mkdir /mnt/nfs

On the other node no formatting is required, but the same mount directory must be created.

[[email protected] ~]# mkdir /mnt/nfs

Configure the DRBD Resource

Note that the drbd_resource name (scsivol here) must match the resource name already defined in the DRBD configuration.
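
For reference, a DRBD resource definition matching that name might look like the sketch below; the backing disk, node addresses and port are placeholders, not values taken from the original post.

# /etc/drbd.d/scsivol.res -- illustrative only; adjust disk, addresses and port
resource scsivol {
    on drbd-node1 {
        device    /dev/drbd0;
        disk      /dev/sdX;              # placeholder backing device
        address   10.10.200.231:7789;    # hypothetical node address
        meta-disk internal;
    }
    on drbd-node3 {
        device    /dev/drbd0;
        disk      /dev/sdX;              # placeholder backing device
        address   10.10.200.233:7789;    # hypothetical node address
        meta-disk internal;
    }
}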

[[email protected] ~]# crm configure
crm(live)configure# primitive p_drbd_r0 ocf:linbit:drbd \
   > params drbd_resource=scsivol \
   > op start interval=0s timeout=240s \
   > op stop interval=0s timeout=100s \
   > op monitor interval=31s timeout=20s role=Slave \
   > op monitor interval=29s timeout=20s role=Master
crm(live)configure# ms ms_drbd_r0 p_drbd_r0 meta master-max=1 \
   > master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# exit
bye

After committing this configuration, crm_mon shows that Pacemaker now manages two resources, whereas the same command in the previous section reported zero.

[[email protected] ~]# crm_mon -rf -n1
Stack: corosync
Current DC: drbd-node3 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Jul  7 18:22:58 2018
Last change: Sat Jul  7 18:19:53 2018 by root via cibadmin on drbd-node1

2 nodes configured
2 resources configured

Node drbd-node1: online
        p_drbd_r0       (ocf::linbit:drbd):     Master
Node drbd-node3: online
        p_drbd_r0       (ocf::linbit:drbd):     Slave

No inactive resources

Configure the Filesystem

Note that the device/directory/fstype parameters here must match what was set up above: the DRBD device /dev/drbd0, the /mnt/nfs mount point, and the xfs filesystem.

[[email protected] ~]# crm configure
crm(live)configure# primitive p_fs_drbd0 ocf:heartbeat:Filesystem \
   > params device=/dev/drbd0 directory=/mnt/nfs fstype=xfs \
   > options=noatime,nodiratime \
   > op start interval="0" timeout="60s" \
   > op stop interval="0" timeout="60s" \
   > op monitor interval="20" timeout="40s"
crm(live)configure# order o_drbd_r0-before-fs_drbd0 \
   > inf: ms_drbd_r0:promote p_fs_drbd0:start
crm(live)configure# colocation c_fs_drbd0-with_drbd-r0 \
   > inf: p_fs_drbd0 ms_drbd_r0:Master
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# exit
bye

Checking the managed resources with crm_mon again shows the newly added filesystem resource; Pacemaker now manages three resources.

[[email protected] ~]# crm_mon -rf -n1
Stack: corosync
Current DC: drbd-node3 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Jul  7 18:29:24 2018
Last change: Sat Jul  7 18:28:06 2018 by root via cibadmin on drbd-node1

2 nodes configured
3 resources configured

Node drbd-node1: online
        p_drbd_r0       (ocf::linbit:drbd):     Master
        p_fs_drbd0      (ocf::heartbeat:Filesystem):    Started
Node drbd-node3: online
        p_drbd_r0       (ocf::linbit:drbd):     Slave

No inactive resources

Configure the NFS Service

Install the NFS packages on both nodes and enable rpcbind:

[[email protected] ~]# yum -y install nfs-utils rpcbind
[[email protected] ~]# systemctl enable rpcbind
Created symlink from /etc/systemd/system/multi-user.target.wants/rpcbind.service to /usr/lib/systemd/system/rpcbind.service.
[[email protected] ~]# systemctl start rpcbind
[[email protected] ~]# yum -y install nfs-utils rpcbind
[[email protected] ~]# systemctl enable rpcbind
Created symlink from /etc/systemd/system/multi-user.target.wants/rpcbind.service to /usr/lib/systemd/system/rpcbind.service.
[[email protected] ~]# systemctl start rpcbind
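
One point worth noting (not mentioned in the original post): since the ocf:heartbeat:nfsserver resource agent will start and stop the NFS server itself, the nfs-server unit should not also be enabled in systemd on either node.

# Run on both nodes: let Pacemaker, not systemd, control the NFS server
systemctl disable nfs-server
systemctl stop nfs-server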

Configure the nfs server resource. The nfs_shared_infodir parameter places the NFS state information (normally kept under /var/lib/nfs) on the replicated volume so that it follows the service on failover; nfs_ip is the floating IP that will be configured later.

[[email protected] ~]# crm configure
crm(live)configure# primitive p_nfsserver ocf:heartbeat:nfsserver \
   > params nfs_shared_infodir=/mnt/nfs/nfs_shared_infodir nfs_ip=10.10.200.235 \
   > op start interval=0s timeout=40s \
   > op stop interval=0s timeout=20s \
   > op monitor interval=10s timeout=20s
crm(live)configure# order o_fs_drbd0-before-nfsserver \
   > inf: p_fs_drbd0 p_nfsserver
crm(live)configure# colocation c_nfsserver-with-fs_drbd0 \
   > inf: p_nfsserver p_fs_drbd0
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# exit
bye

crm_mon now shows the newly added nfs server resource; Pacemaker manages four resources.

[[email protected] ~]# crm_mon -rf -n1
Stack: corosync
Current DC: drbd-node3 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Jul  7 18:37:51 2018
Last change: Sat Jul  7 18:37:10 2018 by root via cibadmin on drbd-node1

2 nodes configured
4 resources configured

Node drbd-node1: online
        p_drbd_r0       (ocf::linbit:drbd):     Master
        p_fs_drbd0      (ocf::heartbeat:Filesystem):    Started
        p_nfsserver     (ocf::heartbeat:nfsserver):     Started
Node drbd-node3: online
        p_drbd_r0       (ocf::linbit:drbd):     Slave

No inactive resources

Configure NFS exportfs

Create the export directory:

[[email protected] ~]# mkdir -p /mnt/nfs/exports/dir1
[[email protected] ~]# chown nfsnobody:nfsnobody /mnt/nfs/exports/dir1/

Configure the exportfs resource:

[[email protected] ~]# crm configure
crm(live)configure# primitive p_exportfs_dir1 ocf:heartbeat:exportfs \
   > params clientspec=10.10.200.0/24 directory=/mnt/nfs/exports/dir1 fsid=1 \
   > unlock_on_stop=1 options=rw,sync \
   > op start interval=0s timeout=40s \
   > op stop interval=0s timeout=120s \
   > op monitor interval=10s timeout=20s
crm(live)configure# order o_nfsserver-before-exportfs-dir1 \
   > inf: p_nfsserver p_exportfs_dir1
crm(live)configure# colocation c_exportfs-with-nfsserver \
   > inf: p_exportfs_dir1 p_nfsserver
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# exit

Check the export list:

[[email protected] ~]# showmount -e drbd-node1
Export list for drbd-node1:
/mnt/nfs/exports/dir1 10.10.200.0/24

crm_mon now shows the newly added exportfs resource; Pacemaker manages five resources.

[[email protected] ~]# crm_mon -rf -n1
Stack: corosync
Current DC: drbd-node3 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Jul  7 22:41:23 2018
Last change: Sat Jul  7 22:40:10 2018 by root via cibadmin on drbd-node1

2 nodes configured
5 resources configured

Node drbd-node1: online
        p_drbd_r0       (ocf::linbit:drbd):     Master
        p_fs_drbd0      (ocf::heartbeat:Filesystem):    Started
        p_nfsserver     (ocf::heartbeat:nfsserver):     Started
        p_exportfs_dir1 (ocf::heartbeat:exportfs):      Started
Node drbd-node3: online
        p_drbd_r0       (ocf::linbit:drbd):     Slave

No inactive resources

Configure the Virtual IP

Note that the nic parameter must be set to the name of the network interface on the nodes (ens3 in this setup).
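
If you are unsure of the interface name, it can be confirmed on each node before creating the resource; the interface holding the 10.10.200.0/24 address is the one to use:

# Run on each node
ip -o -4 addr show | grep 10.10.200.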

[[email protected] ~]# crm configure
crm(live)configure# primitive p_virtip_dir1 ocf:heartbeat:IPaddr2 \
   > params ip=10.10.200.235 cidr_netmask=24 nic=ens3 \
   > op monitor interval=20s timeout=20s \
   > op start interval=0s timeout=20s \
   > op stop interval=0s timeout=20s
crm(live)configure# order o_exportfs_dir1-before-p_virtip_dir1 \
   > inf: p_exportfs_dir1 p_virtip_dir1
crm(live)configure# colocation c_virtip_dir1-with-exportfs-dir1 \
   > inf: p_virtip_dir1 p_exportfs_dir1
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# exit
bye

At this point the export can be queried from the NFS client through the virtual IP address:

[[email protected] ~]# showmount -e 10.10.200.235
Export list for 10.10.200.235:
/mnt/nfs/exports/dir1 10.10.200.0/24

Check the resources managed by Pacemaker:

[[email protected] ~]# crm_mon -rf -n1
Stack: corosync
Current DC: drbd-node3 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Jul  7 22:51:21 2018
Last change: Sat Jul  7 22:48:45 2018 by root via cibadmin on drbd-node1

2 nodes configured
6 resources configured

Node drbd-node1: online
        p_drbd_r0       (ocf::linbit:drbd):     Master
        p_fs_drbd0      (ocf::heartbeat:Filesystem):    Started
        p_nfsserver     (ocf::heartbeat:nfsserver):     Started
        p_exportfs_dir1 (ocf::heartbeat:exportfs):      Started
        p_virtip_dir1   (ocf::heartbeat:IPaddr2):       Started
Node drbd-node3: online
        p_drbd_r0       (ocf::linbit:drbd):     Slave

No inactive resources

This completes the configuration of the NFS cluster; now test it.

Test the NFS Connection

On the NFS client, mount the exported directory configured above:

[[email protected] ~]# mount 10.10.200.235:/mnt/nfs/exports/dir1 /mnt/nfs/
[[email protected] ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/centos-root               50G  2.5G   48G   5% /
devtmpfs                             7.8G     0  7.8G   0% /dev
tmpfs                                7.8G     0  7.8G   0% /dev/shm
tmpfs                                7.8G  8.8M  7.8G   1% /run
tmpfs                                7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sdb                             931G   37G  894G   4% /drbd1
/dev/sde                             931G  501G  431G  54% /drbd4
/dev/sdc                             931G  501G  431G  54% /drbd2
/dev/sdd                             931G  8.1G  923G   1% /drbd3
/dev/sdf                             931G  501G  431G  54% /drbd5
/dev/sda1                           1014M  191M  824M  19% /boot
/dev/mapper/centos-home              872G   35G  838G   4% /home
tmpfs                                1.6G     0  1.6G   0% /run/user/0
10.10.200.235:/mnt/nfs/exports/dir1  500G   32M  500G   1% /mnt/nfs
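
The original post does not include a write test, but writing a small file before the failover test below makes it easy to confirm afterwards that data on the replicated volume survived the switchover (the file name here is only an example):

# On the NFS client
echo "written before failover" > /mnt/nfs/failover-test.txt
cat /mnt/nfs/failover-test.txt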

Failover Test

Simulate a failure of the primary node drbd-node1. Before drbd-node1 goes down, the cluster status is as follows:

[[email protected] nfs]# crm status
Stack: corosync
Current DC: drbd-node3 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Jul  7 22:56:58 2018
Last change: Sat Jul  7 22:48:45 2018 by root via cibadmin on drbd-node1

2 nodes configured
6 resources configured

Online: [ drbd-node1 drbd-node3 ]

Full list of resources:

 Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
     Masters: [ drbd-node1 ]
     Slaves: [ drbd-node3 ]
 p_fs_drbd0     (ocf::heartbeat:Filesystem):    Started drbd-node1
 p_nfsserver    (ocf::heartbeat:nfsserver):     Started drbd-node1
 p_exportfs_dir1        (ocf::heartbeat:exportfs):      Started drbd-node1
 p_virtip_dir1  (ocf::heartbeat:IPaddr2):       Started drbd-node1
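
The original post does not say how the outage was triggered; two common ways to simulate it are to power the node off or to put it into standby:

# On drbd-node1: hard "failure"
systemctl poweroff

# Or, a gentler test that is easy to undo afterwards:
crm node standby drbd-node1      # later: crm node online drbd-node1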

After drbd-node1 goes down, all resources fail over to the standby node drbd-node3:

[[email protected] ~]# crm status
Stack: corosync
Current DC: drbd-node3 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Sat Jul  7 22:57:58 2018
Last change: Sat Jul  7 22:48:45 2018 by root via cibadmin on drbd-node1

2 nodes configured
6 resources configured

Online: [ drbd-node3 ]
OFFLINE: [ drbd-node1 ]

Full list of resources:

 Master/Slave Set: ms_drbd_r0 [p_drbd_r0]
     Masters: [ drbd-node3 ]
     Stopped: [ drbd-node1 ]
 p_fs_drbd0     (ocf::heartbeat:Filesystem):    Started drbd-node3
 p_nfsserver    (ocf::heartbeat:nfsserver):     Started drbd-node3
 p_exportfs_dir1        (ocf::heartbeat:exportfs):      Started drbd-node3
 p_virtip_dir1  (ocf::heartbeat:IPaddr2):       Started drbd-node3

On the NFS client, check the export list and the NFS mount again:

[[email protected] ~]# showmount -e 10.10.200.235
Export list for 10.10.200.235:
/mnt/nfs/exports/dir1 10.10.200.0/24
[[email protected] ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/centos-root               50G  2.5G   48G   5% /
devtmpfs                             7.8G     0  7.8G   0% /dev
tmpfs                                7.8G     0  7.8G   0% /dev/shm
tmpfs                                7.8G  8.8M  7.8G   1% /run
tmpfs                                7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sdb                             931G   37G  894G   4% /drbd1
/dev/sde                             931G  501G  431G  54% /drbd4
/dev/sdc                             931G  501G  431G  54% /drbd2
/dev/sdd                             931G  8.1G  923G   1% /drbd3
/dev/sdf                             931G  501G  431G  54% /drbd5
/dev/sda1                           1014M  191M  824M  19% /boot
/dev/mapper/centos-home              872G   35G  838G   4% /home
tmpfs                                1.6G     0  1.6G   0% /run/user/0
10.10.200.235:/mnt/nfs/exports/dir1  500G   32M  500G   1% /mnt/nfs
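
If you wrote a test file before the failover (as sketched earlier), confirming it is still readable through the same mount closes the loop:

# On the NFS client
cat /mnt/nfs/failover-test.txt    # should still print "written before failover"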