Deploying DRBD Data Synchronization on CentOS 7

DRBD installation (for an HA high-availability cluster, on CentOS 7)
Environment:
172.25.0.29 node1
172.25.0.30 node2
1. First, add an extra disk to both node1 and node2; here I add a 2 GB disk for the demo:
[root@node1 ~]# fdisk -l | grep /dev/sdb
Disk /dev/sdb: 2147 MB, 2147483648 bytes, 4194304 sectors
[root@node2 ~]# fdisk -l | grep /dev/sdb
Disk /dev/sdb: 2147 MB, 2147483648 bytes, 4194304 sectors
2. Edit the hosts file on both nodes so they can reach each other by name:
On node1:
[root@node1 ~]# cat /etc/hosts
172.25.0.29 node1
172.25.0.30 node2
On node2:
[root@node2 ~]# cat /etc/hosts
172.25.0.29 node1
172.25.0.30 node2
3. Set up SSH key trust from node1:
[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id node2
4. Set up clock synchronization on node1 and node2:
On node1:
[root@node1 ~]# crontab -e
*/5 * * * * ntpdate cn.pool.ntp.org  ### add the cron job
On node2:
[root@node2 ~]# crontab -e
*/5 * * * * ntpdate cn.pool.ntp.org  ### add the cron job
On node1 and node2 you can confirm the cron job has been added:
[root@node1 ~]# crontab -l
*/5 * * * * ntpdate cn.pool.ntp.org
[root@node2 ~]# crontab -l
*/5 * * * * ntpdate cn.pool.ntp.org
5. Now install the DRBD packages; run this on both node1 and node2:
On node1:
[root@node1 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@node1 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-3.el7.elrepo  ################################# [100%]
[root@node1 ~]# yum install -y kmod-drbd84 drbd84-utils kernel*  ## reboot after installing
On node2:
[root@node2 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@node2 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-3.el7.elrepo  ################################# [100%]
[root@node2 ~]# yum install -y kmod-drbd84 drbd84-utils kernel*
6. Configuration files:
/etc/drbd.conf                    # main configuration file
/etc/drbd.d/global_common.conf    # global configuration file
7. Inspect the main configuration file:
[root@node1 ~]# cat /etc/drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
8. What the global configuration means:
[root@node1 ~]# vim /etc/drbd.d/global_common.conf
global {
    usage-count no;  # whether to report to DRBD's usage statistics (default yes); the project counts installations, set it to no
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol C;  # DRBD replication protocol; add this line
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        ### uncomment these three lines
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach;  # on an I/O error, detach the backing disk; add this line
        # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
    syncer {
        rate 1024M;  # network rate used while the nodes synchronize; add this option
    }
}
Note: the on-io-error policy can be one of the following:
detach: the default and recommended option; on a lower-level disk I/O error, DRBD detaches the backing device and continues running in Diskless mode
pass_on: DRBD reports the I/O error up the stack; on the primary it is reported to the mounted filesystem, while on the secondary it is ignored (there is no upper layer to report to there)
call-local-io-error: invokes the command defined by the local-io-error handler; this requires a corresponding local-io-error resource handler to be configured, and leaves the administrator free to handle the I/O error with any command or script
Defining a resource
9. Create the resource configuration file:
[root@node1 ~]# cat /etc/drbd.d/mysql.res  ## you create this file yourself
resource mysql {  # resource name
    protocol C;  # replication protocol
    meta-disk internal;
    device /dev/drbd1;  # DRBD device name
    syncer {
        verify-alg sha1;  # checksum algorithm
    }
    net {
        allow-two-primaries;
    }
    on node1 {  # this must match the hostname, or the next step will fail
        disk /dev/sdb;  # disk partition backing drbd1 for the "mysql" resource
        address 172.25.0.29:7789;  # DRBD listen address and port
    }
    on node2 {
        disk /dev/sdb;
        address 172.25.0.30:7789;
    }
}
10. Copy the configuration files to the peer node:
[root@node1 ~]# scp -rp /etc/drbd.d/* node2:/etc/drbd.d/
global_common.conf                           100% 2621     2.6KB/s   00:00
mysql.res                                    100%  238     0.2KB/s   00:00
All the files under drbd.d have now been copied to node2.
## Note: remember to stop the firewall first.
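Instead of disabling the firewall entirely, you can open only the port DRBD replicates over (7789/tcp, matching the address lines in mysql.res). A minimal sketch assuming firewalld, the CentOS 7 default; with RUN=echo it only prints the commands (dry run):

```shell
# Open only the DRBD replication port instead of stopping the firewall.
# Assumes firewalld; set RUN= (empty) to actually apply the rules.
RUN=echo   # dry run: print the commands instead of executing them
$RUN firewall-cmd --permanent --add-port=7789/tcp
$RUN firewall-cmd --reload
```

Run this on both nodes, since each side both connects to and accepts connections from its peer on that port.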
11. Create the metadata and bring up the mysql resource on node1:
[root@node1 ~]# drbdadm create-md mysql
You want me to create a v08 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/sdb at byte offset 2147479552

Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes

md_offset 2147479552
al_offset 2147446784
bm_offset 2147381248

Found xfs filesystem
     2097052 kB data area apparently used
     2097052 kB left usable by current configuration

Even though it looks like this would place the new meta data into
unused space, you still need to confirm, as this is only a guess.

Do you want to proceed?
[need to type 'yes' to confirm] yes
initializing activity log
initializing bitmap (64 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@node1 ~]# modprobe drbd
[root@node1 ~]# lsmod | grep drbd
drbd                  396875  0
libcrc32c              12644  4 xfs,drbd,nf_nat,nf_conntrack
[root@node1 ~]# drbdadm up mysql
[root@node1 ~]# drbdadm -- --force primary mysql
Check node1's status:
[root@node1 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----s
    ns:0 nr:0 dw:0 dr:912 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2097052
12. Run the same on the peer node:
[root@node2 ~]# drbdadm create-md mysql
You want me to create a v08 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/sdb at byte offset 2147479552

Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes

md_offset 2147479552
al_offset 2147446784
bm_offset 2147381248

Found xfs filesystem
     2097052 kB data area apparently used
     2097052 kB left usable by current configuration

Even though it looks like this would place the new meta data into
unused space, you still need to confirm, as this is only a guess.

Do you want to proceed?
[need to type 'yes' to confirm] yes
initializing activity log
initializing bitmap (64 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@node2 ~]# modprobe drbd
[root@node2 ~]# drbdadm up mysql
On the secondary you can watch the synchronization status:
[root@node2 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 1: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:237568 dw:237568 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1859484
    [=>..................] sync'ed: 11.6% (1859484/2097052)K
    finish: 0:00:39 speed: 47,512 (47,512) want: 102,400 K/sec
You can see the data is syncing.
13. Format and mount (on the primary, node1):
[root@node1 ~]# mkfs.xfs /dev/drbd1
meta-data=/dev/drbd1             isize=512    agcount=4, agsize=131066 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524263, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mount /dev/drbd1 /mnt
[root@node1 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   18G  2.3G   16G  13% /
devtmpfs             226M     0  226M   0% /dev
tmpfs                237M     0  237M   0% /dev/shm
tmpfs                237M  4.6M  232M   2% /run
tmpfs                237M     0  237M   0% /sys/fs/cgroup
/dev/sda1           1014M  197M  818M  20% /boot
tmpfs                 48M     0   48M   0% /run/user/0
/dev/drbd1           2.0G   33M  2.0G   2% /mnt
Note #### for the secondary to mount the device, the primary must first be demoted to secondary; only then can you mount on the other node.
14. Check the resource's connection state; Connected means it is healthy:
[root@node1 ~]# drbdadm cstate mysql
Connected
15. Check the resource roles:
[root@node1 ~]# drbdadm role mysql
Primary/Secondary
[root@node1 ~]# ssh node2 "drbdadm role mysql"
Secondary/Primary
[root@node1 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:2099100 nr:0 dw:2048 dr:2098449 al:9 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Notes:
Primary: the resource is currently primary and may be read from or written to; unless dual-primary is enabled, only one of the two nodes has this role
Secondary: the resource is currently secondary and is receiving updates from its peer
Unknown: the resource's role is currently unknown; the local resource never shows this state (it only appears for a disconnected peer)
16. Check the disk state:
[root@node1 ~]# drbdadm dstate mysql
UpToDate/UpToDate
The local and peer disks can each be in one of the following states:
Note:
Diskless: no local block device is assigned to DRBD; the device was never attached, was manually detached with drbdadm, or was automatically detached after a lower-level I/O error
Attaching: transient state while metadata is being read
Failed: transient state after the local block device reports an I/O error; the next state is Diskless
Negotiating: transient state on an already-connected DRBD device while an attach is in progress
Inconsistent: the data is inconsistent; this state appears on both nodes immediately after a new resource is created (before the initial full sync), and on one node (the sync target) during synchronization
Outdated: the data is consistent but outdated
DUnknown: shown for the peer disk when the network connection is unavailable
Consistent: the data of a disconnected node is consistent; when the connection is established, it is decided whether the data is UpToDate or Outdated
UpToDate: consistent, up-to-date data; this is the normal state
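The connection, role, and disk states above are easy to check from a monitoring script as well. A minimal sketch that parses the cs:, ro:, and ds: fields; the status line here is a hard-coded sample copied from the step 15 output (in real use you would read /proc/drbd itself):

```shell
# Decide whether this node is a healthy primary by parsing the cs:, ro:,
# and ds: fields of a /proc/drbd status line. The sample line below is
# hard-coded for illustration; in practice read it from /proc/drbd.
status=' 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'
cs=$(echo "$status" | grep -o 'cs:[A-Za-z]*' | cut -d: -f2)
ro=$(echo "$status" | grep -o 'ro:[A-Za-z]*' | cut -d: -f2)
ds=$(echo "$status" | grep -o 'ds:[A-Za-z]*' | cut -d: -f2)
if [ "$cs" = "Connected" ] && [ "$ro" = "Primary" ] && [ "$ds" = "UpToDate" ]; then
    echo "healthy primary"
else
    echo "needs attention: cs=$cs ro=$ro ds=$ds"
fi
```

Note that ro: and ds: each carry a local/peer pair; grep -o stops at the "/", so only the local half is compared here.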
Testing data synchronization:
17. Install the database; I am using CentOS 7, so that means MariaDB:
[root@node1 ~]# yum install mariadb-server mariadb -y
[root@node2 ~]# yum install mariadb-server mariadb -y
18. Point the database data directory at /mnt:
[root@node1 ~]# cat /etc/my.cnf
[mysqld]
datadir=/mnt
.......
[root@node2 ~]# cat /etc/my.cnf
[mysqld]
datadir=/mnt
.......
19. Next, make mysql the owner of /mnt:
[root@node1 ~]# chown -R mysql:mysql /mnt
[root@node1 ~]# systemctl restart mariadb
[root@node2 ~]# chown -R mysql:mysql /mnt
[root@node2 ~]# systemctl restart mariadb
20. Log in to the database and create a test database:
[root@node1 ~]# mysqld_safe --skip-grant-tables &
[root@node1 ~]# mysql -u root
MariaDB [(none)]> create database xiaozhang;
Query OK, 1 row affected (0.12 sec)
# creates a database named xiaozhang
[root@node2 ~]# mysqld_safe --skip-grant-tables &  # start mariadb in safe mode
21. Switch the primary and secondary nodes:
First stop mariadb on node1:
[root@node1 /]# systemctl stop mariadb
1. Demote the primary to secondary (the device must be unmounted before it can be demoted):
[root@node1 /]# umount /mnt
[root@node1 /]# drbdadm secondary mysql  ## demote to secondary
[root@node1 /]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
      Please consider using drbdtop.
 1:mysql/0  Connected Secondary/Secondary UpToDate/UpToDate
node1 has now been demoted to secondary.
2. On node2:
[root@node2 ~]# drbdadm primary mysql
[root@node2 ~]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
      Please consider using drbdtop.
 1:mysql/0  Connected Primary/Secondary UpToDate/UpToDate
node2 has now been promoted to primary.
3. Now try mounting:
[root@node2 ~]# mount /dev/drbd1 /mnt
Restart mariadb.
4. Verify
Log in to the mariadb database:
[root@node2 ~]# mysql -u root
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| xiaozhang          |
+--------------------+
5 rows in set (0.07 sec)
The data has been synchronized: the database created on node1 is now visible on node2.
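The manual failover in step 21 can be wrapped into two small helper functions. A sketch under the assumption that the resource, mount point, and service names match this document; with RUN left at its echo default the script only prints what it would do:

```shell
#!/bin/sh
# Failover helpers for the manual switchover in step 21. demote runs on
# the old primary, promote on the new one. With RUN=echo (the default
# here) the commands are printed, not executed; set RUN= to run them.
RUN=${RUN:-echo}

demote() {                       # run on the current primary (node1)
    $RUN systemctl stop mariadb  # stop the service using /mnt
    $RUN umount /mnt             # must unmount before demoting
    $RUN drbdadm secondary mysql
}

promote() {                      # run on the new primary (node2)
    $RUN drbdadm primary mysql
    $RUN mount /dev/drbd1 /mnt
    $RUN systemctl start mariadb
}

demote   # dry run: prints the demotion steps
promote  # dry run: prints the promotion steps
```

The ordering matters: the filesystem must be unmounted before drbdadm secondary, and the peer must be promoted before it can mount /dev/drbd1.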
That completes a basic DRBD deployment with working data synchronization. There are plenty of details to get right along the way, but every issue I ran into is covered by the notes in this document.
This article is from the "我的運維" blog; please keep this attribution: http://xiaozhagn.blog.51cto.com/13264135/1975397