High-Availability Distributed Storage (Corosync + Pacemaker + DRBD + MooseFS)


Configuration steps:

(1) Install and configure DRBD; compile and install the Master Server

(2) Install and configure corosync + pacemaker with pcs

(3) Install crmsh and configure the mfs + DRBD + corosync + pacemaker high-availability cluster

(4) Compile and install the Chunk Server and Metalogger hosts

(5) Install the mfs client and test the high-availability cluster

(Personally I suggest installing DRBD first, then the Master Server, and only then the Chunk Server and Metalogger hosts. In an earlier attempt the mounted directory refused writes; after troubleshooting, I ended up reformatting the DRBD-backed disk and reinstalling the Chunk Server and Metalogger hosts.)


I. Introduction

DRBD

DRBD is a software-based, shared-nothing storage replication solution that mirrors the content of block devices between servers. Mirroring is real-time and transparent, and can be synchronous (a write returns only after all servers have succeeded) or asynchronous (a write returns as soon as the local server has succeeded). DRBD's core functionality is implemented in the Linux kernel, right next to the system's I/O stack, but it cannot magically add features to upper layers; for example, it cannot detect that an EXT3 filesystem has crashed. DRBD sits below the filesystem, closer to the operating system kernel and the I/O stack than the filesystem itself.

MooseFS

MooseFS (mfs) is often described as object storage; it offers strong scalability, high reliability, and durability. It distributes files across different physical machines while exposing them as a single transparent storage pool. It also supports online expansion (a major advantage), chunked file storage, and efficient reads and writes.

An MFS distributed filesystem consists of a metadata server (Master Server), a metadata log server (Metalogger Server), data storage servers (Chunk Servers), and clients (Client).

(1) Metadata server: the core of an MFS system. It stores each file's metadata and handles read/write scheduling, space reclamation, and data replication between chunk servers. MFS currently supports only a single metadata server, so it is a potential single point of failure; to reduce that risk, run it on a particularly stable machine.

(2) Metadata log server: the backup node for the metadata server. At a configured interval it downloads the files holding metadata, change logs, and session information from the metadata server into a local directory. If the metadata server fails, the information needed to restore the whole system can be recovered from these files.

That said, metalogger-based backup is an ordinary log-backup technique; in some failure scenarios it cannot take over the service cleanly and data loss is still possible. This article therefore builds a two-node hot standby for the metadata node on replicated shared storage instead.

(3) Data storage servers: they connect to the metadata server, follow its scheduling, provide storage space, and transfer data to and from clients. MooseFS lets you set a replication goal per directory by hand: with a goal of n, each chunk of a file written into the system is copied to n different chunk servers. Raising the goal does not hurt write performance, while it improves read performance and availability; in effect this trades storage capacity for read performance and availability.
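The effect of the replication goal on availability can be sketched with a quick back-of-the-envelope calculation (a hypothetical illustration, not part of MooseFS itself): if each chunk server is independently up with probability p, a chunk with goal n is unreadable only when all n replicas are down at once.

```shell
# Hypothetical illustration: probability that a chunk with goal n stays
# readable, assuming each chunkserver is independently up with probability p.
p=0.99   # assumed per-chunkserver availability
n=3      # replication goal
awk -v p="$p" -v n="$n" 'BEGIN { printf "%.6f\n", 1 - (1 - p) ^ n }'
```

With the assumed numbers this prints 0.999999: three copies turn a 1% per-server outage rate into a one-in-a-million chunk outage.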

(4) Client: uses mfsmount, through the FUSE kernel interface, to mount the storage pool managed by the remote master onto a local directory; after that the MFS filesystem can be used just like local files.

Personal notes:

Distributed storage: the metadata service does the scheduling, so the metadata service itself must also be made highly available.

ceph: cloud-oriented (OpenStack, Kubernetes); relatively new and possibly not yet fully stable.

glusterfs: good for storing large files; supports block devices and FUSE, and can be mounted directly.

mogilefs: high performance on huge numbers of small files, but its FUSE support performs poorly and takes effort to set up; it is object storage driven through an API from your programming language, and that API is its biggest advantage.

fastDFS: a C-language implementation in the spirit of mogilefs, developed in China; no FUSE support. It also targets huge numbers of small files and keeps data in memory, which makes it fast (with correspondingly serious trade-offs).

HDFS: huge numbers of large files (modeled on Google's GFS).

moosefs: the focus of this article (it is quite popular in China); stores huge numbers of small files and supports FUSE. Adding a server and pointing the IP at the metadata server is essentially how the HA setup below works.

Common high-availability cluster stacks:

Heartbeat + pacemaker: gradually being phased out

Cman + rgmanager

Cman + pacemaker

Corosync + pacemaker (corosync only passes messages and performs heartbeat detection; pacemaker acts purely as the resource manager)

cman + clvm (generally used for block-device HA; cman is also being phased out, since corosync has a superior voting mechanism)

Environment:

OS: CentOS 7

Yum repo: http://mirrors.aliyun.com/repo/

cml1 = Master Server (master): 192.168.5.101 (VIP: 192.168.5.200)

cml2 = Master Server (slave): 192.168.5.102

cml3 = Chunk Server: 192.168.5.104

cml4 = Chunk Server: 192.168.5.105

cml5 = Metalogger Server: 192.168.5.103

cml6 = Client: 192.168.5.129

II. Configuration steps

(1) Install and configure DRBD; compile and install the Master Server

1. Edit the hosts file so the nodes can reach one another by name:

[[email protected] ~]# cat /etc/hosts
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.5.101 cml1 mfsmaster
192.168.5.102 cml2
192.168.5.103 cml5
192.168.5.104 cml3
192.168.5.105 cml4
192.168.5.129 cml6

2. Set up SSH trust:

[[email protected] ~]# ssh-keygen
[[email protected] ~]# ssh-copy-id cml2

3. Set up clock synchronization:

[[email protected] ~]# crontab -l
*/5 * * * * ntpdate cn.pool.ntp.org

4. Install DRBD:

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# yum install -y kmod-drbd84 drbd84-utils

5. Configuration files:

/etc/drbd.conf                  # main configuration file

/etc/drbd.d/global_common.conf  # global configuration file

6. View the main configuration file:

[[email protected] ~]# cat /etc/drbd.conf

# You can find an example in /usr/share/doc/drbd.../drbd.conf.example

include "drbd.d/global_common.conf";

include "drbd.d/*.res";

7. Configuration file walkthrough:

[[email protected] ~]# vim /etc/drbd.d/global_common.conf
global {
    usage-count no;  # whether to take part in DRBD usage statistics (default yes; the project uses it to count installations)
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol C;  # DRBD replication protocol
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach;  # on an I/O error, detach the backing device
        # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
    syncer {
        rate 1024M;  # network bandwidth used for primary/secondary synchronization
    }
}

8. Create the resource file:

[[email protected] ~]# cat /etc/drbd.d/mfs.res
resource mfs {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on cml1 {
        disk /dev/sdb1;
        address 192.168.5.101:7789;
    }
    on cml2 {
        disk /dev/sdb1;
        address 192.168.5.102:7789;
    }
}

9. Copy the configuration to the peer node:

scp -rp /etc/drbd.d/* cml2:/etc/drbd.d/

10. Bring the resource up on cml1:

[[email protected] ~]# drbdadm create-md mfs
initializing activity log
initializing bitmap (160 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[[email protected] ~]# modprobe drbd
## check that the kernel module is loaded:
[[email protected] ~]# lsmod | grep drbd
drbd                  396875  1
libcrc32c              12644  4 xfs,drbd,ip_vs,nf_conntrack
###

[[email protected] ~]# drbdadm up mfs
[[email protected] ~]# drbdadm -- --force primary mfs
Check the status:
[[email protected] ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by [email protected], 2017-09-15 14:23:22

 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----s
    ns:0 nr:0 dw:0 dr:912 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5240636

11. Run on the peer node (cml2):

[[email protected] ~]# drbdadm create-md mfs
[[email protected] ~]# modprobe drbd
[[email protected] ~]# drbdadm up mfs

12. Format and mount:

[[email protected] ~]# mkfs.ext4 /dev/drbd1
[[email protected] ~]# mkdir /usr/local/mfs
[[email protected] ~]# mount /dev/drbd1 /usr/local/mfs
[[email protected] ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.8G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M   56M  456M  11% /dev/shm
tmpfs                   tmpfs     512M   33M  480M   7% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  160M  362M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4      5.2G   30M  4.9G   1% /usr/local/mfs

#### Note: before the secondary node can mount the device, the primary must first be demoted to secondary; only then can the peer mount it.

### Check the status:

[[email protected] ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by [email protected], 2017-09-15 14:23:22

 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:520744 nr:0 dw:252228 dr:300898 al:57 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

13. Install and configure the Master Server:

## MFS installation: download the 3.0 tarball:

[[email protected] src]# yum install zlib-devel -y
[[email protected] src]# wget https://github.com/moosefs/moosefs/archive/v3.0.96.tar.gz

(1) Install the master:

[[email protected] src]# useradd mfs
[[email protected] src]# tar -xf v3.0.96.tar.gz
[[email protected] src]# cd moosefs-3.0.96/
[[email protected] moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
[[email protected] moosefs-3.0.96]# make && make install
[[email protected] moosefs-3.0.96]# ls /usr/local/mfs/
bin  etc  sbin  share  var

(The etc and var directories hold the configuration files and MFS's data structures, so back them up regularly to guard against disaster. Running the Master Server as a two-node pair, as below, also addresses this.)

## Note: the mfs user ID and group ID must be the same on every host in the cluster.

(2) Configure the master:

[[email protected] mfs]# pwd
/usr/local/mfs/etc/mfs
[[email protected] mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample

## Only .sample files ship by default, so copy them to .cfg names:

[[email protected] mfs]# cp mfsexports.cfg.sample mfsexports.cfg
[[email protected] mfs]# cp mfsmaster.cfg.sample mfsmaster.cfg

(3) Review the default parameters:

[[email protected] mfs]# vim mfsmaster.cfg

# WORKING_USER = mfs              # user the master server runs as
# WORKING_GROUP = mfs             # group the master server runs as
# SYSLOG_IDENT = mfsmaster        # identity of the master server in syslog, i.e. how its messages are labelled
# LOCK_MEMORY = 0                 # whether to call mlockall() to keep mfsmaster from being swapped out (default 0)
# NICE_LEVEL = -19                # nice level to run at (default -19; the process must be started as root)
# EXPORTS_FILENAME = /usr/local/mfs/etc/mfs/mfsexports.cfg    # path of the file controlling exported directories and their permissions
# TOPOLOGY_FILENAME = /usr/local/mfs/etc/mfs/mfstopology.cfg  # path of the mfstopology.cfg file
# DATA_PATH = /usr/local/mfs/var/mfs    # data path; holds roughly three kinds of files: changelog, sessions, and stats
# BACK_LOGS = 50                  # number of metadata change log files (default 50)
# BACK_META_KEEP_PREVIOUS = 1     # number of previous metadata copies to keep (default 1)
# REPLICATIONS_DELAY_INIT = 300   # initial replication delay (default 300 s)
# REPLICATIONS_DELAY_DISCONNECT = 3600  # replication delay after a chunkserver disconnects (default 3600 s)
# MATOML_LISTEN_HOST = *          # IP to listen on for metalogger connections (default *, any IP)
# MATOML_LISTEN_PORT = 9419       # port to listen on for metalogger connections (default 9419)
# MATOML_LOG_PRESERVE_SECONDS = 600
# MATOCS_LISTEN_HOST = *          # IP to listen on for chunkserver connections (default *, any IP)
# MATOCS_LISTEN_PORT = 9420       # port to listen on for chunkserver connections (default 9420)
# MATOCL_LISTEN_HOST = *          # IP to listen on for client mounts (default *, any IP)
# MATOCL_LISTEN_PORT = 9421       # port to listen on for client mounts (default 9421)
# CHUNKS_LOOP_MAX_CPS = 100000    # maximum number of chunk checks per second in the chunk loop (default 100000)
# CHUNKS_LOOP_MIN_TIME = 300      # minimum duration of one full chunk loop (default 300 s)
# CHUNKS_SOFT_DEL_LIMIT = 10      # soft limit of 10 chunk deletions per chunkserver
# CHUNKS_HARD_DEL_LIMIT = 25      # hard limit of 25 chunk deletions per chunkserver
# CHUNKS_WRITE_REP_LIMIT = 2      # maximum chunks replicated to one chunkserver per loop (default 1)
# CHUNKS_READ_REP_LIMIT = 10      # maximum chunks replicated from one chunkserver per loop (default 5)
# ACCEPTABLE_DIFFERENCE = 0.1     # maximum allowed difference in space usage between chunkservers (default 0.01, i.e. 1%)
# SESSION_SUSTAIN_TIME = 86400    # how long client sessions are sustained: 86400 s, i.e. one day
# REJECT_OLD_CLIENTS = 0          # reject clients older than 1.6.0 (0 or 1, default 0)

## These official defaults work as-is.

(4) Edit the exports file:

[[email protected] mfs]# vim mfsexports.cfg

*  /  rw,alldirs,maproot=0,password=cml
*  .  rw

## Each entry in mfsexports.cfg is one rule with three parts: the first is the mfs client IP address or range, the second is the exported directory, and the third sets the access options granted to that client.
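The three parts of a rule can be pulled apart mechanically; a small sketch (the entry is the one from this article, and the field labels are my own, not MooseFS terminology):

```shell
# Split one mfsexports.cfg rule into its three whitespace-separated parts:
# client spec, exported directory, and options (labels are illustrative).
line='* / rw,alldirs,maproot=0,password=cml'
echo "$line" | awk '{ printf "clients=%s dir=%s opts=%s\n", $1, $2, $3 }'
```

This prints `clients=* dir=/ opts=rw,alldirs,maproot=0,password=cml`, which matches the rule description above: any client, the root of the MFS tree, read-write with subdirectories, root mapping, and a password.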

(5) The metadata file ships as an empty template, so activate it by hand:

[[email protected] mfs]# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs

(6) Start the master:

[[email protected] mfs]# /usr/local/mfs/sbin/mfsmaster start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

(7) Check that the process is running:

[[email protected] mfs]# ps -ef | grep mfs
mfs    8109     1  5 18:40 ?      00:00:02 /usr/local/mfs/sbin/mfsmaster start
root   8123  1307  0 18:41 pts/0  00:00:00 grep --color=auto mfs

(8) Check the listening ports:

[[email protected] mfs]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address  State    PID/Program name
tcp        0      0 0.0.0.0:9419    0.0.0.0:*        LISTEN   8109/mfsmaster
tcp        0      0 0.0.0.0:9420    0.0.0.0:*        LISTEN   8109/mfsmaster
tcp        0      0 0.0.0.0:9421    0.0.0.0:*        LISTEN   8109/mfsmaster

(9) To stop the master:

[[email protected] mfs]# /usr/local/mfs/sbin/mfsmaster stop
sending SIGTERM to lock owner (pid:8109)
waiting for termination ... terminated

(2) Install and configure corosync + pacemaker with pcs

## pcs-related notes: on CentOS 7 pcs is well supported, while crmsh is more involved to set up.

1. Run on both nodes:

[[email protected] ~]# yum install -y pacemaker pcs psmisc policycoreutils-python

2. Start pcsd and enable it at boot:

[[email protected] ~]# systemctl start pcsd.service
[[email protected] ~]# systemctl enable pcsd

3. Set the password of the hacluster user:

[[email protected] ~]# echo 123456 | passwd --stdin hacluster

4. Authenticate the cluster hosts with pcs (by default using the hacluster user and its password):

[[email protected] ~]# pcs cluster auth cml1 cml2   ## choose which cluster nodes to authenticate
cml2: Already authorized
cml1: Already authorized

5. Create the cluster from the two nodes:

[[email protected] ~]# pcs cluster setup --name mycluster cml1 cml2 --force

## set up the cluster

6. A corosync configuration file has now been generated on the nodes:

[[email protected] corosync]# ls
corosync.conf  corosync.conf.example  corosync.conf.example.udpu  corosync.xml.example  uidgid.d

# corosync.conf has been generated.

7. Inspect the generated file:

[[email protected] corosync]# cat corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: cml1
        nodeid: 1
    }

    node {
        ring0_addr: cml2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
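The `two_node: 1` setting matters here because the usual votequorum rule, floor(n/2) + 1 votes out of n, can never be met by the lone survivor of a two-node cluster; a quick sketch of the standard threshold:

```shell
# Standard quorum threshold: floor(n/2) + 1 votes out of n voting nodes.
# For n=2 this is 2, so after one node fails the survivor would lose
# quorum -- which is why corosync's two_node: 1 special case exists.
for n in 2 3 5; do
  echo "nodes=$n quorum=$(( n / 2 + 1 ))"
done
```

With `two_node: 1`, votequorum relaxes the rule so the remaining node of a two-node pair stays quorate (relying on fencing or, as later in this article, `no-quorum-policy=ignore`-style handling to avoid split-brain).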
8. Start the cluster:

[[email protected] ~]# pcs cluster start --all
cml1: Starting Cluster...
cml2: Starting Cluster...

## this effectively starts pacemaker and corosync.

9. Check the cluster for configuration errors:

[[email protected] ~]# crm_verify -L -V
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid

## no STONITH device is configured here, so disable STONITH next.

10. Disable STONITH:

[[email protected] ~]# pcs property set stonith-enabled=false
[[email protected] ~]# crm_verify -L -V
[[email protected] ~]# pcs property list
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: mycluster
 dc-version: 1.1.16-12.el7_4.2-94ff4df
 have-watchdog: false
 stonith-enabled: false

(3) Install crmsh and configure the mfs + DRBD + corosync + pacemaker high-availability cluster:

1. Install crmsh:

The cluster can be managed with crmsh (download it from GitHub, unpack, and install). Installing on one node is enough, although having it on both nodes makes testing more convenient.

[[email protected] ~]# cd /usr/local/src/
You have new mail in /var/spool/mail/root
[[email protected] src]# ls
crmsh-2.3.2.tar  nginx-1.12.0  nginx-1.12.0.tar.gz  php-5.5.38.tar.gz  zabbix-3.2.7.tar.gz
[[email protected] src]# tar -xf crmsh-2.3.2.tar
[[email protected] crmsh-2.3.2]# python setup.py install

2. Manage the cluster with crmsh:

[[email protected] ~]# crm help

Help overview for crmsh

Available topics:

    Overview                 Help overview for crmsh
    Topics                   Available topics
    Description              Program description
    CommandLine              Command line options
    Introduction             Introduction
    Interface                User interface
    Completion               Tab completion
    Shorthand                Shorthand syntax
    Features                 Features
    Shadows                  Shadow CIB usage
    Checks                   Configuration semantic checks
    Templates                Configuration templates
    Testing                  Resource testing
    Security                 Access Control Lists (ACL)
    Resourcesets             Syntax: Resource sets
    AttributeListReferences  Syntax: Attribute list references
    AttributeReferences      Syntax: Attribute references
    RuleExpressions          Syntax: Rule expressions
    Lifetime                 Lifetime parameter format
    Reference                Command reference

3. Configure the DRBD + mfs + corosync + pacemaker high-availability cluster with crm:

## first unmount the mount point and stop the drbd service on both nodes

[[email protected] ~]# systemctl stop drbd
[[email protected] ~]# umount /usr/local/mfs/
[[email protected] ~]# systemctl stop drbd
[[email protected] ~]# crm
crm(live)# status
Stack: corosync
Current DC: cml2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Fri Oct 27 19:15:54 2017
Last change: Fri Oct 27 10:52:35 2017 by root via cibadmin on cml1

2 nodes configured

Online: [ cml1 cml2 ]

No resources
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# property migration-limit=1   ### after one failed recovery attempt on a node, the service moves to the other node

4. Write a systemd unit for mfsmaster:

[[email protected] ~]# cat /etc/systemd/system/mfsmaster.service
[Unit]
Description=mfs
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfsmaster start
ExecStop=/usr/local/mfs/sbin/mfsmaster stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target

## enable it at boot:

[[email protected] ~]# systemctl enable mfsmaster

## stop the mfsmaster service (the cluster will manage it from here on):

[[email protected] ~]# systemctl stop mfsmaster

5. Start the cluster stack:

[[email protected] ~]# systemctl start corosync
[[email protected] ~]# systemctl start pacemaker
[[email protected] ~]# ssh cml2 systemctl start corosync
[[email protected] ~]# ssh cml2 systemctl start pacemaker

6. Configure the DRBD resource:

crm(live)configure# primitive mfs_drbd ocf:linbit:drbd params drbd_resource=mfs op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100
crm(live)configure# verify
crm(live)configure# ms ms_mfs_drbd mfs_drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit

7. Configure the filesystem (mount) resource:

crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd1 directory=/usr/local/mfs fstype=ext4 op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation ms_mfs_drbd_with_mystore inf: mystore ms_mfs_drbd
crm(live)configure# order ms_mfs_drbd_before_mystore Mandatory: ms_mfs_drbd:promote mystore:start

8. Configure the mfs resource:

crm(live)configure# primitive mfs systemd:mfsmaster op monitor timeout=100 interval=30 op start timeout=30 interval=0 op stop timeout=30 interval=0
crm(live)configure# colocation mfs_with_mystore inf: mfs mystore
crm(live)configure# order mystor_befor_mfs Mandatory: mystore mfs
crm(live)configure# verify
crm(live)configure# commit

9. Configure the VIP:

crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.5.200
crm(live)configure# colocation vip_with_msf inf: vip mfs
crm(live)configure# verify
crm(live)configure# commit


10. Review the configuration:

crm(live)configure# show
node 1: cml1 \
        attributes standby=off
node 2: cml2 \
        attributes standby=off
primitive mfs systemd:mfsmaster \
        op monitor timeout=100 interval=30 \
        op start timeout=30 interval=0 \
        op stop timeout=30 interval=0
primitive mfs_drbd ocf:linbit:drbd \
        params drbd_resource=mfs \
        op monitor role=Master interval=10 timeout=20 \
        op monitor role=Slave interval=20 timeout=20 \
        op start timeout=240 interval=0 \
        op stop timeout=100 interval=0
primitive mystore Filesystem \
        params device="/dev/drbd1" directory="/usr/local/mfs" fstype=ext4 \
        op start timeout=60 interval=0 \
        op stop timeout=60 interval=0
primitive vip IPaddr \
        params ip=192.168.5.200
ms ms_mfs_drbd mfs_drbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
colocation mfs_with_mystore inf: mfs mystore
order ms_mfs_drbd_before_mystore Mandatory: ms_mfs_drbd:promote mystore:start
colocation ms_mfs_drbd_with_mystore inf: mystore ms_mfs_drbd
order mystor_befor_mfs Mandatory: mystore mfs
colocation vip_with_msf inf: vip mfs
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.16-12.el7_4.4-94ff4df \
        cluster-infrastructure=corosync \
        cluster-name=webcluster \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        migration-limit=1

crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Stack: corosync
Current DC: cml2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Fri Oct 27 19:27:23 2017
Last change: Fri Oct 27 10:52:35 2017 by root via cibadmin on cml1

2 nodes configured
5 resources configured

Online: [ cml1 cml2 ]

Full list of resources:

 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ cml1 ]
     Slaves: [ cml2 ]
 mystore    (ocf::heartbeat:Filesystem):    Started cml1
 mfs        (systemd:mfsmaster):            Started cml1
 vip        (ocf::heartbeat:IPaddr):        Started cml1

## check that the device is mounted on cml1:

[[email protected] ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.8G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M   41M  472M   8% /dev/shm
tmpfs                   tmpfs     512M   33M  480M   7% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  160M  362M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4      5.2G   30M  4.9G   1% /usr/local/mfs

[[email protected] ~]# ip addr
2: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4d:47:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.101/24 brd 192.168.5.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet 192.168.5.200/24 brd 192.168.5.255 scope global secondary ens34

## the VIP is now held by cml1 (the master).

(4) Compile and install the Chunk Server and Metalogger hosts:

Part 1: Install the Metalogger Server (this is done on cml5; strictly speaking, with the mfsmaster HA pair in place this step is optional.)

As introduced earlier, the Metalogger Server is the Master Server's backup server, so its installation steps are the same as the Master Server's, and it is best run on hardware as stable as the master's. If the active master then fails, the backed-up changelogs can be imported into the metadata file and the backup server can take over directly and keep serving.

1. Copy the tarball over from the master:

[[email protected] src]# scp /usr/local/src/v3.0.96.tar.gz cml5:/usr/local/src/
v3.0.96.tar.gz
[[email protected] src]# tar -xf v3.0.96.tar.gz
[[email protected] src]# useradd mfs
[[email protected] src]# yum install zlib-devel -y
[[email protected] moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
[[email protected] moosefs-3.0.96]# make && make install

2. Configure the Metalogger Server:

[[email protected] mfs]# cd /usr/local/mfs/etc/mfs/
[[email protected] mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
[[email protected] mfs]# cp mfsmetalogger.cfg.sample mfsmetalogger.cfg
[[email protected] mfs]# vim mfsmetalogger.cfg
MASTER_HOST = 192.168.5.200   ## point at the VIP
# MASTER_PORT = 9419          ## connection port
# META_DOWNLOAD_FREQ = 24     ## how often (in hours) to download the metadata backup; the default of 24 means one metadata.mfs.back download per day. If the metadata server shuts down or fails, its metadata.mfs.back file is lost, and restoring the whole mfs then requires fetching that file from the metalogger. Note that this file, together with the changelogs, is what makes it possible to recover a damaged distributed filesystem.

3. Start the Metalogger Server:

[[email protected] ~]# /usr/local/mfs/sbin/mfsmetalogger start
open files limit has been set to: 4096
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly

[[email protected] ~]# netstat -lantp | grep metalogger
tcp   0   0 192.168.113.144:45620   192.168.113.143:9419   ESTABLISHED   1751/mfsmetalogger

[[email protected] ~]# netstat -lantp | grep 9419
tcp   0   0 192.168.113.144:45620   192.168.113.143:9419   ESTABLISHED   1751/mfsmetalogger

4. Look at the generated log files:

[[email protected] ~]# ls /usr/local/mfs/var/mfs/
changelog_ml_back.0.mfs  changelog_ml_back.1.mfs  metadata.mfs.empty  metadata_ml.mfs.back

Part 2: Install the chunk servers (perform the same configuration on both cml3 and cml4):

1. Download, compile, and install:

[[email protected] ~]# useradd mfs   ## the uid and gid must be identical across the whole cluster
[[email protected] ~]# yum install zlib-devel -y
[[email protected] ~]# cd /usr/local/src/
[[email protected] src]# tar -xf v3.0.96.tar.gz
[[email protected] moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfsmount
[[email protected] moosefs-3.0.96]# make && make install

2. Configure the chunk server:

[[email protected] mfs]# cd /usr/local/mfs/etc/mfs/
You have new mail in /var/spool/mail/root
[[email protected] mfs]# mv mfschunkserver.cfg.sample mfschunkserver.cfg
[[email protected] mfs]# vim mfschunkserver.cfg
MASTER_HOST = 192.168.5.200   ## point at the VIP

3. Configure mfshdd.cfg:

mfshdd.cfg tells the Chunk Server which directories to share out for the Master Server to manage. Although what you list here is just a directory, each entry should ideally sit on its own dedicated partition.

[[email protected] mfs]# cp /usr/local/mfs/etc/mfs/mfshdd.cfg.sample /usr/local/mfs/etc/mfs/mfshdd.cfg
[[email protected] mfs]# vim /usr/local/mfs/etc/mfs/mfshdd.cfg
/mfsdata

## a directory of our own choosing
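Since each mfshdd.cfg path should ideally be its own partition, a quick way to check is to compare a directory's device id with its parent's (a generic sketch, not an MFS tool; /proc is used below only because it is guaranteed to be a separate mount on Linux):

```shell
# A path is a mount point when its device id differs from its parent's.
is_mountpoint() {
  [ "$(stat -c %d "$1")" != "$(stat -c %d "$1/..")" ]
}
# Example: /proc is always a separate mount on Linux.
if is_mountpoint /proc; then echo "dedicated"; else echo "shared"; fi
```

Run the same check against /mfsdata on each chunk server; "shared" means the directory lives on the root filesystem and would compete with the OS for space.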

4. Start the chunk server:

[[email protected] ~]# mkdir /mfsdata
[[email protected] ~]# chown mfs:mfs /mfsdata/
[[email protected] ~]# /usr/local/mfs/sbin/mfschunkserver start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
setting glibc malloc arena max to 4
setting glibc malloc arena test to 4
initializing mfschunkserver modules ...
hdd space manager: path to scan: /mfsdata/
hdd space manager: start background hdd scanning (searching for available chunks)
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly

### check the connection to the master:

[[email protected] ~]# netstat -lantp | grep 9420
tcp   0   0 192.168.113.145:45904   192.168.113.143:9420   ESTABLISHED   9896/mfschunkserver

### then observe the change on the master side.

(5) Install the mfs client and test the high-availability cluster:

1. Install FUSE:

[[email protected] ~]# lsmod | grep fuse
[[email protected] ~]# yum install fuse fuse-devel
[[email protected] ~]# modprobe fuse
[[email protected] ~]# lsmod | grep fuse
fuse   91874   0

2. Install the mount client:

[[email protected] ~]# yum install zlib-devel -y
[[email protected] ~]# yum install fuse-devel
[[email protected] ~]# useradd mfs
[[email protected] src]# tar -zxvf v3.0.96.tar.gz
[[email protected] src]# cd moosefs-3.0.96/
[[email protected] moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver --enable-mfsmount
[[email protected] moosefs-3.0.96]# make && make install

3. On the client, create the mount directory and mount the filesystem:

[[email protected] ~]# mkdir /mfsdata
[[email protected] ~]# chown -R mfs:mfs /mfsdata/
[[email protected] ~]# /usr/local/mfs/bin/mfsmount -H 192.168.5.200 /mfsdata/ -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
[[email protected] ~]# df -TH
Filesystem            Type      Size  Used Avail Use% Mounted on
/dev/mapper/vg_cml-lv_root
                      ext4       19G  4.9G   13G  28% /
tmpfs                 tmpfs     977M     0  977M   0% /dev/shm
/dev/sda1             ext4      500M   29M  445M   7% /boot
192.168.5.200:9421    fuse.mfs   38G   14G   25G  36% /mfsdata
[[email protected] mfsdata]# echo "test" > a.txt
[[email protected] mfsdata]# ls
a.txt
[[email protected] mfsdata]# cat a.txt
test

Test: take the active Master Server node down, fail over to the slave, and check that the file is still there:

crm(live)# node standby
crm(live)# status
Stack: corosync
Current DC: cml2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Fri Oct 27 19:55:15 2017
Last change: Fri Oct 27 19:55:01 2017 by root via crm_attribute on cml1

2 nodes configured
5 resources configured

Node cml1: standby
Online: [ cml2 ]

Full list of resources:

 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ cml2 ]
     Stopped: [ cml1 ]
 mystore    (ocf::heartbeat:Filesystem):    Started cml2
 mfs        (systemd:mfsmaster):            Started cml2
 vip        (ocf::heartbeat:IPaddr):        Started cml2

## the service has failed over to cml2.

[[email protected] ~]# df -TH
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        19G  6.7G   13G  36% /
devtmpfs                devtmpfs  501M     0  501M   0% /dev
tmpfs                   tmpfs     512M   56M  456M  11% /dev/shm
tmpfs                   tmpfs     512M   14M  499M   3% /run
tmpfs                   tmpfs     512M     0  512M   0% /sys/fs/cgroup
/dev/sda1               xfs       521M  160M  362M  31% /boot
tmpfs                   tmpfs     103M     0  103M   0% /run/user/0
/dev/drbd1              ext4      5.2G   30M  4.9G   1% /usr/local/mfs
[[email protected] ~]# ip addr
2: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:5a:c5:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.102/24 brd 192.168.5.255 scope global ens34
       valid_lft forever preferred_lft forever
    inet 192.168.5.200/24 brd 192.168.5.255 scope global secondary ens34

## the mount point and the VIP have moved to cml2.

## remount on the client and check that the service still works:

[[email protected] ~]# umount /mfsdata/
[[email protected] ~]# /usr/local/mfs/bin/mfsmount -H 192.168.5.200 /mfsdata/ -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
[[email protected] ~]# cd /mfsdata/
[[email protected] mfsdata]# ls
a.txt
[[email protected] mfsdata]# cat a.txt
test

## the a.txt file written earlier is still there, so the service survived the failover.


Reposted from: https://blog.51cto.com/legehappy/1977270