The MFS Distributed Storage System Explained
A distributed file system is one in which the physical storage resources managed by the file system are not necessarily attached directly to the local node; instead, they are connected to the nodes over a computer network.
The advantages of a distributed file system are centralized access, simpler administration, data disaster recovery, and improved file access performance.
The MFS Distributed File System
MFS (MooseFS) is a semi-distributed file system developed in Poland. It provides RAID-like functionality at a lower storage cost while being no worse than dedicated storage systems, and it also supports online expansion.
How MFS Works
MFS is a fault-tolerant network distributed file system: it spreads data across multiple servers while presenting it to users as a single resource.
(1) Components of the MFS architecture:
- Metadata server (Master): manages the file system and maintains the metadata for the whole installation;
- Metadata logger server (Metalogger): backs up the Master server's change log files, whose names have the form changelog_ml.*.mfs. If the Master's data is lost or damaged, these files can be retrieved from the log server and used for recovery;
- Data storage servers (Chunk Server): the servers that actually store the data. Files are split into chunks when stored, and the chunks are replicated between data servers. The more data servers there are, the more usable capacity, the higher the reliability, and the better the performance;
- Client: mounts the MFS file system just like an NFS mount and is used in the same way.
(2) How MFS reads data:
- The client sends a read request to the metadata server;
- The metadata server tells the client where the requested data is located (the Chunk Server's IP address and the chunk ID);
- The client requests the data from that Chunk Server;
- The Chunk Server sends the data to the client.
(3) How MFS writes data:
- The client sends a write request to the metadata server;
- The metadata server interacts with the Chunk Servers, creating new chunks only on selected servers; once the chunks are created successfully, the Chunk Servers report this back to the metadata server;
- The metadata server tells the client which chunks on which Chunk Server it may write the data to;
- The client writes the data to the specified Chunk Server;
- That Chunk Server synchronizes the data with the other Chunk Servers; once synchronization succeeds, it tells the client that the write succeeded;
- The client informs the metadata server that this write is complete.
Building an MFS File System
Topology diagram
System environment
Host | Operating System | IP Address |
---|---|---|
Master Server | CentOS 7.3 x86_64 | 192.168.96.22 |
Metalogger | CentOS 7.3 x86_64 | 192.168.96.11 |
Chunk1 | CentOS 7.3 x86_64 | 192.168.96.12 |
Chunk2 | CentOS 7.3 x86_64 | 192.168.96.13 |
Chunk3 | CentOS 7.3 x86_64 | 192.168.96.14 |
Client | CentOS 7.3 x86_64 | 192.168.96.15 |
All of the machines above need Internet access in order to install packages from the MooseFS repository.
Deployment
Master Servers:
1. Disable the firewall and SELinux [important]
setenforce 0
systemctl stop firewalld
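setenforce 0 and systemctl stop firewalld only last until the next reboot. A minimal sketch for making the change persistent, assuming the stock CentOS 7 /etc/selinux/config (the same applies to every host in this setup):
systemctl disable firewalld                                           # do not start firewalld at boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # keep SELinux disabled across reboots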
2. Download the repository GPG key
curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
3. Add the repo source
curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
4. Update packages (optional)
yum update
5. Install the mfsmaster packages
yum -y install moosefs-master moosefs-cgi moosefs-cgiserv moosefs-cli
Check the configuration files: the relevant files (mfsexports.cfg, mfsmaster.cfg, etc.) have been generated under /etc/mfs.
The following files are used with their default values and need no changes: mfsmaster.cfg, mfsexports.cfg, mfstopology.cfg
6. Start mfsmaster
mfsmaster start
7. Check that it started successfully
ps -ef | grep mfs
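For a quick check that the master is also listening, assuming the default ports (9419 for metaloggers, 9420 for chunkservers, 9421 for clients):
ss -lntp | grep mfsmaster          # should show the master listening on 9419, 9420 and 9421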
Metaloggers:
1. Disable the firewall and SELinux [important]
setenforce 0
systemctl stop firewalld
2. Download the repository GPG key
curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
3. Add the repo source
curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
4. Update packages (optional)
yum update
5. Install the mfsmetalogger package
yum -y install moosefs-metalogger
6. Edit the mfsmetalogger.cfg configuration file
vim /etc/mfs/mfsmetalogger.cfg
###############################################
# RUNTIME OPTIONS                             #
###############################################

# user to run daemon as (default is mfs)
# WORKING_USER = mfs

# group to run daemon as (optional - if empty then default user group will be used)
# WORKING_GROUP = mfs

# name of process to place in syslog messages (default is mfsmetalogger)
# SYSLOG_IDENT = mfsmetalogger

# whether to perform mlockall() to avoid swapping out mfsmetalogger process (default is 0, i.e. no)
# LOCK_MEMORY = 0

# Linux only: limit malloc arenas to given value - prevents server from using huge amount of virtual memory (default is 4)
# LIMIT_GLIBC_MALLOC_ARENAS = 4

# Linux only: disable out of memory killer (default is 1)
# DISABLE_OOM_KILLER = 1

# nice level to run daemon with (default is -19; note: process must be started as root to increase priority, if setting of priority fails, process retains the nice level it started with)
# NICE_LEVEL = -19

# set default umask for group and others (user has always 0, default is 027 - block write for group and block all for others)
# FILE_UMASK = 027

# where to store daemon lock file (default is /var/lib/mfs)
# DATA_PATH = /var/lib/mfs

# number of metadata change log files (default is 50)
# BACK_LOGS = 50

# number of previous metadata files to be kept (default is 3)
# BACK_META_KEEP_PREVIOUS = 3

# metadata download frequency in hours (default is 24, should be at least BACK_LOGS/2)
# META_DOWNLOAD_FREQ = 24

###############################################
# MASTER CONNECTION OPTIONS                   #
###############################################

# delay in seconds before next try to reconnect to master if not connected (default is 5)
# MASTER_RECONNECTION_DELAY = 5

# local address to use for connecting with master (default is *, i.e. default local address)
# BIND_HOST = *

# MooseFS master host, IP is allowed only in single-master installations (default is mfsmaster)
# set this to the Master's IP address
MASTER_HOST = 192.168.96.22

# MooseFS master supervisor port (default is 9419)
# MASTER_PORT = 9419

# timeout in seconds for master connections (default is 10)
# MASTER_TIMEOUT = 10
7. Start mfsmetalogger
mfsmetalogger start
8. Check that it started successfully
ps -ef | grep mfs
To stop mfsmetalogger: mfsmetalogger stop
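To confirm that the metalogger is actually replicating, check its data directory after it has been connected for a while; the file names follow the defaults described earlier:
ls -l /var/lib/mfs/                # should accumulate changelog_ml.*.mfs files downloaded from the master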
ChunkServers:
The following configuration is identical on all three data storage servers:
1. Disable the firewall and SELinux [important]
setenforce 0
systemctl stop firewalld
2. Download the repository GPG key
curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
3. Add the repo source
curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
4. Update packages (optional)
yum update
5. Install the chunkserver package
yum -y install moosefs-chunkserver
6. Edit the main configuration file and set the Master's IP address
vim /etc/mfs/mfschunkserver.cfg
###############################################
# MASTER CONNECTION OPTIONS                   #
###############################################

# labels string (default is empty - no labels)
# LABELS =

# local address to use for master connections (default is *, i.e. default local address)
# BIND_HOST = *

# MooseFS master host, IP is allowed only in single-master installations (default is mfsmaster)
# set this to the Master's IP address
MASTER_HOST = 192.168.96.22

# MooseFS master command port (default is 9420)
# MASTER_PORT = 9420

# timeout in seconds for master connections. Value >0 forces given timeout, but when value is 0 then CS asks master for timeout (default is 0 - ask master)
# MASTER_TIMEOUT = 0

# delay in seconds before next try to reconnect to master if not connected (default is 5)
# MASTER_RECONNECTION_DELAY = 5

# authentication string (used only when master requires authorization)
# AUTH_CODE = mfspassword
7. Specify the storage location this server allocates to MFS
vim /etc/mfs/mfshdd.cfg
# This file keeps definitions of mounting points (paths) of hard drives to use with chunk server.
# A path may begin with extra characters which switches additional options:
# - '*' means that this hard drive is 'marked for removal' and all data will be replicated to other hard drives (usually on other chunkservers)
# - '<' means that all data from this hard drive should be moved to other hard drives
# - '>' means that all data from other hard drives should be moved to this hard drive
# - '~' means that significant change of total blocks count will not mark this drive as damaged
# If there are both '<' and '>' drives then data will be moved only between these drives
# It is possible to specify optional space limit (after each mounting point), there are two ways of doing that:
# - set space to be left unused on a hard drive (this overrides the default setting from mfschunkserver.cfg)
# - limit space to be used on a hard drive
# Space limit definition: [0-9]*(.[0-9]*)?([kMGTPE]|[KMGTPE]i)?B?, add minus in front for the first option.
#
# Examples:
#
# use hard drive '/mnt/hd1' with default options:
#/mnt/hd1
#
# use hard drive '/mnt/hd2', but replicate all data from it:
#*/mnt/hd2
#
# use hard drive '/mnt/hd3', but try to leave 5GiB on it:
#/mnt/hd3 -5GiB
#
# use hard drive '/mnt/hd4', but use only 1.5TiB on it:
#/mnt/hd4 1.5TiB
#
# use hard drive '/mnt/hd5', but fill it up using data from other drives
#>/mnt/hd5
#
# use hard drive '/mnt/hd6', but move all data to other hard drives
#</mnt/hd6
#
# use hard drive '/mnt/hd7', but ignore significant change of hard drive total size (e.g. compressed file systems)
#~/mnt/hd7
# the partition/directory provided to MFS
/data
Note: /data is the partition provided to MFS; ideally, mount a dedicated partition or disk at this directory.
8. Create the directory (to be provided to MFS as its storage partition)
mkdir /data
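As recommended in the note above, /data should ideally sit on a dedicated disk or partition. A minimal sketch of formatting and mounting one, assuming a spare device /dev/sdb (the device name is an assumption; adjust it to your environment, and note that mkfs destroys any data already on the disk):
mkfs.xfs /dev/sdb                                      # format the spare disk with XFS
mount /dev/sdb /data                                   # mount it at the MFS storage directory
echo '/dev/sdb /data xfs defaults 0 0' >> /etc/fstab   # remount it automatically at boot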
9. Change the owner and group
chown -R mfs.mfs /data
10. Start the chunkserver service
mfschunkserver start
11. Check that it started successfully
ps -ef | grep mfs
To stop the chunkserver: mfschunkserver stop
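As an additional sanity check, a freshly started chunkserver should populate its storage path with hex-named chunk subdirectories:
ls /data                           # should show subdirectories 00 through FF created by mfschunkserver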
Clients:
1. Disable the firewall and SELinux [important]
setenforce 0
systemctl stop firewalld
2. Download the repository GPG key
curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
3. Add the repo source
curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
4. Update packages (optional)
yum update
5. Install the MFS client
yum -y install moosefs-client
6. Create the mount point
mkdir -p /mfs/data
7. Load the fuse module into the kernel
modprobe fuse
8. Mount MFS at /mfs/data
mfsmount /mfs/data -H 192.168.96.22
9. Check the mount
df -h
To unmount MFS: umount /mfs/data
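As a quick smoke test of the new mount, copy in a small file and read it back (the file name is arbitrary; the same file is reused in the examples further below):
cp /etc/hosts /mfs/data/test.txt   # write a small file into MFS
cat /mfs/data/test.txt             # read it back through the mount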
MFS Monitoring
The yum installation already includes mfscgiserv, a small web server written in Python that listens on port 9425. Start it on the Master Server with the mfscgiserv command and open it in a browser to get a full view of all client mounts, the Chunk Servers, the Master Server, and the operations performed by clients.
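A minimal sketch of starting and stopping it, assuming the standard script shipped with the moosefs-cgiserv package:
mfscgiserv start                   # start the web monitor, listening on port 9425
mfscgiserv stop                    # stop it again when no longer needed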
The sections of the page are as follows:
- Info: basic information about the MFS installation
- Servers: the list of existing Chunk Servers
- Disks: the disk directories and usage of each Chunk Server
- Exports: the shared directories, i.e. those that can be mounted
- Mounts: the current mounts
- Operations: the operations currently being executed
- Master Charts: the Master Server's activity, including reads, writes, directory creation, deletion, and so on
From a browser on the client, open http://192.168.96.22:9425, as shown in the figure below.
Common MFS Operations
The mfsgetgoal and mfssetgoal commands
The goal is the number of copies a file is replicated to. After setting it, you can verify the value with mfsgetgoal and change it with mfssetgoal.
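A short example of both commands, assuming the test file written to the mount earlier (the paths are illustrative):
mfsgetgoal /mfs/data/test.txt      # show how many copies this file should have
mfssetgoal 3 /mfs/data/test.txt    # request three copies of the file
mfssetgoal -r 2 /mfs/data          # recursively set a goal of 2 on a whole directory tree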
The mfscheckfile and mfsfileinfo commands
The actual number of copies of a file can be verified with mfscheckfile and mfsfileinfo.
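For example, with the same test file:
mfscheckfile /mfs/data/test.txt    # summary: how many chunks exist with how many valid copies
mfsfileinfo /mfs/data/test.txt     # per-chunk detail: which chunkserver holds each copy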
The mfsdirinfo command
A summary of an entire directory tree can be displayed with mfsdirinfo, an enhanced equivalent of "du -s".
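For example:
mfsdirinfo /mfs/data               # inodes, directories, files, chunks and total size of the subtree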
Maintaining MFS
The most important task is maintaining the metadata server, whose key directory is /var/lib/mfs/. Every store, modification, and update of MFS data is recorded in a file under this directory, so keeping the data in this directory safe keeps the whole MFS file system safe and reliable.
The data under /var/lib/mfs/ consists of two parts: the metadata server's change logs, with names like changelog.*.mfs, and the metadata file metadata.mfs, which is renamed metadata.mfs.back while mfsmaster is running. As long as both are kept safe, a new metadata server can be deployed from the backed-up metadata files even if the original metadata server suffers a catastrophic failure.
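A minimal backup sketch along those lines (the /backup destination is just an example path); note that mfsmaster -a starts the master with automatic metadata recovery, rebuilding the current metadata from metadata.mfs.back and the change logs:
tar czf /backup/mfs-meta-$(date +%F).tar.gz /var/lib/mfs/   # archive the metadata directory, ideally on a schedule
# on a rebuilt master, copy the archived files back into /var/lib/mfs/ and start with automatic recovery:
mfsmaster -a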