
Setting Up and Using GlusterFS Distributed Storage



1. Introduction to the GlusterFS distributed storage system:

  GlusterFS is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. GlusterFS is free and open source software and can utilize common off-the-shelf hardware.

2. Quickly deploying a GlusterFS distributed storage cluster:

  Deployment requirements:

  1. Three node servers, each with at least 1 GB of RAM and NTP time synchronization configured (see the sketch after this list);
  2. The hosts file configured on every node server;
  3. Each node server needs two physical disks: one for the operating system and one dedicated to the GlusterFS volume;
  4. The /var directory on each server should ideally be a separate mount; if that is not possible, make sure there is free space under /var;
  5. It is recommended to format the disks used for GlusterFS storage with the XFS filesystem;
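
  A minimal sketch for the NTP requirement in item 1, assuming CentOS 7 with chrony and the distribution's default time sources (any working NTP setup is fine):

yum -y install chrony
systemctl enable chronyd && systemctl start chronyd
chronyc sources   # verify that at least one time source is reachable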

  Installation steps:

1. Format the disk

[root@test111 ~]# mkfs.xfs -i size=512 /dev/vdc  # format the disk
meta-data=/dev/vdc               isize=512    agcount=4, agsize=6553600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@test111 ~]# mkdir -p /data/brick1   # create the mount point
[root@test111 ~]# echo '/dev/vdc /data/brick1 xfs defaults 1 2' >> /etc/fstab
[root@test111 ~]# mount -a # mount the brick
[root@test111 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  2.3G   48G   5% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G  8.4M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-home   42G   33M   42G   1% /home
/dev/vda1                497M  138M  360M  28% /boot
tmpfs                    799M     0  799M   0% /run/user/0
/dev/vdc                 100G   33M  100G   1% /data/brick1
[root@test111 ~]#

2. Install GlusterFS

cd /etc/yum.repos.d
[root@test111 yum.repos.d]# vim CentOS-Gluster-3.12.repo
# CentOS-Gluster-3.8.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
# information

[centos-gluster312]
name=CentOS-$releasever - Gluster 3.12
baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-3.12/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

[centos-gluster312-test]
name=CentOS-$releasever - Gluster 3.12 Testing
baseurl=http://buildlogs.centos.org/centos/$releasever/storage/$basearch/gluster-3.12/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
yum clean all && yum makecache all
yum -y install xfsprogs wget fuse fuse-libs
yum -y install glusterfs glusterfs-server glusterfs-fuse
systemctl enable glusterd
systemctl start glusterd
systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-03-21 08:56:11 CST; 1min 34s ago
  Process: 2097 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2098 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─2098 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Mar 21 08:56:11 test111 systemd[1]: Starting GlusterFS, a clustered file-system server...
Mar 21 08:56:11 test111 systemd[1]: Started GlusterFS, a clustered file-system server.

3. Configure the GlusterFS cluster

  Run the commands that create the distributed storage cluster on any one of the nodes; here we choose test111 and run the following:
  Configure the hosts file:

echo -e "test111 10.83.32.173\nstorage2 10.83.32.143\nstorage1 10.83.32.147" >> /etc/hosts

  Create the distributed storage cluster:

[root@test111 yum.repos.d]# gluster peer probe storage2  # add a peer node
peer probe: success.
[root@test111 yum.repos.d]# gluster peer probe storage1
peer probe: success.
[root@test111 yum.repos.d]# gluster peer status
Number of Peers: 2

Hostname: storage2
Uuid: f80c3f7b-7e09-4c60-a57a-a70c8739753e
State: Peer in Cluster (Connected)

Hostname: storage1
Uuid: b731025b-f9e8-4232-9a3f-44a9f692351a
State: Peer in Cluster (Connected)
[root@test111 yum.repos.d]#

The data storage directories (the bricks) were already created in the previous step; now create the GlusterFS volume:
[root@test111 yum.repos.d]# gluster volume create models replica 3 test111:/data/brick1 storage2:/data/brick1 storage1:/data/brick1 force
volume create: models: success: please start the volume to access data
[root@test111 yum.repos.d]# gluster volume start models
volume start: models: success
[root@test111 yum.repos.d]#
With replica 3, every one of the 3 nodes stores a full copy of the data, i.e. each file is stored 3 times, one copy per node.
Without replica 3, the disk space of the 3 nodes is simply aggregated into one large distributed volume.
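
To double-check the layout, gluster volume info can be run at this point; for the replica 3 volume created above it should report Type: Replicate and Number of Bricks: 1 x 3 = 3:

gluster volume info models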

4. Mount the GlusterFS storage

[root@test111 yum.repos.d]# vim CentOS-Gluster-3.12.repo
# CentOS-Gluster-3.8.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
# information

[centos-gluster312]
name=CentOS-$releasever - Gluster 3.12
baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-3.12/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

[centos-gluster312-test]
name=CentOS-$releasever - Gluster 3.12 Testing
baseurl=http://buildlogs.centos.org/centos/$releasever/storage/$basearch/gluster-3.12/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
yum clean all && yum makecache all
yum install glusterfs glusterfs-fuse
mkdir -p /mnt/models
mount -t glusterfs -o ro test111:models /mnt/models/  # mount read-only
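
If the client mount should survive a reboot, an /etc/fstab entry along these lines can be added (a sketch; _netdev delays the mount until the network is up, and ro matches the read-only mount above):

echo 'test111:models /mnt/models glusterfs defaults,_netdev,ro 0 0' >> /etc/fstab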

5. GlusterFS volume types

  A volume is a logical collection of bricks where each brick is an export directory on a server in the trusted storage pool. To create a new volume in your storage environment, specify the bricks that comprise the volume. After you have created a new volume, you must start it before attempting to mount it.
  The volume types include:

  1. Distributed volume (DHT): files are distributed across all of the bricks. A single file is not striped; every file lives whole on one brick, and different files end up on different bricks. The create command is:

    gluster volume create models  test111:/data/brick1 storage2:/data/brick1 storage1:/data/brick1


  2. Replicated volume (AFR): similar to RAID 1. Create the volume with replica x to copy each file to x nodes. The create command is:

    gluster volume create models  replica 3 test111:/data/brick1 storage2:/data/brick1 storage1:/data/brick1 force

      Multiple bricks of the same replica set must not live on the same host; in other words, a node should hold only one brick of a replicated volume.
    gluster volume create test-volume replica 4 server1:/brick1 server1:/brick2 server2:/brick3 server4:/brick4
    The command above will be rejected, because the first two bricks of the replica set sit on the same server and would be a single point of failure. If you really want this layout, append the force option to the command.

  3. Striped volume: similar to RAID 0. A single file is split across multiple bricks, whereas in a distributed volume each file stays whole on one brick and different files go to different bricks.
    gluster volume create models stripe 3 test111:/data/brick1 storage2:/data/brick1 storage1:/data/brick1

    The number after stripe must equal the number of bricks; stripe 3 means a single file is split across 3 bricks.


  4. Distributed-replicated volume: improves reliability while still offering good performance, and is usable in most production environments. The number of bricks must be a multiple of the replica count.
    # gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
    Creation of test-volume has been successful
    Please start the volume to access data.
    The order of the bricks in a distributed-replicated volume determines where files land: in the example above, the first two bricks form one replica set, the next two form another, and files are then distributed across the two replica sets.


  5. Distributed-striped volume: a single file is spread across several bricks, and multiple files are distributed over multiple bricks.
    # gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
    Creation of test-volume has been successful
    Please start the volume to access data.

      For more volume types, see the official GlusterFS documentation: https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/

6. GlusterFS tuning parameters

# enable quota on the specified volume
[root@test111 yum.repos.d]# gluster volume quota models enable
volume quota : success

# set the quota limit on the specified volume
[root@test111 yum.repos.d]# gluster volume quota models limit-usage / 95GB
volume quota : success

# set the cache size, default is 32MB
[root@test111 yum.repos.d]# gluster volume set models performance.cache-size 4GB
volume set: success

# set the number of io threads; too large a value can crash the process
[root@test111 yum.repos.d]# gluster volume set models performance.io-thread-count 16
volume set: success

# set the network ping timeout, default is 42s
[root@test111 yum.repos.d]# gluster volume set models network.ping-timeout 10
volume set: success

# set the write-behind window size, default is 1MB
[root@test111 yum.repos.d]# gluster volume set models performance.write-behind-window-size 1024MB
volume set: success
[root@test111 yum.repos.d]#
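
The options set above can be verified afterwards; gluster volume info lists only the options that differ from their defaults, while gluster volume get dumps every option with its current value:

gluster volume info models        # see the "Options Reconfigured" section
gluster volume get models all     # full list of options and current values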

7. Expanding the GlusterFS storage

  We currently have a GlusterFS cluster made up of 3 nodes. To expand the storage with another 3 nodes into a 6-node cluster, proceed as follows:

# configure the hosts file on the newly added nodes
vim /etc/hosts

10.83.32.173 test111
10.83.32.143 storage2
10.83.32.147 storage1
10.83.32.146 kubemaster
10.83.32.133 kubenode2
10.83.32.138 kubenode1

# on every newly added node, initialize the dedicated disk and mount the brick
[root@test111 ~]# mkfs.xfs -i size=512 /dev/vdc  # format the disk
meta-data=/dev/vdc               isize=512    agcount=4, agsize=6553600 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@test111 ~]# mkdir -p /data2/brick1   # create the mount point
[root@test111 ~]# echo '/dev/vdc /data2/brick1 xfs defaults 1 2' >> /etc/fstab
[root@test111 ~]# mount -a # mount the brick
[root@test111 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  2.3G   48G   5% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G  8.4M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-home   42G   33M   42G   1% /home
/dev/vda1                497M  138M  360M  28% /boot
tmpfs                    799M     0  799M   0% /run/user/0
/dev/vdc                 100G   33M  100G   1% /data2/brick1
[root@test111 ~]#

# copy the yum repo file to the three new nodes
ssh-keygen -t rsa -P ""
ssh-copy-id root@kubemaster
ssh-copy-id root@kubenode1
ssh-copy-id root@kubenode2
scp -r /etc/yum.repos.d/CentOS-Gluster-3.12.repo root@kubemaster:/etc/yum.repos.d/
scp -r /etc/yum.repos.d/CentOS-Gluster-3.12.repo root@kubenode1:/etc/yum.repos.d/
scp -r /etc/yum.repos.d/CentOS-Gluster-3.12.repo root@kubenode2:/etc/yum.repos.d/

# run on the 3 newly added nodes
yum -y install xfsprogs wget fuse fuse-libs
yum -y install glusterfs glusterfs-server glusterfs-fuse
systemctl enable glusterd && systemctl start glusterd && systemctl status glusterd

# run the peer probe on any one of the existing nodes
[root@test111 ~]# gluster peer probe kubemaster
peer probe: success.
[root@test111 ~]# gluster peer probe kubenode2
peer probe: success.
[root@test111 ~]# gluster peer probe kubenode1
peer probe: success.
[root@test111 ~]#

# add bricks to expand the volume
[root@test111 ~]# gluster volume add-brick models kubemaster:/data2/brick1 kubenode1:/data2/brick1 kubenode2:/data2/brick1 force
volume add-brick: success
[root@test111 ~]#

# start rebalancing the data across the bricks
[root@test111 ~]# gluster volume rebalance models start
volume rebalance: models: success: Rebalance on models has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 82ac036d-5de2-42ec-830d-74e6a30dffe7

# check the rebalance status
[root@test111 ~]# gluster volume rebalance models status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0            completed        0:00:00
                                storage2                0        0Bytes             0             0             0            completed        0:00:00
                                storage1                0        0Bytes             0             0             0            completed        0:00:00
                              kubemaster                0        0Bytes             0             0             0            completed        0:00:00
                               kubenode2                0        0Bytes             0             0             0            completed        0:00:00
                               kubenode1                0        0Bytes             0             0             0            completed        0:00:00
volume rebalance: models: success
[root@test111 ~]#

# check the volume status
[root@test111 ~]# gluster volume info models

Volume Name: models
Type: Distributed-Replicate
Volume ID: d71bf7fa-7d65-4bcb-a926-8e6ef94ce068
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: test111:/data/brick1
Brick2: storage2:/data/brick1
Brick3: storage1:/data/brick1
Brick4: kubemaster:/data2/brick1
Brick5: kubenode1:/data2/brick1
Brick6: kubenode2:/data2/brick1
Options Reconfigured:
performance.write-behind-window-size: 1024MB
network.ping-timeout: 10
performance.io-thread-count: 16
performance.cache-size: 4GB
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@test111 ~]#

3. Using the GlusterFS distributed storage system in a Kubernetes cluster:

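  Note: every Kubernetes node that may run a Pod using this volume needs the GlusterFS FUSE client installed, because kubelet performs the mount on the node itself. Assuming the nodes run CentOS with the repo from section 2, that is:

yum -y install glusterfs glusterfs-fuse
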
  Create an Endpoints object whose addresses are the GlusterFS servers:

[root@kubemaster glusterfs]# cat glusterfs-endpoints.yaml
kind: Endpoints
apiVersion: v1
metadata: {name: glusterfs-cluster}
subsets:
- addresses:
  - {ip: 10.83.32.173}
  ports:
  - {port: 1}
- addresses:
  - {ip: 10.83.32.143}
  ports:
  - {port: 1}
- addresses:
  - {ip: 10.83.32.147}
  ports:
  - {port: 1}
- addresses:
  - {ip: 10.83.32.146}
  ports:
  - {port: 1}
- addresses:
  - {ip: 10.83.32.133}
  ports:
  - {port: 1}
- addresses:
  - {ip: 10.83.32.138}
  ports:
  - {port: 1}

kubectl apply -f glusterfs-endpoints.yaml

[root@kubemaster glusterfs]# kubectl get ep glusterfs-cluster
NAME                ENDPOINTS                                                  AGE
glusterfs-cluster   10.83.32.133:1,10.83.32.138:1,10.83.32.143:1 + 3 more...   2m53s
[root@kubemaster glusterfs]#

[root@kubemaster glusterfs]# kubectl describe ep glusterfs-cluster
Name:         glusterfs-cluster
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"name":"glusterfs-cluster","namespace":"default"},"subsets":[{"addresse...
Subsets:
  Addresses:          10.83.32.133,10.83.32.138,10.83.32.143,10.83.32.146,10.83.32.147,10.83.32.173
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  1     TCP

Events:  <none>
[root@kubemaster glusterfs]#

  Create a Service that fronts the Endpoints:

[root@kubemaster glusterfs]# cat glusterfs-service.json
kind: Service
apiVersion: v1
metadata: {name: glusterfs-cluster}
spec:
  ports:
  - {port: 1}
[root@kubemaster glusterfs]#
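
Apply the Service definition before checking it:

kubectl apply -f glusterfs-service.json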

[root@kubemaster glusterfs]# kubectl get svc glusterfs-cluster
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
glusterfs-cluster   ClusterIP   10.99.135.11   <none>        1/TCP     31s

  Create the PV. It uses the glusterfs volume plugin, so the Gluster volume name and the Endpoints name have to be configured:

[root@kubemaster glusterfs]# cat glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "models"
    readOnly: false
[root@kubemaster glusterfs]#

[root@kubemaster glusterfs]# kubectl apply -f glusterfs-pv.yaml

Create the PVC; it is bound to the PV automatically based on capacity
[root@kubemaster glusterfs]# cat glusterfs-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

One problem I ran into here: the PV I created at first stubbornly kept its claim empty. Looking at the PV and PVC definitions,
there is no name-based link between them at all; digging further, it turns out the matching is done purely on the requested storage size.
Because I had copied from a book, one was 5G and the other 8G, so they could never bind; after making the sizes equal it worked.
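
With the sizes matching, applying the claim and listing both objects should show the PV with its claim set and the PVC in Bound status:

kubectl apply -f glusterfs-pvc.yaml
kubectl get pv,pvc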

  Create a Pod (via a Deployment) that mounts the PVC:

[root@kubemaster glusterfs]# cat glusterfs-nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dm
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: storage001
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: storage001
        persistentVolumeClaim:
          claimName: pvc001
[root@kubemaster glusterfs]#
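
Apply the Deployment and wait for both replicas to come up before checking the mount:

kubectl apply -f glusterfs-nginx-deployment.yaml
kubectl get pods -l app=nginx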

# exec into the container to confirm the mount succeeded
[root@kubemaster glusterfs]# kubectl exec -it nginx-dm-6478b6499d-q7njz -- df -h|grep nginx
10.83.32.133:models                                                                                   95G     0   95G   0% /usr/share/nginx/html
[root@kubemaster glusterfs]#
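
As a quick sanity check (a sketch; the pod name hashes will differ in your cluster), a file written from one replica should be visible from the other, since both Pods mount the same Gluster volume:

kubectl exec -it nginx-dm-6478b6499d-q7njz -- sh -c 'echo hello-glusterfs > /usr/share/nginx/html/index.html'
kubectl exec -it <the-other-nginx-pod> -- cat /usr/share/nginx/html/index.html   # should print hello-glusterfs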

Feel free to follow my personal WeChat public account "雲時代IT運維", which is regularly updated with application operations articles covering virtualization and container technology, CI/CD, automated operations, and other current operations trends.

