
Configuring a Storage Server on CentOS 7

I. Configure NFS Server

Configure the NFS service to share directories over the local network.

1、Configure NFS Server

Configure NFS Server to share directories on your Network.

This example is based on the environment below.

+----------------------+          |          +----------------------+

| [    NFS Server    ] |10.0.0.30 | 10.0.0.31| [    NFS Client    ] |

|    dlp.srv.world     +----------+----------+     www.srv.world    |

|                      |                     |                      |

+----------------------+                     +----------------------+

[1]    Configure NFS Server.

[[email protected] ~]# yum -y install nfs-utils

[[email protected] ~]# vi /etc/idmapd.conf

# line 5: uncomment and change to your domain name

Domain = srv.world

[[email protected] ~]# vi /etc/exports

# write settings for NFS exports

/home 10.0.0.0/24(rw,no_root_squash)

[[email protected] ~]# systemctl start rpcbind nfs-server

[[email protected] ~]# systemctl enable rpcbind nfs-server

[2]    If Firewalld is running, allow NFS service.

[[email protected] ~]# firewall-cmd --add-service=nfs --permanent

success

[[email protected] ~]# firewall-cmd --reload

success
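As a quick sanity check (not part of the original steps), the export can be verified from the server itself, and /etc/exports can be re-read after later edits without restarting the service:

# re-read /etc/exports after editing it (no service restart needed)
exportfs -ra
# list the currently exported directories and their options
exportfs -v
# query the export list the way a client would
showmount -e localhost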

Basic options for /etc/exports entries:

Option     Description

rw    Allow both read and write requests on a NFS volume.

ro     Allow only read requests on a NFS volume.

sync Reply to requests only after the changes have been committed to stable storage. (Default)

async       This option allows the NFS server to violate the NFS protocol and reply to requests before any changes made by that request have been committed to stable storage.

secure     This option requires that requests originate on an Internet port less than IPPORT_RESERVED (1024). (Default)

insecure   This option accepts all ports.

wdelay     Delay committing a write request to disc slightly if it suspects that another related write request may be in progress or may arrive soon. (Default)

no_wdelay      This option has no effect if async is also set. The NFS server will normally delay committing a write request to disc slightly if it suspects that another related write request may be in progress or may arrive soon. This allows multiple write requests to be committed to disc with the one operation which can improve performance. If an NFS server received mainly small unrelated requests, this behaviour could actually reduce performance, so no_wdelay is available to turn it off.

subtree_check This option enables subtree checking. (Default)

no_subtree_check  This option disables subtree checking, which has mild security implications, but can improve reliability in some circumstances.

root_squash    Map requests from uid/gid 0 to the anonymous uid/gid. Note that this does not apply to any other uids or gids that might be equally sensitive, such as user bin or group staff.

no_root_squash     Turn off root squashing. This option is mainly useful for disk-less clients.

all_squash       Map all uids and gids to the anonymous user. Useful for NFS exported public FTP directories, news spool directories, etc.

no_all_squash Turn off all squashing. (Default)

anonuid=UID  Explicitly set the uid of the anonymous account. This is primarily useful for PC/NFS clients, where you might want all requests to appear to come from one user.

anongid=GID  Same as anonuid, but sets the gid of the anonymous account.
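As an illustration of how several of these options combine, a hypothetical export for a public read-only directory might look like the line below; the path, network and uid/gid are made-up examples, not part of the configuration above:

# read-only public share, every client squashed to one dedicated account
/srv/public 10.0.0.0/24(ro,sync,all_squash,anonuid=1500,anongid=1500)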

2、Configure NFS Client.

This example is based on the environment below.

+----------------------+          |          +----------------------+

| [    NFS Server    ] |10.0.0.30 | 10.0.0.31| [    NFS Client    ] |

|    dlp.srv.world     +----------+----------+     www.srv.world    |

|                      |                     |                      |

+----------------------+                     +----------------------+

[1]    Configure NFS Client.

[[email protected] ~]# yum -y install nfs-utils

[[email protected] ~]# vi /etc/idmapd.conf

# line 5: uncomment and change to your domain name

Domain = srv.world

[[email protected] ~]# systemctl start rpcbind

[[email protected] ~]# systemctl enable rpcbind

[[email protected] ~]# mount -t nfs dlp.srv.world:/home /home

[[email protected] ~]# df -hT

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        46G  1.4G   45G   4% /

devtmpfs                devtmpfs  1.9G     0  1.9G   0% /dev

tmpfs                   tmpfs     1.9G     0  1.9G   0% /dev/shm

tmpfs                   tmpfs     1.9G  8.3M  1.9G   1% /run

tmpfs                   tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup

/dev/vda1               xfs       497M  219M  278M  45% /boot

dlp.srv.world:/home  nfs4       46G  1.4G   45G   4% /home

# /home from NFS server is mounted

[2]    Configure the NFS mount in /etc/fstab so that it is mounted when the system boots.

[[email protected] ~]# vi /etc/fstab

/dev/mapper/centos-root /                       xfs     defaults        1 1

UUID=a18716b4-cd67-4aec-af91-51be7bce2a0b /boot xfs     defaults        1 2

/dev/mapper/centos-swap swap                    swap    defaults        0 0

# add the following line to the end

dlp.srv.world:/home  /home                   nfs     defaults        0 0
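The defaults option above is sufficient for a basic setup. A variant with a couple of commonly used client-side options is sketched below; treat it as a suggestion to adapt, not part of the original article:

# variant: wait for the network at boot and pin the NFS version
dlp.srv.world:/home  /home  nfs  defaults,_netdev,vers=4  0 0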

[3]    Configure auto-mounting. For example, mount the NFS directory on /mntdir.

[[email protected] ~]# yum -y install autofs

[[email protected] ~]# vi /etc/auto.master

# add the following to the end

 /-    /etc/auto.mount

[[email protected] ~]# vi /etc/auto.mount

# create new : [mount point] [option] [location]

 /mntdir -fstype=nfs,rw  dlp.srv.world:/home

[[email protected] ~]# mkdir /mntdir

[[email protected] ~]# systemctl start autofs

[[email protected] ~]# systemctl enable autofs

# move to the mount point to make sure it mounted normally

[[email protected] ~]# cd /mntdir

[[email protected] mntdir]# ll

total 0

drwx------ 2 cent cent 59 Jul  9  2014 cent

[[email protected] mntdir]# cat /proc/mounts | grep mntdir

/etc/auto.mount /mntdir autofs rw,relatime,fd=18,pgrp=2093,timeout=300,minproto=5,maxproto=5,direct 0 0

dlp.srv.world:/home /mntdir nfs4 rw,relatime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,

port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.31,local_lock=none,addr=10.0.0.30 0 0
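The timeout=300 seen above is the autofs default idle timeout. If a different value is wanted, one way (a sketch, not shown in the original article) is to set it per map in /etc/auto.master:

# unmount the NFS share after 60 seconds of inactivity
/-    /etc/auto.mount  --timeout=60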

II. Configure iSCSI Target

1、Configure Storage Server with iSCSI.

Storage shared over the network with iSCSI is called an iSCSI Target; a client that connects to an iSCSI Target is called an iSCSI Initiator.

This example is based on the environment below.

+----------------------+          |          +----------------------+

| [   iSCSI Target   ] |10.0.0.30 | 10.0.0.31| [ iSCSI Initiator  ] |

|     dlp.srv.world    +----------+----------+     www.srv.world    |

|                      |                     |                      |

+----------------------+                     +----------------------+

[1]    Install administration tools first.

[[email protected] ~]# yum -y install targetcli

[2]    Configure iSCSI Target.

For example, create a disk image under the /iscsi_disks directory and export it as a SCSI device.

# create a directory

[[email protected] ~]# mkdir /iscsi_disks

# enter the admin console

[[email protected] ~]# targetcli

targetcli shell version 2.1.fb34

Copyright 2011-2013 by Datera, Inc and others.

For help on commands, type 'help'.

/> cd backstores/fileio

# create a disk-image with the name "disk01" on /iscsi_disks/disk01.img with 10G

/backstores/fileio> create disk01 /iscsi_disks/disk01.img 10G

Created fileio disk01 with size 10737418240

/backstores/fileio> cd /iscsi

# create a target

/iscsi> create iqn.2014-07.world.srv:storage.target00

Created target iqn.2014-07.world.srv:storage.target00.

Created TPG 1.

Global pref auto_add_default_portal=true

Created default portal listening on all IPs (0.0.0.0), port 3260.

/iscsi> cd iqn.2014-07.world.srv:storage.target00/tpg1/luns

# set LUN

/iscsi/iqn.20...t00/tpg1/luns> create /backstores/fileio/disk01

Created LUN 0.

/iscsi/iqn.20...t00/tpg1/luns> cd ../acls

# set ACL (it's the IQN of an initiator you permit to connect)

/iscsi/iqn.20...t00/tpg1/acls> create iqn.2014-07.world.srv:www.srv.world

Created Node ACL for iqn.2014-07.world.srv:www.srv.world

Created mapped LUN 0.

/iscsi/iqn.20...t00/tpg1/acls> cd iqn.2014-07.world.srv:www.srv.world

# set UserID for authentication

/iscsi/iqn.20....srv.world> set auth userid=username

Parameter userid is now 'username'.

/iscsi/iqn.20....srv.world> set auth password=password

Parameter password is now 'password'.

/iscsi/iqn.20....srv.world> exit

Global pref auto_save_on_exit=true

Last 10 configs saved in /etc/target/backup.

Configuration saved to /etc/target/saveconfig.json

# after the configuration above, the target is listening as follows

[[email protected] ~]# ss -napt | grep 3260

LISTEN     0      256          *:3260                     *:*

[[email protected] ~]# systemctl enable target
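targetcli also accepts a path and command as arguments, so the interactive session above could in principle be scripted; the lines below are a sketch using the same names as above, not a tested script:

# non-interactive equivalents of the interactive session above
targetcli /backstores/fileio create disk01 /iscsi_disks/disk01.img 10G
targetcli /iscsi create iqn.2014-07.world.srv:storage.target00
targetcli /iscsi/iqn.2014-07.world.srv:storage.target00/tpg1/luns create /backstores/fileio/disk01
targetcli /iscsi/iqn.2014-07.world.srv:storage.target00/tpg1/acls create iqn.2014-07.world.srv:www.srv.world
targetcli saveconfig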

[3]    If Firewalld is running, allow iSCSI Target service.

[[email protected] ~]# firewall-cmd --add-service=iscsi-target --permanent

success

[[email protected] ~]# firewall-cmd --reload

success

2、Configure iSCSI Target (tgt)

Configure a Storage Server with iSCSI.

This is an example of configuring an iSCSI Target with scsi-target-utils.

[1]    Install scsi-target-utils.

# install from EPEL

[[email protected] ~]# yum --enablerepo=epel -y install scsi-target-utils

Note: on CentOS 7.3 the package above could not be installed successfully; in that case skip this method and the commands that depend on it, and use the targetcli method from the previous section instead.

[2]    Configure iSCSI Target.

For example, create a disk image under the [/iscsi_disks] directory and set it as a shared disk.

# create a disk image

[[email protected] ~]# mkdir /iscsi_disks

[[email protected] ~]# dd if=/dev/zero of=/iscsi_disks/disk01.img count=0 bs=1 seek=10G

[[email protected] ~]# vi /etc/tgt/targets.conf

# add the following to the end
# if you provide more devices, add more <target> blocks in the same way
# naming rule : [ iqn.year-month.(reversed domain name):any name ]
<target iqn.2015-12.world.srv:target00>
    # provided device as an iSCSI target
    backing-store /iscsi_disks/disk01.img
    # iSCSI Initiator's IP address you allow to connect
    initiator-address 10.0.0.31
    # authentication info ( set any "username" and "password" you like )
    incominguser username password
</target>
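Once tgtd is running (step [5] below), the configuration in targets.conf can be re-read and inspected with tgt-admin (part of scsi-target-utils); the commands below are a suggested check, not part of the original steps:

# re-read /etc/tgt/targets.conf without restarting tgtd
tgt-admin --update ALL
# dump the running configuration in targets.conf format
tgt-admin --dump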

[3]    If SELinux is enabled, change SELinux Context.

[[email protected] ~]# chcon -R -t tgtd_var_lib_t /iscsi_disks

[[email protected] ~]# semanage fcontext -a -t tgtd_var_lib_t /iscsi_disks

[4]    If Firewalld is running, allow iSCSI Target service.

[[email protected] ~]# firewall-cmd --add-service=iscsi-target --permanent

success

[[email protected] ~]# firewall-cmd --reload

success

[5]    Start tgtd and verify status.

[[email protected] ~]# systemctl start tgtd

[[email protected] ~]# systemctl enable tgtd

# show status

[[email protected] ~]# tgtadm --mode target --op show

Target 1: iqn.2015-12.world.srv:target00

    System information:

        Driver: iscsi

        State: ready

    I_T nexus information:

    LUN information:

        LUN: 0

            Type: controller

            SCSI ID: IET     00010000

            SCSI SN: beaf10

            Size: 0 MB, Block size: 1

            Online: Yes

            Removable media: No

            Prevent removal: No

            Readonly: No

            SWP: No

            Thin-provisioning: No

            Backing store type: null

            Backing store path: None

            Backing store flags:

        LUN: 1

            Type: disk

            SCSI ID: IET     00010001

            SCSI SN: beaf11

            Size: 10737 MB, Block size: 512

            Online: Yes

            Removable media: No

            Prevent removal: No

            Readonly: No

            SWP: No

            Thin-provisioning: No

            Backing store type: rdwr

            Backing store path: /iscsi_disks/disk01.img

            Backing store flags:

    Account information:

        username

    ACL information:

        10.0.0.31

3、Configure iSCSI Initiator.

This example is based on the environment below.

+----------------------+          |          +----------------------+

| [   iSCSI Target   ] |10.0.0.30 | 10.0.0.31| [ iSCSI Initiator  ] |

|     dlp.srv.world    +----------+----------+     www.srv.world    |

|                      |                     |                      |

+----------------------+                     +----------------------+

[1]    Configure iSCSI Initiator.

[[email protected] ~]# yum -y install iscsi-initiator-utils

[[email protected] ~]# vi /etc/iscsi/initiatorname.iscsi

# change to the same IQN you set on the iSCSI target server

InitiatorName=iqn.2014-07.world.srv:www.srv.world

[[email protected] ~]# vi /etc/iscsi/iscsid.conf

# line 57: uncomment

node.session.auth.authmethod = CHAP

# line 61,62: uncomment and specify the username and password you set on the iSCSI target server

node.session.auth.username = username

node.session.auth.password = password

# restart the iscsid service

systemctl restart iscsid

# discover target

[[email protected] ~]# iscsiadm -m discovery -t sendtargets -p 10.0.0.30

[  635.510656] iscsi: registered transport (tcp)

10.0.0.30:3260,1 iqn.2014-07.world.srv:storage.target00

# confirm status after discovery

[[email protected] ~]# iscsiadm -m node -o show

# BEGIN RECORD 6.2.0.873-21

node.name = iqn.2014-07.world.srv:storage.target00

node.tpgt = 1

node.startup = automatic

node.leading_login = No

...

...

...

node.conn[0].iscsi.IFMarker = No

node.conn[0].iscsi.OFMarker = No

# END RECORD

# login to the target

[[email protected] ~]# iscsiadm -m node --login

Logging in to [iface: default, target: iqn.2014-07.world.srv:storage.target00, portal: 10.0.0.30,3260] (multiple)

[  708.383308] scsi2 : iSCSI Initiator over TCP/IP

[  709.393277] scsi 2:0:0:0: Direct-Access     LIO-ORG  disk01           4.0  PQ: 0 ANSI: 5

[  709.395709] scsi 2:0:0:0: alua: supports implicit and explicit TPGS

[  709.398155] scsi 2:0:0:0: alua: port group 00 rel port 01

[  709.399762] scsi 2:0:0:0: alua: port group 00 state A non-preferred supports TOlUSNA

[  709.401763] scsi 2:0:0:0: alua: Attached

[  709.402910] scsi 2:0:0:0: Attached scsi generic sg0 type 0

Login to [iface: default, target: iqn.2014-07.world.srv:storage.target00, portal: 10.0.0.30,3260] successful.

# confirm the established session

[[email protected] ~]# iscsiadm -m session -o show

tcp: [1] 10.0.0.30:3260,1 iqn.2014-07.world.srv:storage.target00 (non-flash)

# confirm the partitions

[[email protected] ~]# cat /proc/partitions

major minor  #blocks  name

 252        0   52428800 sda

 252        1     512000 sda1

 252        2   51915776 sda2

 253        0    4079616 dm-0

 253        1   47833088 dm-1

   8        0   20971520 sdb

# the new device provided by the target server has been added as "sdb"

[2]    After setting up the iSCSI device, configure the Initiator to use it as follows.

# create label

[[email protected] ~]# parted --script /dev/sdb "mklabel msdos"

# create partition

[[email protected] ~]# parted --script /dev/sdb "mkpart primary 0% 100%"

# format with XFS

[[email protected] ~]# mkfs.xfs -i size=1024 -s size=4096 /dev/sdb1

meta-data=/dev/sdb1        isize=1024   agcount=16, agsize=327616 blks

         =                 sectsz=4096  attr=2, projid32bit=1

         =                 crc=0

data     =                 bsize=4096   blocks=5241856, imaxpct=25

         =                 sunit=0      swidth=0 blks

naming   =version 2        bsize=4096   ascii-ci=0 ftype=0

log      =internal log     bsize=4096   blocks=2560, version=2

         =                 sectsz=4096  sunit=1 blks, lazy-count=1

realtime =none             extsz=4096   blocks=0, rtextents=0

# mount it

[[email protected] ~]# mount /dev/sdb1 /mnt

[ 6894.010661] XFS (sdb1): Mounting Filesystem

[ 6894.031358] XFS (sdb1): Ending clean mount

[[email protected] ~]# df -hT

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        46G 1023M   45G   3% /

devtmpfs                devtmpfs  1.9G     0  1.9G   0% /dev

tmpfs                   tmpfs     1.9G     0  1.9G   0% /dev/shm

tmpfs                   tmpfs     1.9G  8.3M  1.9G   1% /run

tmpfs                   tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup

/dev/sda1               xfs       497M  120M  378M  25% /boot

/dev/sdb1               xfs        20G   33M   20G   1% /mnt
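If the iSCSI-backed filesystem should also be mounted at boot, the fstab entry needs the _netdev option so mounting waits for the network and the iSCSI login; a sketch of such a line, using the device from above, is shown below (a UUID= reference is more robust than /dev/sdb1, since device names can change between boots):

# /etc/fstab entry for the iSCSI-backed filesystem (sketch)
/dev/sdb1  /mnt  xfs  defaults,_netdev  0 0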

III. Ceph : Configure Ceph Cluster

1、Install Distributed File System "Ceph" to Configure Storage Cluster.

In this example, configure a cluster with 1 Admin Node and 3 Storage Nodes as follows.

                                         |

        +--------------------+           |           +-------------------+

        |   [dlp.srv.world]  |10.0.0.30  |   10.0.0.x|   [   Client  ]   |

        |    Ceph-Deploy     +-----------+-----------+                   |

        |                    |           |           |                   |

        +--------------------+           |           +-------------------+

            +----------------------------+----------------------------+

            |                            |                            |

            |10.0.0.51                   |10.0.0.52                   |10.0.0.53

+-----------+-----------+    +-----------+-----------+    +-----------+-----------+

|   [node01.srv.world]  |    |  [node02.srv.world]   |    |   [node03.srv.world]  |

|     Object Storage    +----+     Object Storage    +----+     Object Storage    |

|     Monitor Daemon    |    |                       |    |                       |

|                       |    |                       |    |                       |

+-----------------------+    +-----------------------+    +-----------------------+

[1]    Add a user for Ceph admin on all Nodes. This example adds the "cent" user (see the sketch below).
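The user-creation command itself is not shown in the original; a minimal sketch for CentOS 7, to be run on every node with a password of your choice, would be:

# create the Ceph admin user on each node
useradd cent
passwd cent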

[2]    Grant root privilege to the Ceph admin user just added above with sudo settings.

Also install the required packages.

Furthermore, if Firewalld is running on all Nodes, allow the SSH service.

Set all of above on all Nodes.

[[email protected] ~]# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph

[[email protected] ~]# chmod 440 /etc/sudoers.d/ceph

[[email protected] ~]# yum -y install centos-release-ceph-hammer epel-release yum-plugin-priorities

[[email protected] ~]# sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/CentOS-Ceph-Hammer.repo

[[email protected] ~]# firewall-cmd --add-service=ssh --permanent

[[email protected] ~]# firewall-cmd --reload

[3]    On the Monitor Node (Monitor Daemon), if Firewalld is running, allow the required port.

[[email protected] ~]# firewall-cmd --add-port=6789/tcp --permanent

[[email protected] ~]# firewall-cmd --reload

[4]    On the Storage Nodes (Object Storage), if Firewalld is running, allow the required ports.

[[email protected] ~]# firewall-cmd --add-port=6800-7100/tcp --permanent

[[email protected] ~]# firewall-cmd --reload

[5]    Log in as the Ceph admin user and configure Ceph.

Set up an SSH key-pair on the Ceph Admin Node ("dlp.srv.world" in this example) and distribute it to all storage Nodes.

[[email protected] ~]$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/cent/.ssh/id_rsa):

Created directory '/home/cent/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/cent/.ssh/id_rsa.

Your public key has been saved in /home/cent/.ssh/id_rsa.pub.

The key fingerprint is:

54:c3:12:0e:d3:65:11:49:11:73:35:1b:e3:e8:63:5a [email protected]

The key's randomart image is:

[[email protected] ~]$ vi ~/.ssh/config

# create new ( define all nodes and users )

Host dlp

    Hostname dlp.srv.world

    User cent

Host node01

    Hostname node01.srv.world

    User cent

Host node02

    Hostname node02.srv.world

    User cent

Host node03

    Hostname node03.srv.world

    User cent

[[email protected] ~]$ chmod 600 ~/.ssh/config

# transfer key file

[[email protected] ~]$ ssh-copy-id node01

[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"

and check to make sure that only the key(s) you wanted were added.

[[email protected] ~]$ ssh-copy-id node02

[[email protected] ~]$ ssh-copy-id node03

[6]    Install Ceph on all Nodes from the Admin Node.

[[email protected] ~]$ sudo yum -y install ceph-deploy

[[email protected] ~]$ mkdir ceph

[[email protected] ~]$ cd ceph

[[email protected] ceph]$ ceph-deploy new node01

[[email protected] ceph]$ vi ./ceph.conf

# add to the end

osd pool default size = 2

# Install Ceph on each Node

[[email protected] ceph]$ ceph-deploy install dlp node01 node02 node03

# settings for monitoring and keys

[[email protected] ceph]$ ceph-deploy mon create-initial

[7]    Configure the Ceph Cluster from the Admin Node.

Before that, create a directory /storage01 on node01, /storage02 on node02, and /storage03 on node03 for this example.

# prepare Object Storage Daemon

[[email protected] ceph]$ ceph-deploy osd prepare node01:/storage01 node02:/storage02 node03:/storage03

# activate Object Storage Daemon

[[email protected] ceph]$ ceph-deploy osd activate node01:/storage01 node02:/storage02 node03:/storage03

# transfer config files

[[email protected] ceph]$ ceph-deploy admin dlp node01 node02 node03

[[email protected] ceph]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

# show status (displays as follows if there is no problem)

[[email protected] ceph]$ ceph health

HEALTH_OK
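Beyond ceph health, a couple of additional read-only checks (not part of the original steps) give a fuller picture of the cluster:

# overall cluster status: monitors, OSD count, PG states, capacity
ceph -s
# OSD layout across the three storage nodes
ceph osd tree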

[8]    If you'd like to clear the settings and configure again from scratch, do as follows.

# remove packages

[[email protected] ceph]$ ceph-deploy purge dlp node01 node02 node03

# remove settings

[[email protected] ceph]$ ceph-deploy purgedata dlp node01 node02 node03

[[email protected] ceph]$ ceph-deploy forgetkeys

2、Ceph : Use as Block Device

Configure Clients to use Ceph Storage as follows.

                                         |

        +--------------------+           |           +-------------------+

        |   [dlp.srv.world]  |10.0.0.30  |   10.0.0.x|   [   Client  ]   |

        |    Ceph-Deploy     +-----------+-----------+                   |

        |                    |           |           |                   |

        +--------------------+           |           +-------------------+

            +----------------------------+----------------------------+

            |                            |                            |

            |10.0.0.51                   |10.0.0.52                   |10.0.0.53

+-----------+-----------+    +-----------+-----------+    +-----------+-----------+

|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |

|     Object Storage    +----+     Object Storage    +----+     Object Storage    |

|     Monitor Daemon    |    |                       |    |                       |

|                       |    |                       |    |                       |

+-----------------------+    +-----------------------+    +-----------------------+

For example, create a block device and mount it on a Client.

[1]    First, configure sudo and an SSH key-pair for a user on the Client, then install Ceph from the Ceph Admin Node as follows.

[[email protected] ceph]$ ceph-deploy install client

[[email protected] ceph]$ ceph-deploy admin client

[2]    Create a Block device and mount it on a Client.

[[email protected] ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

# create a disk with 10G

[[email protected] ~]$ rbd create disk01 --size 10240

# show list

[[email protected] ~]$ rbd ls -l

NAME     SIZE PARENT FMT PROT LOCK

disk01 10240M          2

# map the image to device

[[email protected] ~]$ sudo rbd map disk01

/dev/rbd0

# show mapping

[[email protected] ~]$ rbd showmapped

id pool image  snap device

0  rbd  disk01 -    /dev/rbd0

# format with XFS

[[email protected] ~]$ sudo mkfs.xfs /dev/rbd0

# mount device

[[email protected] ~]$ sudo mount /dev/rbd0 /mnt

[[email protected] ~]$ df -hT

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        27G  1.3G   26G   5% /

devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev

tmpfs                   tmpfs     2.0G     0  2.0G   0% /dev/shm

tmpfs                   tmpfs     2.0G  8.4M  2.0G   1% /run

tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1               xfs       497M  151M  347M  31% /boot

/dev/rbd0               xfs        10G   33M   10G   1% /mnt
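For completeness, tearing the block device back down is roughly the reverse of the steps above; this is a sketch, not part of the original article:

# unmount, unmap and delete the RBD image created above
sudo umount /mnt
sudo rbd unmap /dev/rbd0
rbd rm disk01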

3、Ceph : Use as File System    

Configure Clients to use Ceph Storage as follows.

                                         |

        +--------------------+           |           +-------------------+

        |   [dlp.srv.world]  |10.0.0.30  |   10.0.0.x|   [   Client  ]   |

        |    Ceph-Deploy     +-----------+-----------+                   |

        |                    |           |           |                   |

        +--------------------+           |           +-------------------+

            +----------------------------+----------------------------+

            |                            |                            |

            |10.0.0.51                   |10.0.0.52                   |10.0.0.53

+-----------+-----------+    +-----------+-----------+    +-----------+-----------+

|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |

|     Object Storage    +----+     Object Storage    +----+     Object Storage    |

|     Monitor Daemon    |    |                       |    |                       |

|                       |    |                       |    |                       |

+-----------------------+    +-----------------------+    +-----------------------+

For example, mount CephFS as a filesystem on a Client.

[1]    Create an MDS (MetaData Server) on the Node you'd like to use for it. This example uses node01.

[[email protected] ceph]$ ceph-deploy mds create node01

[2]    Create at least 2 RADOS pools on the MDS Node and activate the MetaData Server.

For the pg_num value specified at the end of the create command, refer to the official documentation and choose an appropriate value.

⇒ http://docs.ceph.com/docs/master/rados/operations/placement-groups/

[[email protected] ~]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

# create pools

[[email protected] ~]$ ceph osd pool create cephfs_data 128

pool 'cephfs_data' created

[[email protected] ~]$ ceph osd pool create cephfs_metadata 128

pool 'cephfs_metadata' created

# enable pools

[[email protected] ~]$ ceph fs new cephfs cephfs_metadata cephfs_data

new fs with metadata pool 2 and data pool 1

# show list

[[email protected] ~]$ ceph fs ls

name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

[[email protected] ~]$ ceph mds stat

e5: 1/1/1 up {0=node01=up:active}

[3]    Mount CephFS on a Client.

[[email protected] ~]# yum -y install ceph-fuse

# get admin key

[[email protected] ~]# ssh cent@node01.srv.world "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key

cent@node01.srv.world's password:

[[email protected] ~]# chmod 600 admin.key

[[email protected] ~]# mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key

[[email protected] ~]# df -hT

Filesystem              Type      Size  Used Avail Use% Mounted on

/dev/mapper/centos-root xfs        27G  1.3G   26G   5% /

devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev

tmpfs                   tmpfs     2.0G     0  2.0G   0% /dev/shm

tmpfs                   tmpfs     2.0G  8.3M  2.0G   1% /run

tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup

/dev/vda1               xfs       497M  151M  347M  31% /boot

10.0.0.51:6789:/        ceph       80G   19G   61G  24% /mnt
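To make the CephFS mount persistent across reboots, an /etc/fstab line along the following lines can be used; this is a sketch that assumes the admin.key created above is stored as /root/admin.key:

# /etc/fstab entry for CephFS (sketch)
node01.srv.world:6789:/  /mnt  ceph  name=admin,secretfile=/root/admin.key,_netdev,noatime  0 0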

IV. GlusterFS Installation

1、Install GlusterFS to Configure Storage Cluster.

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all Nodes for the GlusterFS configuration.

[1]    Install GlusterFS Server on all Nodes in Cluster.

[[email protected] ~]# curl http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo

# enable EPEL, too

[[email protected] ~]# yum --enablerepo=epel -y install glusterfs-server

[[email protected] ~]# systemctl start glusterd

[[email protected] ~]# systemctl enable glusterd

[2]    If Firewalld is running, allow GlusterFS service on all nodes.

[[email protected] ~]# firewall-cmd --add-service=glusterfs --permanent

success

[[email protected] ~]# firewall-cmd --reload

success

If you mount GlusterFS volumes from clients with the GlusterFS Native Client, no additional configuration is needed.

[3]    GlusterFS also supports NFS (v3), so if you mount GlusterFS volumes from clients over NFS, additionally configure the following.

[[email protected] ~]# yum -y install rpcbind

[[email protected] ~]# systemctl start rpcbind

[[email protected] ~]# systemctl enable rpcbind

[[email protected] ~]# systemctl restart glusterd

[4]    The installation and basic settings of GlusterFS are now done. Refer to the next section for clustering settings.

2、GlusterFS : Distributed Configuration

Configure Storage Clustering.

For example, create a distributed volume with 2 servers.

This example uses 2 servers, but it is also possible to use 3 or more servers.

                                  |

+----------------------+          |          +----------------------+

| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |

|   node01.srv.world   +----------+----------+    node02.srv.world  |

|                      |                     |                      |

+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all Nodes for the GlusterFS configuration.

[1]    Install GlusterFS Server on all Nodes as described in the installation section above.

[2]    Create a Directory for GlusterFS Volume on all Nodes.

[[email protected] ~]# mkdir /glusterfs/distributed

[3]    Configure clustering as follows on one of the nodes. (any node is fine)

# probe the node

[[email protected] ~]# gluster peer probe node02

peer probe: success.

# show status

[[email protected] ~]# gluster peer status

Number of Peers: 1

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

# create volume

[[email protected] ~]# gluster volume create vol_distributed transport tcp \

node01:/glusterfs/distributed \

node02:/glusterfs/distributed

volume create: vol_distributed: success: please start the volume to access data

# start volume

[[email protected] ~]# gluster volume start vol_distributed

volume start: vol_distributed: success

# show volume info

[[email protected] ~]# gluster volume info

Volume Name: vol_distributed

Type: Distribute

Volume ID: 6677caa9-9aab-4c1a-83e5-2921ee78150d

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/distributed

Brick2: node02:/glusterfs/distributed

Options Reconfigured:

performance.readdir-ahead: on

[4]    To mount the GlusterFS volume on clients, see the sketch below.
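The original links to a separate client page; as a rough sketch, mounting the volume from a client (hostname and mount point are examples) looks like this, either with the native client or over NFS v3 as prepared in step [3] of the installation section above:

# on a client: install the native client and mount the distributed volume
yum -y install glusterfs glusterfs-fuse
mkdir -p /mnt/gluster
mount -t glusterfs node01:/vol_distributed /mnt/gluster
# alternative: mount over NFS v3 instead of the native client
# mount -t nfs -o vers=3,mountproto=tcp node01:/vol_distributed /mnt/gluster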

3、GlusterFS : Replication Configuration

For example, create a Replication volume with 2 servers.

This example uses 2 servers, but it is also possible to use 3 or more servers.

                                  |

+----------------------+          |          +----------------------+

| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |

|   node01.srv.world   +----------+----------+   node02.srv.world   |

|                      |                     |                      |

+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all Nodes for the GlusterFS configuration.

[1]    Install GlusterFS Server on all Nodes as described in the installation section above.

[2]    Create a Directory for GlusterFS Volume on all Nodes.

[[email protected] ~]# mkdir /glusterfs/replica

[3]    Configure clustering as follows on one of the nodes. (any node is fine)

# probe the node

[[email protected] ~]# gluster peer probe node02

peer probe: success.

# show status

[[email protected] ~]# gluster peer status

Number of Peers: 1

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

# create volume

[[email protected] ~]# gluster volume create vol_replica replica 2 transport tcp \

node01:/glusterfs/replica \

node02:/glusterfs/replica

volume create: vol_replica: success: please start the volume to access data

# start volume

[[email protected] ~]# gluster volume start vol_replica

volume start: vol_replica: success

# show volume info

[[email protected] ~]# gluster volume info

Volume Name: vol_replica

Type: Replicate

Volume ID: 0d5d5ef7-bdfa-416c-8046-205c4d9766e6

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/replica

Brick2: node02:/glusterfs/replica

Options Reconfigured:

performance.readdir-ahead: on

[4]    To mount the GlusterFS volume on clients, mount it the same way as the distributed volume in the previous section.
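For a replicated volume it is also worth checking replication health after mounting and writing data; the commands below are a suggested check, not part of the original steps:

# show entries that still need to be healed between the two bricks
gluster volume heal vol_replica info
# brick and process status of the volume
gluster volume status vol_replica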

4、GlusterFS : Striping Configuration

Configure Storage Clustering.

For example, create a Striping volume with 2 servers.

This example uses 2 servers, but it is also possible to use 3 or more servers.

                                  |

+----------------------+          |          +----------------------+

| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |

|   node01.srv.world   +----------+----------+   node02.srv.world   |

|                      |                     |                      |

+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all Nodes for the GlusterFS configuration.

[1]    Install GlusterFS Server on all Nodes as described in the installation section above.

[2]    Create a Directory for GlusterFS Volume on all Nodes.

[[email protected] ~]# mkdir /glusterfs/striped

[3]    Configure clustering as follows on one of the nodes. (any node is fine)

# probe the node

[[email protected] ~]# gluster peer probe node02

peer probe: success.

# show status

[[email protected] ~]# gluster peer status

Number of Peers: 1

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

# create volume

[[email protected] ~]# gluster volume create vol_striped stripe 2 transport tcp \

node01:/glusterfs/striped \

node02:/glusterfs/striped

volume create: vol_striped: success: please start the volume to access data

# start volume

[[email protected] ~]# gluster volume start vol_striped

volume start: vol_striped: success

# show volume info

[[email protected] ~]# gluster volume info

Volume Name: vol_striped

Type: Stripe

Volume ID: b6f6b090-3856-418c-aed3-bc430db91dc6

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/striped

Brick2: node02:/glusterfs/striped

Options Reconfigured:

performance.readdir-ahead: on

[4]    To mount the GlusterFS volume on clients, mount it the same way as the distributed volume above.

5、GlusterFS : Distributed + Replication

Configure Storage Clustering.

For example, create a Distributed + Replication volume with 4 servers.

                                  |

+----------------------+          |          +----------------------+

| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |

|   node01.srv.world   +----------+----------+   node02.srv.world   |

|                      |          |          |                      |

+----------------------+          |          +----------------------+

                                  |

+----------------------+          |          +----------------------+

| [GlusterFS Server#3] |10.0.0.53 | 10.0.0.54| [GlusterFS Server#4] |

|   node03.srv.world   +----------+----------+   node04.srv.world   |

|                      |                     |                      |

+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all Nodes for the GlusterFS configuration.

[1]    Install GlusterFS Server on all Nodes as described in the installation section above.

[2]    Create a Directory for GlusterFS Volume on all Nodes.

[[email protected] ~]# mkdir /glusterfs/dist-replica

[3]    Configure clustering as follows on one of the nodes. (any node is fine)

# probe the node

[[email protected] ~]# gluster peer probe node02

peer probe: success.

[[email protected] ~]# gluster peer probe node03

peer probe: success.

[[email protected] ~]# gluster peer probe node04

peer probe: success.

# show status

[[email protected] ~]# gluster peer status

Number of Peers: 3

Hostname: node02

Uuid: 2ca22769-28a1-4204-9957-886579db2231

State: Peer in Cluster (Connected)

Hostname: node03

Uuid: 79cff591-1e98-4617-953c-0d3e334cf96a

State: Peer in Cluster (Connected)

Hostname: node04

Uuid: 779ab1b3-fda9-46da-af95-ba56477bf638

State: Peer in Cluster (Connected)

# create volume

[[email protected] ~]# gluster volume create vol_dist-replica replica 2 transport tcp \

node01:/glusterfs/dist-replica \

node02:/glusterfs/dist-replica \

node03:/glusterfs/dist-replica \

node04:/glusterfs/dist-replica

volume create: vol_dist-replica: success: please start the volume to access data

# start volume

[[email protected] ~]# gluster volume start vol_dist-replica

volume start: vol_dist-replica: success

# show volume info

[[email protected] ~]# gluster volume info

Volume Name: vol_dist-replica

Type: Distributed-Replicate

Volume ID: 784d2953-6599-4102-afc2-9069932894cc

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: node01:/glusterfs/dist-replica

Brick2: node02:/glusterfs/dist-replica

Brick3: node03:/glusterfs/dist-replica

Brick4: node04:/glusterfs/dist-replica

Options Reconfigured:

performance.readdir-ahead: on
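If more brick pairs are added to the distributed-replicated volume later, existing data is only spread onto them after a rebalance; the workflow below is a sketch, and node05/node06 are hypothetical additional servers, not part of this example:

# add another replica pair and rebalance the layout (hypothetical nodes)
gluster volume add-brick vol_dist-replica node05:/glusterfs/dist-replica node06:/glusterfs/dist-replica
gluster volume rebalance vol_dist-replica start
gluster volume rebalance vol_dist-replica status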

6、GlusterFS : Striping + Replication

Configure Storage Clustering.

For example, create a Striping + Replication volume with 4 servers.

                                  |

+----------------------+          |          +----------------------+

| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |

|   node01.srv.world   +----------+----------+   node02.srv.world   |

|                      |          |          |                      |

+----------------------+          |          +----------------------+

                                  |

+----------------------+          |          +----------------------+

| [GlusterFS Server#3] |10.0.0.53 | 10.0.0.54| [GlusterFS Server#4] |

|   node03.srv.world   +----------+----------+   node04.srv.world   |

|                      |                     |                      |

+----------------------+                     +----------------------+

It is recommended to use partitions for GlusterFS volumes that are separate from the / partition.

In this example, sdb1 is mounted on the /glusterfs directory on all Nodes for the GlusterFS configuration.

[1]    Install GlusterFS Server on all Nodes as described in the installation section above.

[2]    Create a Directory for GlusterFS Volume on all Nodes.

[[email protected] ~]# mkdir /glusterfs/strip-replica

[3]    Configure clustering as follows on one of the nodes. (any node is fine)

# probe the node

[[email protected] ~]# gluster peer probe node02

peer probe: success.
