
Building Software RAID: RAID1 on a Linux System

Tags: disk array, RAID, Linux, operations, storage, raid

Experiment 2: Building software RAID1 on a Linux system

Requirements:
1) Create a RAID1 array;
2) Add a hot-spare disk;
3) Simulate a disk failure and verify the spare takes over automatically;
4) Remove the failed disk from the RAID1 array.

  • Setup [create the array / save the configuration / inspect the array]
    1. Create the array:

mdadm -C -v /dev/md1 -l 1 -n 2 -x 1 /dev/sd[d,e,f]

[root@localhost ~]# mdadm -C -v /dev/md1 -l 1 -n 2 -x 1 /dev/sd[d,e,f]
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 20954112K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

2. Save the array configuration:

mdadm -Dsv > /etc/mdadm.conf
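For reference, the redirected output is one ARRAY line per array. On this system it should look roughly like the sketch below (built from the UUID and name that mdadm -D reports; check the exact contents with cat /etc/mdadm.conf):

```
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 spares=1 name=192.168.74.128:1 UUID=af8a3ec5:715b9882:5ae40383:db213061
   devices=/dev/sdd,/dev/sde,/dev/sdf
```

With this file in place, mdadm can reassemble the array by name after a reboot.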

3. Inspect the array:

mdadm -Dsv   or   mdadm -D /dev/md1

The resync progress is visible in the output:

[root@localhost ~]# mdadm -Dsv > /etc/mdadm.conf
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 03:07:16 2020
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 03:08:49 2020
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

     Resync Status : 35% complete

              Name : 192.168.74.128:1  (local to host 192.168.74.128)
              UUID : af8a3ec5:715b9882:5ae40383:db213061
            Events : 5

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde

       2       8       80        -      spare   /dev/sdf

4. Check the sync progress:

cat /proc/mdstat

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] 
md1 : active raid1 sdf[2](S) sde[1] sdd[0]
      20954112 blocks super 1.2 [2/2] [UU]
      [=========>...........]  resync = 49.2% (10327424/20954112) finish=2.1min speed=82512K/sec
      
md0 : active raid0 sdc[1] sdb[0]
      41908224 blocks super 1.2 512k chunks
      
unused devices: <none>
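When scripting around a rebuild, the resync percentage can be pulled out of /proc/mdstat with standard text tools. A minimal sketch, fed the sample progress line from the output above instead of a live /proc/mdstat:

```shell
# Extract the resync percentage from an mdstat progress line.
# On a live system, replace the sample with: grep resync /proc/mdstat
sample='      [=========>...........]  resync = 49.2% (10327424/20954112) finish=2.1min speed=82512K/sec'
pct=$(printf '%s\n' "$sample" | grep -o 'resync = [0-9.]*%' | tr -dc '0-9.')
echo "resync at ${pct}%"
```

The same pattern works for recovery lines by matching `recovery =` instead.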
  • Use the array [format / create a mount point / mount / write data / check filesystem size]
    5. Format the array:

mkfs.xfs /dev/md1

[root@localhost ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309632 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238528, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Create a mount point and mount the array:

mkdir /raid1
mount /dev/md1 /raid1

[root@localhost ~]# mkdir /raid1
[root@localhost ~]# mount /dev/md1  /dev/raid1
mount: mount point /dev/raid1 does not exist
[root@localhost ~]# mount /dev/md1  /raid1
[root@localhost ~]#

7. Write some test data:

cp /etc/passwd /raid1/

cp -r /boot/grub /raid1/

8. Check the filesystem size and confirm the data is present:

df -h

[root@localhost ~]# cp /boot/grub/ /raid1/
cp: omitting directory ‘/boot/grub/’
[root@localhost ~]# cp /boot/grub /raid1/
cp: omitting directory ‘/boot/grub’
[root@localhost ~]# cp -r /boot/grub /raid1/
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 898M     0  898M   0% /dev
tmpfs                    910M     0  910M   0% /dev/shm
tmpfs                    910M  9.6M  901M   2% /run
tmpfs                    910M     0  910M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  1.3G   16G   8% /
/dev/md0                  40G   33M   40G   1% /raid0
/dev/sda1               1014M  151M  864M  15% /boot
tmpfs                    182M     0  182M   0% /run/user/0
/dev/md1                  20G   33M   20G   1% /raid1
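The mount above does not survive a reboot. To make it persistent you would add an entry to /etc/fstab; a minimal sketch that builds the line (on the real system append it as root, and prefer the filesystem UUID reported by blkid /dev/md1 over the device name):

```shell
# Build an fstab entry for the new array; append it (as root) with:
#   echo "$entry" >> /etc/fstab
entry="/dev/md1  /raid1  xfs  defaults  0 0"
echo "$entry"
```

Run `mount -a` afterwards to confirm the entry parses cleanly before rebooting.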
  • Simulate a failure [fail a disk / check automatic spare takeover / save the configuration]
    9. Mark disk sde as failed:

mdadm /dev/md1 -f /dev/sde

[root@localhost ~]# mdadm /dev/md1 -f /dev/sde
mdadm: set /dev/sde faulty in /dev/md1
[root@localhost ~]#

10. Check that the spare disk has automatically taken over and is resyncing:

mdadm -D /dev/md1

[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 03:07:16 2020
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 03:16:45 2020
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 11% complete

              Name : 192.168.74.128:1  (local to host 192.168.74.128)
              UUID : af8a3ec5:715b9882:5ae40383:db213061
            Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       80        1      spare rebuilding   /dev/sdf

       1       8       64        -      faulty   /dev/sde
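A degraded state like the one above is easy to catch in a monitoring script by checking the State line of mdadm -D. A sketch, run here against the sample line rather than a live array:

```shell
# Flag a degraded array from its mdadm -D 'State' line.
# On a live system: state=$(mdadm -D /dev/md1 | grep 'State :')
state='             State : clean, degraded, recovering'
case "$state" in
  *degraded*) status="degraded" ;;
  *)          status="healthy"  ;;
esac
echo "md1 is $status"
```

For continuous alerting, mdadm's own monitor mode (mdadm --monitor) serves the same purpose as a daemon.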

11. After any change, save the updated array configuration:

mdadm -Dsv > /etc/mdadm.conf

12. Verify that no data was lost:

ls /raid1/

[root@localhost ~]# ls /raid1/
grub  passwd
[root@localhost ~]#
  • Remove/add devices [remove a disk / add a disk]
    13. Remove the failed device sde and verify:

mdadm -r /dev/md1 /dev/sde

mdadm -D /dev/md1

[root@localhost ~]# mdadm -r /dev/md1 /dev/sde
mdadm: hot removed /dev/sde from /dev/md1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 03:07:16 2020
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 03:17:52 2020
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 36% complete

              Name : 192.168.74.128:1  (local to host 192.168.74.128)
              UUID : af8a3ec5:715b9882:5ae40383:db213061
            Events : 31

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       80        1      spare rebuilding   /dev/sdf

14. Add device sde back and verify:

mdadm -a /dev/md1 /dev/sde
mdadm -D /dev/md1

[root@localhost ~]# mdadm -a /dev/md1 /dev/sde
mdadm: added /dev/sde
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 03:07:16 2020
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 03:18:48 2020
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 2

Consistency Policy : resync

    Rebuild Status : 55% complete

              Name : 192.168.74.128:1  (local to host 192.168.74.128)
              UUID : af8a3ec5:715b9882:5ae40383:db213061
            Events : 36

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       2       8       80        1      spare rebuilding   /dev/sdf

       3       8       64        -      spare   /dev/sde
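The device table above can also be summarized programmatically, for example by counting spares to confirm the array is back to two (the rebuilding disk plus the re-added sde). A sketch using a pasted copy of the table:

```shell
# Count spare devices in an mdadm -D device table.
# On a live system, pipe in the real output of: mdadm -D /dev/md1
spares=$(grep -c 'spare' <<'EOF'
       0       8       48        0      active sync   /dev/sdd
       2       8       80        1      spare rebuilding   /dev/sdf
       3       8       64        -      spare   /dev/sde
EOF
)
echo "spares: $spares"
```

The count matches the "Spare Devices : 2" field in the detail output above.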

Experiment result:
Disks sdd and sde form the RAID1 array md1, with sdf as a hot spare that takes over automatically on failure.

Analysis: Comparing Experiment 1 with Experiment 2, RAID0 has 100% disk utilization (the array size is the sum of all member disks), while RAID1 utilizes only 50%, which is why it is called a mirror.

RAID1 is commonly used for databases and system disks to keep data safe.