Setting Up Software RAID 10 on CentOS
Software RAID does not require identical disks, but drives of the same vendor, model, and size are strongly recommended. Why RAID 10 rather than RAID 0, RAID 1, or RAID 5? RAID 0 is too risky (a single disk failure loses everything), RAID 1 gives up some performance, and RAID 5 performs poorly under write-heavy workloads. RAID 10 is arguably the best choice for disk arrays today, and is particularly well suited as the local storage of a KVM/Xen/VMware virtualization host (when SAN or distributed storage is not an option).
This server has six identical disks. Create a single partition on each disk and set the partition type to Linux software RAID:
# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-91201, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-91201, default 91201):
Using default value 91201
Command (m for help): p
Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c259

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       91201   732572001   83  Linux
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Following the /dev/sda example above, partition the remaining five disks (sdc, sdd, sde, sdf, sdg) and change their partition types the same way:
# fdisk /dev/sdc
…
# fdisk /dev/sdd
…
# fdisk /dev/sde
…
# fdisk /dev/sdf
…
# fdisk /dev/sdg
…

With partitioning done, the RAID can be created. Build a raid10 on the six equal-sized partitions:
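As an aside, the five repeated fdisk sessions can also be scripted. A minimal sketch, assuming /dev/sda has already been partitioned as shown and using sfdisk to copy its partition table to the other disks; the DRY_RUN switch is added here for safety and is not part of the original walkthrough:

```shell
# Copy /dev/sda's partition table (including the fd type) to the other disks.
# sfdisk -d dumps the table as text; piping it into sfdisk rewrites the target.
DRY_RUN=1   # set to 0 to actually repartition the disks (destructive!)
for disk in sdc sdd sde sdf sdg; do
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: sfdisk -d /dev/sda | sfdisk /dev/$disk"
    else
        sfdisk -d /dev/sda | sfdisk "/dev/$disk"
    fi
done
```

Either way, the result is six identically partitioned, fd-typed disks ready for mdadm.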
# mdadm --create /dev/md0 -v --raid-devices=6 --level=raid10 /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 732440576K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Watch the array initialize (the build); depending on disk size and speed, the whole process takes a few hours:
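Before watching the build, those numbers can be sanity-checked: a near-2 RAID 10 keeps two copies of every block, so the usable capacity is the per-device size times the number of devices, divided by two. A quick sketch, with the values taken from the mdadm output above:

```shell
# Usable RAID10 (near=2) capacity = per-device size * devices / 2.
# 732440576K per device comes from "mdadm: size set to 732440576K" above.
per_dev_kb=732440576
devices=6
usable_kb=$((per_dev_kb * devices / 2))
echo "$usable_kb"   # 2197321728, the block count mdstat reports during the build
```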
# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat                              Tue Feb 11 12:51:25 2014
Personalities : [raid10]
md0 : active raid10 sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sda1[0]
      2197321728 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
      [>…]  resync =  0.2% (5826816/2197321728) finish=278.9min speed=130948K/sec
unused devices: <none>

Once the array has finished initializing, create a partition and a filesystem on the md0 device, then mount it:
# fdisk /dev/md0
# mkfs.ext4 /dev/md0p1
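Optionally, mkfs.ext4 can be told about the RAID geometry so the allocator aligns to the stripes. This is a tuning sketch, not part of the original setup: with the default 512K chunk, a 4K filesystem block, and three data-bearing disks (six drives holding two copies each in the near=2 layout), the extended options work out as:

```shell
# stride = chunk size / filesystem block size (in filesystem blocks)
# stripe-width = stride * number of data-bearing disks
chunk_kb=512
block_kb=4
data_disks=3   # 6 drives / 2 copies in the near=2 layout
stride=$((chunk_kb / block_kb))
stripe_width=$((stride * data_disks))
echo "mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0p1"
```

Recent e2fsprogs versions usually detect these values from the md device automatically, so the plain mkfs.ext4 above is generally fine as well.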
# mkdir /raid10
# mount /dev/md0p1 /raid10

Edit /etc/fstab so the filesystem is mounted automatically at boot:
# vi /etc/fstab
…
/dev/md0p1 /raid10 ext4 noatime,rw 0 0

Using the device name /dev/md0p1 in /etc/fstab above is not a good idea: because of udev, the device name often changes after a reboot, so it is better to mount by UUID. Find the partition's UUID with the blkid command:
# blkid
…
/dev/md0p1: UUID="093e0605-1fa2-4279-99b2-746c70b78f1b" TYPE="ext4"

Then update /etc/fstab accordingly, mounting by UUID:

# vi /etc/fstab
…
#/dev/md0p1 /raid10 ext4 noatime,rw 0 0
UUID=093e0605-1fa2-4279-99b2-746c70b78f1b /raid10 ext4 noatime,rw 0 0

Check on the RAID:
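As an aside, the UUID lookup can be scripted instead of copied by hand. A minimal sketch; the fallback value is the UUID from the blkid output above, so the script also runs as a dry run on a machine without the array, and the printed line should be reviewed before appending it to /etc/fstab:

```shell
# blkid -s UUID -o value prints just the UUID of the given device.
uuid=$(blkid -s UUID -o value /dev/md0p1 2>/dev/null)
uuid=${uuid:-093e0605-1fa2-4279-99b2-746c70b78f1b}   # fallback: UUID from above
fstab_line="UUID=$uuid /raid10 ext4 noatime,rw 0 0"
echo "$fstab_line"
# once verified: echo "$fstab_line" >> /etc/fstab
```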
# mdadm --query --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Feb 11 12:50:38 2014
     Raid Level : raid10
     Array Size : 2197321728 (2095.53 GiB 2250.06 GB)
  Used Dev Size : 732440576 (698.51 GiB 750.02 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Feb 11 18:48:10 2014
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : local:0  (local to host local)
           UUID : e3044b6c:5ab972ea:8e742b70:3f766a11
         Events : 70

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       4       8       81        4      active sync   /dev/sdf1
       5       8       97        5      active sync   /dev/sdg1
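One follow-up worth doing, not shown above: record the array in /etc/mdadm.conf so it is reassembled under the same name at boot (otherwise the kernel may bring it up as /dev/md127). A sketch that echoes the line first so it can be reviewed; the fallback is the ARRAY line mdadm would print for the md0 built above, with the name and UUID taken from its output:

```shell
# mdadm --detail --scan prints ARRAY lines suitable for /etc/mdadm.conf.
line=$(mdadm --detail --scan 2>/dev/null)
# Fallback for a dry run on a machine without the array (values from above).
line=${line:-"ARRAY /dev/md0 metadata=1.2 name=local:0 UUID=e3044b6c:5ab972ea:8e742b70:3f766a11"}
echo "$line"
# once verified: echo "$line" >> /etc/mdadm.conf
```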