Testing EBS Performance with the fio Tool
fio is a professional disk benchmarking tool. This article demonstrates how to measure EBS performance with fio, using a DiDi Cloud SSD cloud disk as the example.
Disk Performance Metrics
Metric | Description |
---|---|
IOPS | number of random read/write I/O operations completed per second |
Bandwidth | amount of sequential read/write data transferred per second |
Latency | average time to complete a single I/O |
Read/Write Types
Type | Description |
---|---|
Sequential read/write | each I/O begins where the previous one ended, i.e. the read/write offset increases monotonically |
Random read/write | each I/O targets a random offset on the disk, unrelated to the position of the previous I/O |
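These metrics are linked by block size: bandwidth ≈ IOPS × block size. For example, 8,000 random 4 KiB writes per second move only about 31 MiB/s, which is why random tests with small blocks are judged by IOPS while sequential tests with large blocks are judged by bandwidth.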
Environment Setup
Create a DiDi Cloud EBS volume
DC2 configuration:
Parameter | Type | Spec | Notes |
---|---|---|---|
OS | CentOS 7 | - | - |
CPU | - | 4 cores | - |
Memory | - | 8 GB | - |
System disk | local SSD | 80 GB | vda |
Data disk | SSD cloud disk (EBS) | 200 GB | vdb |
Log in to the DC2 instance and use the lsblk command to inspect the attached block devices:
```
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    253:0    0   80G  0 disk
└─vda1 253:1    0   80G  0 part /
vdb    253:16   0  200G  0 disk
```
Here, vdb is the EBS volume we are about to test.
Install fio
```
$ sudo yum install fio -y
```
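(fio ships in the standard CentOS repositories; on Debian or Ubuntu the equivalent is `sudo apt-get install fio`.)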
Check the fio version:
```
$ fio -v
fio-3.1
```
Running the Tests
***Note: if the EBS volume already holds data, test through a file rather than the raw device to avoid corrupting that data: mount the EBS volume on a directory, then have fio read and write a test file under the mount point (here /mnt/test).***
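If the volume is brand new and empty, a minimal sketch of preparing such a file-based target might look like the following (assuming you want ext4 on /dev/vdb and use /mnt as the mount point; note that mkfs erases anything already on the device):

```
$ sudo mkfs.ext4 /dev/vdb   # WARNING: destroys any existing data on /dev/vdb
$ sudo mount /dev/vdb /mnt  # attach the EBS volume at /mnt
$ df -h /mnt                # confirm the mount before pointing fio at /mnt/test
```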
Test sequential write bandwidth
```
$ sudo fio -direct=1 -iodepth=128 -ioengine=libaio -rw=write -bs=1M -size=10G -numjobs=1 -runtime=200 -filename=/mnt/test -name=perf
```
Output:
```
perf: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [W(1)][98.8%][r=0KiB/s,w=123MiB/s][r=0,w=123 IOPS][eta 00m:01s]
perf: (groupid=0, jobs=1): err= 0: pid=29102: Thu Nov 22 14:42:51 2018
write: IOPS=121, BW=121MiB/s (127MB/s)(10.0GiB/84523msec)
slat (usec): min=113, max=95366, avg=8208.02, stdev=11154.58
clat (msec): min=345, max=1387, avg=1046.02, stdev=45.09
lat (msec): min=345, max=1387, avg=1054.23, stdev=45.45
clat percentiles (msec):
| 1.00th=[ 894], 5.00th=[ 1036], 10.00th=[ 1045], 20.00th=[ 1045],
| 30.00th=[ 1045], 40.00th=[ 1045], 50.00th=[ 1045], 60.00th=[ 1053],
| 70.00th=[ 1053], 80.00th=[ 1062], 90.00th=[ 1070], 95.00th=[ 1070],
| 99.00th=[ 1070], 99.50th=[ 1083], 99.90th=[ 1284], 99.95th=[ 1334],
| 99.99th=[ 1368]
bw ( KiB/s): min=49152, max=129282, per=99.40%, avg=123311.84, stdev=6481.48, samples=168
iops : min= 48, max= 126, avg=120.39, stdev= 6.33, samples=168
lat (msec) : 500=0.21%, 750=0.47%, 1000=0.71%, 2000=98.61%
cpu : usr=3.83%, sys=2.72%, ctx=3379, majf=0, minf=34
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=0,10240,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=10.0GiB (10.7GB), run=84523-84523msec
Disk stats (read/write):
vdb: ios=40/30713, merge=0/0, ticks=18/10532831, in_queue=10535157, util=99.55%
```
Here BW=121MiB/s is the measured sequential write bandwidth: 121 MiB/s (fio also prints the decimal equivalent, 127 MB/s, in parentheses).
Test random write IOPS
```
$ sudo fio -direct=1 -iodepth=128 -ioengine=libaio -rw=randwrite -bs=4K -size=5G -numjobs=1 -runtime=200 -filename=/mnt/test -name=perf
```
Output:
```
perf: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=32.2MiB/s][r=0,w=8233 IOPS][eta 00m:00s]
perf: (groupid=0, jobs=1): err= 0: pid=29217: Thu Nov 22 14:48:14 2018
write: IOPS=8206, BW=32.1MiB/s (33.6MB/s)(5120MiB/159726msec)
slat (usec): min=2, max=6104, avg=17.47, stdev=20.97
clat (usec): min=1455, max=29684, avg=15574.64, stdev=761.22
lat (usec): min=1460, max=29716, avg=15593.16, stdev=760.98
clat percentiles (usec):
| 1.00th=[14222], 5.00th=[14615], 10.00th=[14877], 20.00th=[15139],
| 30.00th=[15270], 40.00th=[15533], 50.00th=[15533], 60.00th=[15664],
| 70.00th=[15795], 80.00th=[16057], 90.00th=[16319], 95.00th=[16450],
| 99.00th=[16909], 99.50th=[17957], 99.90th=[20055], 99.95th=[21103],
| 99.99th=[25035]
bw ( KiB/s): min=32597, max=39344, per=100.00%, avg=32830.01, stdev=370.81, samples=319
iops : min= 8149, max= 9836, avg=8207.47, stdev=92.70, samples=319
lat (msec) : 2=0.01%, 4=0.02%, 10=0.10%, 20=99.77%, 50=0.10%
cpu : usr=6.53%, sys=18.82%, ctx=770227, majf=0, minf=32
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=0,1310720,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
WRITE: bw=32.1MiB/s (33.6MB/s), 32.1MiB/s-32.1MiB/s (33.6MB/s-33.6MB/s), io=5120MiB (5369MB), run=159726-159726msec
Disk stats (read/write):
vdb: ios=42/1310020, merge=0/0, ticks=11/20372129, in_queue=20373325, util=100.00%
```
Result: IOPS = 8206.
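As a sanity check, the bandwidth and IOPS figures agree: 8206 IOPS × 4 KiB ≈ 32.1 MiB/s, matching the BW value fio reports.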
Test random write latency
```
$ sudo fio -direct=1 -iodepth=1 -ioengine=libaio -rw=randwrite -bs=4K -size=5G -numjobs=1 -runtime=200 -filename=/mnt/test -name=perf
```
Output:
```
perf: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=32.0MiB/s][r=0,w=8204 IOPS][eta 00m:00s]
perf: (groupid=0, jobs=1): err= 0: pid=29529: Thu Nov 22 14:58:59 2018
write: IOPS=8203, BW=32.0MiB/s (33.6MB/s)(5120MiB/159768msec)
slat (usec): min=10, max=7737, avg=15.86, stdev=16.18
clat (nsec): min=1604, max=19115k, avg=101374.85, stdev=99297.45
lat (usec): min=56, max=19129, avg=118.04, stdev=100.77
clat percentiles (usec):
| 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 67],
| 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 89], 60.00th=[ 98],
| 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 139], 95.00th=[ 163],
| 99.00th=[ 273], 99.50th=[ 445], 99.90th=[ 1188], 99.95th=[ 1237],
| 99.99th=[ 2966]
bw ( KiB/s): min=28320, max=38352, per=100.00%, avg=32821.44, stdev=966.43, samples=319
iops : min= 7080, max= 9588, avg=8205.33, stdev=241.62, samples=319
lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.27%
lat (usec) : 100=62.36%, 250=36.10%, 500=0.79%, 750=0.02%, 1000=0.01%
lat (msec) : 2=0.44%, 4=0.01%, 10=0.01%, 20=0.01%
cpu : usr=5.28%, sys=17.99%, ctx=1310683, majf=0, minf=34
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=0,1310720,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=32.0MiB/s (33.6MB/s), 32.0MiB/s-32.0MiB/s (33.6MB/s-33.6MB/s), io=5120MiB (5369MB), run=159768-159768msec
Disk stats (read/write):
vdb: ios=42/1309723, merge=0/0, ticks=18/138560, in_queue=138152, util=86.58%
```
Result: average write latency = 118 us.
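In fio's output, lat is approximately slat (submission latency) plus clat (completion latency); here 15.86 us + 101.37 us ≈ 118 us. With iodepth=1 there is no queueing, so this figure approximates the per-I/O latency of the device itself.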
Test sequential read bandwidth
```
$ sudo fio -direct=1 -iodepth=128 -ioengine=libaio -rw=read -bs=1M -size=10G -numjobs=1 -runtime=200 -filename=/mnt/test -name=perf
```
Test random read IOPS
```
$ sudo fio -direct=1 -iodepth=128 -ioengine=libaio -rw=randread -bs=4K -size=10G -numjobs=1 -runtime=200 -filename=/mnt/test -name=perf
```
Test random read latency
```
$ sudo fio -direct=1 -iodepth=1 -ioengine=libaio -rw=randread -bs=4K -size=5G -numjobs=1 -runtime=200 -filename=/mnt/test -name=perf
```
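The three read tests above reuse /mnt/test and produce output in the same shape as the write runs. fio lays out (creates and fills) the test file first if it does not already exist, so running the write tests beforehand conveniently leaves a valid read target behind.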
fio Parameter Reference
Parameter | Description |
---|---|
direct | direct=1 means the test uses direct I/O, bypassing the filesystem page cache so the results are closer to raw disk performance |
iodepth | I/O queue depth; typical values are 64 or 128. Larger values put a heavier I/O load on the disk and also raise I/O latency; iodepth=1 is used to measure latency |
ioengine | the I/O engine, i.e. which I/O library fio calls; libaio is the usual choice |
rw | I/O pattern: read = sequential read, write = sequential write, randread = random read, randwrite = random write… |
bs | I/O block size, i.e. the amount of data per I/O; bs=1M is typically used to test sequential bandwidth and bs=4K to test random IOPS |
size | total amount of data to read/write; the job stops early if the time set by runtime expires first |
numjobs | number of concurrent worker processes |
runtime | test duration; if the data volume set by size completes before this time and time_based is not set, the job stops at that point |
name | name of this test job |
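For repeatable runs, the same options can live in a fio job file instead of on the command line. Below is a minimal sketch equivalent to the sequential-write test above (the file name perf.fio and section name seq-write-bw are arbitrary choices for illustration, not fio requirements):

```
$ cat > perf.fio <<'EOF'
; sequential write bandwidth test, same options as the command-line run
[global]
direct=1
ioengine=libaio
runtime=200
filename=/mnt/test

; the section name takes the role of -name=perf
[seq-write-bw]
rw=write
bs=1M
iodepth=128
numjobs=1
size=10G
EOF
$ sudo fio perf.fio
```

Options under [global] apply to every job section, so additional tests (e.g. the random-write variants) can be added as extra sections that override only rw, bs, and iodepth.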