FastNetMon

Friday, October 8, 2010

Measuring disk subsystem performance on Linux with the fio utility

I had long been puzzling over how to benchmark a file system objectively, but no adequate tool had crossed my path. Then, while reading an article about IOPS, I came across a mention of the fio utility. Its official site is at: http://freshmeat.net/projects/fio

A quick look through the repositories (both CentOS and Debian) showed that the utility is not packaged there, so we will build it from source.

Install the dependencies on CentOS:
yum install -y make gcc libaio-devel

Or on Debian:
apt-get install -y gcc make libaio-dev

Build it:
cd /usr/src
wget http://brick.kernel.dk/snaps/fio-2.0.6.tar.gz
tar -xf fio-2.0.6.tar.gz
cd fio-2.0.6
make

On FreeBSD 8.1 installation is even simpler:
cd /usr/ports/sysutils/fio
make install clean
rehash

That's it; the utility is built and can now be run like this:
/usr/src/fio-2.0.6/fio

Now we can benchmark our disk subsystem and compare against reference figures (a number of values are listed in the Wikipedia article: http://en.wikipedia.org/wiki/IOPS).
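Before comparing against reference tables, it helps to know what a single spindle can do in theory: one random read costs roughly one average seek plus half a rotation. A back-of-the-envelope sketch (the 3.5 ms seek time is an assumed typical spec figure for a 15k SAS drive):

```shell
# Theoretical random-read IOPS of one 15k rpm drive:
# service time = avg seek + half-revolution rotational latency.
awk 'BEGIN {
    rpm = 15000; seek_ms = 3.5          # seek_ms is an assumed spec value
    rot_ms = 60000 / rpm / 2            # 2 ms at 15k rpm
    printf "~%d IOPS\n", 1000 / (seek_ms + rot_ms)
}'
# prints: ~181 IOPS
```

That lines up well with the ~200 IOPS the 15k SAS arrays show at iodepth=1 below.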

Let's try measuring random read speed:
/usr/src/fio-1.41/fio -readonly -name iops -rw=randread -bs=512 -runtime=20 -iodepth 1 -filename /dev/sda -ioengine libaio -direct=1
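The same flags can also be kept in a job file and run as `fio iops.job`; a sketch of the equivalent job (keep passing -readonly on the command line as a safety guard):

```ini
[iops]
rw=randread
bs=512
runtime=20
iodepth=1
filename=/dev/sda
ioengine=libaio
direct=1
```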

Next I run the tests on a couple of my machines, starting with this array: RAID-10 on an Adaptec 5405 built from 4x SAS 15k SEAGATE ST3300657SS.

The output will be something like the following (the field we are interested in is iops):
iops: (g=0): rw=randread, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [101K/0K /s] [198/0 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=15183
read : io=2,024KB, bw=101KB/s, iops=202, runt= 20003msec
slat (usec): min=11, max=40, avg=22.55, stdev= 3.29
clat (usec): min=59, max=11,462, avg=4912.37, stdev=1475.76
bw (KB/s) : min= 92, max= 112, per=99.90%, avg=100.90, stdev= 3.60
cpu : usr=0.16%, sys=0.52%, ctx=4049, majf=0, minf=21
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=4047/0, short=0/0
lat (usec): 100=0.10%, 250=0.05%
lat (msec): 2=1.38%, 4=28.19%, 10=70.20%, 20=0.07%

Run status group 0 (all jobs):
READ: io=2,023KB, aggrb=101KB/s, minb=103KB/s, maxb=103KB/s, mint=20003msec, maxt=20003msec

Disk stats (read/write):
sda: ios=4027/2, merge=0/5, ticks=19782/1, in_queue=19783, util=98.70%

If we raise iodepth to 24 (again, this value is taken from the wiki), the results come out somewhat different:
iops: (g=0): rw=randread, bs=512-512/512-512, ioengine=libaio, iodepth=24
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [568K/0K /s] [1K/0 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=15264
read : io=10,035KB, bw=500KB/s, iops=999, runt= 20074msec
slat (usec): min=6, max=97, avg=23.53, stdev= 4.45
clat (usec): min=45, max=968K, avg=23958.61, stdev=37772.28
bw (KB/s) : min= 0, max= 573, per=64.60%, avg=322.34, stdev=242.21
cpu : usr=0.78%, sys=2.87%, ctx=18892, majf=0, minf=24
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued r/w: total=20070/0, short=0/0
lat (usec): 50=0.01%, 100=0.02%, 250=0.03%
lat (msec): 2=0.21%, 4=7.71%, 10=38.74%, 20=22.52%, 50=19.02%
lat (msec): 100=7.61%, 250=3.76%, 500=0.32%, 750=0.04%, 1000=0.01%

Run status group 0 (all jobs):
READ: io=10,035KB, aggrb=499KB/s, minb=511KB/s, maxb=511KB/s, mint=20074msec, maxt=20074msec

Disk stats (read/write):
sda: ios=19947/2, merge=0/5, ticks=475887/21, in_queue=476812, util=99.26%
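The depth=1 and depth=24 numbers are tied together by Little's law: sustained IOPS is roughly the number of outstanding requests divided by the average completion latency. Checking that against the run above (clat avg ≈ 23.96 ms):

```shell
# Little's law sanity check for the iodepth=24 run:
# IOPS ~ outstanding requests / avg completion latency.
awk 'BEGIN {
    iodepth = 24; clat_s = 0.023958     # avg clat from the output above
    printf "~%d IOPS\n", iodepth / clat_s
}'
# prints: ~1001 IOPS
```

which agrees with the measured iops=999.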


And here is a random write test on the same RAID-10 SAS array:
/usr/src/fio-1.41/fio -name iops -rw=randwrite -bs=512 -runtime=20 -iodepth 1 -filename /dev/sda1 -ioengine libaio -direct=1
iops: (g=0): rw=randwrite, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [w] [100.0% done] [0K/652K /s] [0/1275 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=2815
write: io=12759KB, bw=653202B/s, iops=1275, runt= 20001msec
slat (usec): min=12, max=89, avg=15.77, stdev= 1.01
clat (usec): min=260, max=40079, avg=765.22, stdev=777.73
bw (KB/s) : min= 499, max= 721, per=100.16%, avg=638.00, stdev=33.64
cpu : usr=0.52%, sys=2.02%, ctx=25529, majf=0, minf=25
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=0/25517, short=0/0
lat (usec): 500=47.21%, 750=19.48%, 1000=6.87%
lat (msec): 2=24.44%, 4=1.38%, 10=0.54%, 20=0.07%, 50=0.02%

Run status group 0 (all jobs):
WRITE: io=12758KB, aggrb=637KB/s, minb=653KB/s, maxb=653KB/s, mint=20001msec, maxt=20001msec

Disk stats (read/write):
sda: ios=0/25385, merge=0/0, ticks=0/19604, in_queue=19604, util=98.03%


Next, a read test for a single SSD, a SuperTalent UltraDrive GX SSD STT_FTM28GX25H 128 GB:
/usr/src/fio-1.41/fio -readonly -name iops -rw=randread -bs=512 -runtime=20 -iodepth 1 -filename /dev/sdd -ioengine libaio -direct=1
iops: (g=0): rw=randread, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [5602K/0K /s] [11K/0 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=5993
read : io=109399KB, bw=5470KB/s, iops=10939, runt= 20001msec
slat (usec): min=8, max=169, avg= 9.52, stdev= 2.47
clat (usec): min=1, max=375, avg=78.44, stdev= 8.10
bw (KB/s) : min= 5442, max= 5486, per=100.03%, avg=5470.54, stdev= 7.93
cpu : usr=5.98%, sys=13.00%, ctx=218950, majf=0, minf=25
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=218797/0, short=0/0
lat (usec): 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.14%
lat (usec): 100=98.83%, 250=1.01%, 500=0.01%


Run status group 0 (all jobs):
READ: io=109398KB, aggrb=5469KB/s, minb=5600KB/s, maxb=5600KB/s, mint=20001msec, maxt=20001msec

Disk stats (read/write):
sdd: ios=217551/0, merge=0/0, ticks=16620/0, in_queue=16580, util=82.91%


The same with iodepth raised to 24:
/usr/src/fio-1.41/fio -readonly -name iops -rw=randread -bs=512 -runtime=20 -iodepth 24 -filename /dev/sdd -ioengine libaio -direct=1
iops: (g=0): rw=randread, bs=512-512/512-512, ioengine=libaio, iodepth=24
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [14589K/0K /s] [28K/0 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=5999
read : io=284442KB, bw=14221KB/s, iops=28442, runt= 20001msec
slat (usec): min=2, max=336, avg= 6.83, stdev= 5.36
clat (usec): min=264, max=1691, avg=832.82, stdev=129.85
bw (KB/s) : min=14099, max=14300, per=100.00%, avg=14220.82, stdev=35.28
cpu : usr=14.04%, sys=25.98%, ctx=323599, majf=0, minf=28
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued r/w: total=568884/0, short=0/0
lat (usec): 500=0.01%, 750=29.74%, 1000=59.18%
lat (msec): 2=11.07%

Run status group 0 (all jobs):
READ: io=284442KB, aggrb=14221KB/s, minb=14562KB/s, maxb=14562KB/s, mint=20001msec, maxt=20001msec

Disk stats (read/write):
sdd: ios=565467/0, merge=0/0, ticks=469940/0, in_queue=469896, util=99.39%


Now the random write speed test, also for the SSD (note: this test will destroy the data on the drive!):
/usr/src/fio-1.41/fio -name iops -rw=randwrite -bs=512 -runtime=20 -iodepth 1 -filename /dev/sdd -ioengine libaio -direct=1
iops: (g=0): rw=randwrite, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [w] [100.0% done] [0K/1447K /s] [0/2828 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=6006
write: io=28765KB, bw=1438KB/s, iops=2876, runt= 20001msec
slat (usec): min=10, max=151, avg=16.01, stdev= 2.75
clat (usec): min=28, max=4387, avg=326.63, stdev=266.13
bw (KB/s) : min= 1359, max= 1659, per=100.20%, avg=1440.92, stdev=50.97
cpu : usr=2.24%, sys=5.56%, ctx=57682, majf=0, minf=24
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=0/57530, short=0/0
lat (usec): 50=0.25%, 100=0.01%, 250=20.55%, 500=76.71%, 750=0.60%
lat (usec): 1000=0.39%
lat (msec): 2=0.72%, 4=0.77%, 10=0.01%

Run status group 0 (all jobs):
WRITE: io=28765KB, aggrb=1438KB/s, minb=1472KB/s, maxb=1472KB/s, mint=20001msec, maxt=20001msec

Disk stats (read/write):
sdd: ios=0/57207, merge=0/0, ticks=0/18264, in_queue=18264, util=91.33%


Next, a read test for a single Corsair CSSD-F12 SSD (Corsair Force Series F120 - CSSD-F120GB2), firmware 1.1:
/usr/src/fio-1.41/fio -readonly -name iops -rw=randread -bs=512 -runtime=20 -iodepth 1 -filename /dev/sda -ioengine libaio -direct=1
iops: (g=0): rw=randread, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [1728K/0K /s] [3376/0 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=4193
read : io=33676KB, bw=1684KB/s, iops=3367, runt= 20001msec
slat (usec): min=7, max=45, avg=12.74, stdev= 0.58
clat (usec): min=138, max=9325, avg=281.76, stdev=66.69
bw (KB/s) : min= 1655, max= 1691, per=100.07%, avg=1684.26, stdev= 9.71
cpu : usr=1.16%, sys=3.54%, ctx=67374, majf=0, minf=25
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=67352/0, short=0/0
lat (usec): 250=1.35%, 500=98.63%, 750=0.01%
lat (msec): 10=0.01%

Run status group 0 (all jobs):
READ: io=33676KB, aggrb=1683KB/s, minb=1724KB/s, maxb=1724KB/s, mint=20001msec, maxt=20001msec

Disk stats (read/write):
sda: ios=66945/0, merge=0/0, ticks=19392/0, in_queue=19392, util=97.00%


And a write test on the same drive:
/usr/src/fio-1.41/fio -name iops -rw=randwrite -bs=512 -runtime=20 -iodepth 1 -filename /dev/sdc -ioengine libaio -direct=1
iops: (g=0): rw=randwrite, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [w] [100.0% done] [0K/1418K /s] [0/2771 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=4251
write: io=33458KB, bw=1673KB/s, iops=3345, runt= 20001msec
slat (usec): min=12, max=472, avg=13.43, stdev= 1.89
clat (usec): min=1, max=31361, avg=282.93, stdev=347.25
bw (KB/s) : min= 1034, max= 2454, per=100.54%, avg=1680.95, stdev=459.33
cpu : usr=1.98%, sys=5.36%, ctx=66945, majf=0, minf=24
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=0/66915, short=0/0
lat (usec): 2=0.01%, 250=36.63%, 500=59.59%, 750=1.57%, 1000=0.57%
lat (msec): 2=1.50%, 4=0.12%, 10=0.01%, 20=0.01%, 50=0.01%

Run status group 0 (all jobs):
WRITE: io=33457KB, aggrb=1672KB/s, minb=1712KB/s, maxb=1712KB/s, mint=20001msec, maxt=20001msec

Disk stats (read/write):
sda: ios=0/66550, merge=0/0, ticks=0/19080, in_queue=19060, util=95.34%


Next, write tests on a SATA2 ST31500341AS 7200rpm drive (the data on the disk will be destroyed!):
/usr/src/fio-1.41/fio -name iops -rw=randwrite -bs=512 -runtime=20 -iodepth 1 -filename /dev/sdc -ioengine libaio -direct=1
iops: (g=0): rw=randwrite, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [w] [100.0% done] [0K/54K /s] [0/107 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=7118
write: io=1163KB, bw=59542B/s, iops=116, runt= 20001msec
slat (usec): min=14, max=140, avg=16.29, stdev= 3.23
clat (usec): min=449, max=35573, avg=8566.41, stdev=6411.62
bw (KB/s) : min= 44, max= 108, per=99.91%, avg=57.95, stdev=10.26
cpu : usr=0.16%, sys=0.24%, ctx=2335, majf=0, minf=24
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=0/2326, short=0/0
lat (usec): 500=0.26%, 750=12.90%, 1000=6.49%
lat (msec): 2=0.47%, 4=3.44%, 10=37.75%, 20=34.35%, 50=4.34%

Run status group 0 (all jobs):
WRITE: io=1163KB, aggrb=58KB/s, minb=59KB/s, maxb=59KB/s, mint=20001msec, maxt=20001msec

Disk stats (read/write):
sdc: ios=0/2325, merge=0/0, ticks=0/19884, in_queue=19892, util=98.24%


The same SATA2 drive again, now for reads:
/usr/src/fio-1.41/fio -name iops -rw=randread -bs=512 -runtime=20 -iodepth 1 -filename /dev/sdc -ioengine libaio -direct=1
iops: (g=0): rw=randread, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [33K/0K /s] [66/0 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=7130
read : io=662016B, bw=33085B/s, iops=64, runt= 20009msec
slat (usec): min=13, max=125, avg=15.18, stdev= 4.09
clat (msec): min=3, max=30, avg=15.44, stdev= 4.20
bw (KB/s) : min= 28, max= 37, per=99.20%, avg=31.74, stdev= 1.93
cpu : usr=0.22%, sys=0.12%, ctx=1302, majf=0, minf=25
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=1293/0, short=0/0

lat (msec): 4=0.08%, 10=9.20%, 20=76.57%, 50=14.15%

Run status group 0 (all jobs):
READ: io=646KB, aggrb=32KB/s, minb=33KB/s, maxb=33KB/s, mint=20009msec, maxt=20009msec

Disk stats (read/write):
sdc: ios=1290/0, merge=0/0, ticks=19884/0, in_queue=19900, util=98.28%


SATA2 reads, depth=24:
/usr/src/fio-1.41/fio -name iops -rw=randread -bs=512 -runtime=20 -iodepth 24 -filename /dev/sdc -ioengine libaio -direct=1
iops: (g=0): rw=randread, bs=512-512/512-512, ioengine=libaio, iodepth=24
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [65K/0K /s] [128/0 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=7135
read : io=1230KB, bw=62274B/s, iops=121, runt= 20217msec
slat (usec): min=2, max=35, avg= 8.72, stdev= 1.28
clat (msec): min=13, max=907, avg=196.69, stdev=126.43
bw (KB/s) : min= 0, max= 67, per=64.52%, avg=38.71, stdev=29.17
cpu : usr=0.40%, sys=0.06%, ctx=2464, majf=0, minf=28
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued r/w: total=2459/0, short=0/0

lat (msec): 20=0.20%, 50=4.15%, 100=19.44%, 250=48.80%, 500=24.68%
lat (msec): 750=2.56%, 1000=0.16%

Run status group 0 (all jobs):
READ: io=1229KB, aggrb=60KB/s, minb=62KB/s, maxb=62KB/s, mint=20217msec, maxt=20217msec

Disk stats (read/write):
sdc: ios=2456/0, merge=0/0, ticks=480380/0, in_queue=481280, util=98.21%

Software RAID-1 of SATA disks, random read:

/usr/src/fio-1.41/fio -readonly -name iops -rw=randread -bs=512 -runtime=20 -iodepth 1 -filename /dev/md1 -ioengine libaio -direct=1
iops: (g=0): rw=randread, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [r] [100.0% done] [70K/0K /s] [137/0 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=3647
read : io=1,406KB, bw=71,958B/s, iops=140, runt= 20001msec
slat (usec): min=0, max=4,001, avg=31.31, stdev=352.56
clat (usec): min=0, max=204K, avg=7079.63, stdev=5615.73
bw (KB/s) : min= 40, max= 79, per=99.96%, avg=69.97, stdev= 6.46
cpu : usr=0.00%, sys=0.34%, ctx=5624, majf=0, minf=26
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=2811/0, short=0/0
lat (usec): 2=5.80%
lat (msec): 10=77.45%, 20=16.47%, 50=0.18%, 100=0.04%, 250=0.07%

Run status group 0 (all jobs):
READ: io=1,405KB, aggrb=70KB/s, minb=71KB/s, maxb=71KB/s, mint=20001msec, maxt=20001msec

Disk stats (read/write):
md1: ios=2782/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
sdb: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%
sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%

Software RAID-1 of SATA disks, random write:
/usr/src/fio-1.41/fio -name iops -rw=randwrite -bs=512 -runtime=20 -iodepth 1 -filename /dev/md0 -ioengine libaio -direct=1
iops: (g=0): rw=randwrite, bs=512-512/512-512, ioengine=libaio, iodepth=1
Starting 1 process
Jobs: 1 (f=1): [w] [100.0% done] [0K/105K /s] [0/205 iops] [eta 00m:00s]
iops: (groupid=0, jobs=1): err= 0: pid=3650
write: io=2,089KB, bw=104KB/s, iops=208, runt= 20004msec
slat (usec): min=0, max=192K, avg=103.42, stdev=3007.66
clat (usec): min=0, max=880K, avg=4680.69, stdev=31246.53
bw (KB/s) : min= 3, max= 243, per=108.77%, avg=113.12, stdev=52.21
cpu : usr=0.06%, sys=0.20%, ctx=8347, majf=0, minf=25
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued r/w: total=0/4177, short=0/0
lat (usec): 2=27.60%
lat (msec): 4=9.50%, 10=60.14%, 20=2.61%, 1000=0.14%

Run status group 0 (all jobs):
WRITE: io=2,088KB, aggrb=104KB/s, minb=106KB/s, maxb=106KB/s, mint=20004msec, maxt=20004msec

Disk stats (read/write):
md0: ios=0/4130, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
sdb: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%
sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%


To summarize:
RAID-10 on Adaptec 5405, 4x SAS 15k SEAGATE ST3300657SS (2 spinning disks)
random read, depth=1: 202 IOPS
random read, depth=24: 999 IOPS
random write, depth=1: 1275 IOPS
RAID-10 on LSI MR9260-4i, 4x SAS 15k SEAGATE ST3300657SS (2 spinning disks)
random read, depth=1: 200 IOPS
random read, depth=24: 1011 IOPS
random write, depth=1: 1148 IOPS
RAID-1 on LSI MR9260-4i, 2x SAS 15k SEAGATE ST3300657SS (2 spinning disks)
random read, depth=1: 218 IOPS
random read, depth=24: 887 IOPS
random write, depth=1: 624 IOPS
RAID-5 on LSI MR9260-4i, 3x SAS 15k SEAGATE ST3300657SS (2 spinning disks)
random read, depth=1: 167 IOPS
random read, depth=24: 878 IOPS
random write, depth=1: 631 IOPS
RAID-0 on LSI MR9260-4i, 2x SAS 15k SEAGATE ST3300657SS (2 spinning disks)
random read, depth=1: 160 IOPS
random read, depth=24: 504 IOPS
random write, depth=1: 987 IOPS
SSD SuperTalent UltraDrive GX SSD STT_FTM28GX25H 128 GB
random read, depth=1: 10939 IOPS
random read, depth=24: 28442 IOPS
random write, depth=1: 2876 IOPS
SATA2 ST31500341AS 7200rpm
random read, depth=1: 64 IOPS
random read, depth=24: 121 IOPS
random write, depth=1: 116 IOPS
VPS
random read, depth=1: 126 IOPS
random write, depth=1: 236 IOPS
Adaptec 5805, RAID-10 of 8x SATA 3 TB WD Green
random read, depth=1: 87 IOPS
random read, depth=24: 431 IOPS
random write, depth=1: 186 IOPS
random write, depth=24: 193 IOPS
Adaptec 5805, RAID-10 of 8x SATA 3 TB WD Green over 1 Gbps iSCSI without jumbo frames
random read, depth=1: 60 IOPS
random read, depth=24: 320 IOPS
random write, depth=1: 187 IOPS
random write, depth=24: 192 IOPS
Adaptec 2405, 2x Velociraptor WD1000DHTZ-0, RAID-1:
random read, depth=1: 160 IOPS
random read, depth=24: 545 IOPS
random write, depth=1: 140 IOPS
random write, depth=24: 140 IOPS
Adaptec 2405, Seagate ST3600057SS 15000rpm 600 GB (4 spinning disks) x 2 RAID-1:
random read, depth=1: 205 IOPS
random read, depth=24: 650 IOPS
random write, depth=1: 375 IOPS
random write, depth=24: 385 IOPS

A more detailed description of the parameters and test types can be found in the HOWTO file, which sits in the same directory as the binary.
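When collecting results from many runs, the iops figure can be scraped out of saved output instead of copied by hand. A small sketch (the log file name is an assumption; fio prints the value on its `read :`/`write:` summary line):

```shell
#!/bin/sh
# Extract the first iops= value from a saved fio run.
extract_iops() {
    grep -o 'iops=[0-9]*' "$1" | head -n 1 | cut -d= -f2
}

# Example: extract_iops fio-run.log
```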

9 comments:

  1. This comment has been removed by the author.

  2. And what exactly was the sacred meaning of that one? :)

  3. well, I was curious to compare with my own numbers at, say, 4 threads

    but I've already found something to compare against. :)

  4. (1 thread) read : io=79992KB, bw=3999.4KB/s, iops=7998 , runt= 20001msec
    (24 threads) read : io=457419KB, bw=22870KB/s, iops=45739 , runt= 20001msec
    writes:
    (1 thread) write: io=119640KB, bw=5981.7KB/s, iops=11963 , runt= 20001msec
    (24 threads) write: io=268542KB, bw=13426KB/s, iops=26852 , runt= 20001msec


    Device Model: SSDSA2SH032G1GN INTEL

  5. selectel, spb-2
    On single-thread read tests the numbers jump around 200-3000 IOPS; across single-thread runs, 400-800-1100.. and 330.
    A 2-minute test:
    (1 thread) io=41716KB, bw=355955B/s, iops=695, runt=120007msec
    (24 threads) io=303598KB, bw=2529KB/s, iops=5057, runt=120069msec

    Judging by how the speed scales up, the storage shelf still has considerable headroom.
    The node is in production, so we didn't run write tests. If you need more IOPS, one option is to attach 2-10 disks and stripe across them.

    Replies
    1. I also have a server at Selectel. Tell me, please: do you still see those jumps on the read tests, or has it settled down by now?
      And in addition to IOPS, could you also give the clat (latency) values from your tests? I'd like to compare with my own Selectel server.

  6. I also have a virtual server at Selectel (the "cloud server" product) and periodically notice the disk stalling, which is why I went looking for ways to benchmark it.

    In addition to this article I found another useful one, http://habrahabr.ru/post/154235/ , which shows that for real speed you need to measure not only IOPS (operations per second) but also the disk's latency (responsiveness).

    It turns out Selectel delivers good IOPS but poor latency, so if you need to read, say, a thousand files sequentially (not in parallel), that operation will take a very long time on the cloud storage despite its high IOPS.
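    The sequential case is easy to put a number on: reading N objects strictly one after another takes about N times the average latency, no matter how many IOPS the storage can sustain in parallel. A sketch using a ~6 ms average latency (a figure of the order reported for the cloud server in the replies):

```shell
# Time to read 1000 files strictly one after another at ~6 ms avg latency.
awk 'BEGIN {
    n = 1000; clat_ms = 6.01            # assumed avg completion latency
    printf "%.1f s\n", n * clat_ms / 1000
}'
# prints: 6.0 s
```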

    Replies
    1. Here are some of my measurements:

      Cloud server:
      read : io=658936KB, bw=21239KB/s, iops=5309, runt= 31025msec
      clat (usec): min=218, max=213050, avg=6010.04, stdev=7040.76

      write: io=347300KB, bw=11217KB/s, iops=2804, runt= 30963msec
      clat (msec): min=1, max=215, avg=11.39, stdev= 9.70


      Virtual private cloud, fast disk:
      read : io=99796KB, bw=3209.2KB/s, iops=802, runt= 31097msec
      clat (usec): min=244, max=140717, avg=39278.48, stdev=7872.94

      write: io=49996KB, bw=1604.4KB/s, iops=401, runt= 31162msec
      clat (usec): min=427, max=155474, avg=79289.17, stdev=9029.01

      Virtual private cloud, slow disk:
      read : io=14696KB, bw=482701B/s, iops=117, runt= 31176msec
      clat (msec): min=15, max=534, avg=267.07, stdev=38.65

      write: io=15016KB, bw=493196B/s, iops=120, runt= 31177msec
      clat (msec): min=7, max=516, avg=264.20, stdev=33.61


      A regular desktop computer (Pentium G2120) with a single SATA disk (Seagate ST91000640NS)
      read : io=22872KB, bw=827097B/s, iops=201, runt= 28317msec
      clat (msec): min=6, max=1608, avg=158.40, stdev=95.58

      write: io=16140KB, bw=585453B/s, iops=142, runt= 28230msec
      clat (msec): min=6, max=1453, avg=223.82, stdev=164.04

    2. What "beautiful" latency you've got....
      You could grow old waiting for a response.

