Date:      Fri, 07 Jun 2013 17:07:18 +0200
From:      Pierre Lemazurier <pierre@lemazurier.fr>
To:        freebsd-fs@freebsd.org
Subject:   [ZFS] Raid 10 performance issues
Message-ID:  <51B1F726.7090402@lemazurier.fr>
In-Reply-To: <51B1EBD1.9010207@gmail.com>
References:  <51B1EBD1.9010207@gmail.com>

Hi, I think I am suffering from write and read performance issues on my zpool.

About my system and hardware:

uname -a
FreeBSD bsdnas 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 
09:23:10 UTC 2012 
root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64

sysinfo -a: http://www.privatepaste.com/b32f34c938

- 24 GB (6x 4 GB) DDR3 ECC:
http://www.ec.kingston.com/ecom/configurator_new/partsinfo.asp?ktcpartno=KVR16R11D8/4HC
- 14x this drive:
http://www.wdc.com/global/products/specs/?driveID=1086&language=1
- server:
http://www.supermicro.com/products/system/1u/5017/sys-5017r-wrf.cfm?parts=show
- CPU:
http://ark.intel.com/fr/products/64594/Intel-Xeon-Processor-E5-2620-15M-Cache-2_00-GHz-7_20-GTs-Intel-QPI
- chassis:
http://www.supermicro.com/products/chassis/4u/847/sc847e16-rjbod1.cfm
- SAS HBA:
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9200-8e.aspx
- cable between chassis and server:
http://www.provantage.com/supermicro-cbl-0166l~7SUPA01R.htm

I use this command to test write speed:
dd if=/dev/zero of=test.dd bs=2M count=10000
I use this command to test read speed:
dd if=test.dd of=/dev/null bs=2M count=10000

Of course, compression is disabled on the ZFS dataset.
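
To double-check, this is how I confirm compression is really off ("tank/data" is only a placeholder for my dataset name):

zfs get compression,compressratio tank/data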

Test on one of these disks, formatted with UFS:

Write:
some gstat output: http://www.privatepaste.com/dd31fafaa6
speed around 140 MB/s and something like 1100 IOPS
dd result: 20971520000 bytes transferred in 146.722126 secs (142933589
bytes/sec)

Read:
I think that read was served from RAM (20971520000 bytes transferred in
8.813298 secs (2379531480 bytes/sec)).
So I ran the test on the whole raw device instead (dd if=/dev/gpt/disk14.nop
of=/dev/null bs=2M count=10000):
some gstat output: http://www.privatepaste.com/d022b7c480
speed around 140 MB/s again, and nearly 1100 IOPS
dd result: 20971520000 bytes transferred in 142.895212 secs (146761530
bytes/sec)


ZFS - I created my zpool this way: http://www.privatepaste.com/e74d9cc3b9
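
In case the paste goes away: the pool is a stripe of 7 two-way mirrors
(RAID 10). The creation command was roughly like the sketch below; the pool
name and GPT labels are placeholders, not necessarily the exact ones I used:

zpool create tank \
    mirror gpt/disk1  gpt/disk2 \
    mirror gpt/disk3  gpt/disk4 \
    mirror gpt/disk5  gpt/disk6 \
    mirror gpt/disk7  gpt/disk8 \
    mirror gpt/disk9  gpt/disk10 \
    mirror gpt/disk11 gpt/disk12 \
    mirror gpt/disk13 gpt/disk14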

zpool status: http://www.privatepaste.com/0276801ef6
zpool get all: http://www.privatepaste.com/74b37a2429
zfs get all: http://www.privatepaste.com/e56f4a33f8
zfs-stats -a: http://www.privatepaste.com/f017890aa1
zdb: http://www.privatepaste.com/7d723c5556

With this setup I hope for nearly 7x the write speed and nearly 14x the
read speed of the single UFS disk. To be realistic, something like
850 MB/s for writes and 1700 MB/s for reads.
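
My reasoning, in case it is wrong: each mirror vdev writes the data twice,
so writes are striped over the 7 vdevs, while reads can in principle be
served by all 14 disks:

write:  7 vdevs  x ~140 MB/s ≈  980 MB/s theoretical -> ~850 MB/s realistic
read : 14 disks  x ~140 MB/s ≈ 1960 MB/s theoretical -> ~1700 MB/s realistic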


ZFS - test:

Write:
gstat output: http://www.privatepaste.com/7cefb9393a
zpool iostat -v 1 of the fastest try: http://www.privatepaste.com/8ade4defbe
dd result: 20971520000 bytes transferred in 54.326509 secs (386027381
bytes/sec)

386 MB/s: less than half of what I expected.


Read:
I exported and imported the pool to limit the ARC effect. I don't know a
better way to do it; I hope that is sufficient.
gstat output: http://www.privatepaste.com/130ce43af1
zpool iostat -v 1: http://privatepaste.com/eb5f9d3432
dd result: 20971520000 bytes transferred in 30.347214 secs (691052563
bytes/sec)

690 MB/s: about 2.5x less than I expected.
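
For reference, the exact cache-flush step was just this (with "tank"
standing in for my pool name):

zpool export tank
zpool import tank

If capping the ARC itself would give a cleaner benchmark (I believe the
vfs.zfs.arc_max tunable in /boot/loader.conf can do that), I can try that too.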


It does not appear to be a hardware issue: when I run a dd test on every
whole disk at the same time with the command dd if=/dev/gpt/diskX
of=/dev/null bs=1M count=10000, I get this gstat output:
http://privatepaste.com/df9f63fd4d
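
Concretely, I start one dd per disk in parallel, roughly like this minimal
sh sketch (assuming the disks are labeled disk1 through disk14 in /dev/gpt):

# read ~10 GB from every raw disk at the same time
for i in $(jot 14); do
    dd if=/dev/gpt/disk$i of=/dev/null bs=1M count=10000 &
done
wait    # let all 14 dd processes finish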

Nearly 130 MB/s from each device, about what I would expect.

In your opinion, where does the problem come from?


Please forgive my English, and please keep the language simple; I am not
really comfortable with English.
I can give you more information if you need it.

Many thanks for your help.


