Date:      Tue, 04 Nov 2008 15:56:20 +1000
From:      Danny Carroll <danny@dannysplace.net>
To:        Ivan Voras <ivoras@freebsd.org>
Cc:        freebsd-hardware@freebsd.org
Subject:   Re: Areca vs. ZFS performance testing.
Message-ID:  <490FE404.2000308@dannysplace.net>
In-Reply-To: <geesig$9gg$1@ger.gmane.org>
References:  <490A782F.9060406@dannysplace.net> <geesig$9gg$1@ger.gmane.org>

Ivan Voras wrote:
> Danny Carroll wrote:
> I'd suggest two more tests, because bonnie++ won't tell you the
> performance of random IO and file system overhead:
> 
> 1) randomIO: http://arctic.org/~dean/randomio/
> 2) blogbench: http://www.pureftpd.org/project/blogbench
> 
> Be sure to select appropriate parameters for both (and the same
> parameters in every test so they can be compared) and study how they are
> used so you don't, for example, benchmark your system drive instead of
> the array :) ! (try not to put the system on the array - use the array
> only for benchmarks).
> 
> For example, use blogbench "-c 30 -i 20 -r 40 -W 5 -w 5" to simulate a
> read-mostly environment.


Thanks for the info.  I'll put together a few tests using those, alongside
the test scenarios already discussed.
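
Very roughly, what I have in mind is something like the following (the
/bigarray/bench path is just a placeholder for a scratch dataset on the
pool, never the system drive):

  # Scratch area on the pool.
  mkdir -p /bigarray/bench

  # bonnie++ (already part of the test set); -s should be at least
  # twice RAM, 16384 MB here is a guess.
  bonnie++ -d /bigarray/bench -s 16384 -u nobody

  # blogbench with the read-mostly mix suggested above.
  blogbench -d /bigarray/bench -c 30 -i 20 -r 40 -W 5 -w 5

I'll add randomio to that once I've checked its (positional) arguments
against its README.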

On another note, slightly OT: I've been tuning the system a little and
have already seen some gains.  Apart from the ZFS tuning already
mentioned, I have done a few other things:

- Forced 1000baseTX mode on the NIC.
- Experimented with jumbo frames and device polling.
- Tuned a few network IO parameters.
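
Roughly, the commands were along these lines (em0, the media keyword and
the sysctl values here are placeholders from memory; check ifconfig -m for
the supported media names, and the polling line needs DEVICE_POLLING in
the kernel):

  # Force gigabit instead of autonegotiation.
  ifconfig em0 media 1000baseTX mediaopt full-duplex

  # Jumbo frames (switch and client have to match).
  ifconfig em0 mtu 9000

  # Enable device polling on the interface (-polling turns it off again).
  ifconfig em0 polling

  # Larger socket buffers for the network IO tuning.
  sysctl kern.ipc.maxsockbuf=16777216
  sysctl net.inet.tcp.sendbuf_max=16777216
  sysctl net.inet.tcp.recvbuf_max=16777216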

These have no real relevance to the tests I want to do (Areca vs. ZFS),
but it was interesting to note the following:

 - Device polling resulted in a performance degradation.
	It's possible that I did not tune the device polling
	sysctl parameters correctly, so I will revisit this.
 - Tuning sysctl parameters gave the best results:
	I've been able to double my Samba throughput.
 - Jumbo frames had no noticeable effect.
 - I have seen sustained reads of ~130MB/s from ZFS:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
bigarray    1.29T  3.25T  1.10K      0   140M      0
bigarray    1.29T  3.25T  1.00K      0   128M      0
bigarray    1.29T  3.25T    945      0   118M      0
bigarray    1.29T  3.25T  1.05K      0   135M      0
bigarray    1.29T  3.25T  1.01K      0   129M      0
bigarray    1.29T  3.25T    994      0   124M      0

           ad4              ad6              ad8             cpu
KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
0.00   0  0.00  65.90 375 24.10  63.74 387 24.08   0  0 19  2 78
0.00   0  0.00  66.36 357 23.16  63.93 370 23.11   0  0 23  2 75
16.00  0  0.00  64.84 387 24.51  63.79 389 24.20   0  0 23  2 75
16.00  2  0.03  68.09 407 27.04  64.98 409 25.98   0  0 28  2 70

Notes:
 - ad4 is the system drive and is not part of the ZFS pool.
 - I forgot to list the rest of the array drives (5 in total) on the
   iostat command line, so only two of them appear above.
 - The two sets of figures were not measured over the same time frame.
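
Next time I'll capture both views over the same window with all of the
array members listed explicitly, something like the following (ad10, ad12
and ad14 are guesses for the names of the remaining drives):

  # One-second samples from both, over the same window.
  zpool iostat bigarray 1 > /tmp/zpool-iostat.log &
  iostat -n 6 -w 1 ad4 ad6 ad8 ad10 ad12 ad14 > /tmp/iostat.log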

I'm curious whether the ~130MB/s figure shown above is the bandwidth
delivered by the array or the total read from all of the drives.  In other
words, does it include reading the parity information?  I think it does
not, because if I look at the iostat figures and add up all of the drives,
the sum is greater than what zpool iostat reports by a factor of 5/4
(roughly 100MB/s in zpool iostat versus 5 x 25MB/s in plain iostat).
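
As a quick sanity check of that 5/4 reasoning, assuming the pool is a
single 5-disk raidz1 vdev (which is what the ratio suggests):

  # Each full raidz1 stripe on 5 disks is 4 data blocks + 1 parity block.
  echo "5 * 25"         | bc   # ~125 MB/s actually read off the platters
  echo "5 * 25 * 4 / 5" | bc   # ~100 MB/s of that is data, matching zpool iostat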

If so, then that is probably the most I will see coming off the drives
during a network transfer, given that 130MB/s is already over the limit of
gigabit Ethernet (roughly 125MB/s of raw bandwidth).

Lastly, the Windows client that performed these tests was reporting local
transfer bandwidth of about 30-50Mb/s.  I believe that figure is incorrect
(given how much data I transferred in X seconds...).
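
A simple way to cross-check the Windows number next time is to watch the
server-side interface during a copy, e.g. (em0 again assumed):

  # Per-second packet and byte counts on the interface.
  netstat -I em0 -w 1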

-D


