Date:      Wed, 05 Nov 2008 11:58:51 +0100
From:      Ivan Voras <ivoras@freebsd.org>
To:        freebsd-hardware@freebsd.org
Subject:   Re: Areca vs. ZFS performance testing.
Message-ID:  <geru8q$fbr$1@ger.gmane.org>
In-Reply-To: <490FE404.2000308@dannysplace.net>
References:  <490A782F.9060406@dannysplace.net> <geesig$9gg$1@ger.gmane.org> <490FE404.2000308@dannysplace.net>

Danny Carroll wrote:

>  - I have seen sustained ~130 MB/s reads from ZFS:
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> bigarray    1.29T  3.25T  1.10K      0   140M      0
> bigarray    1.29T  3.25T  1.00K      0   128M      0
> bigarray    1.29T  3.25T    945      0   118M      0
> bigarray    1.29T  3.25T  1.05K      0   135M      0
> bigarray    1.29T  3.25T  1.01K      0   129M      0
> bigarray    1.29T  3.25T    994      0   124M      0
>
>            ad4              ad6              ad8             cpu
> KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
> 0.00   0  0.00  65.90 375 24.10  63.74 387 24.08   0  0 19  2 78
> 0.00   0  0.00  66.36 357 23.16  63.93 370 23.11   0  0 23  2 75
> 16.00  0  0.00  64.84 387 24.51  63.79 389 24.20   0  0 23  2 75
> 16.00  2  0.03  68.09 407 27.04  64.98 409 25.98   0  0 28  2 70

> I'm curious if the ~130M figure shown above is bandwidth from the array
> or a total of all the drives.  In other words, does it include reading
> the parity information?  I think it does not, since if I look at iostat
> figures and add up all of the drives it is greater than that reported by
> ZFS by a factor of 5/4 (100 MB/s in ZFS iostat vs. 5 x 25 MB/s in
> standard iostat).

The numbers make sense: with 5 drives in RAID-Z, one drive's worth of
every stripe holds parity, so only 4/5ths of the aggregate disk
bandwidth is "real" (user-visible) bandwidth. On the other hand, 25
MB/s per drive is very slow for modern drives (assuming you're running
sequential read/write tests). Are you having hardware problems?
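As a quick sanity check, here's a back-of-the-envelope sketch in
Python (the helper name and the numbers are mine, not from your
system - plug in your own iostat readings):

def raidz_user_bandwidth(per_disk_mbs, ndisks, parity=1):
    """User-visible RAID-Z bandwidth: the parity share of each
    stripe is read but carries no user data."""
    return per_disk_mbs * ndisks * (ndisks - parity) / ndisks

# 5 disks at ~25 MB/s each, single parity:
# aggregate = 125 MB/s, user-visible = 4/5 * 125 = 100 MB/s,
# i.e. the 5/4 ratio you saw between iostat and zpool iostat.
print(raidz_user_bandwidth(25.0, 5))  # -> 100.0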

> Lastly, the Windows client which performed these tests was measuring
> local bandwidth at about 30-50 MB/s.  I believe this figure to be
> incorrect (given how much I transferred in X seconds...)

Using Samba? Search the lists for Samba performance advice - the default
configuration isn't nearly optimal.
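As a starting point, the settings most often suggested look something
like the smb.conf fragment below. Treat it as a sketch, not a recipe -
option defaults and availability vary between Samba versions, so check
against yours:

[global]
   # Disable Nagle and enlarge the socket buffers - the usual first step
   socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
   # Let Samba use sendfile(2) where the build supports it
   use sendfile = yes
   # Allow larger SMB transfers
   max xmit = 65536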

