Date:      Thu, 8 Jan 2009 11:19:59 +0200
From:      Nikolay Denev <ndenev@gmail.com>
To:        fbsd@dannysplace.net
Cc:        freebsd-fs@freebsd.org, Jeremy Chadwick <koitsu@FreeBSD.org>, freebsd-hardware@freebsd.org
Subject:   Re: Areca vs. ZFS performance testing.
Message-ID:  <BCA7594A-27EF-4209-9752-E749BACC87BE@gmail.com>
In-Reply-To: <496549D9.7010003@dannysplace.net>
References:  <20081031033208.GA21220@icarus.home.lan>	<490A849C.7030009@dannysplace.net>	<20081031043412.GA22289@icarus.home.lan>	<490A8FAD.8060009@dannysplace.net>	<491BBF38.9010908@dannysplace.net> <491C5AA7.1030004@samsco.org>	<491C9535.3030504@dannysplace.net>	<CEDCDD3E-B908-44BF-9D00-7B73B3C15878@anduin.net>	<4920E1DD.7000101@dannysplace.net>	<F55CD13C-8117-4D34-9C35-618D28F9F2DE@spry.com> <20081117070818.GA22231@icarus.home.lan> <496549D9.7010003@dannysplace.net>



On 8 Jan, 2009, at 02:33 , Danny Carroll wrote:

> I'd like to post some results of what I have found with my tests.
> I did a few different types of tests.  Basically a set of 5-disk tests
> and a set of 12-disk tests.
>
> I did this because I only had 5 ports available on my onboard
> controller
> and I wanted to see how the areca compared to that.  I also wanted to
> see comparisons between JBOD, Passthru and hardware raid5.
>
> I have not tested raid6 or raidz2.
>
> You can see the results here:
> http://www.dannysplace.net/quickweb/filesystem%20tests.htm
>
> An explanation of each of the tests:
> ICH9_ZFS                      5 disk zfs raidz test with onboard SATA
>                               ports.
> ARECAJBOD_ZFS                 5 disk zfs raidz test with Areca SATA
>                               ports configured in JBOD mode.
> ARECAJBOD_ZFS_NoWriteCache    5 disk zfs raidz test with Areca SATA
>                               ports configured in JBOD mode and with
>                               disk caches disabled.
> ARECARAID                     5 disk zfs single-disk test with Areca
>                               raid5 array.
> ARECAPASSTHRU                 5 disk zfs raidz test with Areca SATA
>                               ports configured in Passthru mode.  This
>                               means that the onboard Areca cache is
>                               active.
> ARECARAID-UFS2                5 disk ufs2 single-disk test with Areca
>                               raid5 array.
> ARECARAID-BIG                 12 disk zfs single-disk test with Areca
>                               raid5 array.
> ARECAPASSTHRU_12              12 disk zfs raidz test with Areca SATA
>                               ports configured in Passthru mode.  This
>                               means that the onboard Areca cache is
>                               active.
>
>
> I'll probably be opting for the ARECAPASSTHRU_12 configuration.  Mainly
> because I do not need amazing read speeds (the network port would be
> saturated anyway) and I think that the raidz implementation would be
> more fault tolerant.  By that I mean that if a disk read error occurs
> during a rebuild then, as I understand it, raidz will write off that
> block (and hopefully tell me about the dead files) but continue with
> the rest of the rebuild.
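>
> (If that does happen, I believe the verbose pool status is what would
> point at the affected files once the rebuild finishes.  A minimal
> sketch, with "tank" as a stand-in pool name:
>
>     # zpool status -v tank     # lists any files with permanent errors
>     # zpool scrub tank         # re-verify the whole pool afterwards
> )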
>
> This is something I'd love to test for real, just to see what happens.
> But I am not sure how I could do that.  Perhaps removing one drive, then
> doing a few random writes to a remaining disk (or two) and seeing how
> it goes with a rebuild.
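>
> (Roughly what I have in mind, completely untested and with placeholder
> device and pool names -- ad4 being the "pulled" disk and ad6 a
> surviving member of the raidz:
>
>     # zpool offline tank ad4                 # simulate losing one disk
>     # sysctl kern.geom.debugflags=16         # allow raw writes to a live disk
>     # dd if=/dev/random of=/dev/ad6 bs=512 count=64 seek=1000000
>     # zpool online tank ad4                  # resilver what was missed
>     # zpool scrub tank                       # force a read of every block
>     # zpool status -v tank                   # any permanently damaged files?
> )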
>
> Something else worth mentioning.  When I converted from JBOD to
> passthrough, I was able to re-import the disks without any problems.
> This must mean that the areca passthrough option does not alter the
> disks much, perhaps not at all.
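>
> (I assume the usual export/import sequence is all that is needed to
> repeat this -- "tank" again being a stand-in pool name:
>
>     # zpool export tank     # before reconfiguring the controller
>     # zpool import          # list pools found on the new devices
>     # zpool import tank     # bring the pool back online
> )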
>
> After a 21 hour rebuild I have to say I am not that keen to do more of
> these tests, but if there is something someone wants to see, then I'll
> definitely consider it.
>
> One thing I am at a loss to understand is why turning off the disk
> caches when testing the JBOD performance produced almost identical
> (very slightly better) results.  Perhaps the ZFS internal cache makes
> the disks' own caches redundant?  Comparing to the Areca passthrough
> results (where the Areca cache is used) shows, again, similar results.
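>
> (If anyone wants to double-check the cache state from the host,
> something along these lines should show it for the onboard ports --
> ad4 is only an example device, and the Areca disks would have to be
> checked through the controller's own BIOS or CLI instead:
>
>     # atacontrol cap ad4 | grep -i "write cache"
> )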
>
> -D


There is a big difference between hardware RAID and ZFS raidz with 12
disks on the get_block test; maybe it would be interesting to rerun that
test with ZFS prefetch disabled?
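
If I remember the tunable name correctly, that would be something like
this in /boot/loader.conf (plus a reboot), or the matching sysctl if it
is settable at runtime:

    vfs.zfs.prefetch_disable=1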

--
Regards,
Nikolay Denev






