Date:      Mon, 17 Nov 2008 14:07:49 -0800
From:      Matt Simerson <matt@corp.spry.com>
To:        Wes Morgan <morganw@chemikals.org>
Cc:        freebsd-fs@freebsd.org, freebsd-hardware@freebsd.org
Subject:   Re: Areca vs. ZFS performance testing.
Message-ID:  <8B620677-C2CA-4408-A0B1-AACC23FD0FF1@corp.spry.com>
In-Reply-To: <alpine.BSF.2.00.0811170521320.1488@ibyngvyr.purzvxnyf.bet>
References:  <490A782F.9060406@dannysplace.net> <20081031033208.GA21220@icarus.home.lan> <490A849C.7030009@dannysplace.net> <20081031043412.GA22289@icarus.home.lan> <490A8FAD.8060009@dannysplace.net> <491BBF38.9010908@dannysplace.net> <491C5AA7.1030004@samsco.org> <491C9535.3030504@dannysplace.net> <CEDCDD3E-B908-44BF-9D00-7B73B3C15878@anduin.net> <4920E1DD.7000101@dannysplace.net> <F55CD13C-8117-4D34-9C35-618D28F9F2DE@spry.com> <alpine.BSF.2.00.0811170521320.1488@ibyngvyr.purzvxnyf.bet>


On Nov 17, 2008, at 3:26 AM, Wes Morgan wrote:

>>> The Areca cards do NOT have the cache enabled by default. I  
>>> ordered the optional battery and RAM upgrade for my collection of  
>>> 1231ML cards. Even with the BBWC, the cache is not enabled by  
>>> default. I had to go out of my way to enable it, on every single  
>>> controller.
>
> Are you using these areca cards successfully with large arrays?

Yes, if you consider 24 x 1TB large.

> I found a 1680i card for a decent price and installed it this  
> weekend, but since then I'm seeing the raidz2 pool that it's running  
> hang so frequently that I can't even trust using it. The hangs occur  
> in both 7-stable and 8-current with the new ZFS patch. Same exact  
> settings that have been rock solid for me before now don't want to  
> work at all. The drives are just set as JBOD -- the controller  
> actually defaulted to this, so I didn't have to make any real  
> changes in the BIOS.
>
> Any tips on your setup? Did you have any similar problems?

I talked to a storage vendor of ours who has sold several SuperMicro
systems like ours, where the clients were running OpenSolaris and
hitting stability issues similar to what we see on FreeBSD. It appears
that a lack of maturity in ZFS itself underlies these problems.

It appears that running ZFS on FreeBSD will either thrill or horrify.
When I tested with modest I/O requirements, it worked great and I was
tickled. But when I built these new systems as backup servers, I was
generating immensely more disk I/O. I started with the 7.0 release and
saw crashes hourly. With tuning, I was down to crashing only once or
twice a day (always memory related), and that was with 16GB of RAM.
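
For what it's worth, the tuning was just the usual /boot/loader.conf
knobs that get passed around for ZFS on 7.x. Something along these
lines -- the values are only illustrative for a 16GB amd64 box, and
which knobs actually matter on your hardware may differ:

  # /boot/loader.conf -- illustrative values, not a recommendation
  vm.kmem_size="1536M"            # enlarge the kernel memory map
  vm.kmem_size_max="1536M"
  vfs.zfs.arc_max="512M"          # keep the ARC from exhausting kmem
  vfs.zfs.prefetch_disable="1"    # prefetch is a common panic trigger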

I ran for a month with one server on JBOD with RAIDZ2 and another with
RAIDZ across two RAID 5 arrays. Then I lost a disk, and with it the
array, on the JBOD server. Since RAID 5 had proved to run so much
faster, I ditched the Marvell cards, installed a pair of 1231MLs, and
rebuilt that server with RAID 5 as well. Both 24-disk systems have been
running ZFS RAIDZ across two hardware RAID 5 arrays for months since.
If I built another system tomorrow, that's exactly how I'd do it.
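
In case the layout isn't obvious: the controller exports each RAID 5
set as a single volume, so ZFS just sees two large disks. A sketch of
the pool creation, with da0/da1 standing in for whatever devices
arcmsr(4) actually gives you (check camcontrol devlist for the real
names):

  # each 12-disk hardware RAID 5 set appears as one volume
  # device names below are placeholders
  zpool create tank raidz da0 da1
  zpool status tank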

After upgrading to 8-HEAD and applying The Great ZFS Patch, I am  
content with only having to reboot the systems once every 7-12 days.

I have another system with only 8 disks and 4GB of RAM, with ZFS
running on a single RAID 5 array. Under the same workload as the
24-disk systems, it was crashing at least once a day. This was existing
hardware, so we were confident it wasn't a hardware issue. I finally
resolved it by wiping the disks clean, creating a GPT partition on the
array, and using UFS. The system hasn't crashed once since and is far
more responsive under heavy load than my ZFS systems.
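
If anyone wants to duplicate that, the steps were roughly as follows.
This is only a sketch using gpart (gpt(8) on older releases does the
same job); da0 and the /backup mountpoint are placeholders for whatever
your controller and layout actually use:

  dd if=/dev/zero of=/dev/da0 bs=1m count=16   # clear old labels/metadata
  gpart create -s gpt da0                      # new GPT on the exported volume
  gpart add -t freebsd-ufs da0                 # one partition, all the space
  newfs -U /dev/da0p1                          # UFS2 with soft updates
  mount /dev/da0p1 /backup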

Of course, all of this might get a fair bit better soon:

http://svn.freebsd.org/viewvc/base?view=revision&revision=185029

Matt


