From: Matt Simerson <matt@corp.spry.com>
To: Wes Morgan
Cc: freebsd-fs@freebsd.org, freebsd-hardware@freebsd.org
Date: Mon, 17 Nov 2008 14:07:49 -0800
Subject: Re: Areca vs. ZFS performance testing.

On Nov 17, 2008, at 3:26 AM, Wes Morgan wrote:

>>> The Areca cards do NOT have the cache enabled by default. I
>>> ordered the optional battery and RAM upgrade for my collection of
>>> 1231ML cards. Even with the BBWC, the cache is not enabled by
>>> default. I had to go out of my way to enable it, on every single
>>> controller.
>
> Are you using these areca cards successfully with large arrays?

Yes, if you consider 24 x 1TB large.

> I found a 1680i card for a decent price and installed it this
> weekend, but since then I'm seeing the raidz2 pool that it's running
> hang so frequently that I can't even trust using it. The hangs occur
> in both 7-stable and 8-current with the new ZFS patch. Same exact
> settings that have been rock solid for me before now don't want to
> work at all. The drives are just set as JBOD -- the controller
> actually defaulted to this, so I didn't have to make any real
> changes in the BIOS.
>
> Any tips on your setup? Did you have any similar problems?

I talked to one of our storage vendors, who has sold several SuperMicro
systems like ours to clients running OpenSolaris, and those clients are
seeing stability issues similar to what we see on FreeBSD. A lack of
maturity in ZFS seems to underlie these problems.

It appears that running ZFS on FreeBSD will either thrill or horrify
you. When I tested with modest I/O requirements, it worked great and I
was tickled. But when I built these new systems as backup servers, I
was generating immensely more disk I/O. I started with 7.0-RELEASE and
saw crashes hourly. With tuning, and with 16GB of RAM, I was down to
crashing only once or twice a day (always memory related).
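(For reference, the sort of /boot/loader.conf memory tuning in question
looks roughly like the lines below. The values are illustrative only,
not a recommendation for any particular box; size them to your hardware
and workload.)

    # /boot/loader.conf -- example values only
    vm.kmem_size="1536M"
    vm.kmem_size_max="1536M"
    vfs.zfs.arc_max="512M"
    vfs.zfs.prefetch_disable="1"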
I ran for a month with one server on JBOD with RAIDZ2 and another with
RAIDZ across two RAID 5 arrays. Then I lost a disk on the JBOD server,
and consequently the array. Since RAID 5 had proved to run so much
faster, I ditched the Marvell cards, installed a pair of 1231MLs, and
reformatted that server with RAID 5 as well. Both 24-disk systems have
been running ZFS RAIDZ across two hardware RAID 5 arrays for months
since. If I built another system tomorrow, that's exactly how I'd do
it. After upgrading to 8-HEAD and applying The Great ZFS Patch, I am
content with only having to reboot the systems once every 7-12 days.

I have another system with only 8 disks and 4GB of RAM, with ZFS
running on a single RAID 5 array. Under the same workload as the
24-disk systems, it was crashing at least once a day. This was existing
hardware, so we were confident the crashes weren't a hardware problem.
I finally resolved it by wiping the disks clean, creating a GPT
partition on the array, and using UFS. The system hasn't crashed once
since, and it is far more responsive under heavy load than my ZFS
systems.

Of course, all of this might get a fair bit better soon:

http://svn.freebsd.org/viewvc/base?view=revision&revision=185029

Matt
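P.S. For anyone who wants to try the same GPT + UFS setup, the steps
are roughly the ones below. Here da0 stands in for the hardware RAID 5
volume and /backup is just an example mount point:

    gpart create -s gpt da0        # put a GPT on the array
    gpart add -t freebsd-ufs da0   # one partition using all the space
    newfs -U /dev/da0p1            # UFS2 with soft updates
    mount /dev/da0p1 /backup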