From owner-freebsd-fs@FreeBSD.ORG Fri Nov 16 20:28:34 2012
From: Julian Elischer <julian@freebsd.org>
To: Mike McLaughlin
Cc: freebsd-fs@freebsd.org, jpaetzel@ixsystems.com
Date: Fri, 16 Nov 2012 12:28:21 -0800
Message-ID: <50A6A1E5.4070000@freebsd.org>
Subject: Re: SSD recommendations for ZFS cache/log
List-Id: Filesystems

On 11/16/12 12:15 PM, Mike McLaughlin wrote:
>> On Thu, Nov 15, 2012 at 1:18 AM, John wrote:
>>
>>> ----- Julian Elischer's Original Message -----
>>>> On 11/13/12 1:19 PM, Jason Keltz wrote:
>>>>> On 11/13/2012 12:41 PM, Bob Friesenhahn wrote:
>>>>>> On Mon, 12 Nov 2012, kpneal@pobox.com wrote:
>>>>>>> With your setup of 11 mirrors you have a good mixture of read
>>>>>>> and write performance, but you've compromised on the safety.
>>>>>>> The reason that RAID 6
>>> ...
>>>>> By the way - on another note - what do you or other list members
>>>>> think of the new Intel SSD DC S3700 as a ZIL? It sounds very
>>>>> promising, when it's finally available. I spent a lot of time
>>>>> researching ZILs today, and one thing I can say is that I have a
>>>>> major headache now because of it!!
>>>> The ZIL is best served by battery-backed RAM or something similar:
>>>> it's tiny and not a really good fit for an SSD (maybe just a
>>>> partition). L2ARC, on the other hand, is a really good use for an
>>>> SSD.
>>> Well, since you brought the subject up :-)
>>>
>>> Do you have any recommendations for an NVRAM unit usable with
>>> FreeBSD?
>>>
>> I've always had my eye on something like this for the ZIL but have
>> never had the need to explore it: http://www.ddrdrive.com/
>> Most recommendations I've seen have also been around mirrored 15krpm
>> disks of some sort, or even a cheaper battery-backed RAID controller
>> in front of decent disks. For a ZIL it would only need a tiny amount
>> of RAM anyway.
>>
> First, I wholeheartedly agree with some of the other posts calling for
> more documentation and FAQs on ZFS; it's sorely lacking, and there is
> a whole lot of FUD and outdated information out there.
>
> I've tested several SSDs, I have a few DDRdrives, and I have a ZeusRAM
> (in a TrueNAS appliance - and another on order that I can test with
> Solaris). The DDRdrive is OK at best. The latency is quite good, but
> it's not very high throughput, mostly because it's PCIe 1x, I believe.
> It can do lots of very small writes but tops out at about 130MB/sec no
> matter the block size. If you're using GbE, you're set. If you're
> using LAGG or 10GbE, it's not great for the price. I also had a wicked
> evening a few days ago when my building lost power for a few hours at
> night and the UPSs failed.
> The UPS that the DDRdrive was attached to died at the same time as the
> one backing the server, and it broke my zpool quite severely - none of
> the typical recovery commands worked at all (this was an OpenIndiana
> box), and the DDRdrive lost 100% of its configuration - the system
> thought it was a brand-new drive that didn't belong in the pool (it
> lost its partition table, label, etc.). It was a disappointing display
> by the DDRdrive. I know the power is my own fault, but the thing is
> not a good idea unless you are 100% certain its battery will outlast
> the system UPS/shutdown. The supercap-equipped SSD I've had far and
> away the best luck with is the Intel 320. I've got a couple of systems
> with 300GB Intel 320s, partitioned to use 15GB for the ZIL (and the
> rest left empty). I've been using them for about a year now and have
> been monitoring the wear. They will not exceed their expected write
> lifetime until they've written about 1.2PB or more - several years at
> a fairly heavy workload for me. They can also do 100-175MB/sec and
> ~10-20k IOPS depending on the workload, often outpacing the DDRdrives.
> I'm going to get my hands on the new Intel drives with supercaps as
> soon as they're available - they look quite promising.
>
> As for the ZeusRAM, it's exceedingly fast at the system level. I
> haven't been able to test it thoroughly in my setup, though - it seems
> FreeBSD has a pretty severe performance issue with sync writes over
> NFS written to the ZIL, at least when backing VMware. I have a very
> high-end system from iXsystems that just can't do more than ~125MB/sec
> of writes (just above 1GbE line rate). It just flat-lines. The ZeusRAM
> is certainly not the bottleneck: doing O_SYNC dd writes over NFS from
> other *nix sources I can write nearly 500MB/sec (at a 4k block size).
> My Solaris-based systems do not hit the 125MB/sec barrier that FreeBSD
> seems to have with VMware. I'm using a 10GbE network for my VMware
> storage.
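[To put the Intel 320 numbers above in perspective, here is a rough
back-of-the-envelope. The 1.2PB endurance figure and the 15GB partition
are from the post; the assumed average write rate and the ~5s txg
interval are illustrative assumptions, not measurements from the thread.]

```python
# Back-of-the-envelope math for the Intel 320 ZIL numbers quoted above.
# The endurance figure (~1.2 PB) is from the post; the sustained average
# write rate is an assumption for illustration only.

SECONDS_PER_YEAR = 365 * 24 * 3600

endurance_bytes = 1.2e15      # ~1.2 PB quoted write lifetime
avg_write_rate = 10e6         # assumed sustained 10 MB/s of ZIL traffic

years = endurance_bytes / avg_write_rate / SECONDS_PER_YEAR
print(f"~{years:.1f} years to wear out")      # ≈ 3.8 years

# A common ZIL sizing rule of thumb is a couple of transaction-group
# intervals of peak ingest. At 10GbE line rate (~1.25 GB/s) with an
# assumed ~5 s txg interval, two intervals come to:
zil_bytes = 2 * 5 * 1.25e9
print(f"~{zil_bytes / 1e9:.1f} GB of ZIL")    # ≈ 12.5 GB
```

[At the assumed rate this squares with "several years at a fairly heavy
workload", and the sizing rule of thumb lands close to the 15GB
partition described above.]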
I know someone mentioned the Fusion-io drives as expensive, but it would
be good to get iXsystems to let us know how much the new cards are,
especially the small 'consumer' one (the ioFX). I work there (Fusion-io)
but I have NO IDEA what the price is. I know iXsystems has tested them
in their TrueNAS boxes as L2ARC, but I don't really know the numbers. We
do have one (old) card in panther2 (I think) in the FreeBSD cluster, so
if anyone wants to try it out as a ZIL or L2ARC (and knows someone with
access) then there is that possibility.
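[For anyone who wants to exercise a candidate log device the way the
O_SYNC dd test above does, a short script can approximate it. This is a
sketch, not a benchmark from the thread; the target path and write count
are placeholder values, and the 4 KB block size matches the dd test
described above.]

```python
import os
import time

# Minimal O_SYNC write microbenchmark, similar in spirit to the O_SYNC
# dd test described in the thread. Point PATH at a file on the
# filesystem under test (e.g. an NFS mount backed by the log device).
PATH = "/tmp/sync_write_test"   # hypothetical target path
BLOCK = b"\0" * 4096            # 4 KB per write, as in the dd test
COUNT = 1024                    # 4 MB total; raise this for a real run

# O_SYNC makes each write() block until the data reaches stable
# storage, which is exactly the traffic a ZIL device absorbs.
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
start = time.monotonic()
for _ in range(COUNT):
    os.write(fd, BLOCK)
os.close(fd)
elapsed = time.monotonic() - start

mb = COUNT * len(BLOCK) / 1e6
print(f"{mb:.1f} MB in {elapsed:.2f}s = {mb / elapsed:.1f} MB/s")
os.unlink(PATH)
```

[Throughput will obviously depend entirely on the device and transport;
the point is only that per-write sync latency, not bandwidth, dominates
at small block sizes.]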