From: Dan Naumov <dan.naumov@gmail.com>
To: Bob Friesenhahn
Cc: freebsd-fs@freebsd.org, Alexander Motin, Jason Edwards, FreeBSD-STABLE Mailing List, freebsd-questions@freebsd.org
Date: Mon, 25 Jan 2010 10:32:19 +0200
Subject: Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

On Mon, Jan 25, 2010 at 9:34 AM, Dan Naumov wrote:
> On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn wrote:
>> On Mon, 25 Jan 2010, Dan Naumov wrote:
>>>
>>> I've checked with the manufacturer and it seems that the Sil3124 in
>>> this NAS is indeed a PCI card. More info on the card in question is
>>> available at
>>> http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
>>> I have the card described later on the page, the one with 4 SATA ports
>>> and no eSATA. Alright, so it being PCI is probably a bottleneck in
>>> some ways, but that still doesn't explain performance THAT bad,
>>> considering that the same hardware, same disks and same disk controller
>>> push over 65 MB/s in both reads and writes in Win2008. And again, I am
>>> pretty sure that I've had "close to expected" results when I was
>>
>> The slow PCI bus and this card look like the bottleneck to me.
>> Remember that your Win2008 tests were with just one disk, your zfs
>> performance with just one disk was similar to Win2008, and your zfs
>> performance with a mirror was just under 1/2 that.
>>
>> I don't think that your performance results are necessarily out of
>> line for the hardware you are using.
>>
>> On an old Sun SPARC workstation with retrofitted 15K RPM drives on an
>> Ultra-160 SCSI channel, I see zfs mirror write performance of
>> 67,317 KB/second and read performance of 124,347 KB/second. The drives
>> themselves are capable of performance in the 100 MB/second range.
>> Similar to yourself, I see 1/2 the write performance due to bandwidth
>> limitations.
>>
>> Bob
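That math does seem to add up. A rough back-of-envelope check (this
assumes the Sil3124 sits in a plain 32-bit/33 MHz PCI slot, which is
what the page linked above suggests):

    # 32-bit/33 MHz PCI peaks at ~133 MB/s in theory, shared by all
    # four ports on the card. A 2-disk mirror writes every block to
    # both disks, so each write crosses that shared bus twice:
    echo "scale=1; 65 / 2" | bc    # ~32.5 MB/s, close to the observed ~28 MB/s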
> There is lots of very sweet irony in my particular situation.
> Initially I was planning to use a single X25-M 80GB SSD on the
> motherboard SATA port for the actual OS installation, as well as to
> dedicate 50GB of it to becoming a designated L2ARC vdev for my ZFS
> mirrors. The SSD attached to the motherboard port would be recognized
> only as a SATA150 device for some reason, but I was still seeing
> 150 MB/s throughput and sub-0.1 ms latencies on that disk, simply
> because of how crazy good the X25-Ms are. However, I ended up having
> very bad issues with the Icydock 2.5" to 3.5" converter I was using
> to fit the SSD in the system: it would randomly drop write IO under
> heavy load due to bad connectors. Having finally figured out why my
> OS installations on the SSD kept going belly up while applying
> updates, I decided to move the SSD to my desktop and use it there
> instead, additionally thinking that perhaps my idea for the SSD was
> crazy overkill for what I need the system to do. Ironically, now that
> I am seeing how horrible the performance is when I am operating on
> the mirror through this PCI card, I realize that my idea was actually
> pretty bloody brilliant; I just didn't really know why at the time.
>
> An L2ARC device on the motherboard port would really help me with
> random read IO, but to work around the utterly poor write performance,
> I would also need a dedicated SLOG ZIL device. The catch is that while
> L2ARC devices can be removed from the pool at will (should the device
> up and die all of a sudden), dedicated ZILs cannot, and currently a
> "missing" ZIL device will render the pool it belongs to unable to
> import and therefore inaccessible. There is some work happening in
> Solaris to implement removing SLOGs from a pool, but that work hasn't
> found its way into FreeBSD yet.
>
> - Sincerely,
> Dan Naumov

OK, final question: if/when I go about adding more disks to the system
and want redundancy, am I right in thinking that a ZFS pool of a
disk1+disk2 mirror plus a disk3+disk4 mirror (a la RAID10) would
completely murder my write and read performance, dropping it even
further below the current 28 MB/s / 50 MB/s I am seeing with 2 disks
on that PCI controller, and that in order to have the least negative
impact, I should simply run 2 independent mirrors in 2 independent
pools (with the 5th disk slot in the NAS given to a non-redundant
single disk running off the one available SATA port on the
motherboard)? I've put rough sketches of both layouts below my sig.

- Sincerely,
Dan Naumov
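PS: for anyone following along at home, here is roughly what the
cache/log plumbing discussed above looks like at the zpool level. A
minimal sketch only: the pool name "tank" and the device names are
made up, and the removal behaviour described is that of the pool
version shipping with 8.0-RELEASE:

    # add a slice of the SSD as L2ARC (read cache) and another
    # slice as a dedicated log (SLOG/ZIL) device:
    zpool add tank cache ada1s1
    zpool add tank log ada1s2

    # cache devices can be removed again at any time...
    zpool remove tank ada1s1

    # ...but log devices cannot be removed on this pool version, and
    # a pool with a missing dedicated log device will refuse to import.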
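And the two layouts from my question side by side, again with made-up
device names:

    # Option 1: one pool of two striped mirrors (a la RAID10); all
    # four disks hang off the card, so every read and write still
    # funnels through the same shared PCI bus:
    zpool create tank mirror ada0 ada1 mirror ada2 ada3

    # Option 2: two independent 2-disk mirrors in two independent pools:
    zpool create tank1 mirror ada0 ada1
    zpool create tank2 mirror ada2 ada3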