From: Danny Carroll <danny@dannysplace.net>
Date: Thu, 06 Nov 2008 16:09:43 +1000
To: Ivan Voras
Message-ID: <49128A27.2080405@dannysplace.net>
Cc: freebsd-hardware@freebsd.org
Subject: Re: Areca vs. ZFS performance testing.

Ivan Voras wrote:
> Danny Carroll wrote:
>
>> - I have seen sustained 130MB/s reads from ZFS:
>>
>>                capacity     operations    bandwidth
>> pool         used  avail   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> bigarray    1.29T  3.25T  1.10K      0   140M      0
>> bigarray    1.29T  3.25T  1.00K      0   128M      0
>> bigarray    1.29T  3.25T    945      0   118M      0
>> bigarray    1.29T  3.25T  1.05K      0   135M      0
>> bigarray    1.29T  3.25T  1.01K      0   129M      0
>> bigarray    1.29T  3.25T    994      0   124M      0
>>
>>        ad4              ad6              ad8             cpu
>>  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
>>  0.00   0  0.00  65.90 375 24.10  63.74 387 24.08   0  0 19  2 78
>>  0.00   0  0.00  66.36 357 23.16  63.93 370 23.11   0  0 23  2 75
>> 16.00   0  0.00  64.84 387 24.51  63.79 389 24.20   0  0 23  2 75
>> 16.00   2  0.03  68.09 407 27.04  64.98 409 25.98   0  0 28  2 70
>
>> I'm curious whether the ~130M figure shown above is bandwidth from the
>> array or a total across all the drives. In other words, does it
>> include reading the parity information? I think it does not, since if
>> I add up the per-drive figures from iostat they exceed what zfs
>> reports by a factor of 5/4 (100M in zfs iostat = 5 x 25MB in standard
>> iostat).
>
> The numbers make sense - you have 5 drives in RAID-Z and 4/5ths of the
> total bandwidth is the "real" bandwidth. On the other hand, 25 MB/s is
> very slow for modern drives (assuming you're doing sequential
> read/write tests). Are you having hardware problems?

No, just the I/O from disk to net is slow...

>> Lastly, the Windows client which performed these tests was measuring
>> local bandwidth at about 30-50MB/s.
>> I believe this figure to be incorrect (given how much I transferred
>> in X seconds...)
>
> Using Samba? Search the lists for Samba performance advice - the
> default configuration isn't nearly optimal.

In my second post I mentioned that the I/O figure Windows was reporting
was right: I was getting about 50MB/s, while ZFS was reporting about
130MB/s. I checked this by copying 20GB and timing it with my watch,
just as a rough guide.

I am curious about this inconsistency, if anyone has any ideas.

-D
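For what it's worth, the 5/4 relationship between the per-drive iostat
numbers and the pool-level zpool iostat figure can be sketched as a
back-of-envelope calculation. This is just an illustration, assuming a
5-disk single-parity RAID-Z where a read touches all spindles; the
function name is made up for the sketch:

```python
# Sketch: relate per-disk throughput to RAID-Z data throughput.
# In a single-parity RAID-Z of N disks, a full-stripe read moves bytes
# off all N disks, but only (N - 1)/N of those bytes are data; the rest
# is parity, which zpool iostat does not count as pool bandwidth.

def raidz_data_bandwidth(per_disk_mbps, n_disks, parity=1):
    """Data-only (pool-level) bandwidth given per-disk throughput."""
    total = per_disk_mbps * n_disks              # raw bytes off all spindles
    return total * (n_disks - parity) / n_disks  # subtract the parity share

# Numbers from the iostat samples above: ~25 MB/s per drive, 5 drives.
pool = raidz_data_bandwidth(25.0, n_disks=5)
print(pool)  # 100.0 -> 5 x 25 MB/s raw, of which 4/5ths (100M) is data
```

So the two tools agree: iostat counts every byte read from every drive
(parity included), while zpool iostat reports only the data delivered by
the pool.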