Date: Sat, 19 Feb 2011 11:37:20 +0300 (MSK)
From: Dmitry Morozovsky <marck@rinet.ru>
To: Jeremy Chadwick
Cc: freebsd-scsi@freebsd.org, stable@freebsd.org, "Kenneth D. Merry"
Subject: Re: mps(4) driver (LSI 6Gb SAS) committed to stable/8
In-Reply-To: <20110218231306.GA69028@icarus.home.lan>
References: <20110218164209.GA77903@nargothrond.kdm.org>
 <20110218225204.GA84087@nargothrond.kdm.org>
 <20110218231306.GA69028@icarus.home.lan>

On Fri, 18 Feb 2011, Jeremy Chadwick wrote:

JC> > KDM> The best stress test I have found has been to just do a single
JC> > KDM> sequential write stream with ZFS, i.e.:
JC> > KDM>
JC> > KDM> cd /path/to/zfs/pool
JC> > KDM> dd if=/dev/zero of=foo bs=1M
JC> > KDM>
JC> > KDM> Just let it run for a long period of time and see what happens.
JC> >
JC> > Well, provided that I'm planning to have ZFSv28 in place, wouldn't
JC> > /dev/random be more appropriate?
JC>
JC> No -- /dev/urandom maybe, but not /dev/random. /dev/urandom will also
JC> induce significantly higher CPU load than /dev/zero will. Don't forget
JC> that ZFS is a processor-centric (read: no offloading) system.

We're not on Linux:

root@beaver:/FreeBSD/src.8# l /dev/*random
crw-rw-rw-  1 root  wheel    0,  23 Feb 15 13:50 /dev/random
lrwxr-xr-x  1 root  wheel         6 Feb 15 13:50 /dev/urandom@ -> random

JC> I tend to try different block sizes (starting at bs=8k and working up
JC> to bs=256k) for sequential benchmarks. The "sweet spot" on most disks
JC> I've found is 64k. Otherwise use benchmarks/bonnie++.

Ah yes, bonnie++ was on my list too, thanks for the reminder.

-- 
Sincerely,
D.Marck                                     [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer:                                 marck@FreeBSD.org ]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
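
[Editor's note: a minimal sketch of the block-size sweep Jeremy describes
above. The mountpoint (/tank/test) and the ~4 GB per-pass size are my own
assumptions for illustration, not anything from the thread.]

  #!/bin/sh
  # Sweep dd block sizes from 8k to 256k against a ZFS filesystem
  # (hypothetical mountpoint; adjust to your pool).
  cd /tank/test || exit 1

  for bs in 8 16 32 64 128 256; do
      count=$((4 * 1024 * 1024 / bs))   # keep each pass at roughly 4 GiB
      echo "== bs=${bs}k =="
      # dd reports its summary on stderr; keep just the throughput line
      dd if=/dev/zero of=foo bs=${bs}k count=${count} 2>&1 | tail -1
      rm -f foo
  done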