Date:      Fri, 18 Feb 2011 16:05:21 -0800
From:      "Kevin Oberman" <oberman@es.net>
To:        Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc:        freebsd-scsi@freebsd.org, stable@freebsd.org, "Kenneth D. Merry" <ken@freebsd.org>, Dmitry Morozovsky <marck@rinet.ru>
Subject:   Re: mps(4) driver (LSI 6Gb SAS) committed to stable/8
Message-ID:  <20110219000521.9918B1CC29@ptavv.es.net>
In-Reply-To: Your message of "Fri, 18 Feb 2011 15:13:06 PST." <20110218231306.GA69028@icarus.home.lan> 

> Date: Fri, 18 Feb 2011 15:13:06 -0800
> From: Jeremy Chadwick <freebsd@jdc.parodius.com>
> Sender: owner-freebsd-stable@freebsd.org
> 
> On Sat, Feb 19, 2011 at 02:05:33AM +0300, Dmitry Morozovsky wrote:
> > On Fri, 18 Feb 2011, Kenneth D. Merry wrote:
> > 
> > KDM> > KDM> I just merged the mps(4) driver to stable/8, for those of you with LSI 6Gb
> > KDM> > KDM> SAS hardware.
> > KDM> > 
> > KDM> > [snip]
> > KDM> > 
> > KDM> > Again, thank you very much Ken.  I'm planning to stress test this on an 846
> > KDM> > case filled with 12 (so far) WD RE4 disks organized as raidz2, and will
> > KDM> > post the results.
> > KDM> > 
> > KDM> > Any hints on particularly I/O-stressing patterns?  Off the top of my head,
> > KDM> > I'm planning multiple parallel -j'ed builds, parallel tars, and *SQL
> > KDM> > benchmarks -- what else would you suggest?
> > KDM> 
> > KDM> The best stress test I have found has been to just do a single sequential
> > KDM> write stream with ZFS.  i.e.:
> > KDM> 
> > KDM> cd /path/to/zfs/pool
> > KDM> dd if=/dev/zero of=foo bs=1M
> > KDM> 
> > KDM> Just let it run for a long period of time and see what happens.
> > 
> > Well, given that I'm planning to have ZFSv28 in place, wouldn't
> > /dev/random be more appropriate?
> 
> No -- /dev/urandom maybe, but not /dev/random.  /dev/urandom will also
> induce significantly higher CPU load than /dev/zero will.  Don't forget
> that ZFS is a processor-centric (read: no offloading) system.
> 
> I tend to try different block sizes (starting at bs=8k and working up to
> bs=256k) for sequential benchmarks.  The "sweet spot" I've found on most
> disks is 64k.  Otherwise, use benchmarks/bonnie++.

When FreeBSD updated its random number engine a couple of years ago,
/dev/random and /dev/urandom became the same thing. Unless I am missing
something, switching between them should make no difference.
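
This is easy to confirm: on a stock FreeBSD install, /dev/urandom is just
a symlink to /dev/random, so both names reach the same device:

    # List both nodes; urandom should show up as a symlink to random.
    ls -l /dev/random /dev/urandom
    # Expected shape of the output (a sketch, not captured from a host):
    #   crw-rw-rw-  1 root  wheel  ...  /dev/random
    #   lrwxr-xr-x  1 root  wheel  ...  /dev/urandom -> random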
-- 
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net			Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4  EADA 927D EBB3 987B 3751


