Date:      Fri, 29 Jan 2016 21:28:43 +0000
From:      Steven Hartland <killing@multiplay.co.uk>
To:        freebsd-fs@freebsd.org
Subject:   Re: quantifying zpool performance with number of vdevs
Message-ID:  <56ABD98B.3070808@multiplay.co.uk>
In-Reply-To: <56ABAA18.90102@physics.umn.edu>
References:  <56ABAA18.90102@physics.umn.edu>

Always a good read is:
http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/

On 29/01/2016 18:06, Graham Allan wrote:
> In many of the storage systems I've built to date I've been slightly 
> conservative (?) in wanting to keep any one pool confined to a single 
> JBOD chassis. In doing this I've generally been using the Supermicro 
> 45-drive chassis with pools made of 4x (8+2) raidz2, with the other 
> slots kept for spares, ZIL and L2ARC.
>
> Now I have several servers with 3-4 such chassis, and reliability has 
> also been good enough that I'd feel more comfortable about spanning 
> chassis, if there were a worthwhile performance benefit.
>
> Obviously theory says that IOPS should scale with the number of vdevs, 
> but it would be nice to try to quantify that.
>
> Getting relevant data out of iozone seems problematic on machines with 
> 128GB+ RAM - it's hard to blow out the ARC.
>
> It does seem like I get more valid-looking results if I run 
> "zfs set primarycache=metadata" on my test dataset; this should mostly 
> disable the ARC for data (which seems to be borne out by arcstat 
> output, though there could still be L2ARC effects).
>
> Wonder if anyone has any thoughts on this, and also on benefits/risks 
> of moving from 40-drive to 80- or 120-drive pools.
>
> Graham
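
For reference, a pool of the shape described above (4x 10-disk raidz2 in
one 45-bay chassis) would be built along these lines; the da* device
names and the spare/log/cache assignments are only placeholders for
whatever your enumeration actually gives you:

    # 4 top-level raidz2 vdevs, 8 data + 2 parity disks each
    zpool create tank \
        raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  \
        raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
        raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
        raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39

    # remaining slots: spares, mirrored SLOG, L2ARC (placeholder devices)
    zpool add tank spare da40 da41
    zpool add tank log mirror da42 da43
    zpool add tank cache da44

Spanning a second or third chassis is just more raidz2 vdevs appended to
that list. Small random IOPS should scale roughly with the number of
top-level vdevs (each raidz2 vdev behaves more like a single disk for
random reads, as the stripe-width post above explains), while the usual
risk is that losing any one vdev now takes out a pool that spans all the
chassis.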
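
On the measurement side, primarycache=metadata on a scratch dataset is
the usual trick, and setting secondarycache=none as well rules out the
L2ARC effects you mention. Something like the following is a minimal
sketch (fio from ports; the dataset name, file size and job counts are
only placeholders) for getting random-read IOPS numbers that aren't
dominated by cache:

    # scratch dataset: cache metadata only, and keep it out of L2ARC
    zfs create tank/bench
    zfs set primarycache=metadata tank/bench
    zfs set secondarycache=none tank/bench

    # 4k random reads over files well beyond RAM size
    fio --name=randread --directory=/tank/bench \
        --rw=randread --bs=4k --size=32g --numjobs=8 \
        --ioengine=posixaio --iodepth=16 \
        --runtime=300 --time_based --group_reporting

Running the same job against pools built from 4, 8 and 12 vdevs (one,
two and three chassis) should show fairly directly whether the IOPS
scaling matches the per-vdev expectation.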



