Date:      Fri, 29 Jan 2016 12:06:16 -0600
From:      Graham Allan <allan@physics.umn.edu>
To:        freebsd-fs@freebsd.org
Subject:   quantifying zpool performance with number of vdevs
Message-ID:  <56ABAA18.90102@physics.umn.edu>

In many of the storage systems I've built to date I was slightly 
conservative (?) in wanting to keep any one pool confined to a single 
JBOD chassis. In doing this I've generally been using the Supermicro 
45-drive chassis with pools made of 4x (8+2) raidz2, with the remaining 
slots kept for spares, ZIL and L2ARC.
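
For concreteness, one of those 45-bay pools looks roughly like the 
sketch below (device names and the exact spare/log/cache split are made 
up for illustration, not copied from a real box):

   # 4x (8+2) raidz2 data vdevs, plus mirrored ZIL, one L2ARC device
   # and two spares = 45 slots
   zpool create tank \
     raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  \
     raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
     raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
     raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39 \
     log mirror da40 da41 \
     cache da42 \
     spare da43 da44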

Now I have several servers with 3-4 such chassis, and reliability has 
been good enough that I'd feel comfortable spanning chassis with a 
single pool, if there were a worthwhile performance benefit.

Obviously theory says that IOPS should scale with the number of vdevs 
(each raidz2 vdev delivers roughly one disk's worth of small random 
IOPS), but it would be nice to try and quantify that.
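
Back-of-envelope, assuming ~7200rpm drives at very roughly 150-200 small 
random read IOPS each, and that each raidz2 vdev behaves like a single 
disk for small random reads:

    4 vdevs (one 45-bay chassis)    ~  600 -  800 random read IOPS
    8 vdevs (two chassis)           ~ 1200 - 1600
   12 vdevs (three chassis)         ~ 1800 - 2400

Sequential throughput should instead scale more with the number of data 
disks, so it's mainly the random/metadata-heavy workloads where going 
wider per pool ought to pay off.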

Getting relevant data out of iozone seems problematic on machines with 
128GB+ RAM - it's hard to blow out the ARC.

I do seem to get more valid-looking results if I run "zfs set 
primarycache=metadata" on my test dataset - that should mostly take the 
ARC out of the picture (arcstat output seems to bear this out, though 
there could still be L2ARC effects).
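
For what it's worth, the sort of run I have in mind looks roughly like 
this (fio from ports; the dataset name, sizes and job count are made up, 
the only real point being that the file set is much larger than RAM and 
that secondarycache=none rules out L2ARC as well):

   # separate dataset so the cache settings don't affect the rest of the pool
   zfs create tank/bench
   # serve only metadata from ARC; data reads go to disk
   zfs set primarycache=metadata tank/bench
   # keep L2ARC out of the picture too
   zfs set secondarycache=none tank/bench

   # 8 jobs x 64g files of 8k random reads, run long enough to settle
   fio --name=randread --directory=/tank/bench \
       --rw=randread --bs=8k --ioengine=psync \
       --size=64g --numjobs=8 --group_reporting \
       --runtime=600 --time_based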

I wonder if anyone has any thoughts on this, and also on the 
benefits/risks of moving from 40-drive pools to 80- or 120-drive pools.

Graham
-- 


