Date:      Mon, 19 Sep 2011 15:27:20 -0400
From:      Gary Palmer <gpalmer@freebsd.org>
To:        Jason Usher <jusher71@yahoo.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS obn FreeBSD hardware model for 48 or 96 sata3 paths...
Message-ID:  <20110919192720.GD10165@in-addr.com>
In-Reply-To: <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com>
References:  <alpine.GSO.2.01.1109191403020.7097@freddy.simplesystems.org> <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com>

On Mon, Sep 19, 2011 at 12:11:42PM -0700, Jason Usher wrote:
> 
> 
> --- On Mon, 9/19/11, Bob Friesenhahn <bfriesen@simple.dallas.tx.us> wrote:
> 
> 
> > > Hmmm... I understand this, but is there not any data
> > > that might transfer from multiple magnetic disks,
> > > simultaneously, at 6Gb/s, that could periodically max out the
> > > card bandwidth?  As in, all drives in a 12-drive array
> > > perform an operation on their built-in cache simultaneously?
> > 
> > The best way to deal with this is careful zfs pool
> > design, so that disks that can be expected to perform related
> > operations (e.g. in the same vdev) are split across
> > interface cards and I/O channels. This also helps with
> > reliability.
> 
> 
> Understood.
> 
> But again, can't that all be dismissed completely by having a one drive / one path build?  And since that adds no extra cost per drive, or per card ... only per motherboard ... it seems an easy cost to swallow - even if it's a rare edge case where it would ever be useful.

The message you quoted said to split the load across interface cards
and I/O channels (PCIe lanes, I presume).  Unless you are going to somehow
cram 30+ interface cards into a motherboard and chassis, I cannot see how
your query relates back to that statement unless you are talking about
configurations with SAS/SATA port multipliers, which you are determined
to avoid.  You *cannot* avoid having multiple disks on a single controller
card, and it is definitely Best Practice to split the drives in any array
across controllers so that a controller failure knocks at most a single
component out of a redundant RAID configuration.  Losing multiple
disks in a single RAID group (a vdev, in ZFS terms) normally
results in data loss unless you are extremely lucky.
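
For illustration only (the device names and the HBA-to-disk mapping
here are assumptions for the sake of the example, not anything from
this thread): with six 8-port HBAs showing up as da0-da7, da8-da15,
..., da40-da47, you would build each raidz2 vdev from one disk per
controller, something like

    zpool create tank \
        raidz2 da0 da8 da16 da24 da32 da40 \
        raidz2 da1 da9 da17 da25 da33 da41 \
        raidz2 da2 da10 da18 da26 da34 da42 \
        raidz2 da3 da11 da19 da27 da35 da43 \
        raidz2 da4 da12 da20 da28 da36 da44 \
        raidz2 da5 da13 da21 da29 da37 da45 \
        raidz2 da6 da14 da22 da30 da38 da46 \
        raidz2 da7 da15 da23 da31 da39 da47

Laid out that way, a dead HBA (or its cable) removes exactly one
member from each raidz2 vdev, which every vdev can survive; put all
eight of one HBA's disks into a single vdev instead and the same
failure takes the whole pool with it.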

I also think you are going to be hard pushed to find a motherboard that
meets your requirements and will have to use port multipliers, and I
somewhat suspect that with the right architecture the performance hit
is not nearly as bad as you expect.
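
Back-of-the-envelope, using assumed round numbers rather than anything
measured: a 6 Gb/s SATA link works out to roughly 600 MB/s, and a
7200 rpm disk sustains maybe 150 MB/s off the platters, so even a 1:4
port multiplier only just reaches the link limit (4 x 150 = 600 MB/s)
under purely sequential load, and random or mixed workloads sit far
below it.  The per-drive 6 Gb/s figure really only matters for short
bursts served out of the drives' caches.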

Gary


