Date:      Tue, 10 Jul 2012 14:48:51 -0500
From:      Kevin Day <toasty@dragondata.com>
To:        Jason Usher <jusher71@yahoo.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: chaining JBOD chassis to server ... why am I scared ?   (ZFS)
Message-ID:  <AEA750AD-BF15-45D1-9F78-460FB6A40A40@dragondata.com>
In-Reply-To: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com>
References:  <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com>


On Jul 10, 2012, at 1:57 PM, Jason Usher <jusher71@yahoo.com> wrote:

> The de-facto configuration the smart folks are using for ZFS seems to be:
> 
> - 16/24/36 drive supermicro chassis
> - LSI 9211-8i internal cards
> - ZFS and probably raidz2 or raidz3 vdevs
> 
> Ok, fine.  But then I see some even smarter folks attaching the 48-drive 4U JBOD chassis to this configuration, probably using a different LSI card that has an external SAS cable.
> 
> So ... 84 drives accessible to ZFS on one system.  In terms of space and money efficiency, it sounds really great - fewer systems to manage, etc.
> 
> But this scares me ...
> 
> - two different power sources - so the "head unit" can lose power independent of the JBOD device ... how well does that turn out ?
> 
> - external cabling - has anyone just yanked that external SAS cable a few times, and what does that look like ?
> 
> - If you have a single SLOG, or a single L2ARC device, where do you put it ?  And then what happens if "the other half" of the system detaches from the half that the SLOG/L2ARC is in ?
> 
> - ... any number of other weird things ?
> 
> 
> Just how well does ZFS v28 deal with these kind of situations, and do I have a good reason to be awfully shy about doing this ?
> 


We do this for ftpmirror.your.org (which is ftp3.us.freebsd.org & others).
It's got an LSI 9280 in it, which has 3 external chassis (each with 24 3TB
drives) attached to it. Before putting it into use, we experimented with
pulling the power/data cables from random places while using it. Nothing we
did was any worse than the whole system just losing power. The only
difference was that in some cases losing all the storage would hang the
server until it was power cycled, but again... no worse than if everything
lost power. If something goes bad, it's pretty likely things are going to
go down, no matter the physical topology. There was no crazy data loss or
anything, if that's what you're worried about.
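
For what it's worth, the sort of split you're describing might look roughly
like the sketch below. This is only an illustration, not our exact layout;
the pool name and da* device names are made up, so adjust for your own
hardware:

    # Hypothetical layout: head unit holds the SLOG mirror and L2ARC,
    # the external JBOD holds the raidz2 data vdevs.
    zpool create tank \
        raidz2 da10 da11 da12 da13 da14 da15 \
        raidz2 da16 da17 da18 da19 da20 da21 \
        log mirror da2 da3 \
        cache da4

If the JBOD side goes away you've lost the data vdevs, so the pool is gone
and I/O can hang much like a total power loss. Losing only the head-unit
side is gentler: a dead cache device just gets dropped, and if the separate
log device is missing at import time you can, as far as I know, still bring
the pool back with:

    zpool import -m tank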

-- Kevin



