Date:      Thu, 20 Oct 2011 10:15:44 +0200
From:      Damien Fleuriot <ml@my.gd>
To:        Dennis Glatting <freebsd@penx.com>
Cc:        Albert Shih <albert.shih@obspm.fr>, "zfs-discuss@opensolaris.org" <zfs-discuss@opensolaris.org>, "Fajar A. Nugraha" <work@fajar.net>, "freebsd-questions@freebsd.org" <freebsd-questions@freebsd.org>
Subject:   Re: [zfs-discuss] ZFS on Dell with FreeBSD
Message-ID:  <CBAD4A01-208B-486C-9A30-70EDFF428169@my.gd>
In-Reply-To: <alpine.BSF.2.00.1110192105300.76956@Elmer.dco.penx.com>
References:  <20111019141443.GQ4592@pcjas.obspm.fr> <CAC4DAF9.74F3B%dave.lists@alfordmedia.com> <CAG1y0sfbQuCZy%2BhEAUGpkWfpxGm=eCJuJdCVw=3sTjZCCdLpuA@mail.gmail.com> <alpine.BSF.2.00.1110192105300.76956@Elmer.dco.penx.com>



On 20 Oct 2011, at 05:24, Dennis Glatting <freebsd@penx.com> wrote:

>
>
> On Thu, 20 Oct 2011, Fajar A. Nugraha wrote:
>
>> On Thu, Oct 20, 2011 at 7:56 AM, Dave Pooser <dave.zfs@alfordmedia.com> wrote:
>>> On 10/19/11 9:14 AM, "Albert Shih" <Albert.Shih@obspm.fr> wrote:
>>>
>>>> When we buy an MD1200 we need a RAID PERC H800 card on the server
>>>
>>> No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
>>> I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware -- then
>>> it presents the individual disks and ZFS can handle redundancy and
>>> recovery.
>>
>> Exactly, thanks for suggesting an exact controller model that can
>> present disks as JBOD.
>>
>> With hardware RAID, you'd pretty much rely on the controller to behave
>> nicely, which is why I suggested simply creating one big volume for ZFS
>> to use (so you pretty much only use features like snapshots, clones,
>> etc., but don't use ZFS's self-healing). Again, others might (and do)
>> disagree and suggest using one volume per individual disk (even when
>> you're still relying on the hardware RAID controller). But ultimately
>> there's no question that the best possible setup would be to present
>> the disks as JBOD and let ZFS handle them directly.
>>
>
> I saw something interesting and different today, which I'll just throw
> out.
>
> A buddy has an HP370 loaded with disks (not the only machine that
> provides these services, rather the one he was showing off). The 370's
> disks are managed by the underlying hardware RAID controller, which he
> built as multiple RAID1 volumes.
>
> ESXi 5.0 is loaded and in control of the volumes, some of which are
> partitioned. Consequently, his result is vendor-supported interfaces
> between disks, RAID controller, ESXi, and managing/reporting software.
>
> The HP370 has multiple FreeNAS instances whose "disks" are the "disks"
> (volumes/partitions) from ESXi (all on the same physical hardware). The
> FreeNAS instances are partitioned according to their physical and
> logical function within the infrastructure, whether by physical or
> logical connections. The FreeNAS instances then serve their "disks" to
> consumers.
>
> We have not done any performance testing. Generally, his NAS consumers
> are not I/O pigs, though we want the best performance possible (some
> consumers are over the WAN, which may render any HP/ESXi/FreeNAS
> performance issues moot). (I want to do some performance testing
> because, well, it may have significant amusement value.) A question we
> have is whether ZFS (ARC, maybe L2ARC) within FreeNAS is possible or
> would provide any value.
>


Possible, yes.
Provides value, somewhat.

You still get to use snapshots, compression, dedup...
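Those features work no matter what sits underneath the vdevs; for example (pool/dataset names here are made up for illustration):

```shell
# These all work even when the pool sits on one big hardware-RAID volume
# (pool/dataset name "tank/data" is hypothetical):
zfs set compression=lzjb tank/data        # transparent compression
zfs set dedup=on tank/data                # block-level dedup (RAM-hungry!)
zfs snapshot tank/data@before-upgrade     # cheap point-in-time snapshot
zfs rollback tank/data@before-upgrade     # ...and instant rollback
```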
You don't get ZFS self-healing though, which IMO is a big loss.
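A quick way to see what you're giving up: a scrub on a pool with ZFS-level redundancy (mirror/raidz) repairs bad blocks, while on a single vdev backed by a hardware RAID volume it can only report them. A CLI sketch, pool name "tank" hypothetical:

```shell
# Hypothetical pool name "tank"; an admin-command sketch, not a recipe.
zpool status tank      # mirror/raidz vdevs => ZFS can self-heal;
                       # one vdev on a RAID volume => detect-only
zpool scrub tank       # read every block and verify checksums
zpool status -v tank   # afterwards: CKSUM column shows errors found
                       # (and repaired, if ZFS-level redundancy exists)
```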

Regarding the ARC, it totally depends on the kind of files you serve and the amount of RAM you have available.

If you keep serving huge, different files all the time, it won't help as much as when clients request the same small/average files over and over again.
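Whether the ARC is earning its keep is easy to check from the shell. A rough sketch using FreeBSD's ZFS kstat counters (sysctl names as on FreeBSD 8.x; verify they exist on your FreeNAS build):

```shell
# Rough ARC hit-ratio check from FreeBSD's ZFS kstat counters.
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
echo "$hits $misses" | awk '{ printf "ARC hit ratio: %.1f%%\n", 100 * $1 / ($1 + $2) }'
```

A consistently low ratio under your real workload would suggest the ARC (or an L2ARC device) isn't buying you much for that traffic.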



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?CBAD4A01-208B-486C-9A30-70EDFF428169>