Date:      Thu, 9 Jul 2015 13:04:27 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        FreeBSD - <freebsd-questions@freebsd.org>
Subject:   Re: Gmirror/graid or hardware raid?
Message-ID:  <BA8F36E0-C3B1-4224-A990-F1B069A773CA@kraus-haus.org>
In-Reply-To: <20150709163926.GA83027@neutralgood.org>
References:  <CA+yoEx-T5V3Rchxugke3+oUno6SwXHW1+x466kWtb8VNYb+Bbg@mail.gmail.com> <917A821C-02F8-4F96-88DA-071E3431C335@mac.com> <7F08761C-556E-4147-95DB-E84B4E5179A5@kraus-haus.org> <20150709163926.GA83027@neutralgood.org>

On Jul 9, 2015, at 12:39, kpneal@pobox.com wrote:

> On Thu, Jul 09, 2015 at 10:32:45AM -0400, Paul Kraus wrote:
>> I do NOT use RaidZ for anything except bulk backup data where capacity
>> is all that matters and performance is limited by lots of other factors.
>
> A 4-drive raidz2 is more reliable than a pair of two-drive mirrors, striped.
> But the pair of mirrors will perform much better.

Agreed. In terms of MTTDL (Mean Time To Data Loss), which Richard Elling
did a lot of research on, the order from best to worst is:

4-way mirror
RAIDz3
3-way mirror
RAIDz2
2-way mirror
RAIDz1
Stripe (no redundancy)

But … the MTTDL for a 2-way mirror and a 2-drive RAIDz1 are the same.
The same can be said of a 3-way mirror and a 3-drive RAIDz2, and a 4-way
mirror and a 4-drive RAIDz3 also have the same MTTDL. In reality, no one
configures a RAIDz1 of 2 drives, a RAIDz2 of 3 drives, or a RAIDz3 of 4
drives. Take a look at Richard's blog post on this topic here:
http://blog.richardelling.com/2010/02/zfs-data-protection-comparison.html
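
To make that concrete, the two 4-drive layouts kpneal compares would be
built roughly like this (pool name "tank" and devices da0 through da3 are
placeholder examples, not a recommendation):

    # a pair of 2-way mirrors, striped: better performance
    zpool create tank mirror da0 da1 mirror da2 da3

    # a 4-drive RAIDz2: better MTTDL, same usable capacity (2 drives)
    zpool create tank raidz2 da0 da1 da2 da3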

> It's all a balancing act of performance vs reliability. *shrug*

Don't forget cost :-) Fast - Cheap - Reliable … maybe you can have
two :-)

> My main server has a three-way mirror and that's it. Three because there
> are only three brands of server-grade SAS drives.

My home server has 3 stripes of 3-way mirrors. And yes, each vdev is
made up of three different drives (in some cases the same manufacturer,
but different models and production dates).
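
For example, something along these lines (device names are illustrative;
the point is that each mirror vdev mixes three different drives):

    # a pool of three striped 3-way mirror vdevs
    zpool create tank \
        mirror da0 da3 da6 \
        mirror da1 da4 da7 \
        mirror da2 da5 da8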

>
>> I also create a "do-not-remove" dataset in every zpool with 1 GB
>> reserved and quota. ZFS behaves very, very badly when FULL. This gives
>> me a cushion when things go badly so I can delete whatever used up all
>> the space … Yes, ZFS cannot delete files if the FS is completely FULL.
>> I leave the "do-not-remove" dataset unmounted so that it cannot be used.
>=20
> Isn't this fixed in FreeBSD 10.2? Or was it 11? I can't remember because
> I haven't upgraded to that point yet. I do remember complaints from people
> who did upgrade and then saw they didn't have as much space free as they
> did before the upgrade.

I was not aware this had been accepted as a bug to fix :-) It has been a
detail to note for ZFS from the very beginning. Do you know if this is a
FBSD-specific fix or coming down from OpenZFS?
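
For anyone who wants to set up the "do-not-remove" cushion described
above, it amounts to something like this (pool and dataset names are just
examples; canmount=off is one way to keep the dataset permanently
unmounted):

    # ZFS is copy-on-write, so even a delete must write new metadata;
    # a reservation keeps that last bit of space available
    zfs create -o reservation=1G -o quota=1G -o canmount=off tank/do-not-remove

    # if the pool ever fills, release the cushion, clean up, then restore it
    zfs set reservation=none tank/do-not-remove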

--
Paul Kraus
paul@kraus-haus.org



