Date:      Mon, 13 Jul 2009 15:23:20 -0400
From:      Maxim Khitrov <mkhitrov@gmail.com>
To:        mahlerrd@yahoo.com
Cc:        Free BSD Questions list <freebsd-questions@freebsd.org>
Subject:   Re: ZFS or UFS for 4TB hardware RAID6?
Message-ID:  <26ddd1750907131223k9e20142n1fbc41e16d82bf87@mail.gmail.com>
In-Reply-To: <937260.17107.qm@web51002.mail.re2.yahoo.com>
References:  <937260.17107.qm@web51002.mail.re2.yahoo.com>

On Mon, Jul 13, 2009 at 2:13 PM, Richard Mahlerwein <mahlerrd@yahoo.com> wrote:
>
> --- On Mon, 7/13/09, Maxim Khitrov <mkhitrov@gmail.com> wrote:
>
>> From: Maxim Khitrov <mkhitrov@gmail.com>
>> Subject: Re: ZFS or UFS for 4TB hardware RAID6?
>> To: mahlerrd@yahoo.com
>> Cc: "Free BSD Questions list" <freebsd-questions@freebsd.org>
>> Date: Monday, July 13, 2009, 2:02 PM
>> On Mon, Jul 13, 2009 at 1:46 PM, Richard Mahlerwein
>> <mahlerrd@yahoo.com> wrote:
>> >>
>> >> Your mileage may vary, but...
>> >>
>> >> I would investigate either using more spindles if you want
>> >> to stick to RAID6, or perhaps using another RAID level if
>> >> you will be with 4 drives for a while. The reasoning is
>> >> that there's an overhead with RAID6 - parity blocks are
>> >> written to 2 disks, so in a 4-drive combination you have 2
>> >> drives with data and 2 with parity.
>> >>
>> >> With 4 drives, you could get much, much higher performance
>> >> out of RAID10 (which is alternatively called RAID0+1 or
>> >> RAID1+0 depending on the manufacturer, on how accurate
>> >> they wish to be, and on how they actually implemented it,
>> >> too). This would also mean 2 usable drives, so you'd have
>> >> the same space available in RAID10 as your proposed RAID6.
>> >>
>> >> I would confirm you can, on the fly, convert from RAID10
>> >> to RAID6 after you add more drives. If you cannot, then by
>> >> all means stick with RAID6 now!
>> >>
>> >> With 4 1 TB drives (for simpler examples):
>> >> RAID5 = 3 TB available, 1 TB worth used in "parity".
>> >> Fast reads, slow writes.
>> >> RAID6 = 2 TB available, 2 TB worth used in "parity".
>> >> Moderately fast reads, slow writes.
>> >> RAID10 = 2 TB available, 2 TB in duplicate copies (easier
>> >> work than parity calculations). Very fast reads,
>> >> moderately fast writes.
>> >>
>> >> When you switch to, say, 8 drives, the numbers start to
>> >> change a bit.
>> >> RAID5 = 7 TB available, 1 lost.
>> >> RAID6 = 6 TB available, 2 lost.
>> >> RAID10 = 4 TB available, 4 lost.
>> >>
>> >
>> > Sorry, consider myself chastised for having missed the
>> > "Security is more important than performance" bit. I tend
>> > toward solutions that show the most value, and with 4
>> > drives, it seems that I'd stick with the same "data
>> > security" and only pick up the free speed of RAID10. Change
>> > when you get to 6 or more drives, if necessary.
>> >
>> > For data security, I can't answer for the UFS2 vs. ZFS
>> > question. For hardware setup, let me amend everything I
>> > said above with the following:
>> >
>> > Since you are seriously focusing on data integrity, ignore
>> > everything I said, but make sure you have good backups! :)
>> >
>> > Sorry,
>> > -Rich
>>
>> No problem :) I've been doing some reading since I posted
>> this question, and it turns out that the controller will
>> actually not allow me to create a RAID6 array using only 4
>> drives. 3ware followed the same reasoning as you: with 4
>> drives, use RAID10.
>>
>> I know that you can migrate from one to the other when a 5th
>> disk is added, but RAID10 can only handle 2 failed drives if
>> they are from separate RAID1 groups. In this way, it is just
>> slightly less resilient to failure than RAID6. With this new
>> information, I think I may as well get one more 2 TB drive
>> and start with 6 TB of RAID6 space. This will be less of a
>> headache later on.
>>
>> - Max
>
> Just as a question: how ARE you planning on backing this beast
> up? While I don't want to sound like a worry-wart, I have had
> odd things happen at the worst of times. RAID cards fail,
> power supplies let out the magic smoke, users delete items
> they really want back... *sigh*

Rsync over ssh to another server. Most of the data stored will
never change after the first upload; a daily rsync run will
transfer one or two gigs at most. History is not required for
the same reason: this is append-only storage for the most part.
A backup of the previous day is all that is required, but I will
also keep a weekly backup until I start running out of space.
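
The job itself is nothing fancy; something along these lines
(the paths and host name here are placeholders, not the real
setup):

    # Mirror the array to the backup server over ssh: -a
    # preserves permissions and timestamps, -z compresses over
    # the wire, --delete drops files removed on the source.
    rsync -az --delete -e ssh /storage/ backup@backuphost:/storage/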

> A bit of reading shows that ZFS, if it's stable enough, has
> some really great features that would be nice on such a large
> pile o' drives.
>
> See http://wiki.freebsd.org/ZFSQuickStartGuide
>
> I guess the last question I'll ask (as any more may uncover my
> ignorance) is whether you need to use hardware RAID at all. It
> seems both UFS2 and ZFS can do software RAID, which seems to
> be quite reasonable with respect to performance and in many
> ways seems to be more robust, since it is a bit more portable
> (no specialized hardware).

I've thought about this one a lot. In my case, the hard drives are in
a separate enclosure from the server and the two had to be connected
via SAS cables. The 9690SA-8E card was the best choice I could find
for accessing an external SAS enclosure with support for 8 drives.

I could configure it in JBOD mode and then use software to
create a RAID array. In fact, I will likely do this to compare
the performance of hardware vs. software RAID5. The ZFS RAID-Z
option does not appeal to me, because read performance does not
benefit from additional drives, and I don't think RAID6 is
available in software. For those reasons I'm leaning toward a
hardware implementation.
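
For the software side of that test, setup is a one-liner;
roughly (a sketch, assuming the controller exposes the disks as
da0 through da5):

    # Single-parity raidz pool across six disks; device names
    # will vary with the controller and driver.
    zpool create tank raidz da0 da1 da2 da3 da4 da5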

If I go the hardware route, I'll try to purchase a backup controller
in a year or two. :)

> There are others who may respond with better information on
> that front. I've been a strong proponent of hardware RAID, but
> have recently begun to realize many of the reasons for that
> are only of limited validity now.

Agreed, and many simple RAID setups (0, 1, 10) will give you much
better performance in software. In my case, I have to have some piece
of hardware just to get to the drives, and I'm guessing that hardware
RAID5/6 will be faster than the closest software equivalent. Maybe my
tests will convince me otherwise.
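
(For reference, a software RAID10 on FreeBSD is just a GEOM
stripe over two mirrors; a rough, untested sketch with assumed
device names:

    # Load the GEOM classes, build two mirrors, stripe them,
    # and create a UFS2 filesystem with soft updates on top.
    gmirror load && gstripe load
    gmirror label gm0 da0 da1
    gmirror label gm1 da2 da3
    gstripe label st0 mirror/gm0 mirror/gm1
    newfs -U /dev/stripe/st0
)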

- Max


