Date:      Mon, 13 Jul 2009 11:13:57 -0700 (PDT)
From:      Richard Mahlerwein <mahlerrd@yahoo.com>
To:        Free BSD Questions list <freebsd-questions@freebsd.org>
Subject:   Re: ZFS or UFS for 4TB hardware RAID6?
Message-ID:  <937260.17107.qm@web51002.mail.re2.yahoo.com>


--- On Mon, 7/13/09, Maxim Khitrov <mkhitrov@gmail.com> wrote:

> From: Maxim Khitrov <mkhitrov@gmail.com>
> Subject: Re: ZFS or UFS for 4TB hardware RAID6?
> To: mahlerrd@yahoo.com
> Cc: "Free BSD Questions list" <freebsd-questions@freebsd.org>
> Date: Monday, July 13, 2009, 2:02 PM
> On Mon, Jul 13, 2009 at 1:46 PM, Richard Mahlerwein <mahlerrd@yahoo.com> wrote:
> >>
> >> Your mileage may vary, but...
> >>
> >> I would investigate either using more spindles if you want
> >> to stick to RAID6, or perhaps using another RAID level if
> >> you will be with 4 drives for a while.  The reasoning
> >> is that there's an overhead with RAID 6 - parity blocks are
> >> written to 2 disks, so in a 4 drive combination you have 2
> >> drives with data and 2 with parity.
> >>
> >> With 4 drives, you could get much, much higher performance
> >> out of RAID10 (which is alternatively called RAID0+1 or
> >> RAID1+0 depending on the manufacturer and on how accurate
> >> they wish to be, and on how they actually implemented it,
> >> too). This would also mean 2 usable drives, as well, so
> >> you'd have the same space available in RAID10 as your
> >> proposed RAID6.
> >>
> >> I would confirm you can, on the fly, convert from RAID10 to
> >> RAID6 after you add more drives.  If you can not, then
> >> by all means stick with RAID6 now!
> >>
> >> With 4 1 TB drives (for simpler examples)
> >> RAID5 = 3 TB available, 1 TB worth used in "parity".
> >> Fast reads, slow writes.
> >> RAID6 = 2 TB available, 2 TB worth used in "parity".
> >> Moderately fast reads, slow writes.
> >> RAID10 = 2 TB available, 2 TB in duplicate copies (easier
> >> work than parity calculations).  Very fast reads,
> >> moderately fast writes.
> >>
> >> When you switch to, say, 8 drives, the numbers start to
> >> change a bit.
> >> RAID5 = 7 TB available, 1 lost.
> >> RAID6 = 6 TB available, 2 lost.
> >> RAID10 = 4 TB available, 4 lost.
> >>
> >
> > Sorry, consider myself chastised for having missed the
> > "Security is more important than performance" bit. I tend
> > toward solutions that show the most value, and with 4
> > drives, it seems that I'd stick with the same "data
> > security" only pick up the free speed of RAID10.  Change
> > when you get to 6 or more drives, if necessary.
> >
> > For data security, I can't answer for the UFS2 vs. ZFS.
> > For hardware setup, let me amend everything I said
> > above with the following:
> >
> > Since you are seriously focusing on data integrity,
> > ignore everything I said but make sure you have good
> > backups!  :)
> >
> > Sorry,
> > -Rich
>
> No problem :) I've been doing some reading since I posted this
> question and it turns out that the controller will actually not allow
> me to create a RAID6 array using only 4 drives. 3ware followed the
> same reasoning as you; with 4 drives use RAID10.
>
> I know that you can migrate from one to the other when a 5th disk is
> added, but RAID10 can only handle 2 failed drives if they are from
> separate RAID1 groups. In this way, it is just slightly less resilient
> to failure than RAID6. With this new information, I think I may as
> well get one more 2TB drive and start with 6TB of RAID6 space. This
> will be less of a headache later on.
>
> - Max

Just as a question: how ARE you planning on backing this beast up?  While I don't want to sound like a worry-wart, I have had odd things happen at the worst of times.  RAID cards fail, power supplies let out the magic smoke, users delete items they really want back... *sigh*

A bit of reading shows that ZFS, if it's stable enough, has some really great features that would be nice on such a large pile o' drives.

See http://wiki.freebsd.org/ZFSQuickStartGuide

I guess the last question I'll ask (as any more may uncover my ignorance) is whether you need hardware RAID at all.  It seems both UFS2 and ZFS can do software RAID, which seems quite reasonable with respect to performance and in many ways more robust, since it is a bit more portable (no specialized hardware).

There are others who may respond with better information on that front.  I've been a strong proponent of hardware RAID, but have recently begun to realize that many of the reasons for that are of only limited validity now.

-Rich
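[Editor's note: the capacity figures traded back and forth above follow from simple arithmetic. The sketch below (a hypothetical helper, not part of the thread) reproduces them, assuming equal-size drives and that minimum drive counts for each level are met.]

```python
def usable_tb(level, drives, size_tb=1.0):
    """Usable capacity in TB for a few common RAID levels.

    Assumes all drives are the same size and that the array meets
    the minimum drive count for the level (3 for RAID5, 4 for
    RAID6 and RAID10).
    """
    if level == "raid5":
        # One drive's worth of space goes to parity.
        return (drives - 1) * size_tb
    if level == "raid6":
        # Two drives' worth of space goes to parity.
        return (drives - 2) * size_tb
    if level == "raid10":
        # Mirrored pairs: half the raw space holds duplicate copies.
        return (drives // 2) * size_tb
    raise ValueError(f"unknown RAID level: {level}")

# The 4- and 8-drive examples from the thread, with 1 TB drives:
for n in (4, 8):
    row = {lvl: usable_tb(lvl, n) for lvl in ("raid5", "raid6", "raid10")}
    print(n, "drives:", row)
```

With 4 drives this yields 3 / 2 / 2 TB for RAID5 / RAID6 / RAID10, and with 8 drives 7 / 6 / 4 TB, matching the numbers quoted above; it also makes visible why RAID10's relative cost grows with drive count while RAID6's stays fixed at two drives.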


