Date:      Mon, 13 Jul 2009 13:08:57 -0700 (PDT)
From:      Richard Mahlerwein <mahlerrd@yahoo.com>
To:        Free BSD Questions list <freebsd-questions@freebsd.org>
Subject:   Re: ZFS or UFS for 4TB hardware RAID6?
Message-ID:  <458986.60331.qm@web51004.mail.re2.yahoo.com>


--- On Mon, 7/13/09, Maxim Khitrov <mkhitrov@gmail.com> wrote:

> From: Maxim Khitrov <mkhitrov@gmail.com>
> Subject: Re: ZFS or UFS for 4TB hardware RAID6?
> To: mahlerrd@yahoo.com
> Cc: "Free BSD Questions list" <freebsd-questions@freebsd.org>
> Date: Monday, July 13, 2009, 3:23 PM
>
> On Mon, Jul 13, 2009 at 2:13 PM, Richard Mahlerwein <mahlerrd@yahoo.com> wrote:
> >
> > --- On Mon, 7/13/09, Maxim Khitrov <mkhitrov@gmail.com> wrote:
> >
> >> From: Maxim Khitrov <mkhitrov@gmail.com>
> >> Subject: Re: ZFS or UFS for 4TB hardware RAID6?
> >> To: mahlerrd@yahoo.com
> >> Cc: "Free BSD Questions list" <freebsd-questions@freebsd.org>
> >> Date: Monday, July 13, 2009, 2:02 PM
> >>
> >> On Mon, Jul 13, 2009 at 1:46 PM, Richard Mahlerwein <mahlerrd@yahoo.com> wrote:
> >> >>
> >> >> Your mileage may vary, but...
> >> >>
> >> >> I would investigate either using more spindles if you want
> >> >> to stick to RAID6, or perhaps using another RAID level if
> >> >> you will be with 4 drives for a while.  The reasoning is
> >> >> that there's an overhead with RAID6 - parity blocks are
> >> >> written to 2 disks, so in a 4-drive combination you have 2
> >> >> drives with data and 2 with parity.
> >> >>
> >> >> With 4 drives, you could get much, much higher performance
> >> >> out of RAID10 (which is alternatively called RAID0+1 or
> >> >> RAID1+0 depending on the manufacturer, on how accurate they
> >> >> wish to be, and on how they actually implemented it, too).
> >> >> This would also mean 2 usable drives, so you'd have the same
> >> >> space available in RAID10 as in your proposed RAID6.
> >> >>
> >> >> I would confirm you can convert from RAID10 to RAID6 on the
> >> >> fly after you add more drives.  If you cannot, then by all
> >> >> means stick with RAID6 now!
> >> >>
> >> >> With 4 1 TB drives (for simpler examples):
> >> >> RAID5 = 3 TB available, 1 TB used for "parity".
> >> >> Fast reads, slow writes.
> >> >> RAID6 = 2 TB available, 2 TB used for "parity".
> >> >> Moderately fast reads, slow writes.
> >> >> RAID10 = 2 TB available, 2 TB in duplicate copies (easier
> >> >> work than parity calculations).  Very fast reads, moderately
> >> >> fast writes.
> >> >>
> >> >> When you switch to, say, 8 drives, the numbers start to
> >> >> change a bit:
> >> >> RAID5 = 7 TB available, 1 lost.
> >> >> RAID6 = 6 TB available, 2 lost.
> >> >> RAID10 = 4 TB available, 4 lost.
> >> >>
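
(The arithmetic behind those figures generalizes to any drive count.
A rough sketch in sh, where N and S are only stand-ins for the number
of drives and the size of each in TB, and formatting overhead is
ignored:

    #!/bin/sh
    # crude usable-capacity estimates for N drives of S TB each
    N=8; S=1
    echo "RAID5:  $((S * (N - 1))) TB usable, $S TB lost to parity"
    echo "RAID6:  $((S * (N - 2))) TB usable, $((2 * S)) TB lost to parity"
    echo "RAID10: $((S * N / 2)) TB usable, $((S * N / 2)) TB lost to mirroring"

Swap in your own values; real arrays will come out a little lower once
filesystem overhead is counted.)
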
> >> >
> >> > Sorry, consider myself chastised for having missed the
> >> > "Security is more important than performance" bit.  I tend
> >> > toward solutions that show the most value, and with 4 drives,
> >> > it seems that I'd stick with the same "data security" and only
> >> > pick up the free speed of RAID10.  Change when you get to 6 or
> >> > more drives, if necessary.
> >> >
> >> > For data security, I can't answer for the UFS2 vs. ZFS
> >> > question.  For hardware setup, let me amend everything I said
> >> > above with the following:
> >> >
> >> > Since you are seriously focusing on data integrity, ignore
> >> > everything I said but make sure you have good backups!  :)
> >> >
> >> > Sorry,
> >> > -Rich
> >>
> >> No problem :) I've been doing some reading since I posted this
> >> question and it turns out that the controller will actually not
> >> allow me to create a RAID6 array using only 4 drives.  3ware
> >> followed the same reasoning as you; with 4 drives, use RAID10.
> >>
> >> I know that you can migrate from one to the other when a 5th
> >> disk is added, but RAID10 can only handle 2 failed drives if
> >> they are from separate RAID1 groups.  In this way, it is just
> >> slightly less resilient to failure than RAID6.  With this new
> >> information, I think I may as well get one more 2TB drive and
> >> start with 6TB of RAID6 space.  This will be less of a headache
> >> later on.
> >>
> >> - Max
> >
> > Just as a question: how ARE you planning on backing this beast
> > up?  While I don't want to sound like a worry-wart, I have had
> > odd things happen at the worst of times.  RAID cards fail, power
> > supplies let out the magic smoke, users delete items they really
> > want back... *sigh*
>
> Rsync over ssh to another server.  Most of the data stored will
> never change after the first upload.  A daily rsync run will
> transfer one or two gigs at the most.  History is not required for
> the same reason; this is append-only storage for the most part.  A
> backup for the previous day is all that is required, but I will
> keep a weekly backup as well until I start running out of space.
>
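
That sounds sensible.  For the curious, a nightly job along these
lines is roughly what that amounts to; the host name and paths are
only placeholders, and whether you add --delete depends on whether
removals should propagate to the backup:

    # /etc/crontab entry on the storage box (runs at 03:00 daily)
    0  3  *  *  *  root  rsync -az -e ssh /tank/data/ backup@backuphost:/backup/data/

Key-based ssh authentication makes it painless to run unattended.
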
> > A bit of reading shows that ZFS, if it's stable enough, has some
> > really great features that would be nice on such a large pile o'
> > drives.
> >
> > See http://wiki.freebsd.org/ZFSQuickStartGuide
> >
> > I guess the last question I'll ask (as any more may uncover my
> > ignorance) is if you need to use hardware RAID at all?  It seems
> > both UFS2 and ZFS can do software RAID, which seems to be quite
> > reasonable with respect to performance and in many ways seems to
> > be more robust since it is a bit more portable (no specialized
> > hardware).
>
> I've thought about this one a lot.  In my case, the hard drives are
> in a separate enclosure from the server and the two had to be
> connected via SAS cables.  The 9690SA-8E card was the best choice I
> could find for accessing an external SAS enclosure with support for
> 8 drives.
>
> I could configure it in JBOD mode and then use software to create a
> RAID array.  In fact, I will likely do this to compare performance
> of a hardware vs. software RAID5 solution.  The ZFS RAID-Z option
> does not appeal to me, because the read performance does not
> benefit from additional drives, and I don't think RAID6 is
> available in software.  For those reasons I'm leaning toward a
> hardware implementation.
>
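
If you do end up testing the JBOD route, the ZFS side is only a
command or two.  A minimal sketch, assuming the exported disks show
up as da0 through da5 and using "tank" as the pool name (both are
just guesses for your setup):

    # striped mirrors, roughly the RAID10 layout:
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

    # or single-parity RAID-Z across all six:
    zpool create tank raidz da0 da1 da2 da3 da4 da5

Either way the pool comes up mounted at /tank with no newfs or fstab
work needed.
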
> If I go the hardware route, I'll try to purchase a backup
> controller in a year or two. :)
>
> > There are others who may respond with better information on that
> > front.  I've been a strong proponent of hardware RAID, but have
> > recently begun to realize many of the reasons for that are only
> > of limited validity now.
>
> Agreed, and many simple RAID setups (0, 1, 10) will give you much
> better performance in software.  In my case, I have to have some
> piece of hardware just to get to the drives, and I'm guessing that
> hardware RAID5/6 will be faster than the closest software
> equivalent.  Maybe my tests will convince me otherwise.
>
> - Max

I'd love to hear about any test results you may get comparing
software with hardware RAID.
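
Even something quick and dirty would be useful as a first pass.
Assuming the array ends up mounted at /tank (the path is just an
example), a crude sequential test plus bonnie++ from ports would
already show the big differences:

    # crude sequential write then read (8 GB test file)
    dd if=/dev/zero of=/tank/testfile bs=1m count=8192
    dd if=/tank/testfile of=/dev/null bs=1m

    # more thorough: benchmarks/bonnie++ from ports (size in MB)
    bonnie++ -d /tank -s 16384 -u root

Running the same commands against the hardware RAID volume and the
software one would make for a nice apples-to-apples comparison.
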


