Date:      Sun, 20 Jan 2019 05:01:56 -0500
From:      Rich <rincebrain@gmail.com>
To:        Maciej Jan Broniarz <gausus@gausus.net>
Cc:        andy thomas <andy@time-domain.co.uk>, freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: ZFS on Hardware RAID
Message-ID:  <CAOeNLuqQDJ3O1DgzdkshhiJVXd=6aPCMn86BOSwsJLZRdz21aw@mail.gmail.com>
In-Reply-To: <1691666278.63816.1547976245836.JavaMail.zimbra@gausus.net>
References:  <1180280695.63420.1547910313494.JavaMail.zimbra@gausus.net> <92646202.63422.1547910433715.JavaMail.zimbra@gausus.net> <CAOeNLurgn-ep1e=Lq9kgxXK+y5xqq4ULnudKZAbye59Ys7q96Q@mail.gmail.com> <alpine.BSF.2.21.1901200834470.12592@mail0.time-domain.co.uk> <1691666278.63816.1547976245836.JavaMail.zimbra@gausus.net>

The performance penalty for using a puny CPU on some PCIe card
shouldn't be that bad. :P

You were suggesting there was a performance penalty for using SW RAID
compared to HW RAID. In practice, shockingly, the CPUs we generally
use for doing lots of operations fast are, in fact, pretty fast at
doing lots of operations.

If you're doing something that absolutely demands all the CPU power
the machine can provide, maybe you'd benefit from the slower but
unshared chip on the controller.
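
If you want to put a rough number on that, the sketch below (my own toy,
not anything from ZFS; the buffer size, round count, and the file name
xor_bench.c are arbitrary choices) just times how fast one core can XOR
two buffers together, which is the heart of RAID-5/RAIDZ parity. Build it
with something like "cc -O2 xor_bench.c -o xor_bench"; even on
PE2950-vintage hardware the result should comfortably exceed what a
handful of spinning disks can stream.

/* Hedged sketch, not ZFS code: time a naive single-threaded XOR of two
 * data buffers, the core operation of RAID-5/RAIDZ parity.  Buffer size
 * and round count are arbitrary; real parity code is vectorized and
 * interleaved with checksumming. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

#define BUF    (64UL * 1024 * 1024)   /* 64 MiB per data column */
#define ROUNDS 16

static uint8_t a[BUF], b[BUF], parity[BUF];

int main(void)
{
    struct timespec t0, t1;

    memset(a, 0x5a, BUF);
    memset(b, 0xa5, BUF);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < ROUNDS; r++)
        for (size_t i = 0; i < BUF; i++)
            parity[i] = a[i] ^ b[i];          /* parity = XOR of the data columns */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mib  = (double)BUF * ROUNDS / (1024.0 * 1024.0);

    /* print a byte of the result so the loop isn't optimized away */
    printf("XORed %.0f MiB in %.2f s (%.0f MiB/s), checkbyte %#x\n",
           mib, secs, mib / secs, parity[0]);
    return 0;
}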

Personally, given that a number of the RAID cards in PE2950s are the
right age to have badcaps (source: I had several nasty surprises that
were, in fact, very obvious once the system was opened and the
bulging+popped capacitors were visible), I would highly recommend you
use other cards, preferably HBAs.
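
To make the second caveat from my earlier mail (quoted below) concrete:
repair needs an independent copy or reconstruction that ZFS itself can go
read. Here is a self-contained toy of the idea, with an in-memory pair of
"disks" and a stand-in checksum rather than anything resembling real ZFS
internals:

/* Hedged, self-contained sketch (toy checksum, in-memory "disks", not
 * ZFS code): a checksumming layer that can address each copy of a block
 * independently can retry the read and repair the bad copy; behind a
 * single hardware RAID logical disk there is only one copy to ask. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define BLK    4096
#define COPIES 2                /* redundancy visible to the filesystem */

/* stand-in for ZFS's fletcher4/sha256 block checksums */
static uint64_t cksum(const uint8_t *b, size_t n)
{
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++)
        s = s * 31 + b[i];
    return s;
}

int main(void)
{
    uint8_t disks[COPIES][BLK];
    uint64_t expected;

    memset(disks[0], 0xab, BLK);        /* write the block to both copies */
    memcpy(disks[1], disks[0], BLK);
    expected = cksum(disks[0], BLK);    /* checksum kept with the block pointer */

    disks[0][100] ^= 0xff;              /* silent corruption on the first copy */

    for (int d = 0; d < COPIES; d++) {  /* retry the read against each copy */
        if (cksum(disks[d], BLK) == expected) {
            printf("good data on copy %d; the bad copy can be rewritten\n", d);
            return 0;
        }
    }
    /* With COPIES == 1 (one logical disk from the RAID card) the loop has
     * nothing else to try: the error is detected but not repairable. */
    printf("checksum error and no alternate copy to read\n");
    return 1;
}

When the controller owns the redundancy, that retry loop collapses to a
single iteration: the card has already returned the only answer it is
going to give.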

- Rich

On Sun, Jan 20, 2019 at 4:24 AM Maciej Jan Broniarz <gausus@gausus.net> wrote:
>
> Hi,
>
> I am thinking about the scenario where ZFS runs on single disks, each
> configured as a RAID0 volume by the hardware RAID controller.
> Please correct me if I'm wrong, but HW RAID uses a dedicated unit to
> process all RAID-related work (e.g. parity checks).
> With ZFS the job is done by the CPU. How significant is the performance
> loss in that particular case?
>
> mjb
>
>
> ----- Original Message -----
> From: "andy thomas" <andy@time-domain.co.uk>
> To: "Rich" <rincebrain@gmail.com>
> Cc: "Maciej Jan Broniarz" <gausus@gausus.net>, "freebsd-fs" <freebsd-fs@freebsd.org>
> Sent: Sunday, 20 January 2019, 9:45:21
> Subject: Re: ZFS on Hardware RAID
>
> I have to agree with your comment that hardware RAID controllers add
> another layer of opaque complexity, but for what it's worth, ZFS on
> h/w RAID does work and can work well in practice.
>
> I run a number of very busy webservers (Dell PowerEdge 2950 with LSI
> MegaRAID SAS 1078 controllers) with the first two disks in RAID 1 as the
> FreeBSD system disk and the remaining 4 disks configured as single-disk
> RAID 0 virtual disks making up a ZFS RAIDz1 pool of 3 disks plus one hot
> spare. With 6-10 jails on each server, these have been running for years
> with no problems at all.
>
> Andy
>
> On Sat, 19 Jan 2019, Rich wrote:
>
> > The two caveats I'd offer are:
> > - RAID controllers add an opaque complexity layer if you have problems
> > - e.g. if you're using single-disk RAID0s to make a RAID controller
> > pretend to be an HBA and a disk starts misbehaving, you have an
> > additional layer of behavior (how the RAID controller interprets the
> > misbehaving drive and presents it to the OS) to work through before
> > you can tell whether the drive is bad, the connection is loose, the
> > controller is bad, ...
> > - abstracting the redundancy away from ZFS means that ZFS can't
> > recover when it knows there's a problem but the underlying RAID
> > controller doesn't - that is, say you made a RAID-6 and ZFS sees some
> > block fail its checksum. There's no way to tell the controller "hey,
> > that block was wrong, try that read again with different disks", so
> > you're just sad at data loss on your nominally "redundant" array.
> >
> > - Rich
> >
> > On Sat, Jan 19, 2019 at 11:44 AM Maciej Jan Broniarz <gausus@gausus.net> wrote:
> >>
> >> Hi,
> >>
> >> I want to use ZFS on a hardware RAID array. I have no option of making
> >> it JBOD. I know it is best to use ZFS on JBOD, but that's not possible
> >> in this particular case. My question is: how bad an idea is it? I have
> >> read very different opinions on that subject, but none of them seems
> >> conclusive.
> >>
> >> Any comments and especially case studies are most welcome.
> >> All best,
> >> mjb
> >
>
>
> ----------------------------
> Andy Thomas,
> Time Domain Systems
>
> Tel: +44 (0)7866 556626
> Fax: +44 (0)20 8372 2582
> http://www.time-domain.co.uk


