Date:      Mon, 15 Jul 2019 05:42:25 +0200
From:      hw <hw@adminart.net>
To:        "Kevin P. Neal" <kpn@neutralgood.org>
Cc:        Karl Denninger <karl@denninger.net>,  freebsd-questions@freebsd.org
Subject:   Re: dead slow update servers
Message-ID:  <87ftn8otem.fsf@toy.adminart.net>
In-Reply-To: <20190715014129.GA62729@neutralgood.org> (Kevin P. Neal's message of "Sun, 14 Jul 2019 21:41:29 -0400")
References:  <87sgrbi3qg.fsf@toy.adminart.net> <20190712171910.GA25091@neutralgood.org> <871ryuj3ex.fsf@toy.adminart.net> <CAGLDxTW8zw2d%2BaBGOmBgEhipjq6ocn536fH_NScMiDD7hD=eSw@mail.gmail.com> <874l3qfvqw.fsf@toy.adminart.net> <20190714011303.GA25317@neutralgood.org> <87v9w58apd.fsf@toy.adminart.net> <f7d8acd6-6adb-2b4b-38ef-dc988d7d96a7@denninger.net> <87v9w4qjy8.fsf@toy.adminart.net> <20190715014129.GA62729@neutralgood.org>

"Kevin P. Neal" <kpn@neutralgood.org> writes:

> On Mon, Jul 15, 2019 at 01:23:43AM +0200, hw wrote:
>> Karl Denninger <karl@denninger.net> writes:
>>
>> > On 7/14/2019 00:10, hw wrote:
>> >> "Kevin P. Neal" <kpn@neutralgood.org> writes:
>> >>
>> >>> On Sat, Jul 13, 2019 at 05:39:51AM +0200, hw wrote:
>> >>>> ZFS is great when you have JBODs while storage performance is
>> >>>> irrelevant.  I do not have JBODs, and in almost all cases, storage
>> >>>> performance is relevant.
>> >>> Huh? Is a _properly_ _designed_ ZFS setup really slower? A raidz
>> >>> setup of N drives gets you the performance of roughly 1 drive, but a
>> >>> mirror gets you the write performance of a titch less than one drive
>> >>> with the read performance of N drives. How does ZFS hurt performance?
>> >> Performance is hurt when you have N disks and only get the performance
>> >> of a single disk from them.
>> >
>> > There's no free lunch.  If you want two copies of the data (or one plus
>> > parity) you must write two copies.  The second one doesn't magically
>> > appear.  If you think it did you were conned by something that is
>> > cheating (e.g. said it had written something when in fact it was sitting
>> > in a DRAM chip) and, at a bad time, you're going to discover it was
>> > cheating.
>> >
>> > Murphy is a SOB.
>>
>> I'm not sure what your point is.  Even RAID5 gives you better
>> performance than raidz because it doesn't limit you to a single disk.
>
> I don't see how this is possible. With either RAID5 or raidz enough
> drives have to be written to recover the data at a minimum. And since
> raidz1 uses the same number of drives as RAID5 it should have similar
> performance characteristics. So read and write performance of raidz1
> should be about the same as RAID5 -- about the speed of a single disk
> since the disks will be returning data roughly in parallel.

Well, if you go by [1], then, in theory, the performance could be about
the same, at least as long as there are no more than 4 disks.


[1]: https://blog.storagecraft.com/raid-performance/
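
Just to make the arithmetic explicit, here is a rough back-of-the-envelope
sketch (Python, purely illustrative; the per-disk throughput and the disk
count are assumptions, not measurements) of how the layouts are usually
modelled: raidz1 behaves roughly like a single disk for random I/O, RAID5
can stripe across its N-1 data disks, and striped mirrors scale reads with
the number of disks while writes scale with the number of pairs.

    DISK_MBPS = 150   # assumed per-disk sequential throughput (MB/s), hypothetical
    N = 4             # assumed number of disks in the array

    def raid0(n, d):
        # plain striping: reads and writes scale with the number of disks
        return {"read": n * d, "write": n * d}

    def raid5(n, d):
        # striping with distributed parity: large sequential I/O can use the
        # n-1 data disks; small random writes pay a read-modify-write penalty
        return {"read": (n - 1) * d, "write": (n - 1) * d}

    def raidz1(n, d):
        # each logical block is spread across the whole vdev, so random IOPS
        # are roughly those of one disk; streaming bandwidth can still
        # approach (n - 1) * d
        return {"random": d, "streaming": (n - 1) * d}

    def raid10(n, d):
        # striped mirrors: writes hit both halves of each pair, reads can be
        # served from either side
        pairs = n // 2
        return {"read": n * d, "write": pairs * d}

    for name, fn in (("RAID0", raid0), ("RAID5", raid5),
                     ("raidz1", raidz1), ("RAID10", raid10)):
        print(name, fn(N, DISK_MBPS))

Of course the real numbers depend on the workload, the record/stripe size
and the controller, which is probably where our disagreement comes from.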

> What have you been testing RAID5 with? Bursty loads with large amounts
> of raid controller cache? Of course that's going to appear faster since
> you are writing to memory and not disk in the very short term. But a
> sustained amount of traffic will show raidz1 and RAID5 about the same.

I have been very happy with the overall system performance after I
switched from software RAID5 (mdraid) to a hardware RAID controller,
using the same disks.  The difference was like night and day, even
though the cache on the controller was only 512MB.

I suspect that the mainboard I was using had trouble handling
concurrent data transfers to multiple disks and that the CPU wasn't
great at it, either.  That might explain why the system was so sluggish
before the change to hardware RAID.  It was used as a desktop with a
little bit of server stuff running, and merely having all of that
running seemed to create sluggishness even without much actual load.


Other than that, I'm seeing that ZFS is disappointingly slow (on
entirely different hardware than what was used above) while hardware
RAID has always been nicely fast.

> Oh, and my Dell machines are old enough that I'm stuck with the hardware
> RAID controller. I use ZFS and have raid0 arrays configured with single
> drives in each. I _hate_ it. When a drive fails the machine reboots and
> the controller hangs the boot until I drive out there and dump the card's
> cache. It's just awful.

That doesn't sound like a good setup.  Usually, nothing reboots when a
drive fails.

If you want to keep ZFS, would it be a disadvantage to put all the
drives into a single RAID10 (or half of them into each of two) and run
ZFS on top of that?

> Now Dell offers a vanilla HBA on the "same" server as an
> option. *phew*

That's cool.
