Date:      Mon, 15 Jul 2019 01:23:43 +0200
From:      hw <hw@adminart.net>
To:        Karl Denninger <karl@denninger.net>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: dead slow update servers
Message-ID:  <87v9w4qjy8.fsf@toy.adminart.net>
In-Reply-To: <f7d8acd6-6adb-2b4b-38ef-dc988d7d96a7@denninger.net> (Karl Denninger's message of "Sun, 14 Jul 2019 07:23:10 -0500")
References:  <87sgrbi3qg.fsf@toy.adminart.net> <20190712171910.GA25091@neutralgood.org> <871ryuj3ex.fsf@toy.adminart.net> <CAGLDxTW8zw2d%2BaBGOmBgEhipjq6ocn536fH_NScMiDD7hD=eSw@mail.gmail.com> <874l3qfvqw.fsf@toy.adminart.net> <20190714011303.GA25317@neutralgood.org> <87v9w58apd.fsf@toy.adminart.net> <f7d8acd6-6adb-2b4b-38ef-dc988d7d96a7@denninger.net>

Karl Denninger <karl@denninger.net> writes:

> On 7/14/2019 00:10, hw wrote:
>> "Kevin P. Neal" <kpn@neutralgood.org> writes:
>>
>>> On Sat, Jul 13, 2019 at 05:39:51AM +0200, hw wrote:
>>>> ZFS is great when you have JBODs while storage performance is
>>>> irrelevant.  I do not have JBODs, and in almost all cases, storage
>>>> performance is relevant.
>>> Huh? Is a _properly_ _designed_ ZFS setup really slower? A raidz
>>> setup of N drives gets you the performance of roughly 1 drive, but a
>>> mirror gets you the write performance of a titch less than one drive
>>> with the read performance of N drives. How does ZFS hurt performance?
>> Performance is hurt when you have N disks and only get the performance
>> of a single disk from them.
>
> There's no free lunch.  If you want two copies of the data (or one plus
> parity) you must write two copies.  The second one doesn't magically
> appear.  If you think it did you were conned by something that is
> cheating (e.g. said it had written something when in fact it was sitting
> in a DRAM chip) and, at a bad time, you're going to discover it was
> cheating.
>
> Murphy is a SOB.

I'm not sure what your point is.  Even RAID5 gives you better
performance than raidz because it doesn't limit you to the random I/O
performance of a single disk.
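
To put rough numbers on the raidz-vs-mirror argument above, here is a
back-of-the-envelope sketch.  The 150 IOPS per-disk figure and the
6-disk layout are assumptions for illustration, not measurements, and
the stripe-width rules of thumb are the commonly cited ones, not a
claim about any particular workload:

```shell
#!/bin/sh
# Rough random-read IOPS estimates for 6 identical disks at an assumed
# ~150 IOPS each (a typical 7200 rpm figure; not a measurement).
DISKS=6
IOPS=150

# raidz: small random reads touch the whole stripe, so a single raidz
# vdev performs roughly like one disk.
echo "raidz  (1 vdev):  $IOPS IOPS"

# A stripe of 3 two-way mirrors can read from either side of each
# mirror, but each write must hit both sides.
echo "mirrors (3x2):    $((DISKS * IOPS)) IOPS reads, $((DISKS / 2 * IOPS)) IOPS writes"

# Hardware RAID5 can service independent small reads from each data
# disk (one disk's worth of capacity goes to parity).
echo "RAID5 (5 data):   $(( (DISKS - 1) * IOPS )) IOPS reads"
```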

>> Mirroring the N disks would require another N disks, which you don't
>> have.
>>
>> "Performance" isn't much better defined than "properly designed" here.
>> In practice, I prefer a hardware RAID5 with N disks over a raidz with N
>> disks, and a RAID10 over a RAID5.  Unfortunately, in practice, the number
>> of disks is limited because they aren't cheap and because only so many
>> disks can be connected to a machine without further ado, while there is a
>> certain requirement for storage capacity.  Reality is not properly
>> designed :/
>>
>>
>> What do you do when you put FreeBSD on a server that has a hardware RAID
>> controller which doesn't do JBOD?  Use ZFS on the RAID?
>
> Throw said controller in the trash and get a proper one.

Show me, for example, such a controller that is certified to be
compatible with HP DL380 gen7 servers or Dell R710s, replacing an H700+,
and doesn't cost anything.

> Raid controllers were very useful a decade ago when ZFS was
> trouble-ridden and the controller's firmware was less-so.  Now it's the
> other way around.

ZFS is still trouble-ridden when you're using Linux, if only because
it hasn't been integrated as well due to licensing issues.  Software
RAID has advantages and disadvantages, same as hardware RAID.  In all
cases where I've used RAID, hardware RAID has always been the best
option considering ease of use, reliability and performance.  Of all
the RAID setups I've used, ZFS has shown the worst performance, so
bad that I don't want to use it anymore.

Maybe ZFS works perfectly with FreeBSD and has better performance than
what I've seen, but being limited to the performance of a single disk
remains unless you can use a mirror.

> And whether you do your Raid in hardware or software Raidz is Raidz.
>
> I binned the last of the hardware RAID adapters in my production
> machines roughly five years ago.  ZFS got to be faster and more-reliable
> than they were.

I'd have to try ZFS with FreeBSD before I would believe that.  In any
case, it leaves you with the problem of connecting the disks to the
machine.  It's not like you could just pull the controller out and
connect the disks through thin air.  Fiddling with another controller
until it supports JBOD (and which may or may not work in that server)
isn't an option, just like anything else that costs extra money isn't.

I don't know much about Dell servers; do they usually support JBOD out
of the box?
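
For what it's worth, the Dell H700 family is handled by FreeBSD's
mfi(4) driver, and mfiutil(8) can sometimes expose drives individually
even without a true JBOD mode.  A sketch, assuming the default
controller mfi0 and drive IDs 0-3 as reported by the controller;
whether the firmware accepts "jbod" varies by model, and single-drive
RAID0 volumes are the usual fallback:

```shell
# List the physical drives the controller sees (IDs used below).
mfiutil show drives

# Ask the firmware to expose each drive individually; many PERCs
# refuse this and only allow single-drive RAID0 volumes instead.
mfiutil create jbod 0 1 2 3
# Fallback, one volume per drive:
# mfiutil create raid0 0

# Build the pool from the resulting mfid devices.
zpool create tank raidz mfid0 mfid1 mfid2 mfid3
```

This is administrative-command sketching, not something to run blind:
creating volumes destroys whatever is on those drives.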
