Date:      Thu, 24 Sep 2015 09:17:02 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        Dmitrijs <war@dim.lv>, FreeBSD Questions <freebsd-questions@freebsd.org>
Subject:   Re: zfs performance degradation
Message-ID:  <782C9CEF-BE07-4E05-83ED-133B7DA96780@kraus-haus.org>
In-Reply-To: <56038054.5060906@dim.lv>
References:  <56019211.2050307@dim.lv> <37A37E9D-9D65-4553-BBA2-C5B032163499@kraus-haus.org> <56038054.5060906@dim.lv>

On Sep 24, 2015, at 0:47, Dmitrijs <war@dim.lv> wrote:

> 2015.09.23. 23:08, Paul Kraus writes:
>> On Sep 22, 2015, at 13:38, Dmitrijs <war@dim.lv> wrote:
>>
>>> I've encountered strange ZFS behavior - serious performance degradation over a few days.
>>>
>>> Could it happen because of the pool being 78% full? So I cannot fill the pool completely?
>>> Can anyone please advise how I could fix the situation - or is it normal?
>>
>> So the short answer (way too late for that) is that you cannot, in fact, use all of the capacity of a zpool unless the data is written once, never modified, and you do not have any snapshots, clones, or the like.

> Thank you very much for the explanation. Am I getting it right that it will not work faster even if I add another 4 GB of RAM, for 8 GB in total? I am not using deduplication or compression, nor am I planning to use them.

If you are seeing the performance degrade due to the zpool being over some capacity threshold, then adding RAM will make little difference. If you are seeing general performance issues, then adding RAM (increasing ARC) _may_ improve the performance.

> So if I plan to work with data a lot, get decent performance and still be sure I'm on the safe side with mirror-raid1, should I choose another filesystem? Especially if I do not really need snapshots, clones, etc.

What is your definition of "decent" performance? What does your _real_ workload look like?

Did you have performance issues doing real work which caused you to try to find the cause -or- were you benchmarking before trying to use the system for real work?

> Or is it not possible at all, and should I put something like raid0 for work and tolerate a slow backup to raid1 at night?

There are many places in ZFS where you can run into performance bottlenecks. Remember, ZFS was designed for data integrity (end to end checksums), data reliability (lots of ways to get redundancy), and scalability. Performance was secondary from the very beginning. There are lots of other filesystems with much better performance; there are few (if any) with more protection for your data. Do not get me wrong, the performance of ZFS _can_ be very good, but you need to understand your workload and lay out the zpool to accommodate that workload.

For example, one of my critical workloads is NFS with sync writes. My zpool layout is many vdevs of 3-way mirrors with a separate ZIL device (SLOG). I have not been able to go production with this server yet because I am waiting on backordered SSDs for the SLOG. The original SSDs I used just did not have the small block write performance I needed.

Another example is one of my _backup_ servers, which has a 6-drive RAIDz2 zpool layout. In this case I am not terribly concerned about performance as I am limited by the 1 Gbps network connection.
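
For the arithmetic (a trivial Python sketch, my own numbers): 1 Gbps is roughly 125 MB/sec of raw line rate before protocol overhead, which is close to the single-drive rule of thumb below.

link_gbps = 1.0
line_rate_mb_s = link_gbps * 1000 / 8   # ~125 MB/sec raw; NFS/SMB payload will be lower
print(line_rate_mb_s)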

Also note that in general, the _best_ performance you can expect of any zpool layout is equivalent to _one_ drive's worth of I/O per _vdev_. So my 6-drive RAIDz2 has performance equivalent to _one_ of the drives that make up that vdev. Which is fine for _my_ workload. The rule of thumb for performance that I received over on the OpenZFS mailing list a while back was to assume you can get 100 MB/sec and 100 random I/Ops from a consumer SATA hard disk drive. I have seen nothing, even using "enterprise" grade HDDs, to convince me that is a bad rule of thumb. If your workload is strictly sequential you _may_ get more.

So a zpool made up of one single vdev, no matter how many drives, will average the performance of one of those drives. It does not really matter if it is a 2-way mirror vdev, a 3-way mirror vdev, a RAIDz2 vdev, a RAIDz3 vdev, etc. This is more true for write operations than reads (mirrors can achieve higher performance by reading from multiple copies at once).
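
To put rough numbers on that read-side exception (again a sketch, same assumed 100 MB/sec per-drive figure, hypothetical mirror widths):

DRIVE_MB_S = 100

def mirror_vdev_estimate(ways):
    """Approximate MB/sec for a single N-way mirror vdev."""
    return {
        "write_mb_s": DRIVE_MB_S,        # every copy must be written
        "read_mb_s": ways * DRIVE_MB_S,  # reads can be spread across copies
    }

print(mirror_vdev_estimate(2))  # {'write_mb_s': 100, 'read_mb_s': 200}
print(mirror_vdev_estimate(3))  # {'write_mb_s': 100, 'read_mb_s': 300}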

--
Paul Kraus
paul@kraus-haus.org



