Date:      Wed, 23 Sep 2015 16:08:32 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        Dmitrijs <war@dim.lv>, FreeBSD Questions <freebsd-questions@freebsd.org>
Subject:   Re: zfs performance degradation
Message-ID:  <37A37E9D-9D65-4553-BBA2-C5B032163499@kraus-haus.org>
In-Reply-To: <56019211.2050307@dim.lv>
References:  <56019211.2050307@dim.lv>

On Sep 22, 2015, at 13:38, Dmitrijs <war@dim.lv> wrote:

>  I've encountered strange ZFS behavior - serious performance degradation over a few days. Right after setup, on a fresh ZFS (2 hdd in a mirror) I made a test on a 30GB file with dd like
> dd if=test.mkv of=/dev/null bs=64k
> and got 150+MB/s speed.

> I've got brand new 2x HGST HDN724040ALE640, 4 TB, 7200rpm (ada0, ada1) for pool data4.
> Another pool, data2, performs slightly better even on older/cheaper WD Green 5400 HDDs, up to 99MB/s.

> Zpool list:
>
> nas4free: /mnt# zpool list
> NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP HEALTH  ALTROOT
> data2  1.81T   578G  1.25T         -    11%    31%  1.00x ONLINE  -
> data4  3.62T  2.85T   797G         -    36%    78%  1.00x ONLINE  -
>
>
> Could it happen because of the pool being 78% full? So I cannot fill the pool completely?
> Can anyone please advise how I could fix the situation - or is it normal?

ZFS write performance degrades very steeply when you reach a certain
point in terms of zpool capacity. The exact threshold depends on many
factors, including your specific workload. This is essentially due to
the “Copy on Write” (CoW) nature of ZFS. When you write to an existing
file, ZFS needs to find space for that write operation as it does not
overwrite the existing data. As the zpool fills, it becomes harder and
harder to find contiguous free space and the write operation ends up
fragmenting the data.
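
For what it is worth, you can watch both of those numbers per pool via
the FRAG (free-space fragmentation) and CAP (capacity) columns that
your zpool list already shows; if you only want those, something like
the following works (property names per zpool(8)):

	zpool list -o name,size,free,fragmentation,capacity data2 data4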

But, you are seeing READ performance drop. If the file was written when
the ZFS was new (it was one of the first files written) then it is
certainly un-fragmented. But, if you ran the READ test shortly after
writing the file, then some of it will still be in the ARC (Adaptive
Replacement Cache). If there is other activity on the system, then that
other activity will also be using the ARC.
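
If you want to see how much of that read is coming from RAM rather
than the disks, FreeBSD exposes the ARC counters through sysctl (names
as on current FreeBSD; your nas4free build should carry the same ones):

	sysctl kstat.zfs.misc.arcstats.size
	sysctl kstat.zfs.misc.arcstats.c_max
	sysctl kstat.zfs.misc.arcstats.hits
	sysctl kstat.zfs.misc.arcstats.misses

If the hit counter climbs quickly while the dd is running, you are
largely benchmarking the cache, not the mirror.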

If you are rewriting the test file and then reading it, the test file
will be fragmented and that will be part of the performance difference.
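
A cleaner read test, if you want one, is to write a brand new file
(never rewritten) that is larger than your RAM and read that back. The
file name and size below are just placeholders, and if the dataset has
compression enabled use /dev/random instead of /dev/zero so the data
cannot be compressed away:

	dd if=/dev/zero of=/mnt/data4/readtest.bin bs=1m count=32768
	dd if=/mnt/data4/readtest.bin of=/dev/null bs=64k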

For my systems (generally VMs using VBox) I have found that 80% is a
good threshold, because when I get to 85% capacity the performance
drops to the point where VM I/O starts timing out.
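
That threshold is easy to watch from a cron job if it helps; this is
just a sketch using my own 80% rule of thumb, adjust to taste:

	#!/bin/sh
	# warn when any zpool is at or above 80% capacity
	zpool list -H -o name,capacity | while read name cap; do
	    pct=$(echo "$cap" | tr -d '%')
	    if [ "$pct" -ge 80 ]; then
	        echo "WARNING: pool $name is at $cap"
	    fi
	done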

So the short answer (way too late for that) is that you can, in fact,
not use all of the capacity of a zpool unless the data is written once,
never modified, and you do not have any snapshots, clones, or the like.
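
One possible safeguard (not something you have to do, just a common
trick) is to park a reservation on an otherwise empty dataset so that
normal writes run out of space before the pool really fills; the
dataset name and size here are only examples:

	zfs create data4/reserved
	zfs set refreservation=200G data4/reserved

If you ever need the space back, shrinking or destroying that
reservation frees it immediately.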

P.S. I assume you are not using dedup? You do not have anywhere near
enough RAM for that.
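
You can double-check that with:

	zpool get dedupratio data4
	zfs get dedup data4

The 1.00x DEDUP column in your zpool list output already suggests it
was never turned on.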

--
Paul Kraus
paul@kraus-haus.org



