Date:      Sun, 19 Jun 2016 16:45:48 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        Kaya Saman <kayasaman@gmail.com>
Cc:        FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   Re: High CPU Interrupt using ZFS
Message-ID:  <2F83F199-80C1-4B98-A18D-C5343EE4F783@kraus-haus.org>
In-Reply-To: <57cfcda4-6ff7-0c2e-4f58-ad09ce7cab28@gmail.com>
References:  <57cfcda4-6ff7-0c2e-4f58-ad09ce7cab28@gmail.com>


> On Jun 19, 2016, at 3:38 PM, Kaya Saman <kayasaman@gmail.com> wrote:

<snip>

> # zpool list
> NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP HEALTH  ALTROOT
> ZPOOL_2     27.2T  26.3T   884G         -    41%    96%  1.00x ONLINE  -
> ZPOOL_3      298G   248G  50.2G         -    34%    83%  1.00x ONLINE  -
> ZPOOL_4     1.81T  1.75T  66.4G         -    25%    96%  1.00x ONLINE  -
> ZPOOL_5      186G   171G  14.9G         -    62%    92%  1.00x ONLINE  -
> workspaces   119G  77.7G  41.3G         -    56%    65%  1.00x ONLINE  -
> zroot        111G  88.9G  22.1G         -    70%    80%  1.00x ONLINE  -

Are you aware that ZFS performance drops substantially once a pool
exceeds a certain percentage full? The exact threshold varies with
pool type and workload, but it is generally considered a bad idea to
run pools more than 80% full with any configuration or workload. ZFS
is designed first and foremost for data integrity, not performance,
and running pools too full causes _huge_ write performance penalties.
Does your system hang correspond to a write request to one of the
pools that are more than 80% full? The pool that is at 92% capacity
and 62% fragmented is especially at risk for this behavior.
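
If you want to keep an eye on just the two numbers that matter here,
zpool list will print selected columns for you; capacity and
fragmentation are the standard property names behind the CAP and
FRAG columns in your output above:

	# zpool list -o name,capacity,fragmentation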

The underlying reason for this behavior is that as a pool gets more
and more full, it takes more and more time to find an appropriate
available slab to write new data to. Since _all_ writes are treated
as new data (that is the whole point of the Copy on Write design),
even a rewrite of an existing block goes to a freshly allocated
location, so _any_ write to a close-to-full pool incurs the huge
performance penalty.
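
If you are curious how bad it has gotten on a given pool, zdb can
dump the allocator's per-metaslab view of free space (read-only, but
it can take a while on a large pool, and the output format varies
between ZFS versions). For example, against your worst pool:

	# zdb -mm ZPOOL_5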

This means that if you write the data and _never_ modify it, and you
can stand the write penalty as you add data to the mostly full
zpools, then you may be able to use ZFS like this; otherwise, just
don't.

On my virtual hosts, running FreeBSD 10.x and VirtualBox, a pool more
than 80% full will make the VMs unacceptably unresponsive, so I
strive to keep the pools at less than 60% capacity. Disk storage is
(relatively) cheap these days.
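
Something like the following untested sh sketch, run from cron, will
nag you before a pool gets into trouble (the 60% threshold is just
the target I mentioned above; adjust to taste):

	#!/bin/sh
	# Warn about any pool above a capacity threshold.
	THRESHOLD=60
	zpool list -H -o name,capacity | while read -r name cap; do
		cap=$(echo "$cap" | tr -d '%')	# strip the trailing % sign
		if [ "$cap" -gt "$THRESHOLD" ]; then
			echo "WARNING: pool ${name} is at ${cap}% capacity" >&2
		fi
	done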



