Date:      Wed, 22 Jul 2015 16:00:06 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   Re: Prioritize resilvering priority
Message-ID:  <96FF6F66-06D3-4CAE-ABE5-C608A9A85F7A@kraus-haus.org>
In-Reply-To: <CAP1HOmT-qkOf6EuipPs26aNYTPC59_j6CNvK7tubM-HxVJCH-w@mail.gmail.com>
References:  <CAP1HOmTo28BishnEdPCBsg7V4M4yYfcSKw_AmUbPP-mW4JtRQg@mail.gmail.com> <20150722003218.GD41419@in-addr.com> <CAP1HOmT-qkOf6EuipPs26aNYTPC59_j6CNvK7tubM-HxVJCH-w@mail.gmail.com>

On Jul 22, 2015, at 14:52, javocado <javocado@gmail.com> wrote:

> But I do have:
> vfs.zfs.vdev.max_pending: 10 (dynamic)
> vfs.zfs.scrub_limit: 10 (loader)
>
> So, I think I would want to lower one or both of these to increase I/O
> responsiveness on the system. Correct? How would the 2 play together in
> terms of which to adjust to achieve the best system performance at the
> expense of a longer resilver?

vfs.zfs.vdev.max_pending is the limit on the number of disk I/Os that can
be outstanding for a drive (or, IIRC, in this case a given vdev). There
was great debate about tuning this one years ago on the zfs list. The
general consensus is that 10 is a good value for modern SATA drives.
When I was running 4 SATA drives behind a port multiplier (not a great
configuration) I tuned this down to 4 to keep from overwhelming the port
multiplier. Tuning it _down_ will reduce overall throughput to a drive,
and it does not differentiate between production I/O and scrub / resilver
I/O.
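
As a rough sketch (assuming the stock OID names on a FreeBSD 9/10
system; they can differ between releases): since your output shows it
flagged (dynamic), you can check and lower it on the fly with sysctl(8):

    # show the current value
    sysctl vfs.zfs.vdev.max_pending
    # lower it at runtime; takes effect immediately, no reboot needed
    sysctl vfs.zfs.vdev.max_pending=4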

This post:
https://forums.freebsd.org/threads/how-to-limit-scrub-bandwidth-vfs-zfs-scrub_limit.31628/
implies that the vfs.zfs.scrub_limit parameter limits the number of
outstanding I/Os, but only for scrub / resilver operations. I would start
by tuning it down to 5 or so and watch carefully with iostat -x to see
the effect.
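
A minimal sketch of what I mean, assuming scrub_limit really is a
loader-only tunable on your release (as the (loader) flag above
suggests), so it has to be set in /boot/loader.conf and takes effect at
the next boot:

    # /boot/loader.conf
    vfs.zfs.scrub_limit="5"

    # after rebooting, watch per-device queue depth and latency while
    # the resilver runs
    iostat -x 5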

Note that newer ZFS code addresses the problem of scrub operations
starving the rest of the system of I/O. I have not had that problem on
either my FBSD 9 or 10 systems.

--
Paul Kraus
paul@kraus-haus.org



