Date:      Thu, 14 Jun 2007 00:03:17 +0200
From:      Stefan Esser <se@FreeBSD.org>
To:        Craig Boston <craig@xfoil.gank.org>,  current@freebsd.org
Subject:   Re: ZFS tuning tips?
Message-ID:  <467069A5.7050902@FreeBSD.org>
In-Reply-To: <20070613160835.GA6461@nowhere>
References:  <20070613160835.GA6461@nowhere>

Craig Boston wrote:
> Possibly related to reducing maxvnodes, I'm running into a strange
> problem that appears to be cache-related.  I noticed that during
> port installs, the "Registering installation" phase started taking
> forever.  I tracked it down to pkg_info -qO [name] going very slow and
> causing a _lot_ of disk access.
>
> It seems like the entries in /var/db/pkg were not being cached (or
> rather some old stuff wasn't being purged to make room for it).  Doing
> an umount /usr/obj seemed to help quite a bit (and caused vfs.numvnodes
> to drop from ~45000 to 4000), but it eventually happened again.  What
> perplexes me is that I still had about 1700M Free memory that should
> have been available for caching.
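
(Aside: the vnode counters mentioned above can be watched directly
with sysctl; the values below are purely illustrative:)

	$ sysctl vfs.numvnodes kern.maxvnodes
	vfs.numvnodes: 45210
	kern.maxvnodes: 100000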

I have found that ZFS prefetch is killing performance on my test
server (which is just an old P3/733 with 512MB RAM). The prefetch
code appears to *always* extend the region to be read, not only in
the case of sequential reads (as our UFS code does, under control
of the vfs.read_max sysctl tunable).
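
For comparison, the UFS read-ahead window can be inspected and
changed at run time, e.g. (the value shown is only an example; the
default differs between releases):

	# sysctl vfs.read_max
	vfs.read_max: 8
	# sysctl vfs.read_max=16
	vfs.read_max: 8 -> 16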

Therefore I have:

	vfs.zfs.prefetch_disable="1"

in my "/boot/loader.conf", and it helps a lot!


All modern drives perform zero-cost read-ahead by laying out
sectors in reverse order and starting to read from the target track
as soon as the head has settled, stopping once the first requested
sector has been read (remember: the sectors are in reverse order).

Since the drives cache data from the last 10 to 20 requests (at
least), another read has a good chance of finding the sectors that
follow the initial request in the drive's cache, provided no ZFS
read-ahead is performed. ZFS prefetch seems to extend each read
request to the value of recordsize (e.g. 128KB), even for short
direct accesses (e.g. to a DBM file). This may require an extra
revolution of the disk, which the read-ahead performed by the
drive's firmware avoids by simply reading whatever happens to pass
under the head between the point where it has settled and the final
requested sector.
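
Back-of-the-envelope: at 7200 RPM one extra revolution costs
60/7200 s, i.e. roughly 8.3 ms, which dwarfs the transfer time of a
short request:

	$ echo "scale=2; 60000 / 7200" | bc
	8.33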

As long as ZFS prefetch is not adaptive, I do not consider it a
good idea to have it enabled by default. I have not performed
reproducible benchmark runs to provide hard numbers, though; I have
only observed the qualitative difference (i.e. much reduced load
and better response times after ZFS prefetch was disabled).
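
A minimal benchmark sketch along these lines (the file name and
sizes are hypothetical; it assumes the sysctl is writable at run
time, otherwise set it in loader.conf and reboot between runs):

	#!/bin/sh
	# Time many small reads from a file on ZFS, with prefetch
	# enabled (0) and disabled (1). For meaningful numbers the
	# ARC should be emptied between runs, e.g. by exporting and
	# re-importing the pool.
	for v in 0 1; do
		sysctl vfs.zfs.prefetch_disable=$v
		time dd if=/tank/testfile of=/dev/null bs=4k count=25000
	done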

Regards, STefan


