Date:      Wed, 15 Sep 2010 14:58:00 +0300
From:      Andriy Gapon <avg@freebsd.org>
To:        Steven Hartland <killing@multiplay.co.uk>
Cc:        freebsd-fs@freebsd.org, jhell <jhell@DataIX.net>, Pawel Jakub Dawidek <pjd@freebsd.org>
Subject:   Re: zfs very poor performance compared to ufs due to lack of cache?
Message-ID:  <4C90B4C8.90203@freebsd.org>
In-Reply-To: <D79F15FEB5794315BD8668E40B414BF0@multiplay.co.uk>
References:  <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk><AANLkTikNhsj5myhQCoPaNytUbpHtox1vg9AZm1N-OcMO@mail.gmail.com><4C85E91E.1010602@icyb.net.ua><4C873914.40404@freebsd.org><20100908084855.GF2465@deviant.kiev.zoral.com.ua><4C874F00.3050605@freebsd.org><A6D7E134B24F42E395C30A375A6B50AF@multiplay.co.uk><4C8D087B.5040404@freebsd.org><03537796FAB54E02959E2D64FC83004F@multiplay.co.uk><4C8D280F.3040803@freebsd.org><3FBF66BF11AA4CBBA6124CA435A4A31B@multiplay.co.uk><4C8E4212.30000@freebsd.org> <B98EBECBD399417CA5390C20627384B1@multiplay.co.uk> <D79F15FEB5794315BD8668E40B414BF0@multiplay.co.uk>

on 15/09/2010 13:32 Steven Hartland said the following:
> === conclusion ===
> The interaction of zfs and sendfile is causing large amounts of memory
> to end up in the inactive pool and only the use of a hard min arc limit is
> ensuring that zfs forces the vm to release said memory so that it can be
> used by zfs arc.

Memory ends up as inactive because of how sendfile works.  It first pulls data
into the page cache as active pages.  Once pages go unused for a while, they
become inactive.  The pagedaemon can further recycle inactive pages, but only
when there is a memory shortage.  In your situation there is no shortage, so the
pages just stay there, ready to be reclaimed (or re-activated) at any moment.
They are not wasted; they are just another form of cache.
If the ARC size doesn't grow under those conditions, it simply means that ZFS
doesn't need it to.
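
If you want to watch that in action, the page queue counters are exported via
sysctl.  A minimal sketch (assuming the standard vm.stats.vm.* counter names on
a recent FreeBSD) that just prints them:

/*
 * Print the VM page queue counters, to watch pages served via sendfile
 * move from the active to the inactive queue as they age.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

static unsigned int
page_count(const char *name)
{
    unsigned int val = 0;
    size_t len = sizeof(val);

    if (sysctlbyname(name, &val, &len, NULL, 0) == -1)
        perror(name);
    return (val);
}

int
main(void)
{
    printf("active:   %u pages\n", page_count("vm.stats.vm.v_active_count"));
    printf("inactive: %u pages\n", page_count("vm.stats.vm.v_inactive_count"));
    printf("cache:    %u pages\n", page_count("vm.stats.vm.v_cache_count"));
    printf("free:     %u pages\n", page_count("vm.stats.vm.v_free_count"));
    return (0);
}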

The general problem of double-caching with ZFS still remains, will remain, and
nobody has promised to fix it.
That is, with sendfile (or mmap) you will end up with two copies of the data,
one in the page cache and the other in the ARC.  That happens on Solaris too;
there is no magic.
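
For reference, the sendfile path under discussion is just plain sendfile(2),
roughly like the sketch below (not your actual server code; error handling
trimmed and socket setup assumed to happen elsewhere):

/*
 * Sketch of the sendfile path: the file's pages are handed to the socket
 * straight from the page cache, with no copy through a userland buffer,
 * but on ZFS the same data also sits in the ARC -- hence two copies in RAM.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <err.h>
#include <errno.h>

/* Send 'filesize' bytes of file descriptor 'fd' over socket 's'. */
void
serve_with_sendfile(int fd, int s, off_t filesize)
{
    off_t sent = 0;
    off_t sbytes;

    while (sent < filesize) {
        sbytes = 0;
        if (sendfile(fd, s, sent, filesize - sent, NULL, &sbytes, 0) == -1 &&
            errno != EAGAIN && errno != EINTR)
            err(1, "sendfile");
        sent += sbytes;
    }
}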

The things I am trying to fix are:
1. Interaction between the ARC and the rest of the VM during a page shortage;
you don't seem to have much of that, so you don't see it.  Besides, your range
for ARC size is quite narrow and your workload is peculiar enough that your
setup is not the best one for testing this.
2. Copying of data from the ARC to the page cache each time the same data is
served by sendfile.  You won't see much change without monitoring ARC hits, as
Wiktor suggested (see the sketch after this list).  In the bad case there will
be many hits, because the same data is constantly copied from the ARC to the
page cache (and that simply kills any benefit sendfile may have).  In the good
case there will be far fewer hits, because the data is not copied, but is served
directly from the page cache.
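
The ARC counters are exported under kstat.zfs.misc.arcstats; a quick sketch
(assuming those standard sysctl names) to sample the hit/miss rates:

/*
 * Sample the ARC hit/miss counters twice and print per-second deltas,
 * to see whether sendfile traffic keeps hitting the ARC (data copied to
 * the page cache over and over) or is served from the page cache.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static uint64_t
arcstat(const char *name)
{
    uint64_t val = 0;
    size_t len = sizeof(val);

    (void)sysctlbyname(name, &val, &len, NULL, 0);
    return (val);
}

int
main(void)
{
    uint64_t hits0 = arcstat("kstat.zfs.misc.arcstats.hits");
    uint64_t miss0 = arcstat("kstat.zfs.misc.arcstats.misses");

    sleep(10);  /* sample interval; adjust to taste */

    printf("hits/s:   %ju\n",
        (uintmax_t)(arcstat("kstat.zfs.misc.arcstats.hits") - hits0) / 10);
    printf("misses/s: %ju\n",
        (uintmax_t)(arcstat("kstat.zfs.misc.arcstats.misses") - miss0) / 10);
    return (0);
}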

> The source data, xls's and exported graphs can be found here:-
> http://www.multiplaygameservers.com/dropzone/zfs-sendfile-results.zip

So, what problem, performance or otherwise, do you perceive with your system's
behavior?  Because I don't see any.

To summarize:
1. With sendfile enabled you will have two copies of actively served data in
RAM, but perhaps slightly faster performance, because sendfile(2) avoids an
extra copy into an mbuf.
2. With sendfile disabled you will have one copy of actively served data in RAM
(in the ARC), but perhaps slightly slower performance, because of the need to
copy the data into an mbuf (a sketch of that path follows below).
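
For comparison, the non-sendfile path boils down to something like this sketch:
every block is copied from the ARC into a userland buffer and then into the
socket, so only the ARC copy stays cached:

/*
 * Sketch of the read()+send() path: data is copied out of the ARC into a
 * userland buffer and then into socket buffers, so only one cached copy
 * (the ARC one) remains in RAM, at the cost of the extra copies.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <err.h>
#include <unistd.h>

void
serve_with_read_write(int fd, int s)
{
    char buf[64 * 1024];    /* arbitrary buffer size */
    ssize_t n;

    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        char *p = buf;
        while (n > 0) {
            ssize_t w = send(s, p, (size_t)n, 0);
            if (w == -1)
                err(1, "send");
            p += w;
            n -= w;
        }
    }
    if (n == -1)
        err(1, "read");
}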

Which would serve you better depends on the size of your hot data versus your
RAM size, and on the actual benefit of avoiding the copy into an mbuf.  I have
never measured the latter, so I don't have any real numbers.
From your graphs it seems that your hot data (multiplied by two) is larger than
what your RAM can accommodate, so you should benefit from disabling sendfile.

-- 
Andriy Gapon


