Date:      Mon, 6 Sep 2010 00:57:46 +0100
From:      "Steven Hartland" <killing@multiplay.co.uk>
To:        "jhell" <jhell@DataIX.net>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: zfs very poor performance compared to ufs due to lack of cache?
Message-ID:  <330B5DB2215F43899ABAEC2CF71C2EE0@multiplay.co.uk>
References:  <5DB6E7C798E44D33A05673F4B773405E@multiplay.co.uk> <AANLkTi=6bta-Obrh2ejLCHENEbhV5stbMsvfek3Ki4ba@mail.gmail.com> <4C825D65.3040004@DataIX.net> <7EA7AD058C0143B2BF2471CC121C1687@multiplay.co.uk> <1F64110BFBD5468B8B26879A9D8C94EF@multiplay.co.uk> <4C83A214.1080204@DataIX.net> <06B9D23F202D4DB88D69B7C4507986B7@multiplay.co.uk> <4C842905.2080602@DataIX.net>


----- Original Message ----- 
From: "jhell" <jhell@DataIX.net>


> On 09/05/2010 16:13, Steven Hartland wrote:
>>> 3656:  uint64_t available_memory = ptoa((uintmax_t)cnt.v_free_count
>>> 3657:      + cnt.v_cache_count);
> 
>> Earlier, at 3614, I have what I think you're after, which is:
>>    uint64_t available_memory = ptoa((uintmax_t)cnt.v_free_count);
> 
> Alright, change this to the above, recompile and re-run your tests.
> Effectively, before this change (which apparently still needs to be MFC'd
> or MFS'd) ZFS was not allowed to look at or use cnt.v_cache_count. To
> sum it up: "available mem = cache + free".
> 
> This could possibly cause what you're seeing, but there might be other
> changes still TBD. I'll look into what else has changed from RELEASE
> -> STABLE.
> 
> Also, do you check out your sources with svn(1) or csup(1)?

Based on Jeremy's comments I'm updating the box to stable. It's building now,
but it will be morning before I can reboot to activate the changes, as I need
to deactivate the stream instance and wait for all active connections to finish.

That said, the problem doesn't seem to be cache + free but rather cache + free
+ inactive, with inactive being the large chunk, so I'm not sure this change
would make any difference.

How does UFS deal with this? Does it take inactive pages into account? It
seems a bit silly for inactive pages to prevent reuse for extended periods
when the memory could be put to better use as cache.

As an experiment I compiled a little app which malloced a large block of
memory, 1.3G in this case, and then freed it. This does indeed pull the memory
out of inactive and back into the free pool, at which point ZFS is happy to
re-expand the ARC and once again cache large files. It seems a bit extreme to
have to do this, though.

Will see what happens with stable tomorrow though :)

    Regards
    Steve

================================================
This e.mail is private and confidential between Multiplay (UK) Ltd. and the person or entity to whom it is addressed. In the event of misdirection, the recipient is prohibited from using, copying, printing or otherwise disseminating it or any information contained in it. 

In the event of misdirection, illegible or incomplete transmission please telephone +44 845 868 1337
or return the E.mail to postmaster@multiplay.co.uk.



