Date:      Sat, 18 Sep 2010 12:23:48 +0100 (BST)
From:      Robert Watson <rwatson@FreeBSD.org>
To:        Andre Oppermann <andre@freebsd.org>
Cc:        freebsd-hackers@freebsd.org, Jeff Roberson <jeff@freebsd.org>, Andriy Gapon <avg@freebsd.org>
Subject:   Re: zfs + uma
Message-ID:  <alpine.BSF.2.00.1009181221560.86826@fledge.watson.org>
In-Reply-To: <4C935F56.4030903@freebsd.org>
References:  <4C93236B.4050906@freebsd.org> <4C935F56.4030903@freebsd.org>

On Fri, 17 Sep 2010, Andre Oppermann wrote:

>> Although keeping free items around improves performance, it does consume
>> memory too.  And the fact that that memory is not freed under low-memory
>> conditions makes the situation worse.
>
> Interesting.  We may run into related issues with excessive mbuf (cluster) 
> caching in the per-cpu buckets as well.
>
> Having a general solution for that would be appreciated.  Maybe the size of
> the free per-cpu buckets should be specified when setting up the UMA zone.
> Of certain frequently re-used elements we may want to cache more, of others
> less.
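
(A sketch of what such a knob might look like.  uma_zcreate() and the
UMA_ZONE_NOBUCKET / UMA_ZONE_MAXBUCKET flags below are real, but the
uma_zone_set_bucketsize() call is hypothetical and does not exist in the
tree today.)

    #include <sys/param.h>
    #include <vm/uma.h>

    struct frag_item {
            uint64_t        payload[4];     /* arbitrary example payload */
    };

    static uma_zone_t frag_zone;

    static void
    frag_zone_init(void)
    {
            /*
             * Today the per-zone cache knobs are all-or-nothing flags:
             * UMA_ZONE_NOBUCKET disables per-CPU buckets entirely and
             * UMA_ZONE_MAXBUCKET always uses the largest bucket size.
             */
            frag_zone = uma_zcreate("frag_items", sizeof(struct frag_item),
                NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);

            /*
             * Hypothetical: cap this zone's per-CPU free cache at 32
             * items.  Note that uma_reclaim(), the vm_lowmem hook, drains
             * only the zone-level full-bucket lists; items sitting in
             * per-CPU buckets stay allocated, which is the behaviour
             * complained about above.
             */
            uma_zone_set_bucketsize(frag_zone, 32);
    }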

I've been keeping a vague eye out for this over the last few years, and
haven't spotted many problems in production machines I've inspected.  You can
use the umastat tool in the tools tree to look at the distribution of memory
over buckets (etc.) in UMA manually.  It would be nice if it had some
automated statistics on fragmentation, however.  Short-lived fragmentation is
likely and isn't an issue; what you want is a tool that monitors over time
and reports on longer-lived fragmentation.
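
Such a tool would not take much code.  For example, per-zone free counts can
be sampled with libmemstat(3), the library umastat is built on (a sketch
only, using the accessors as documented in libmemstat(3); a real tool would
diff successive samples and flag zones whose free count stays high):

    /* Compile with: cc -o umawatch umawatch.c -lmemstat ("umawatch" is
     * just an example name). */
    #include <sys/types.h>

    #include <err.h>
    #include <memstat.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            struct memory_type_list *mtlp;
            struct memory_type *mtp;

            mtlp = memstat_mtl_alloc();
            if (mtlp == NULL)
                    err(1, "memstat_mtl_alloc");

            for (;;) {
                    if (memstat_sysctl_uma(mtlp, 0) < 0)
                            errx(1, "memstat_sysctl_uma: %s",
                                memstat_strerror(
                                memstat_mtl_geterror(mtlp)));
                    for (mtp = memstat_mtl_first(mtlp); mtp != NULL;
                        mtp = memstat_mtl_next(mtp)) {
                            /* Free items cached in the zone, and the
                             * bytes they pin. */
                            printf("%s: %ju free (%ju bytes)\n",
                                memstat_get_name(mtp),
                                (uintmax_t)memstat_get_free(mtp),
                                (uintmax_t)(memstat_get_free(mtp) *
                                memstat_get_size(mtp)));
                    }
                    printf("\n");
                    sleep(10);
            }
    }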

The main fragmentation issue we've had in the past has been due to 
mbuf+cluster caching, which prevented mbufs from being freed usefully in some 
cases.  Jeff's ongoing work on variable-sized mbufs would entirely eliminate 
that problem...
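
(For reference, the mechanism: m_getcl() allocates the mbuf and its 2k
cluster pre-assembled from the packet zone, and on free the pair goes back
to a per-CPU bucket as a unit, so every cached "free" mbuf pins a cluster
with it.  A simplified sketch, not a complete kernel module:)

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>

    static void
    packet_zone_example(void)
    {
            struct mbuf *m;

            /* mbuf + 2k cluster allocated as one unit from the packet
             * zone. */
            m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
            if (m == NULL)
                    return;

            /*
             * m_freem() returns the pair to the packet zone's per-CPU
             * bucket intact: the cluster cannot be reclaimed until the
             * cached pair itself is drained, even when only small mbufs
             * are needed elsewhere.
             */
            m_freem(m);
    }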

Robert


