Date:      Wed, 7 Mar 2012 01:23:53 +0000
From:      RW <>
Subject:   Re: FreeBSD 8.2 - active plus inactive memory leak!?
Message-ID:  <>
In-Reply-To: <>
References:  <1331061203.2218.38.camel@pow> <>

On Tue, 06 Mar 2012 18:30:07 -0500
Chuck Swiger wrote:

> On 3/6/2012 2:13 PM, Luke Marsden wrote:

> >        * Resident corresponds to a subset of the pages above: those
> >          pages which actually occupy physical/core memory.  Notably,
> >          pages may appear in size but not in resident: for example,
> >          read-only text pages from libraries which have not been used
> >          yet, or pages which have been malloc()'d but not yet
> >          written to.
> Yes.
> > My understanding for the values for the system as a whole (at the
> > top in 'top') is as follows:
> >
> >        * Active / inactive memory is the same thing: resident memory
> >          from processes in use.  Being in the inactive as opposed to
> >          active list simply indicates that the pages in question are
> >          less recently used and therefore more likely to get swapped
> >          out if the machine comes under memory pressure.
> Well, they aren't exactly the same thing.  The kernel implements a VM
> working set algorithm which periodically looks at all of the pages
> that are in memory and notes whether a process has accessed that page
> recently.  If it has, the page is active; if the page has not been
> used for "some time", it becomes inactive.
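The scan described above can be sketched as a toy loop. This is only an illustration of the idea (the real pagedaemon lives in sys/vm/vm_pageout.c and is far more involved); the `Page` class and all names here are invented for the sketch:

```python
# Toy sketch of a working-set scan: pages touched since the last scan
# stay active (and have their reference bit cleared for the next pass);
# untouched pages are demoted to the inactive queue.
# Invented names, not the actual kernel code.

class Page:
    def __init__(self):
        self.referenced = False   # stands in for the hardware reference bit

def scan_active(active, inactive):
    """One pass over the active queue."""
    still_active = []
    for page in active:
        if page.referenced:
            page.referenced = False      # give it a fresh chance next scan
            still_active.append(page)
        else:
            inactive.append(page)        # not used "for some time"
    return still_active, inactive

a, b, c = Page(), Page(), Page()
a.referenced = True                      # only 'a' was touched recently
active, inactive = scan_active([a, b, c], [])
# 'a' stays active; 'b' and 'c' become inactive
```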

I think the previous poster has it about right; it's mostly about
lifecycle. The inactive queue contains a mixture of resident and
non-resident memory. It's commonly dominated by disk cache pages, and
consequently is easily blown away by recursive greps etc.

> >        * Cache is freed memory which the kernel has decided to keep
> >          in case it corresponds to a useful page in future; it can
> >          be cheaply evicted into the free list.
> Sort of, although this description fits the "inactive" memory
> category also.
> The major distinction is that the system is actively trying to flush
> any dirty pages in the cache category, so that they are available for
> reuse by something else immediately.

Only clean pages are added to cache. A dirty page will go twice around
the inactive queue as dirty, get flushed, and then do a third pass as a
clean page.
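That three-pass lifecycle can be written out as a toy model (invented names and a deliberately simplified rule, not the kernel's actual bookkeeping):

```python
# Toy model of the lifecycle above: a dirty page survives two trips
# around the inactive queue, gets flushed (becomes clean), and on a
# further trip a clean page moves to the cache queue.

def inactive_pass(page):
    """Advance one page by one trip around the inactive queue;
    return the queue it ends up on."""
    if page["dirty"]:
        page["passes"] += 1
        if page["passes"] >= 2:        # two trips while dirty...
            page["dirty"] = False      # ...then it gets flushed
            page["passes"] = 0
        return "inactive"
    return "cache"                     # clean pages go to the cache queue

page = {"dirty": True, "passes": 0}
states = [inactive_pass(page) for _ in range(3)]
# three trips: inactive (dirty), inactive (flushed), cache (clean)
```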

The point of cache is that it's a small stock of memory that's
available for immediate reuse; the pages have nothing else in common.

On Wed, 07 Mar 2012 00:36:21 +0000
Luke Marsden wrote:

> But that's what I'm saying...
>         sum(process resident sizes) >= active + inactive

Inactive memory contains disc cache. 
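A toy accounting example shows why the proposed identity fails in both directions: shared pages are counted once per process in the resident sums, while the inactive queue also holds disk cache pages that belong to no process. All numbers below are invented for illustration:

```python
# Hypothetical page counts for two processes sharing a library,
# plus file-backed disk cache sitting on the inactive queue.

shared_lib_pages = 50            # one physical copy, mapped by both
proc_a_private, proc_b_private = 100, 80
file_cache_pages = 300           # disk cache: owned by no process

# Per-process resident sums count the shared pages twice:
resident_sum = (proc_a_private + shared_lib_pages) + \
               (proc_b_private + shared_lib_pages)

# Active + inactive counts each physical page once, but also
# includes the cache pages:
active_plus_inactive = (proc_a_private + proc_b_private +
                        shared_lib_pages + file_cache_pages)

# Sharing inflates resident_sum; disk cache inflates active+inactive,
# so neither side of ">=" is guaranteed.
```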
