Date:      Wed, 07 Mar 2012 09:28:35 +0000
From:      Luke Marsden <>
Subject:   Re: FreeBSD 8.2 - active plus inactive memory leak!?
Message-ID:  <1331112515.2589.52.camel@pow>
In-Reply-To: <>
References:  <1331061203.2218.38.camel@pow> <> <1331080581.2589.28.camel@pow> <>

On Wed, 2012-03-07 at 10:23 +0200, Konstantin Belousov wrote:
> On Wed, Mar 07, 2012 at 12:36:21AM +0000, Luke Marsden wrote:
> > I'm trying to confirm that, on a system with no pages swapped out,
> > the following is a true statement:
> > 
> >         a page is accounted for in active + inactive if and only if it
> >         corresponds to one or more of the pages accounted for in the
> >         resident memory lists of all the processes on the system (as per
> >         the output of 'top' and 'ps')
> No.
> The pages belonging to vnode vm object can be active or inactive or cached
> but not mapped into any process address space.

Thank you, Konstantin.  Does the number of vnodes we've got open on this
machine (272011) fully explain away the memory gap?

        Memory gap:
        11264M active + 2598M inactive - 9297M sum-of-resident = 4565M
        Active vnodes:
        vfs.numvnodes: 272011

That gives a lower bound of 17.19 KB per vnode (or higher once we take
into account shared libs, etc.); that seems a bit high for a vnode vm
object, doesn't it?
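For what it's worth, the back-of-the-envelope numbers above can be
checked in a few lines (pure arithmetic, using only the figures quoted
in this thread):

```python
# Rough check of the memory-gap arithmetic quoted above.
active_mb   = 11264   # active memory, MB
inactive_mb = 2598    # inactive memory, MB
resident_mb = 9297    # sum of per-process resident sizes from top/ps, MB
numvnodes   = 272011  # vfs.numvnodes

gap_mb = active_mb + inactive_mb - resident_mb
per_vnode_kb = gap_mb * 1024 / numvnodes

print(gap_mb)                  # 4565
print(round(per_vnode_kb, 2))  # 17.19
```

Note that because sum-of-resident double counts pages shared between
processes, the real gap can only be larger, so this is a lower bound.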

If that doesn't fully explain it, what else might be chewing through
active memory?

Also, when are vnodes freed?

This system does have some tuning...
kern.maxfiles: 1000000
vm.pmap.pv_entry_max: 73296250

Could that be contributing to so much active + inactive memory (5GB+
more than expected), or do PV entries live in wired (kernel) memory?
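For scale, here is a sizing sketch of what a fully populated
pv_entry_max could cost. The 24 bytes/entry figure is an assumption (a
rough amd64 number based on the packed pv-chunk allocator; the real
per-entry cost depends on the pmap implementation), and my
understanding is that pv entries are allocated from wired kernel
memory, so they would not appear in active/inactive anyway:

```python
# Hypothetical sizing sketch: pv entries * assumed bytes per entry.
PV_ENTRY_MAX  = 73296250  # vm.pmap.pv_entry_max from the tuning above
BYTES_PER_PVE = 24        # ASSUMPTION: rough amd64 figure, not verified

total_mb = PV_ENTRY_MAX * BYTES_PER_PVE / (1024 * 1024)
print(round(total_mb))    # 1678 (MB, if every entry were allocated)
```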

On Tue, 2012-03-06 at 17:48 -0700, Ian Lepore wrote:
> In my experience, the bulk of the memory in the inactive category is
> cached disk blocks, at least for ufs (I think zfs does things
> differently).  On this desktop machine I have 12G physical and
> typically have roughly 11G inactive, and I can unmount one particular
> filesystem where most of my work is done and instantly I have almost
> no inactive and roughly 11G free.

Okay, so this could be UFS disk cache, except the system is ZFS-on-root
with no UFS filesystems active or mounted.  Can I confirm that no
double-caching of ZFS data is happening in active + inactive (+ cache)?
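One way to eyeball this from userland (a sketch, not a definitive
answer) is to compare the ARC size against the active/inactive
counters; on FreeBSD the ARC size is exposed via the
kstat.zfs.misc.arcstats.size sysctl. I believe the ARC itself counts as
wired, so any active/inactive pages backed by ZFS files would be the
page-cache side of a double-caching. The helper below just parses
sysctl(8)'s "name: value" output; the sample line is illustrative:

```python
def parse_sysctl(line):
    """Parse a 'name: value' line as printed by sysctl(8) into an int."""
    name, _, value = line.partition(":")
    return int(value.strip())

# On a FreeBSD box you would feed it real output, e.g.:
#   $ sysctl kstat.zfs.misc.arcstats.size
#   kstat.zfs.misc.arcstats.size: 4294967296
arc_bytes = parse_sysctl("kstat.zfs.misc.arcstats.size: 4294967296")
print(arc_bytes // (1024 * 1024))  # 4096 (MB)
```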


CTO, Hybrid Logic
+447791750420  |  +1-415-449-1165  | 
