Date:      Mon, 28 Mar 2005 23:11:48 -0500 (EST)
From:      Jeff Roberson <jroberson@chesapeake.net>
To:        Stephan Uphoff <ups@tree.com>
Cc:        arch@freebsd.org
Subject:   Re: Freeing vnodes.
Message-ID:  <20050328231118.R54623@mail.chesapeake.net>
In-Reply-To: <1111983665.64310.19.camel@palm>
References:  <20050314213038.V20708@mail.chesapeake.net> <1110856553.29804.37784.camel@palm> <1110896909.29804.39143.camel@palm> <1111983665.64310.19.camel@palm>

On Sun, 27 Mar 2005, Stephan Uphoff wrote:

> On Tue, 2005-03-15 at 09:28, Stephan Uphoff wrote:
> > On Tue, 2005-03-15 at 00:39, Jeff Roberson wrote:
> > > On Mon, 14 Mar 2005, Stephan Uphoff wrote:
> > >
> > > > On Mon, 2005-03-14 at 21:38, Jeff Roberson wrote:
> > > > > I have a patch at http://www.chesapeake.net/~jroberson/freevnodes.diff
> > > > > that allows us to start reclaiming vnodes from the free list and release
> > > > > their memory.  It also changes the semantics of wantfreevnodes, and makes
> > > > > getnewvnode() much prettier.
> > > > >
> > > > > The changes attempt to keep some number of vnodes, currently 2.5% of
> > > > > desiredvnodes, that are free in memory.  Free vnodes are vnodes which
> > > > > have no references or pages in memory.  For example, if an application
> > > > > simply stat's a vnode, it will end up on the free list at the end of the
> > > > > operation.  The algorithm that is currently in place will immediately
> > > > > recycle these vnodes once there is enough pressure, which will cause us to
> > > > > do a full lookup and reread the inode, etc. as soon as it is stat'd again.
> > > > >
> > > > > This also removes the recycling from the getnewvnode() path.  Instead, it
> > > > > is done by a new helper function that is called from vnlru_proc().  This
> > > > > function just frees vnodes from the head of the list until we reach our
> > > > > wantfreevnodes target.
> > > > >
> > > > > I haven't perf tested this yet, but I have a box that is doing a
> > > > > buildworld with a fairly constant freevnodes count which shows that vnodes
> > > > > are actually being uma_zfree'd.
> > > > >
> > > > > Comments?  Anyone willing to do some perf tests for me?
> > > > >
> > > > > Thanks,
> > > > > Jeff
> > > >
> > > > I just looked at the raw diff and might have missed it - how are the
> > > > parent directory namecache entries (vnode fields v_dd, v_ddid)
> > > > handled?
> > >
> > > Just as they were before, by calling cache_purge.
> >
> > This purges the fields of the vnode that will be recycled.
> >
> > I am worried about the v_dd, v_ddid fields of a directory B that has the
> > to-be-released vnode A as its parent. (Obviously in this case there is no
> > namecache entry with vnode A as the directory (nc_dvp).)
> >
> > Right now A is type stable - but if A is released, access to B->v_dd
> > may cause a page fault.
> >
> > Stephan
>
> Jeff,
>
> Do you plan to address the problem now that the code is checked in?

Vnodes with children in the name cache are held with vhold() and not
recycled.

>
> Stephan
>


