Date:      Tue, 27 Aug 1996 08:45:33 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        michaelh@cet.co.jp
Cc:        terry@lambert.org, eric@ms.uky.edu, freebsd-fs@FreeBSD.ORG, current@FreeBSD.ORG
Subject:   Re: vclean (was The VIVA file system)
Message-ID:  <199608271545.IAA24710@phaeton.artisoft.com>
In-Reply-To: <Pine.SV4.3.93.960827115155.17910A-100000@parkplace.cet.co.jp> from "Michael Hancock" at Aug 27, 96 12:35:14 pm

> > > I think what needs to be looked at is having more synchronized buffer
> > > cache/vnode recycling policies.
> > 
> > Inode data, disklabel data, and any other FS object which is not file
> > contents is not cached under the current policy.
> 
> The vnode/inode association with a vnhash() you mentioned before makes
> sense.  I wonder how hard it would be to manage the buffer
> cache/vnodes/inodes with more synergy (sorry I couldn't think of a better
> word).

You're forgiven... you just need to get with the new "paradigm".

8-) 8-).

> > Further, dissociating buffers from vnodes does not require that they
> > be returned to a global pool for clean-behind.
> 
> There's an in-place free list that can have valid buffers hanging off of
> them and vnodes go on the list when inactive() is called.  I guess the
> freelist should be called the inactive list.
> 
> getnewvnode()
> 	vgone()
> 		vclean() should only be called when it needs to, such as when
> file activity moves to a different fs and there aren't enough vnodes.
> 
> The vnode pool was a fixed size pool in lite, but someone put in a
> malloc() into getnewvnode().  The vnode pool is kind of wired so I think
> it can now grow, but it can't shrink unless there's some free()s being
> done somewhere where I haven't noticed.

Yes.  The number one problem is that a vnode on the in-place freelist
with valid buffers still hanging off of it is not recoverable once the
inode data has been disassociated.  The vnode is effectively
unrecoverable until those buffers are freed.
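A toy sketch of the problem (hypothetical structures, not the actual
kern/vfs_subr.c code): if getnewvnode() hands out a free-list vnode and
clears its FS-private data without first cleaning the attached buffers,
those buffers are orphaned, since the inode association was the only
way back to them.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model only -- field names are illustrative, not the real ones. */
struct buf { struct buf *b_next; };

struct vnode {
	struct vnode *v_freenext;	/* free-list linkage */
	struct buf   *v_bufs;		/* buffers hanging off the vnode */
	void         *v_data;		/* FS-private data (the inode) */
};

static struct vnode *vfreelist;

/* Release every buffer, then break the vnode/inode association.
 * Stand-in for the vinvalbuf()/vclean() work in the real kernel. */
static void toy_vclean(struct vnode *vp)
{
	vp->v_bufs = NULL;
	vp->v_data = NULL;
}

/* Take a vnode off the free list for reuse.  A vnode with buffers
 * attached must be cleaned first; clearing v_data before freeing the
 * buffers would leave them unrecoverable. */
static struct vnode *toy_getnewvnode(void)
{
	struct vnode *vp = vfreelist;

	if (vp == NULL)
		return NULL;
	vfreelist = vp->v_freenext;
	if (vp->v_bufs != NULL)
		toy_vclean(vp);
	return vp;
}
```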

This is very annoying; at the very least, going the other direction,
where the vnode is treated as an opaque handle external to the VFS
itself instead of as a common "subsystem" in kern/, would allow the
buffers to be recovered via the ihash.

This is, in fact, the wrong thing to do, since it would require the
implementation of an ihash per FS.  Better to consider name cache
references in the directory lookup cache as if they were vnode
references, and push the vnodes into a per FS pool, LRU'ing the
shared references out of the name cache.
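The reference-counting part of that proposal can be sketched like so
(again a toy model with made-up names, not the actual name cache code):
each name-cache entry holds a real reference on its vnode, so the vnode
and its buffers stay recoverable as long as a name for it is cached,
and LRU eviction of the entry is what finally releases the vnode to its
per-FS pool.

```c
#include <assert.h>
#include <stddef.h>

/* Toy structures; names are illustrative only. */
struct vnode {
	int v_usecount;		/* includes name-cache references */
};

struct namecache {
	struct vnode *nc_vp;
};

/* Entering a name in the cache takes a reference on the vnode, so the
 * vnode cannot be torn down out from under the cached name. */
static void toy_cache_enter(struct namecache *ncp, struct vnode *vp)
{
	ncp->nc_vp = vp;
	vp->v_usecount++;
}

/* LRU'ing the entry out drops that reference; once v_usecount hits
 * zero the per-FS pool may reclaim the vnode (and only then must the
 * buffers be dealt with). */
static void toy_cache_lru_evict(struct namecache *ncp)
{
	ncp->nc_vp->v_usecount--;
	ncp->nc_vp = NULL;
}
```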

This would result in recoverability for the buffers, while at the same
time removing the annoying race conditions that show up every time the
VM system is tweaked ("free vnode isn't", or other such crap).  The
vnode management as it stands is much too fragile.

> > I think the non-opacity of vnodes is a mistake.
> 
> I guess they didn't have time to get this aspect right.
> 
> Some of the semantics are very interesting though, they look very
> different from the SysV vnodes I read about in the Vahalia book. 

Yes; there is no reason to lose that going to a per FS vrelease; the
most interesting semantic is stacking.  I don't think that right
now there is a guard against unmounting with a reference active if
the data reference to the FS from the vnode is not asserted.  This
seems to be a problem for a couple of FS's that operate on the basis
of virtual nodes (I don't know if devfs is implicated for sure yet;
I'm still looking at that panic).
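The missing guard would amount to something like the following toy
check (hypothetical, and deliberately simplified): refuse the unmount
while any vnode on the mount still holds an active reference, even when
the vnode's FS-private data pointer was never asserted.

```c
#include <assert.h>
#include <stddef.h>

/* Toy vnode; v_data may legitimately be NULL for virtual nodes. */
struct vnode {
	int   v_usecount;
	void *v_data;
};

/* Return 1 if the mount may proceed, 0 if any vnode is still
 * referenced (a real kernel would return EBUSY).  Note the check is on
 * the reference count alone, not on whether v_data is asserted. */
static int toy_can_unmount(const struct vnode *vnodes, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (vnodes[i].v_usecount > 0)
			return 0;
	return 1;
}
```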

For what it's worth, the vnode problems all seem to be related to
lack of data abstraction -- promiscuous use of vnode data, etc. --
and that is not impossible to clean up, just time consuming.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


