Date:      Thu, 29 Aug 1996 09:16:20 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        michaelh@cet.co.jp (Michael Hancock)
Cc:        terry@lambert.org, eric@ms.uky.edu, freebsd-fs@FreeBSD.ORG, current@FreeBSD.ORG
Subject:   Re: vclean (was The VIVA file system)
Message-ID:  <199608291616.JAA28774@phaeton.artisoft.com>
In-Reply-To: <Pine.SV4.3.93.960829085831.4475G-100000@parkplace.cet.co.jp> from "Michael Hancock" at Aug 29, 96 09:29:07 am

> My interpretation of the vnode global pool design was that
> vgone...->vclean wouldn't be called very often.  It would only be called
> by getnewvnode() when free vnodes were not available and for cases when
> the vnode is deliberately revoked.
> 
> Inactive() would mark both the vnode/inode inactive, but the data would be
> left intact even when usecount went to zero so that all the important data
> could be reactivated quickly.
> 
> It's not working this way, and it doesn't look trivial to get it to work
> this way.

That's right.  This is a natural consequence of moving the cache from its
formerly separate location into its now-unified location.

Because you cannot look up a buffer by device (and the device association
would never be destroyed for a valid buffer in core, yet unreclaimed),
the buffers on the vnodes in the pool lack the locality of the
pre-VM/cache-unification code.

The unification was such a tremendous win that this cost was either hidden
or, more likely, discounted.  I'd like to see it revisited.


> Regarding local per fs pools you still need some kind of global memory
> management policy.  It seems less complicated to manage a global pool,
> than local per fs pools with opaque VOP calls. 

The amount of memory is relatively small, and we are already running
a modified zone allocator in any case.  I don't see any conflict in
defining additional zones.  How do I reclaim a packet reassembly
buffer when I need another vnode?  Right now, I don't.  The conflict
resolution is intra-pool.  Inter-pool conflicts are resolved either
by static resource limits, or by soft limits and/or watermarking.
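
Something like the following is all I have in mind; the names are
invented for the example, not anything in the tree:

struct pool_zone {
        const char *pz_name;    /* "vnode", "ipreass", ... */
        unsigned    pz_inuse;   /* objects currently allocated */
        unsigned    pz_soft;    /* soft limit: start pushing back */
        unsigned    pz_hard;    /* static limit: never exceed */
};

/*
 * Intra-pool: the zone answers only for itself.  Inter-pool: the
 * static/soft limits are chosen so the zones cannot starve each
 * other, so no cross-pool reclaim is ever needed.
 */
int
pool_zone_canalloc(const struct pool_zone *pz)
{
        if (pz->pz_inuse >= pz->pz_hard)
                return (0);     /* static limit reached */
        if (pz->pz_inuse >= pz->pz_soft)
                return (-1);    /* over the watermark; caller backs off */
        return (1);
}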


> Say you've got FFS, LFS, and NFS systems mounted and fs usage patterns
> migrate between the fs's.  You've got limited memory resources.  How do
> you determine which local pool to recover vnodes from?  It'd be
> inefficient to leave the pools wired until the fs was unmounted. Complex
> LRU-like policies across multiple local per fs vnode pools also sound
> pretty complicated to me. 

You keep a bias statistic, maintained on a per-pool basis, for the
reclamation, and the reclaimer operates at pool granularity, if
in fact you allow such reclamation to occur (see my preceding paragraph
for preferred approaches to a knowledgeable reclaimer).
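
Roughly, and again with made-up names:

struct vn_pool {
        const char    *vp_fs;       /* "ffs", "lfs", "nfs", ... */
        unsigned long  vp_bias;     /* decayed recent-activity counter */
        unsigned       vp_nvnodes;  /* vnodes currently held */
};

/* The reclaimer picks the least-active pool and reclaims from it. */
struct vn_pool *
pool_to_reclaim(struct vn_pool *pools, int npools)
{
        struct vn_pool *victim = NULL;
        int i;

        for (i = 0; i < npools; i++)
                if (pools[i].vp_nvnodes != 0 &&
                    (victim == NULL || pools[i].vp_bias < victim->vp_bias))
                        victim = &pools[i];
        return (victim);
}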


> We also need to preserve the vnode revoking semantics for situations like
> revoking the session terminals from the children of session leaders.

This is a tty subsystem function, and I do not agree with the current
revocation semantics, mostly because I think tty devices should be
instanced per controlling-tty reference.  This would allow the reference
to be invalidated via flagging rather than by using a separate opv table.
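
What I mean by flagging is no more than this kind of check at the top
of each op (the flag and field names are invented for illustration):

#include <errno.h>

#define VREVOKED        0x0001          /* invented flag */

struct xvnode {
        int     xv_flag;
        /* ... */
};

/*
 * Each op checks the flag up front instead of the vnode being
 * re-pointed at a separate dead/revoked operations vector.
 */
int
xvop_read(struct xvnode *vp)
{
        if (vp->xv_flag & VREVOKED)
                return (EBADF);         /* reference was revoked */
        /* ... normal read path ... */
        return (0);
}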

If you look for "struct fileops", you will see another bogosity that
makes this problematic.  Resolve the struct fileops, and the
carrying around of all that dead weight in the fd structs, and you have
resolved the deadfs problem at the same time.  The specfs stuff is going
to go away with devfs, leaving UNIX domain sockets, pipes (which should
be implemented as an opaque FS reference not exported as a mount point
mapping to user space), and the VFS fileops (which should be the only
ones, and therefore implicit, anyway).
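
A shape sketch of what resolving it would look like, with invented
declarations rather than the real structures:

struct xvnode;                          /* everything is a vnode */

struct xfileops {
        int     (*fo_read)(void *fp);
        int     (*fo_write)(void *fp);
        /* ... */
};

/* Current shape: per-fd dead weight, one ops vector per file type. */
struct xfile_now {
        struct xfileops *f_ops;         /* socket ops, vnode ops, pipe ops */
        void            *f_data;
};

/* Proposed shape: the vnode's op vector is the only one, so it is
 * implicit, and the deadfs special case goes away with it. */
struct xfile_then {
        struct xvnode   *f_vnode;       /* sockets, pipes, devices alike */
};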

It's really not as complicated as you want to make it. 8-).


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


