Date: Sat, 2 Oct 2004 15:44:35 -0400 (EDT)
From: Garrett Wollman <wollman@khavrinen.lcs.mit.edu>
To: "David E. Cross" <crossd@cs.rpi.edu>
Cc: port-freebsd@openafs.org
Subject: [OpenAFS-port-freebsd] Re: FreeBSD 5.2.1 Client Success!
Message-ID: <200410021944.i92JiZ7X099861@khavrinen.lcs.mit.edu>
In-Reply-To: <20041002030159.A84436@monica.cs.rpi.edu>
References: <66F8A41B3F4164470D9446F2@[192.168.1.16]> <F99BE7989AEABA2B6ABCEF48@[192.168.1.16]> <20041002030159.A84436@monica.cs.rpi.edu>
<<On Sat, 2 Oct 2004 03:05:50 -0400 (EDT), "David E. Cross" <crossd@cs.rpi.edu> said:

> It _used_ to be that AFS kept an artificially incremented VNODE count
> to prevent "its" vnodes from re-entering the system pool; there were
> more than a couple of places where this caused various OS assumptions
> to be false.

I changed this last year: an afsnode HAS-A vnode now instead of IS-A.

> panic: lockmgr: locking against myself

Anyone working on this code should compile the kernel with
DEBUG_VFS_LOCKS in order to get better assertions.  Generally, the
problem is that some AFS vnops are called with the vnode locked and
some are not, and AFS doesn't keep track, down the call graph, of
which case the current operation is in.  Often AFS will call back into
the OSI layer to do some manipulation of the underlying vnode which
requires the node to be locked.

-GAWollman
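[For readers following along: a minimal sketch of the HAS-A versus IS-A
distinction mentioned above.  These are NOT the real OpenAFS or FreeBSD
structures; the names (`vnode_stub`, `afsnode_isa`, `afsnode_hasa`) are
illustrative stand-ins showing the two designs.]

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the OS vnode. */
struct vnode_stub {
    int v_usecount;
};

/* Old IS-A design: the AFS node embeds the vnode, so the vnode's
 * storage lives and dies with the AFS node.  To keep such vnodes out
 * of the system free pool, AFS had to pin the use count artificially,
 * violating OS assumptions about vnode lifecycle. */
struct afsnode_isa {
    struct vnode_stub av_vnode;     /* embedded: afsnode IS-A vnode */
    int av_flags;
};

/* New HAS-A design: the AFS node merely holds a reference to a vnode
 * that the OS allocates and recycles through its normal pool. */
struct afsnode_hasa {
    struct vnode_stub *av_vnodep;   /* referenced: afsnode HAS-A vnode */
    int av_flags;
};
```

With HAS-A, the vnode reference can be taken and dropped through the
OS's own interfaces, so no artificial use-count games are needed.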
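[Also for readers following along: a toy sketch of why "lockmgr:
locking against myself" fires.  A non-recursive exclusive lock panics
if the thread that already owns it tries to acquire it again, which is
exactly what happens when a vnop entered with the vnode locked calls
back into a layer that locks it once more.  The names here
(`xlock`, `xlock_acquire`) are hypothetical, not FreeBSD's lockmgr API.]

```c
/* Toy non-recursive exclusive lock; 0 means unowned. */
struct xlock {
    long xl_owner;
};

/* Returns 0 on success, -1 in the case where real lockmgr would
 * panic: the acquiring thread already owns the lock.  (Real code
 * would sleep when the lock is owned by a *different* thread; that
 * path is omitted here.) */
static int
xlock_acquire(struct xlock *lk, long tid)
{
    if (lk->xl_owner == tid)
        return -1;              /* "locking against myself" */
    lk->xl_owner = tid;
    return 0;
}
```

The fix is discipline, not recursion: each vnop must know whether it
was entered with the vnode locked, so it never re-locks on the way
back down through the OSI layer.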