Date:      Sun, 17 Feb 2002 20:56:24 -0800 (PST)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Andrew Gallatin <gallatin@cs.duke.edu>
Cc:        freebsd-stable@FreeBSD.ORG
Subject:   Re: FFS node recycling?
Message-ID:  <200202180456.g1I4uOk10820@apollo.backplane.com>
References:   <15472.23454.686939.502647@grasshopper.cs.duke.edu>

:How & how often do FFS nodes get free'd?
:
:I recently had a lockup on my workstation. (alpha UP1000, 640MB,
:4.5-STABLE, also an NFS server for my various testboxes in my home
:office).  I was doing a local cvs diff on the src/sys tree, as well as
:building a kernel, and doing a few other things (xemacs, gnome,
:sawfish, konqueror, 20 or so shells).  The cvs diff got wedged in (I
:think) inode.

    You would need to analyze a kernel core to really figure out what
    happened, but getting wedged in 'inode' with no other processes
    in weird states implies a vnode lock deadlock.  If you do see other
    processes in weird states, like 'vmwait', it could indicate a low
    memory deadlock.  And there are other possibilities.

:At this point, I noticed that the FFS node malloc pool seemed to be
:quite near its limit.  I killed the make and tried to recover, but I
:couldn't seem to get the number of FFS node allocations down & other
:jobs started to wedge on IO.  I was intending to drop into the
:debugger and get a dump, but the machine locked solid when I
:attempted to vty-switch out of X.

    The FFS node malloc pool should definitely not be near its limit.
    That is, if the limit is typically 79MB (as you show below), then
    the amount actually used should not be anywhere near that number.

    If it is, it is quite possible that the kernel malloc subsystem has
    deadlocked.

    The 'kern.maxvnodes' sysctl can be used to limit the size of the
    pool.  It isn't perfect but it should work fairly well.  The system
    does not ever free vnode structures (vmstat -m | fgrep vnode), but it
    will attempt to recycle them under a number of conditions:

	* When you umount a partition
	* When the vnode has no cached pages associated with it (more
	  common on machines with less than 2GB of memory).
	* When the number of active vnodes exceeds kern.maxvnodes.
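    As a quick check against that third condition, something like the
    following sketch could be used.  The sysctl names are the 4.x-era
    ones (debug.numvnodes in particular is an assumption here), and the
    fallback numbers are made up for illustration:

```shell
# Compare allocated vnodes against the kern.maxvnodes ceiling.
# debug.numvnodes is assumed to be the 4.x name for the allocated-vnode
# counter; the fallbacks are illustrative sample values, used when the
# sysctl is unavailable.
maxv=$(sysctl -n kern.maxvnodes 2>/dev/null) || maxv=36959
numv=$(sysctl -n debug.numvnodes 2>/dev/null) || numv=37215
if [ "$numv" -gt "$maxv" ]; then
	echo "numvnodes exceeds maxvnodes by $((numv - maxv))"
else
	echo "numvnodes within the maxvnodes limit"
fi
```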

    systat -vm 1 will show you:

	desiredvnodes	- this is kern.maxvnodes
	numvnodes	- number of vnodes allocated.  Might exceed
			  kern.maxvnodes
	freevnodes	- Of the above number, the number of vnodes that
			  are on the free list.
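    The vnodes actually in use (and hence holding inodes in the 'FFS
    node' pool) are the difference of the last two counters.  With
    sample numbers (made up for illustration, not from a live system):

```shell
# Sample counters -- illustration only, not from a live system.
numvnodes=37215    # vnodes allocated
freevnodes=1803    # of those, sitting on the free list
# Vnodes actually in use; roughly bounds the 'FFS node' pool:
echo $((numvnodes - freevnodes))
```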

    It is quite possible that we haven't tuned the Alpha's KVM reservation
    as well as we have tuned the i386.

    The 'FFS node' malloc pool is associated with the number of inodes,
    typically inodes associated with active vnodes.  This pool is allocated
    and freed as needed but should not have more elements allocated than
    the number of *active* vnodes in the system.
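    To watch that pool, the relevant columns can be pulled out of the
    'vmstat -m' output.  A sketch, using the FFS node line quoted in the
    original mail below (the column positions -- count in use, memory in
    use, high-water mark, limit -- are assumed from the 4.x layout):

```shell
# Extract the in-use count, memory in use, and limit from a
# 'vmstat -m' line (field positions assumed from the 4.x layout).
line='FFS node 23670 11835K  11861K 79618K   336089    0     0  512'
echo "$line" | awk '{print "inuse="$3, "memuse="$4, "limit="$6}'
```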

:I've so far been unable to reproduce the problem.  
:
:BTW - since this happened, I've been paying close attention to what vmstat
:says about FFS node usage:
:
:% vmstat -m | grep FFS 
: 512  ATA generic, UFS dirhash, FFS node, newblk, NFSV3 srvdesc,
:     FFS node 23670 11835K  11861K 79618K   336089    0     0  512
:
:Is this normal?

    Yes, that is typical.

						-Matt

:Thanks,
:
:Drew
