Date:      Sun, 24 Feb 2002 20:44:08 -0800 (PST)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
Cc:        arch@FreeBSD.ORG
Subject:   Re: reclaiming v_data of free vnodes
Message-ID:  <200202250444.g1P4i8X29005@apollo.backplane.com>
References:  <200202231556.g1NFu9N9040749@silver.carrots.uucp.r.dl.itc.u-tokyo.ac.jp> <200202242041.g1OKfXt95731@apollo.backplane.com> <200202250325.g1P3PVN9092431@silver.carrots.uucp.r.dl.itc.u-tokyo.ac.jp>

:On Sun, 24 Feb 2002 12:41:33 -0800 (PST),
:  Matthew Dillon <dillon@apollo.backplane.com> said:
:
:Matthew>     cache).  330,000 vnodes and/or inodes is pushing what a kernel
:Matthew>     with only 1G of KVM can handle.  For these machines you may want
:Matthew>     to change the kernel start address from 0xc0000000 (1G of KVM) to
:Matthew>     0x80000000 (2G of KVM).  I forget exactly how that is done.
:
:Increasing KVM is not likely to help. The panic message in the Friday
:night was something like this:
:
:kmem_malloc(256): kmem_map too small: (~=200M) total allocated
:
:in kmem_malloc() called by ffs_vget().
:
:It may help to expand kmem_map to 512M. That, however, scales the
:number of vnodes/inodes up to only about twice the present number.

    You can use the boot-time tunable 'kern.vm.kmem.size' to set
    the size of the kmem_map.  You may have to reduce the size of the
    buffer cache to make everything fit.  Also, if you make kmem_map
    too large you can run the system out of other kinds of KVA,
    like the zalloc memory space (which is allocated from the remaining
    KVA beyond the kmem_map), and memory for pipes.
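
    For example, something like this in /boot/loader.conf (the 512M
    figure is purely illustrative; pick a size that still leaves room
    for the buffer cache and the other KVA consumers mentioned above):

	kern.vm.kmem.size="536870912"	# 512M, given in bytes

    After a reboot the resulting submap size should be visible via
    'sysctl vm.kmem_size'.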

    If this gets too tight you will have to increase the total amount
    of KVM for the system (which also decreases the size of the user
    per-process VM).
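
    On i386 that split is set in the kernel config.  A sketch, assuming
    your source tree has the KVA_PAGES option (each unit is a 4MB page
    table page, so 256 gives the default 1G of KVA and 512 gives 2G):

	options		KVA_PAGES=512

    Rebuild and reinstall the kernel.  This is the same thing as moving
    the kernel base address from 0xc0000000 down to 0x80000000 as
    mentioned above; user processes lose the corresponding 1G of VM.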

    At some point the number of vnodes will balance against cacheable
    memory.  Vnodes are reclaimed when they no longer have any backing
    VM pages.  The more memory the machine has, the more vnodes it can
    cache before it starts reclaiming them.  This is why this hasn't been
    a problem before now... machines typically did not have enough physical
    memory to be able to cache backing store for a large number of vnodes.
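
    You can watch the balance from userland; the counters below are the
    4.x sysctl names if I have them right (numvnodes/freevnodes live
    under 'debug' in that tree):

	sysctl kern.maxvnodes debug.numvnodes debug.freevnodes

    and if the auto-sized limit is what is blowing out kmem_map you can
    clamp it at runtime (200000 is just an example figure):

	sysctl -w kern.maxvnodes=200000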

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>

:Matthew>     Did kern.maxvnodes auto-size to 330,000 or did you set it
:Matthew>     there manually?  Or is kern.maxvnodes set lower, and the system
:Matthew>     blew past it on its own under load?
:
:It is set automatically by the kernel.
:
:-- 
:Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp> <tanimura@FreeBSD.org>
:

