Date:      Fri, 1 Aug 2003 17:54:57 -0500 (CDT)
From:      Mike Silbersack <silby@silby.com>
To:        Steve Francis <steve@expertcity.com>
Cc:        freebsd-net@freebsd.org
Subject:   Re: mbuf clusters exhausted w/o reaching max?
Message-ID:  <20030801174106.D2165@odysseus.silby.com>
In-Reply-To: <3F2ADA02.7050304@expertcity.com>
References:  <3F2AC3F5.3010804@expertcity.com> <20030801152510.J2165@odysseus.silby.com> <3F2ADA02.7050304@expertcity.com>

On Fri, 1 Aug 2003, Steve Francis wrote:

>  From LINT (see below),  the comment says the VM_KMEM_SIZE_MAX is 200M,
> yet the option says 100M.  Comment typo, or typo in the option?
> Is increasing the VM_KMEM_SIZE_MAX (which should take us to up to 256M
> given 1G RAM) sufficient to allow extra space for mbuf clusters?

The values listed in LINT are more illustrative than functional, so you
can ignore them for the most part.  The actual default for
VM_KMEM_SIZE_MAX lives in /usr/src/sys/i386/include/vmparam.h, where it's
set to 200 MB.

And I misremembered: the scale factor is 1/3, not 1/4.
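
For reference, the relevant pieces look roughly like this (paraphrased
from memory from the 4.x sources, so check your own tree for the exact
values):

    /* sys/i386/include/vmparam.h (approximate) */
    #ifndef VM_KMEM_SIZE_SCALE
    #define VM_KMEM_SIZE_SCALE   (3)                  /* kmem may grow to physmem/3... */
    #endif
    #ifndef VM_KMEM_SIZE_MAX
    #define VM_KMEM_SIZE_MAX     (200 * 1024 * 1024)  /* ...capped at 200 MB */
    #endif

    /* kmeminit() then does, in effect: */
    vm_kmem_size = min(mem_size / VM_KMEM_SIZE_SCALE, VM_KMEM_SIZE_MAX);

So with 1G of RAM you get min(341M, 200M) = 200M by default, which is why
raising VM_KMEM_SIZE_MAX alone only buys you up to physmem/3.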

If you want to change the kmem size, the easiest way to do it is to edit
/boot/loader.conf and set kern.vm.kmem.size to 300M.  (Or more, perhaps.)
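
In loader.conf syntax that's one line (values are quoted strings):

    # /boot/loader.conf
    kern.vm.kmem.size="300M"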

> I googled and found this from das@freebsd.org on a related question:
> "Within the kernel's share of this address space, memory is split into
> submaps, such as the mb_map (for the network), buffer_map for the
> filesystem buffer cache, and the kmem_map for just about everything
> else. These submaps are size-limited to prevent any one of them from
> getting out of hand."
> I presume I need to increase mb_map, but could not find a specific
> option for that. Does that scale with an increased VM_KMEM_SIZE_MAX?
>
> These servers basically run one process, which is about 500M resident
> and total size, on 1G RAM machine, and do tons of network IO with lots
> of packets.  Given that, do you still anticipate " causing other
> problems in the process" if I tune this?
>
>
> Thanks

The mb_map is allocated as part of the kernel map, so if other kernel
users eat up RAM before the mbuf subsystem does, then the mbuf subsystem
has to starve.  RAM for mbufs is not allocated until it is actually used,
so if you only ever use 200 clusters, that's all the real RAM that will be
used.  However, RAM used by mbufs (and clusters) is never freed back for
non-mbuf usage, so if you have a load spike and use 10000 clusters, that
RAM is unusable by the rest of the system afterwards.
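
To put a number on that: clusters are MCLBYTES (2048 bytes) apiece, so a
spike to 10000 clusters pins roughly

    10000 * 2048 bytes = ~20 MB

for the life of the system.  netstat -m will show you current and peak
cluster usage if you want to see how close you've come.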

I think if you stay under 400M you'll probably be OK.  The main
issues with runaway memory usage occur on 2G and 4G machines, where the
kernel map + buffer cache + swap structures + etc. were growing extremely
large.  However, if 50 failed memory allocations are all that you saw, and
everything else is working properly, you might consider leaving things as
they are. :)
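
If you want to sanity-check the budget yourself (i386 numbers from memory,
so treat this as a rough sketch): the kernel map is about 1 GB of virtual
address space by default, and the kmem_map, buffer_map, mb_map, and the
kernel image itself all have to fit inside it:

    kmem (300M) + buffer cache + mb_map + kernel text/data < ~1 GB

A 300-400M kmem_map leaves plenty of headroom on a 1G box; it's the
auto-scaled values on the 2G/4G machines that were pushing the limit.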

FWIW, Bosko Milekic is working on a new mbuf allocator for 5.x which will
allow mbuf memory to be freed back to the common pool, PHK is thinking of
ways to reduce the buffer cache's memory usage, and some other memory
issues have already been fixed as a result of 5.x's UMA memory
allocator.  Hopefully by 5.2 or 5.3 you will no longer need to tweak any
of these settings.  (Very little of this work will be MFC'd to 4.x, due to
the size of the changes.)

Mike "Silby" Silbersack


