Date:      Mon, 03 Dec 2012 09:24:01 +0100
From:      Andre Oppermann <andre@freebsd.org>
To:        Maxim Sobolev <sobomax@FreeBSD.org>
Cc:        Alfred Perlstein <bright@mu.org>, "src-committers@freebsd.org" <src-committers@freebsd.org>, "svn-src-user@freebsd.org" <svn-src-user@freebsd.org>
Subject:   Re: svn commit: r242910 - in user/andre/tcp_workqueue/sys: kern sys
Message-ID:  <50BC61A1.9040604@freebsd.org>
In-Reply-To: <50BC4EF6.8040902@FreeBSD.org>
References:  <201211120847.qAC8lEAM086331@svn.freebsd.org> <50A0D420.4030106@freebsd.org> <0039CD42-C909-41D0-B0A7-7DFBC5B8D839@mu.org> <50A1206B.1000200@freebsd.org> <3D373186-09E2-48BC-8451-E4439F99B29D@mu.org> <50BC4EF6.8040902@FreeBSD.org>

On 03.12.2012 08:04, Maxim Sobolev wrote:
> Hi Alfred and Andre,
>
> It's nice that somebody is taking care of this. The default settings pretty much suck on any
> off-the-shelf PC hardware from the last 5 years.
>
> We are also in quite an mbuf-hungry environment. It's not 10GigE, but we are forwarding voice
> traffic, which consists predominantly of very small packets (20-40 bytes). So we have a lot of
> small packets in flight, which uses a lot of mbufs.
>
> What happens, however, is that the network stack consistently locks up after we put more than
> 16-18 MB/sec onto it, which corresponds to about 350-400 Kpps.

Can you drop into kdb?  Do you have any backtrace to see where or how it
locks up?
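
If it doesn't drop into the debugger on its own, something like the sketch below can be
placed in the suspect path to get a console backtrace once allocations start failing.
This assumes a kernel built with KDB; the function and counter names (rx_refill_diag,
refill_failures, the threshold of 100) are made up for illustration, not taken from any
real driver:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kdb.h>

/*
 * Hypothetical diagnostic hook: if the RX refill path is suspected of
 * wedging, a console backtrace shows where we are once mbuf allocations
 * start failing repeatedly.
 */
static void
rx_refill_diag(int refill_failures)
{

        if (refill_failures > 100) {
                printf("rx refill: %d consecutive mbuf allocation failures\n",
                    refill_failures);
                kdb_backtrace();        /* print the current stack to the console */
        }
}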

> This is way lower than any nmbclusters/maxusers limits we have (1.5m/1500).
>
> With about half of that critical load right now we see something along these lines:
>
> 66365/71953/138318/1597440 mbuf clusters in use (current/cache/total/max)
> 149617K/187910K/337528K bytes allocated to network (current/cache/total)
>
> Machine has 24GB of ram.
>
> vm.kmem_map_free: 24886267904
> vm.kmem_map_size: 70615040
> vm.kmem_size_scale: 1
> vm.kmem_size_max: 329853485875
> vm.kmem_size_min: 0
> vm.kmem_size: 24956903424
>
> So my question is whether there are some other limits that can cause mbuf starvation if the number
> of allocated clusters grows to more than 200-250k. I am also curious how this works in a dynamic
> system: since no memory is pre-allocated for mbufs, what happens if the network load increases
> gradually while the system is running? Is it possible to get ENOMEM eventually, with all the memory
> already taken by other pools?

Yes, mbuf allocation is not guaranteed and can fail before the limit is
reached.  What may happen is that an RX DMA ring refill fails and the
driver wedges.  That would be a driver bug.
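
To make that failure mode concrete, here is a minimal sketch of an RX ring refill loop; the
structure and names (rxq, slots, nmissing, RX_RING_SIZE) are hypothetical and not taken from
any particular driver:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

#define RX_RING_SIZE    256                     /* hypothetical ring size */

struct rxq {
        struct mbuf     *slots[RX_RING_SIZE];   /* one cluster per descriptor */
        int              nmissing;              /* slots left empty on last refill */
};

static void
rxq_refill(struct rxq *q)
{
        struct mbuf *m;
        int i;

        q->nmissing = 0;
        for (i = 0; i < RX_RING_SIZE; i++) {
                if (q->slots[i] != NULL)
                        continue;
                m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
                if (m == NULL) {
                        /*
                         * Allocation failed under memory pressure.
                         * Remember the empty slot; the caller must
                         * retry later (next interrupt or a timer).
                         * Silently giving up here for good is the
                         * driver bug described above.
                         */
                        q->nmissing++;
                        continue;
                }
                m->m_len = m->m_pkthdr.len = MCLBYTES;
                q->slots[i] = m;
                /* bus_dma(9) map load and descriptor write omitted. */
        }
        /* If q->nmissing != 0, the caller should schedule another refill. */
}

The key point is the m == NULL branch: m_getcl(M_NOWAIT, ...) can legitimately fail under
memory pressure, so the driver has to remember the empty descriptors and come back to them
later.  A driver that never retries ends up with an empty RX ring and looks exactly like a
locked-up stack.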

Can you give more information on the NICs and drivers you use?

-- 
Andre

> Mem: 6283M Active, 12G Inact, 3760M Wired, 754M Cache, 2464M Buf, 504M Free
> Swap: 40G Total, 6320K Used, 40G Free
>
> Any pointers/suggestions are greatly appreciated.
>
> -Maxim
>
>



