From: Andre Oppermann <andre@freebsd.org>
Date: Mon, 03 Dec 2012 09:24:01 +0100
To: Maxim Sobolev
Cc: Alfred Perlstein, src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: Re: svn commit: r242910 - in user/andre/tcp_workqueue/sys: kern sys

On 03.12.2012 08:04, Maxim Sobolev wrote:
> Hi Alfred and Andre,
>
> It's nice somebody is taking care of this. Default settings pretty much
> suck on any off-the-shelf PC hardware from the last 5 years.
>
> We are also in a quite mbuf-hungry environment. It's not 10GigE, but we
> are forwarding voice traffic, which consists predominantly of very small
> packets (20-40 bytes). So we have a lot of small packets in flight,
> which uses a lot of mbufs.
>
> What happens, however, is that the network stack consistently locks up
> once we put more than 16-18MB/sec onto it, which corresponds to about
> 350-400 Kpps.

Can you drop into kdb? Do you have a backtrace to see where or how it
locks up?

> This is way below the nmbclusters/maxusers limits we have (1.5m/1500).
>
> With half of that critical load we currently see something along these
> lines:
>
> 66365/71953/138318/1597440 mbuf clusters in use (current/cache/total/max)
> 149617K/187910K/337528K bytes allocated to network (current/cache/total)
>
> The machine has 24GB of RAM.
>
> vm.kmem_map_free: 24886267904
> vm.kmem_map_size: 70615040
> vm.kmem_size_scale: 1
> vm.kmem_size_max: 329853485875
> vm.kmem_size_min: 0
> vm.kmem_size: 24956903424
>
> So my question is whether there are other limits that can cause mbuf
> starvation when the number of allocated clusters grows past 200-250k.
> I am also curious how this works in a dynamic system: since no memory
> is pre-allocated for mbufs, what happens if the network load increases
> gradually while the system is running? Is it possible to eventually
> get ENOMEM, with all memory already taken by other pools?

Yes, mbuf allocation is not guaranteed and can fail before the limit is
reached. What may happen is that an RX DMA ring refill fails and the
driver wedges. This would be a driver bug.
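To make that failure mode concrete, here is a minimal sketch of such a
refill path. The rx_ring structure and the rxr_* helpers are hypothetical,
for illustration only; m_getcl(9) and its flags are the stock mbuf
allocator interface:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

struct rx_ring;				/* hypothetical driver state */

/* Hypothetical helpers, named for illustration only. */
static void rxr_post_buffer(struct rx_ring *rxr, struct mbuf *m);
static void rxr_schedule_refill_retry(struct rx_ring *rxr);

static int
rxr_refill(struct rx_ring *rxr, int count)
{
	struct mbuf *m;
	int refilled = 0;

	while (refilled < count) {
		/* Cluster allocation may fail under memory pressure. */
		m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
		if (m == NULL)
			break;	/* tolerate a partially filled ring */
		rxr_post_buffer(rxr, m);	/* hand the buffer to DMA */
		refilled++;
	}

	/*
	 * If the ring drained completely and this refill got nothing,
	 * a driver that never retries leaves the ring empty forever:
	 * RX "wedges" even though memory is freed again later.  A
	 * correct driver schedules a retry instead.
	 */
	if (refilled == 0)
		rxr_schedule_refill_retry(rxr);
	return (refilled);
}

With M_NOWAIT the allocation is allowed to fail transiently; the bug is
treating a failed refill as permanent.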
Can you give more information on the NICs and drivers you use?

-- 
Andre

> Mem: 6283M Active, 12G Inact, 3760M Wired, 754M Cache, 2464M Buf, 504M Free
> Swap: 40G Total, 6320K Used, 40G Free
>
> Any pointers/suggestions are greatly appreciated.
>
> -Maxim
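For reference, the limits quoted above can also be watched from userland.
Below is a minimal sketch using sysctlbyname(3); the sysctl names are the
ones quoted in this thread, while the assumed C types (int for
nmbclusters, unsigned long for the vm.kmem_* values) may vary by release:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int nmbclusters;
	unsigned long kmem_size, kmem_map_free;
	size_t len;

	/* Cluster limit the thread compares against (1.5m here). */
	len = sizeof(nmbclusters);
	if (sysctlbyname("kern.ipc.nmbclusters", &nmbclusters, &len,
	    NULL, 0) == -1)
		err(1, "kern.ipc.nmbclusters");

	/* Total and free kernel map space, as quoted above. */
	len = sizeof(kmem_size);
	if (sysctlbyname("vm.kmem_size", &kmem_size, &len, NULL, 0) == -1)
		err(1, "vm.kmem_size");
	len = sizeof(kmem_map_free);
	if (sysctlbyname("vm.kmem_map_free", &kmem_map_free, &len,
	    NULL, 0) == -1)
		err(1, "vm.kmem_map_free");

	printf("nmbclusters=%d kmem_size=%lu kmem_map_free=%lu\n",
	    nmbclusters, kmem_size, kmem_map_free);
	return (0);
}

Polling these alongside netstat -m output should show whether cluster
usage is anywhere near the configured ceiling when the stack locks up.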