From: Daniel Braniss <danny@cs.huji.ac.il>
To: Gerrit Kühn
Cc: stable@freebsd.org, Willem Jan Withagen, Jack Vogel, Jeremy Chadwick
Date: Fri, 26 Feb 2010 22:09:32 +0200
In-reply-to: <20100226174021.8feadad9.gerrit@pmp.uni-hannover.de>
Subject: Re: mbuf leakage with nfs/zfs?
 (was: em0 freezes on ZFS server)

> On Fri, 26 Feb 2010 17:41:02 +0200 Daniel Braniss
> wrote about Re: em0 freezes on ZFS server:
>
> DB> check:
> DB> ftp://ftp.cs.huji.ac.il/users/danny/freebsd/plot.ps
> DB> x is seconds, y is mbufs current.
>
> That doesn't look as bad as mine. I had 37k when I rebooted the machine
> some minutes ago (and it's basically idle, just serving a few nfs clients
> that don't do much).
> But from the values Jeremy has posted and from my own comparisons here I
> would think that something like 5k of mbuf clusters would be normal for
> my machine (and probably also for yours).
>
> Some more info from my side:
> In the meantime I also tried a different network interface. The onboard
> nfe interface causes the same problems, so it is probably not an
> em-specific issue.
> Furthermore I found this via Google: .

I'll have to do some packet snooping to check if it's TCP or UDP nfs
traffic, since some of the clients are Linux ...

> I patched and recompiled my kernel with this, just to try it out. Right
> now I have
>
> 2264/1321/3585 mbufs in use (current/cache/total)
> 1239/1017/2256/65000 mbuf clusters in use (current/cache/total/max)
> 1239/809 mbuf+clusters out of packet secondary zone in use (current/cache)
>
> but the uptime is only 12min so far. In some hours I'll know for certain
> if this patch has anything to do with the problem.

at the moment there is not much activity, but if you check the latest
plot.ps you will see that the bottom is slowly increasing, so my bet is
that there must be some leakage!

cheers,
	danny
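For anyone wanting to reproduce a plot like plot.ps, here is a minimal
sketch of how the sampling could be scripted. It assumes FreeBSD's
`netstat -m` output format (a line like
`1239/1017/2256/65000 mbuf clusters in use (current/cache/total/max)`);
the function names and the log file name are mine, not from this thread.

```shell
#!/bin/sh
# Log the "current" mbuf cluster count over time so it can be plotted
# later (x = seconds, y = mbufs current, as in plot.ps).

# Extract the "current" field from `netstat -m` output read on stdin.
mbuf_current() {
    awk '/mbuf clusters in use/ { split($1, a, "/"); print a[1] }'
}

# Sample once per minute, appending "epoch-seconds count" to mbuf.log.
sample_loop() {
    while :; do
        printf '%s %s\n' "$(date +%s)" "$(netstat -m | mbuf_current)" >> mbuf.log
        sleep 60
    done
}
```

Run `sample_loop &` on the server and leave it for a few hours; the
resulting two-column log can be fed straight to gnuplot to see whether
the baseline keeps creeping up.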