From: Rick Macklem <rmacklem@uoguelph.ca>
To: Zack Kirsch
Cc: freebsd-fs@freebsd.org
Date: Wed, 20 Jul 2011 16:20:09 -0400 (EDT)
Subject: Re: nfsd server cache flooded, try to increase nfsrc_floodlevel

It's me again:

> Zack Kirsch wrote:
[good stuff snipped for brevity]
> >
> > We've done a few things to combat this problem:
> > 1) We increased the floodlevel to 65536.
> > 2) We made the floodlevel configurable via sysctl.
> I've thought that it would be nice to define this as a fraction of
> what kern.ipc.nmbclusters is set to, but I haven't looked to see how
> often an mbuf cluster ends up being a part of the cached reply.
>
> The 16K was just a very conservative number chosen when the server I
> did load tests against had 512Mbytes of RAM.
>
> I think tying it to kern.ipc.nmbclusters (or directly to the machine's
> RAM size, or both?) would be nice. Having yet another tunable that few
> understand (i.e., making it a sysctl) seems a less desirable fallback
> plan?

I just did a quick test and it seems that the replies cached for these
open_owners (and lock_owners too, I think) are usually just one mbuf, so
cranking the flood level way up shouldn't be too bad. Can anyone suggest
what would be an appropriate upper limit, given that each cached entry
will use one small malloc'd data structure plus one mbuf (without a
cluster)?

rick