From owner-freebsd-net@FreeBSD.ORG Tue Mar 19 15:15:15 2013
Date: Tue, 19 Mar 2013 11:15:13 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Garrett Wollman
Cc: freebsd-net@freebsd.org, andre@freebsd.org, Ivan Voras
Subject: Re: Limits on jumbo mbuf cluster allocation
Message-ID: <1807531371.4052865.1363706113648.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <1784154272.4050462.1363703851697.JavaMail.root@erie.cs.uoguelph.ca>

I wrote:
> Garrett Wollman wrote:
> > < > said:
> >
> > > I've attached a patch that has assorted changes.
> >
> > So I've done some preliminary testing on a slightly modified form of
> > this patch, and it appears to have no major issues. However, I'm
> > still waiting for my user with 500 VMs to have enough free to be able
> > to run some real stress tests for me.
> >
> > I was able to get about 2.5 Gbit/s throughput for a single streaming
> > client over local 10G interfaces with jumbo frames (through a single
> > switch and with LACP on both sides -- how well does lagg(4) interact
> > with TSO and checksum offload?). This is a little bit disappointing
> > (considering that the filesystem can do 14 Gbit/s locally) but still
> > pretty decent for one single-threaded client. This obviously does not
> > implicate the DRC changes at all, but does suggest that there is room
> > for more performance improvement. (In previous tests last year, I was
> > able to get a sustained 8 Gbit/s when using multiple clients.) I also
> > found that one of our 10G switches is reordering TCP segments in a
> > way that causes poor performance.
> >
> If the server for this test isn't doing anything else yet, you could
> try a test run with a single nfsd thread and see if that improves
> performance.
>
> ken@ emailed yesterday mentioning that out-of-order reads were
> resulting in poor performance related to ZFS and that a single nfsd
> thread improved that for his test.
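[A minimal sketch of that single-thread test on a stock FreeBSD server;
the -u/-t flags below are just the usual nfsd(8) transport flags and
should be adjusted to match whatever nfs_server_flags is already set to:]

    # stop the running NFS server threads, then restart with one thread
    /etc/rc.d/nfsd stop
    nfsd -u -t -n 1          # -n sets the number of nfsd server threads

    # or, to make the setting persist across restarts, in /etc/rc.conf:
    nfs_server_flags="-u -t -n 1"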
> Although a single nfsd thread isn't practical, it suggests that the
> nfsd thread affinity code that I had forgotten about, and that has
> never been ported to the new server, might be needed for this. (I'm
> not sure how to do the affinity stuff for NFSv4, but it should at
> least be easy to port the code so that it works for NFSv3 mounts.)
>
Oh, and don't hesitate to play with the rsize and readahead options on
the client mount (a sample mount line is sketched below). It is not
obvious what an optimal setting is for a given LAN/server config. (I
think the Linux client has a readahead option?)

rick

> rick
> ps: For a couple of years I had assumed that Isilon would be doing
> this, but they are no longer working on the FreeBSD NFS server, so the
> affinity stuff slipped through the cracks.
>
> > I'll hopefully have some proper testing results later in the week.
> >
> > -GAWollman
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
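[As an illustration of the client-side knobs mentioned above, a FreeBSD
NFSv3 mount with explicit rsize/wsize and readahead might look like the
following; the server name, export path, and values are placeholders,
and mount_nfs(8) documents the options supported by a given release:]

    # example only: tune transfer size and read-ahead on the client mount
    mount -t nfs -o nfsv3,tcp,rsize=65536,wsize=65536,readahead=4 \
        server:/export /mnt

    # a Linux client accepts rsize as a mount option as well, while its
    # read-ahead is tuned separately (e.g. via the read_ahead_kb sysfs knob)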