Date: Fri, 3 Oct 2008 10:06:16 +0100 (BST)
From: Robert Watson <rwatson@FreeBSD.org>
To: Danny Braniss
Cc: freebsd-hackers@freebsd.org, Jeremy Chadwick, freebsd-stable@freebsd.org,
    Claus Guttesen
Subject: Re: bad NFS/UDP performance

On Fri, 3 Oct 2008, Danny Braniss wrote:

>> OK, so it looks like this was almost certainly the rwlock change. What
>> happens if you pretty much universally substitute the following in
>> udp_usrreq.c:
>>
>> Currently            Change to
>> ---------            ---------
>> INP_RLOCK            INP_WLOCK
>> INP_RUNLOCK          INP_WUNLOCK
>> INP_RLOCK_ASSERT     INP_WLOCK_ASSERT
>
> I guess you were almost certainly correct :-) I did the global subst. on
> the udp_usrreq.c from 19/08, __FBSDID("$FreeBSD: src/sys/netinet/udp_usrreq.c,v
> 1.218.2.3 2008/08/18 23:00:41 bz Exp $"); and now udp is fine again!

OK. This is a change I'd rather not back out, since it significantly improves
performance for many other UDP workloads, so we need to figure out why it hurts
so badly here and whether there are reasonable alternatives. Would it be
possible for you to run the workload on both kernels with LOCK_PROFILING
enabled around the benchmark, so that we can compare lock contention in the two
cases? What we often find is that relieving contention at one point creates new
contention at another point, and if the primitive used at that point handles
contention less well for whatever reason, performance can drop rather than
improve. So perhaps we're looking at an issue in the UDP code dispatched from
so_upcall? Another, less satisfying (and fundamentally more difficult) answer
might be "something to do with the scheduler", but a bit more analysis may shed
some light.
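(For anyone following the thread who hasn't looked at the inpcb locking: the
substitution above comes down to reader/writer lock semantics. Below is a rough
user-space sketch using POSIX rwlocks -- it is not the udp_usrreq.c code, and
the thread bodies and the "packets" counter are invented purely for
illustration -- showing why INP_RLOCK lets several threads traverse a
connection concurrently while INP_WLOCK makes every acquisition exclusive.)

/*
 * Illustrative user-space analogue of the inpcb read/write locking,
 * not the kernel code.  Compile with: cc -o rwdemo rwdemo.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t inp_lock = PTHREAD_RWLOCK_INITIALIZER; /* stands in for the inpcb lock */
static unsigned long packets;                                  /* invented shared state */

/* Read-locked path: any number of threads may hold the lock at once. */
static void *
read_path(void *arg)
{
	(void)arg;
	pthread_rwlock_rdlock(&inp_lock);	/* ~ INP_RLOCK(inp) */
	/* ... inspect connection state, deliver a datagram ... */
	pthread_rwlock_unlock(&inp_lock);	/* ~ INP_RUNLOCK(inp) */
	return (NULL);
}

/* Write-locked path: exclusive; other readers and writers must wait. */
static void *
write_path(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&inp_lock);	/* ~ INP_WLOCK(inp) */
	packets++;				/* mutate shared state safely */
	pthread_rwlock_unlock(&inp_lock);	/* ~ INP_WUNLOCK(inp) */
	return (NULL);
}

int
main(void)
{
	pthread_t r, w;

	pthread_create(&r, NULL, read_path, NULL);
	pthread_create(&w, NULL, write_path, NULL);
	pthread_join(r, NULL);
	pthread_join(w, NULL);
	printf("packets handled: %lu\n", packets);
	return (0);
}

Danny's substitution forces the exclusive (write-locked) behaviour everywhere
and recovers the old throughput, so the LOCK_PROFILING comparison should show
where the read-locked path is paying for it.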
Robert N M Watson
Computer Laboratory
University of Cambridge

>
> danny
>
>> Robert N M Watson
>> Computer Laboratory
>> University of Cambridge
>>
>>> server is a NetApp:
>>>
>>> kernel from 18/08/08 00:00:0 :
>>>                       /----- UDP ----/  /---- TCP -------/
>>>     1*512  38528      0.19s  83.50MB    0.20s  80.82MB/s
>>>     2*512  19264      0.21s  76.83MB    0.21s  77.57MB/s
>>>     4*512   9632      0.19s  85.51MB    0.22s  73.13MB/s
>>>     8*512   4816      0.19s  83.76MB    0.21s  75.84MB/s
>>>    16*512   2408      0.19s  83.99MB    0.21s  77.18MB/s
>>>    32*512   1204      0.19s  84.45MB    0.22s  71.79MB/s
>>>    64*512    602      0.20s  79.98MB    0.20s  78.44MB/s
>>>   128*512    301      0.18s  86.51MB    0.22s  71.53MB/s
>>>   256*512    150      0.19s  82.83MB    0.20s  78.86MB/s
>>>   512*512     75      0.19s  82.77MB    0.21s  76.39MB/s
>>>  1024*512     37      0.19s  85.62MB    0.21s  76.64MB/s
>>>  2048*512     18      0.21s  77.72MB    0.20s  80.30MB/s
>>>  4096*512      9      0.26s  61.06MB    0.30s  53.79MB/s
>>>  8192*512      4      0.83s  19.20MB    0.41s  39.12MB/s
>>> 16384*512      2      0.84s  19.01MB    0.41s  39.03MB/s
>>> 32768*512      1      0.82s  19.59MB    0.39s  40.89MB/s
>>>
>>> kernel from 19/08/08 00:00:00:
>>>     1*512  38528      0.45s  35.59MB    0.20s  81.43MB/s
>>>     2*512  19264      0.45s  35.56MB    0.20s  79.24MB/s
>>>     4*512   9632      0.49s  32.66MB    0.22s  73.72MB/s
>>>     8*512   4816      0.47s  34.06MB    0.21s  75.52MB/s
>>>    16*512   2408      0.53s  30.16MB    0.22s  72.58MB/s
>>>    32*512   1204      0.31s  51.68MB    0.40s  40.14MB/s
>>>    64*512    602      0.43s  37.23MB    0.25s  63.57MB/s
>>>   128*512    301      0.51s  31.39MB    0.26s  62.70MB/s
>>>   256*512    150      0.47s  34.02MB    0.23s  69.06MB/s
>>>   512*512     75      0.47s  34.01MB    0.23s  70.52MB/s
>>>  1024*512     37      0.53s  30.12MB    0.22s  73.01MB/s
>>>  2048*512     18      0.55s  29.07MB    0.23s  70.64MB/s
>>>  4096*512      9      0.46s  34.69MB    0.21s  75.92MB/s
>>>  8192*512      4      0.81s  19.66MB    0.43s  36.89MB/s
>>> 16384*512      2      0.80s  19.99MB    0.40s  40.29MB/s
>>> 32768*512      1      1.11s  14.41MB    0.38s  42.56MB/s