From owner-freebsd-net@FreeBSD.ORG Mon Oct 25 13:44:04 2004
From: David Gilbert <dgilbert@daveg.ca>
Date: Mon, 25 Oct 2004 09:43:47 -0400
To: lukem.freebsd@cse.unsw.edu.au
Cc: freebsd-net@freebsd.org
Message-ID: <16765.787.356237.161540@canoe.dclg.ca>
Subject: Underutilisation of CPU --- am I PCI bus bandwidth limited?
List-Id: Networking and TCP/IP with FreeBSD

>>>>> "lukem" == lukem freebsd writes:

lukem> I posted this to freebsd-performance, but have not yet had the
lukem> question satisfactorily answered.  Since it is primarily network
lukem> related, I'm reposting it here.

lukem> The particular benchmark I have been using is a UDP echo test,
lukem> where I have a number of Linux boxes sending UDP packets to a
lukem> FreeBSD box, which echoes them at user level (think inetd udp
lukem> echo, though in fact I have also used an optimised server which
lukem> gets higher throughput).  Throughput is measured on the boxes
lukem> which generate the UDP packets.

lukem> I am measuring idle time using a CPU soaker process which runs
lukem> at a very low priority.  Top seems to confirm the output it
lukem> gives.

lukem> What I see is strange.  CPU utilisation always peaks (and stays)
lukem> at between 80 and 85%.  If I increase the amount of work done by
lukem> the UDP echo program (by inserting additional packet copies),
lukem> CPU utilisation does not rise; rather, throughput declines.  The
lukem> 80% figure is common to both the slow and fast PCI cards.

lukem> This is rather confusing, as I cannot tell whether the system is
lukem> IO bound or CPU bound.  Certainly I would not have expected the
lukem> 133 MHz/64-bit PCI bus to be saturated given that peak throughput
lukem> is around 550 Mbit/s with 1024-byte packets.  (Such a low figure
lukem> is not unexpected given that there are two syscalls per packet.)

lukem> No additional packet copies:

lukem>   echoed       applied      CPU%
lukem>   499.5 Mbps   500.0 Mbps   76.2
lukem>   549.0 Mbps   550.0 Mbps   80.4
lukem>   562.2 Mbps   600.0 Mbps   81.9

lukem> 32 additional packet copies:

lukem>   echoed       applied      CPU%
lukem>   297.8 Mbps   500.0 Mbps   81.1
lukem>   298.6 Mbps   550.0 Mbps   81.8
lukem>   297.1 Mbps   600.0 Mbps   82.4

lukem> I have only included data around the MLFRR.

lukem> If anyone has any insight into what might cause this behaviour,
lukem> please let me know, as it has me stumped.

lukem> -- Luke

Well... you're likely to get a lot more packets through if you turn on
polling, but keep in mind that your low-priority "soaker" process will
no longer be accurate.
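
On the idle-time measurement: a low-priority soaker of the sort Luke
describes is typically just a reniced busy loop that reports how fast it
spins.  A minimal sketch (assuming setpriority() rather than FreeBSD's
idprio(1), and an arbitrary 10-second reporting interval) would be
something like:

	/* Minimal low-priority CPU soaker sketch.  Assumptions:
	 * setpriority() instead of idprio(1), made-up 10-second report
	 * interval.  Idle time is inferred from how fast the counter
	 * advances compared with an otherwise idle machine. */
	#include <sys/time.h>
	#include <sys/resource.h>
	#include <stdio.h>

	int
	main(void)
	{
		struct timeval start, now;
		unsigned long count = 0;

		if (setpriority(PRIO_PROCESS, 0, 20) != 0)  /* lowest nice */
			perror("setpriority");

		gettimeofday(&start, NULL);
		for (;;) {
			count++;
			if ((count & 0xfffff) == 0) {  /* check clock occasionally */
				gettimeofday(&now, NULL);
				if (now.tv_sec - start.tv_sec >= 10) {
					printf("%lu loops in %ld s\n", count,
					    (long)(now.tv_sec - start.tv_sec));
					count = 0;
					start = now;
				}
			}
		}
	}

With polling enabled the kernel spends what would otherwise be idle cycles
servicing the card, so the soaker's loop rate stops being a good proxy for
real headroom -- hence the caveat above.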
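
For reference, the user-level echo path Luke describes comes down to a
recvfrom()/sendto() pair per packet.  A minimal sketch (a plain sockets
loop, not Luke's actual optimised server; the port number and buffer size
are placeholders) looks roughly like:

	/* Minimal user-level UDP echo sketch: one recvfrom() and one
	 * sendto() per packet.  Port and buffer size are placeholders,
	 * not taken from Luke's setup. */
	#include <sys/types.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <arpa/inet.h>
	#include <string.h>
	#include <stdio.h>
	#include <stdlib.h>

	int
	main(void)
	{
		struct sockaddr_in sin, from;
		socklen_t fromlen;
		char buf[2048];
		ssize_t n;
		int s;

		s = socket(AF_INET, SOCK_DGRAM, 0);
		if (s < 0) {
			perror("socket");
			exit(1);
		}

		memset(&sin, 0, sizeof(sin));
		sin.sin_family = AF_INET;
		sin.sin_addr.s_addr = htonl(INADDR_ANY);
		sin.sin_port = htons(7777);	/* placeholder port */
		if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
			perror("bind");
			exit(1);
		}

		for (;;) {
			fromlen = sizeof(from);
			n = recvfrom(s, buf, sizeof(buf), 0,
			    (struct sockaddr *)&from, &fromlen); /* syscall #1 */
			if (n <= 0)
				continue;
			/* the "additional packet copies" would be extra
			 * memcpy()s of buf here */
			(void)sendto(s, buf, n, 0,
			    (struct sockaddr *)&from, fromlen);  /* syscall #2 */
		}
	}

The extra copies add CPU work per packet without adding any bus traffic,
which is what makes the flat CPU number so odd.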
Seeing bytes/packets decline while CPU stays the same is the same failure
mode that I've observed in my own packet-passing testing.

Dave.

-- 
============================================================================
|David Gilbert, Independent Contractor.      | Two things can only be      |
|Mail:       dave@daveg.ca                   | equal if and only if they   |
|http://daveg.ca                             | are precisely opposite.     |
=========================================================GLO================