From: Andrew Snow <andrew@modulus.org>
To: Paul, freebsd-net@freebsd.org
Date: Sat, 28 Jun 2008 11:54:05 +1000
Subject: Re: Weirdness - FBSD 7, Routing, Packet generator, em taskq
Message-ID: <486599BD.7030804@modulus.org>
In-Reply-To: <48653340.8060301@gtcomm.net>
References: <48645D9E.7090303@gtcomm.net> <48653340.8060301@gtcomm.net>
List-Id: Networking and TCP/IP with FreeBSD <freebsd-net@freebsd.org>

Firstly, have you tried turning off polling?  Some of us have found it to be detrimental to performance on more modern systems.  Its use was more practical on older systems where servicing interrupts took a lot of CPU power, but the "em" driver supports delaying interrupts until more packets have arrived - see the sysctls listed in the em man page.

So you're probably better off turning off polling and playing with these tunables, by calculating how many packets you're likely to be able to fit in the tx/rx descriptor rings before an interrupt needs to be generated.

Secondly, try pressing shift+C inside top to display "raw cpu usage" instead of the default "weighted cpu usage".

Finally, is your test portraying realistic conditions?  I am reminded that 1Gbit/s with a 1500-byte MTU is only roughly 80,000 pps.

- Andrew
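
P.S.  As a rough sketch of the knobs involved (not a recipe - the right values depend on your traffic), the interrupt-moderation sysctls can also be read programmatically with sysctlbyname(3).  The dev.em.0.* OID names below are the ones documented in the em(4) man page; they may differ between driver versions, so treat them as an assumption and check "sysctl dev.em.0" on your own box:

/*
 * Sketch: dump the em(4) interrupt-moderation sysctls for the first
 * em interface.  The OID names are assumed from the em(4) man page
 * and may vary by driver version.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	const char *oids[] = {
		"dev.em.0.rx_int_delay",	/* delay (usecs) before an RX interrupt */
		"dev.em.0.tx_int_delay",	/* delay (usecs) before a TX interrupt */
		"dev.em.0.rx_abs_int_delay",	/* upper bound on RX interrupt delay */
		"dev.em.0.tx_abs_int_delay",	/* upper bound on TX interrupt delay */
	};
	int value;
	size_t len;

	for (size_t i = 0; i < sizeof(oids) / sizeof(oids[0]); i++) {
		len = sizeof(value);
		if (sysctlbyname(oids[i], &value, &len, NULL, 0) == -1)
			perror(oids[i]);
		else
			printf("%s = %d\n", oids[i], value);
	}

	/*
	 * For the packet-rate estimate above: at 1 Gbit/s a full-size
	 * frame (1500-byte MTU, roughly 1538 bytes on the wire with
	 * Ethernet framing and inter-frame gap) gives
	 * 10^9 / (1538 * 8) ~= 81,000 packets per second.
	 */
	return (0);
}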