From owner-freebsd-net@FreeBSD.ORG Tue Sep 16 18:29:20 2003
Date: Tue, 16 Sep 2003 20:29:37 -0500
From: David J Duchscher
To: Luigi Rizzo
cc: freebsd-net@freebsd.org
Subject: Re: Bridging Benchmarks
In-Reply-To: <20030916151221.A29339@xorpc.icir.org>
Message-Id: <66EBFE9A-E8AE-11D7-841F-000A956E58AC@tamu.edu>
List-Id: Networking and TCP/IP with FreeBSD

On Tuesday, September 16, 2003, at 05:12 PM, Luigi Rizzo wrote:

> On Tue, Sep 16, 2003 at 04:45:36PM -0500, David J Duchscher wrote:
>> We have been benchmarking FreeBSD configured as a bridge and I thought
>> I would share the data that we have been collecting. It's a work in
>> progress, so more data will show up as we try more Ethernet cards and
>> machine configurations. Everything is 100Mbps at the moment. Would be
>> very interested in any thoughts, insights or observations people might
>> have.
>>
>> http://wolf.tamu.edu/~daved/bench-100/
>
> Interesting results, thanks for sharing them.
> I would like to add a few comments and suggestions:
>
> * As the results with the Gbit card show, the system per se
>   is able to work at wire speed at 100Mbit/s, but some cards and/or
>   drivers have bugs which prevent full-speed operation.
>   Among these, I ran extensive experiments on the Intel PRO/100,
>   and depending on how you program the card, the maximum transmit
>   speed ranges from ~100kpps (with the default driver) to ~120kpps
>   no matter how fast the CPU is. I definitely blame the hardware here.

We have seen similar results. In a quick test, I didn't see any
difference in the performance of the Intel Pro/100 on a 2.4GHz Xeon
machine. That was rather surprising to me since lots of people swear
by them.

> * I have had very good results with cards supported by the 'dc'
>   driver (Intel 21143 chipset and various clones) -- wire speed even
>   at 64-byte frames. Possibly the 'sis' chips might do the same.
>   I know the 'dc' cards are hard to find these days, but I would
>   definitely try one of them if possible.
>   I would also love to see numbers with the 'rl' cards (Realtek 8139,
>   most of the cards you find around in the stores), which are
>   probably among the slowest ones we have.

Yeah, I am trying to find cards to test, but it's hard. I can only
purchase cards that help with the project. For example, I will be
testing the Intel Pro/1000T Desktop Adapters since the gigabit cards
have shown they can run at full bandwidth.
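For anyone who wants to reproduce the setup, the bridge side of the
test box is nothing exotic. Roughly the following, assuming a 4.x-era
kernel; the interface names are just placeholders for whichever card
is under test, and the sysctl names changed a bit in later releases:

    # kernel config
    options BRIDGE

    # runtime, per bridge(4): tie the two test interfaces together,
    # then turn bridging on
    sysctl net.link.ether.bridge_cfg=fxp0,fxp1
    sysctl net.link.ether.bridge=1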
> * The "latency" curves for some of the cards are quite strange
>   (making me suspect bugs in the drivers or the like).
>   How do you define the 'latency', how do you measure it, and do
>   you know if it is affected by changing "options HZ=..." in your
>   kernel config file (default is 100, I usually recommend using
>   1000)?

All of this data is coming from an Anritsu MD1230A test unit running
the RFC 2544 performance tests. http://snurl.com/2d9x Currently the
kernel HZ value is set to 1000. I have it on my list of things to
change and then run the tests again.

> * Especially under heavy load (e.g. when using bridge_ipfw=1 and
>   largish rulesets), you might want to build a kernel with
>   options DEVICE_POLLING and do a 'sysctl kern.polling.enable=1'
>   (see "man polling" for other options you should use).
>   It would be great to have the graphs with and without polling,
>   and also with/without bridge_ipfw (even with a simple one-line
>   firewall config) to get an idea of the overhead.
>
>   The use of polling should prevent the throughput dip, visible in
>   some of the 'Frame loss' graphs, after the box reaches its
>   throughput limit.
>
>   Polling support is available for a number of cards including
>   'dc', 'em', 'sis', 'fxp' and possibly a few others.

DEVICE_POLLING is high on the list of things to test. It looks like
it's going to be a requirement since all of these cards have livelocked
the machine at some point during testing. I tried SMC cards today and
the machine overloads so badly that it stops responding long enough
for the testing to fail.

Thanks for all the input. I am really hoping to get some useful
numbers that others can use.

DaveD
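P.S. For the polling and bridge_ipfw runs, the plan is roughly what
Luigi describes above. A sketch, with the kernel options and the
polling sysctl taken from his mail, and the ipfw/bridge_ipfw knobs
from the 4.x-era man pages (names may differ on other releases):

    # kernel config additions (rebuild the kernel afterwards)
    options DEVICE_POLLING
    options HZ=1000
    options IPFIREWALL          # only needed for the bridge_ipfw tests

    # runtime: turn on polling (see polling(4) for the other knobs)
    sysctl kern.polling.enable=1

    # firewall-overhead test: pass bridged frames to ipfw and use a
    # single catch-all rule
    sysctl net.link.ether.bridge_ipfw=1
    ipfw add 100 pass ip from any to any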