Date: Fri, 10 Nov 2006 15:04:51 -0500
From: Mike Tancsa
To: Scott Long
Cc: freebsd-net, glebius@freebsd.org, freebsd-stable@freebsd.org, Jack Vogel
Subject: Re: Proposed 6.2 em RELEASE patch

At 05:00 PM 11/9/2006, Mike Tancsa wrote:
>At 10:51 AM 11/9/2006, Scott Long wrote:
>>Mike Tancsa wrote:
>>>At 08:19 PM 11/8/2006, Jack Vogel wrote:
>>>
>>>>BUT, I've added the FAST_INTR changes back into the code, so
>>>>if you go into your Makefile and add -DEM_FAST_INTR you will
>>>>then get the taskqueue stuff.
>>>
>>>It certainly does make a difference performance-wise. I did some
>>>quick testing with netperf and netrate. Back-to-back boxes, using
>>>an AMD X2 with a bge nic and one Intel box:
>>>
>>>CPU: AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ (2009.27-MHz 686-class CPU)
>>>CPU: Intel(R) Core(TM)2 CPU 6400 @ 2.13GHz (2144.01-MHz 686-class CPU)
>>>
>>>The Intel is a DG965SS with an integrated em nic, the AMD a Tyan
>>>with an integrated bge. Both are running SMP kernels with pf built in, no inet6.
>>>
>>>Intel box as sender. This test is with the patch from yesterday:
>>>the first set with the patch as is, the second test with -DEM_FAST_INTR.
>>
>>Thanks for the tests. One thing to note is that Gleb reported a higher
>>rate of dropped packets with INTR_FAST. He is the only one who has
>>reported this, so I'd like to find out if there is something unique to
>>his environment, or if there is a larger problem to be addressed. There
>>are ways that we can change the driver to not drop any packets at all
>>for Gleb, but they expose the system to risk if there is ever an
>>accidental (or malicious) RX flood on the interface.
>
>With a high rate of packets, I am able to livelock the box. I
>set up the following.

Some more tests. I tried again with what was committed to today's RELENG_6;
I am guessing it's pretty much the same patch.
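For anyone wanting to repeat the -DEM_FAST_INTR comparison, roughly how the
define gets turned on. This is just a sketch: it assumes the em driver is
built as a module from a RELENG_6 tree under /usr/src, so adjust paths for
your own setup.

   # Sketch only: assumes em is loaded as if_em.ko, not compiled static.
   cd /usr/src/sys/modules/em
   # Per Jack's note above, add this line to the module Makefile:
   #   CFLAGS+= -DEM_FAST_INTR
   make clean && make && make install
   kldunload if_em && kldload if_em   # or reboot if em is static in the kernel

With the define in, the driver uses the fast interrupt handler and hands the
work off to a taskqueue -- the "taskqueue stuff" Jack refers to above.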
Polling is the only way to avoid livelock at a high pps rate. Does anyone
know of any simple tools to measure end-to-end packet loss? Polling will end
up dropping some packets, and I want to be able to compare. (A crude
counter-based check is sketched at the end of this mail.) Same hardware as
in the previous post.

SMP kernel  fastfwd  pf  ipfw  FAST_INTR  streams  np (Mb)
x x x        2        livelock
x x x x      2   468  livelock
x x          2   453  lost packets, box sluggish
x x x        2        lost packets, box sluggish
x            2   468  lost packets, box sluggish
x x          2   468  livelock
x x x        2   468  livelock
             2   475  livelock
x            2        livelock
P            2        OK
P x          2        OK
P x          2        OK

The P is for uniprocessor (UP), but with polling enabled (also
kern.polling.idle_poll=1); the polling setup is sketched at the end of this
mail.

UP, single stream 58Kpps, no polling in kernel:

[bsd6-32bit]# ./netblast 192.168.44.1 500 10 10
start:             1163184051.627479975
finish:            1163184061.628200458
send calls:        5869051
send errors:       0
approx send rate:  586905
approx error rate: 0

With polling:

[bsd6-32bit]# ./netblast 192.168.44.1 500 10 10
start:             1163184606.651001121
finish:            1163184616.651288588
send calls:        5866199
send errors:       1
approx send rate:  586619
approx error rate: 0

With polling and 2 streams at the same time (a lot of pps, and it's still
totally responsive!):

[r6-32bit]# ./netblast 192.168.88.218 500 10 10
start:             1163184712.103954688
finish:            1163184722.104388542
send calls:        4528941
send errors:       0
approx send rate:  452894
approx error rate: 0
[r6-32bit]#

[bsd6-32bit]# ./netblast 192.168.44.1 500 10 20
start:             1163184793.172036336
finish:            1163184813.173028921
send calls:        11550594
send errors:       0
approx send rate:  577529
approx error rate: 0
[bsd6-32bit]#

Polling, 2 streams at the same time:

[bsd6-32bit]# ./netblast 192.168.44.1 500 10 20
start:             1163185058.477137404
finish:            1163185078.478025226
send calls:        11679831
send errors:       0
approx send rate:  583991
approx error rate: 0

[bsd6-32bit]# ./netblast 192.168.44.1 500 10 20
start:             1163185167.969551943
finish:            1163185187.970435295
send calls:        11706825
send errors:       0
approx send rate:  585341
approx error rate: 0
[bsd6-32bit]#
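For anyone not familiar with the netrate tools (they live under
src/tools/tools/netrate), the netblast arguments above are, if I remember
the usage right, destination IP, UDP port, payload size in bytes, and
duration in seconds. For example:

   # Assumed usage: netblast <dst ip> <port> <payload bytes> <seconds>
   # i.e. blast 10-byte UDP payloads at port 500 on 192.168.44.1 for 10 seconds.
   ./netblast 192.168.44.1 500 10 10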
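For the rows marked P, the polling setup is the usual FreeBSD 6.x one,
sketched below; the interface name and HZ value are just illustrative, so
adjust for your box.

   # Kernel config (then rebuild and install the kernel):
   #   options DEVICE_POLLING
   #   options HZ=1000              # illustrative; a higher HZ is commonly used with polling
   # At runtime, on the forwarding box:
   ifconfig em0 polling             # 6.x enables polling per interface via ifconfig
   sysctl kern.polling.idle_poll=1  # also poll from the idle loop, as noted above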
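On the end-to-end loss question: short of a proper tool, one crude check is
to compare netblast's "send calls" against the growth of the input-packet
counter on the box at the far end of the path over the same run. A sketch
(the interface and addresses are just the ones from the tests above):

   # On the far-end box, before the run: note Ipkts for the receiving interface.
   netstat -I em0 -n
   # On the sender:
   ./netblast 192.168.44.1 500 10 10
   # On the far-end box, after the run:
   netstat -I em0 -n
   # loss ~= send calls - (Ipkts_after - Ipkts_before), assuming no other
   # traffic is hitting that interface during the run.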