From owner-freebsd-performance@FreeBSD.ORG Tue Apr 19 21:46:47 2005
Date: Tue, 19 Apr 2005 17:46:44 -0400
From: Bosko Milekic <bmilekic@technokratis.com>
To: Petri Helenius
cc: Eivind Hestnes
cc: performance@freebsd.org
Subject: Re: Performance Intel Pro 1000 MT (PWLA8490MT)
Message-ID: <20050419214644.GB3656@technokratis.com>
In-Reply-To: <42657420.3040104@he.iki.fi>
References: <20050419183335.F18008131@joshua.stabbursmoen.no> <42655887.7060203@alumni.rice.edu> <4265724A.1040705@stabbursmoen.no> <42657420.3040104@he.iki.fi>
List-Id: Performance/tuning

My experience with 6.0-CURRENT has been that I am able to push at least
about 400 kpps INTO THE KERNEL from a gigE em card on its own 64-bit
PCI-X 133 MHz bus (i.e., the bus is uncontested), and that's basically
an out-of-the-box GENERIC kernel on a dual-CPU
box with HTT disabled and no debugging options, using small 50-60 byte
UDP packets. I haven't measured how many I can push THROUGH to a second
card and forward; that will probably reduce the numbers. My tests were
done without polling, i.e., under very high interrupt load, which also
hurts in a high-traffic scenario. But still, way better than your
numbers.

Also, make sure you are not bottlenecking on the sender side. E.g.,
make sure that your sender can actually push out more PPS than what you
appear to be bottlenecking on in the router.

-Bosko

On Wed, Apr 20, 2005 at 12:12:00AM +0300, Petri Helenius wrote:
> Eivind Hestnes wrote:
>
> > It's correct that the card is plugged into a 32-bit 33 MHz PCI slot.
> > If I'm not mistaken, 33 MHz PCI slots have a peak transfer rate of
> > 133 MByte/s. However, when pulling 180 Mbit/s without polling
> > enabled, the system is barely responsive due to the interrupt load.
> > I'll try increasing the polling frequency to see if that increases
> > the bandwidth with polling enabled. Thanks for the advice, btw.
>
> There is something "interesting" going on in the em driver, but I
> haven't had the time to profile it properly, and Intel has been less
> than forthcoming with the specification, which makes it more
> challenging to try to optimize the driver further.
>
> Pete
>
> > - E.
> >
> > Jon Noack wrote:
> >
> >> On 4/19/2005 1:32 PM, Eivind Hestnes wrote:
> >>
> >>> I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35)
> >>> installed in a Pentium III 500 MHz with 512 MB RAM (100 MHz)
> >>> running FreeBSD 5.4-RC3. The machine is routing traffic between
> >>> multiple VLANs. Recently I did a benchmark with and without device
> >>> polling enabled. Without device polling I was able to transfer
> >>> roughly 180 Mbit/s. The router, however, was suffering while doing
> >>> this benchmark: interrupt load was peaking at 100%, and overall
> >>> the system itself was quite unusable (_very_ high system load).
> >>> With device polling enabled, the interrupt load stayed stable
> >>> around 40-50%, and the max transfer rate was nearly 70 Mbit/s.
> >>> Not very scientific tests, but they gave me a reference point.
> >>
> >> The card is plugged into a 32-bit PCI slot, correct? If so, 180
> >> Mbit/s is decent. I have a gigabit LAN at home using Pro 1000 MTs
> >> (in 32-bit PCI slots) and get NFS transfers maxing out around 23
> >> MB/s, which is ~180 Mbit/s. Gigabit performance with 32-bit cards
> >> is atrocious. It reminds me of the old 100 Mbit/s ISA cards...
> >>
> >>> HZ is set to 1000 as recommended in the README for the em(4)
> >>> driver. The driver is of course compiled into the kernel.
> >>
> >> You'll need HZ set to more than 1000 for gigabit; bump it up to at
> >> least 2000. That should increase polling throughput a lot. I'm not
> >> sure about other polling parameters, however.
> >>
> >> Jon
>
> _______________________________________________
> freebsd-performance@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
> To unsubscribe, send any mail to
> "freebsd-performance-unsubscribe@freebsd.org"

-- 
Bosko Milekic
bmilekic@technokratis.com
bmilekic@FreeBSD.org
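[Editor's note: Bosko's last suggestion, verifying that the sender can
out-push the router, can be sanity-checked with a small packet
generator. The sketch below is a hypothetical Python one, not anything
from this thread; the target address, port, payload size, and duration
are placeholder assumptions, and a serious benchmark would use a
dedicated tool such as netperf and send across the actual link.]

```python
# Hypothetical sketch: measure roughly how many small UDP packets per
# second the sending host can emit, so the sender is not silently the
# bottleneck in a forwarding benchmark. Target host/port are placeholders.
import socket
import time

def measure_send_pps(host="127.0.0.1", port=9, payload_len=50, duration=1.0):
    """Blast small UDP datagrams for `duration` seconds; return packets/sec."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_len  # ~50-byte payloads, as in the tests above
    sent = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        try:
            sock.sendto(payload, (host, port))
            sent += 1
        except OSError:
            pass  # e.g. ENOBUFS when the interface send queue is full
    sock.close()
    return sent / duration

if __name__ == "__main__":
    print(f"sender can push roughly {measure_send_pps():.0f} pps")
```

If the number reported here is not comfortably above the PPS the router
appears to max out at, the test is measuring the sender, not the router.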