Date:      Wed, 5 Oct 2005 01:27:09 -0500
From:      Kevin Day <toasty@dragondata.com>
To:        Dave+Seddon <dave-dated-1128902191.bcc743@seddon.ca>
Cc:        net@freebsd.org
Subject:   Re: dummynet, em driver, device polling issues :-((
Message-ID:  <979B163D-7078-4558-9095-DC329707A5B4@dragondata.com>
In-Reply-To: <1128470191.75484.TMDA@seddon.ca>
References:  <4341089F.7010504@jku.at> <20051003104548.GB70355@cell.sick.ru> <4341242F.9060602@jku.at> <20051003123210.GF70355@cell.sick.ru> <43426EF3.3020404@jku.at> <9CD8C672-1EF2-42FE-A61E-83DC684C893D@dragondata.com> <43429157.90606@jku.at> <4342987D.7000200@benswebs.com> <20051004161217.GB43195@obiwan.tataz.chchile.org> <1128470191.75484.TMDA@seddon.ca>


On Oct 4, 2005, at 6:56 PM, Dave+Seddon wrote:


> You mention you're running at "near" line rate.  What are you pushing  
> or pulling?  What's the rough spec of the machines pushing out  
> this much data?  What settings do you have for polling?  I've  
> been trying to reach near line rate and can't even get close with new  
> HP DL380s (single 3.4 GHz Xeon).  I think the PCI bus might be the  
> problem.  The Intel em NICs I found to be very slow, and they stop  
> after about 3 hours.  - The Intel NICs I have are dual port, although  
> they end up on separate IRQs.
>

In one case, we had a system acting as a router. It was a Dell  
PowerEdge 2650 with two dual-port "server" adapters, each on a  
separate PCI bus. Three ports were "lan" links, and one was a "wan"  
link. The lan links were each receiving about 300mbps, all going out  
the "wan" link at nearly 900mbps at peak. We were never able to get  
above 944mbps, but I never cared enough to figure out where the  
bottleneck was.

This was with PCI-X, and a pretty stripped config on the server side.

Nothing fancy on polling: I think we set HZ to 10000, turned on  
idle_poll, and set user_frac to 10 because we had some CPU-hungry  
tasks that were not a high priority.
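A minimal sketch of that setup, assuming the FreeBSD 5.x-era kernel
polling knobs (the sysctl names below are the standard kern.polling
interface of that period; the exact values are just the ones mentioned
above, not a recommendation):

```shell
# In the kernel config (requires a rebuild):
#   options DEVICE_POLLING
#   options HZ=10000

# Enable polling and keep polling the NICs while the CPU is idle
sysctl kern.polling.enable=1
sysctl kern.polling.idle_poll=1

# Reserve only ~10% of each tick for userland, since the
# CPU-hungry tasks were low priority
sysctl kern.polling.user_frac=10
```

On later FreeBSD releases polling is enabled per interface with
"ifconfig em0 polling" instead of the global kern.polling.enable sysctl.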

For anyone watching, the config we had there that we were successful  
with was:

em0@pci2:6:0:   class=0x020000 card=0x10118086 chip=0x10108086 rev=0x01 hdr=0x00
     vendor   = 'Intel Corporation'
     device   = '82546EB Dual Port Gigabit Ethernet Controller (Copper)'
     class    = network
     subclass = ethernet
em1@pci2:6:1:   class=0x020000 card=0x10118086 chip=0x10108086 rev=0x01 hdr=0x00
     vendor   = 'Intel Corporation'
     device   = '82546EB Dual Port Gigabit Ethernet Controller (Copper)'
     class    = network
     subclass = ethernet
em2@pci1:8:0:   class=0x020000 card=0x10128086 chip=0x10128086 rev=0x01 hdr=0x00
     vendor   = 'Intel Corporation'
     device   = '82546EB Dual Port Gigabit Ethernet Controller (Fiber)'
     class    = network
     subclass = ethernet
em3@pci1:8:1:   class=0x020000 card=0x10128086 chip=0x10128086 rev=0x01 hdr=0x00
     vendor   = 'Intel Corporation'
     device   = '82546EB Dual Port Gigabit Ethernet Controller (Fiber)'
     class    = network
     subclass = ethernet
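For anyone wanting to compare their own hardware, the listing above is
the format produced by pciconf's verbose listing mode:

```shell
# List all PCI devices with vendor/device strings resolved;
# the em(4) adapters show up as em0@..., em1@..., etc.
pciconf -lv
```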



We also have some web servers that are each sending 300-400mbps at  
peak using thttpd or lighttpd, with the built-in em parts in Dell  
2850s. They are also connected internally via PCI-X-speed buses.




