Date:      Thu, 15 Jun 2006 14:06:40 -0700 (PDT)
From:      Danial Thom <danial_thom@yahoo.com>
To:        Paul Marciano <pm940@yahoo.com>, freebsd-questions@freebsd.org
Subject:   Re: fxp driver performance expectations
Message-ID:  <20060615210640.43058.qmail@web33302.mail.mud.yahoo.com>
In-Reply-To: <20060615202050.14310.qmail@web54005.mail.yahoo.com>

--- Paul Marciano <pm940@yahoo.com> wrote:

> --- Danial Thom <danial_thom@yahoo.com> wrote:
> > You couldn't do 100Mb/s with em on a 100Mb/s line
> > with min packets, because there are gaps between
> > packets, so it's impossible.
> 
> Thanks for the detailed reply Danial.
> 
> By 100Mbps I mean line-rate: 148809 packets/sec for
> 64-byte Ethernet frames + IPG and preamble appended.
> 
> The 10/100 fxp NIC is on straight PCI-33.
> 
> The 1000 em NIC is on PCI-Express x1.  It can do
> 100Mbps line-rate (148809pps) in 100Mbps mode.  In
> 1000Mbps mode it can do ~700Kpps, so the bottleneck
> isn't the FreeBSD IP stack.

Your logic is wrong here; you're dealing with a
different set of timings. The stack eats cycles,
just as bus accesses eat cycles, and those cycles
contribute to the reduced throughput. My point was
that the stack is a variable you can easily
eliminate by bridging instead. There may also be
less context switching (although I'm not sure
about that).
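
For reference, the 148809pps figure is just the wire
math; here's a quick sanity check (a minimal sketch,
nothing driver-specific, with the constants straight
out of the Ethernet spec):

    /* Wire-rate math for minimum-size Ethernet frames at 100Mb/s.
     * On the wire each 64-byte frame also carries an 8-byte
     * preamble/SFD and a 12-byte inter-packet gap: 84 bytes,
     * i.e. 672 bit times per frame. */
    #include <stdio.h>

    int main(void)
    {
        const double line_bps = 100e6;              /* 100Mb/s */
        const int frame = 64, preamble = 8, ipg = 12;
        const int bits = (frame + preamble + ipg) * 8;   /* 672 */

        printf("max pps: %d\n", (int)(line_bps / bits)); /* 148809 */
        return 0;
    }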

> 
> > Realize that fxp parts are only 32bit/33MHz, so
> > the bus is a factor. Although it's a 1Gb/s bus,
> > that's only when bursting, so it's really
> > substantially less. With shorter packets you
> > have more setups and I/O and therefore more
> > overhead on the bus.
> 
> Yes indeed.
> 
> > fxp performs similarly to an em controller when
> > they are both on a 32bit/33MHz bus in FreeBSD 4.x.
> > 5.x is about 20% slower than 4.x, but I expect
> > the drivers to be about the same for 5.x as well.
> 
> Thanks for that.  I realize that comparing a PCI-33
> NIC to a PCI-Express NIC isn't fair.  I don't have
> a PCI-33 Gig NIC, which is why I need outside info.
> 
> > Are you using a traffic generator, or are you
> > relying on some server to return packets?
> 
> Ixia traffic generator.
> 
> > We have customers with fxp interfaces on FreeBSD
> > 4.x pushing 90Mb/s+ (while doing a lot of other
> > processing also), so it's certainly possible.

I'd suggest getting a MOBO with a 32bit PCI slot,
getting an em card (they'll generally work in 32bit
slots) and an fxp card, and testing on the same MB
in the same slot with the same processor. It's the
only way to do a fair test.

All drivers work better with larger packets, because
you have fewer bus setups and fewer packets to
process. It also doesn't make sense to "optimize"
for smaller or larger packets, although some
benchmarkers may do so to suit their agenda. I don't
believe that either the fxp or em driver has been
optimized one way or the other. I've done a lot of
testing on both.
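
To put rough numbers on the bus-setup point, here's a
back-of-the-envelope model. The 10 cycles of per-burst
overhead is an assumed figure for illustration only;
the real cost depends on the chipset, and the NIC also
burns bus cycles on descriptor DMA, so actual overhead
is higher:

    /* Rough PCI-32/33 model: peak is 32 bits x 33MHz ~= 1.06Gb/s,
     * but every DMA burst pays a fixed setup cost (arbitration,
     * address phase, target latency). setup_cycles is an assumption
     * for illustration, not a measured value. */
    #include <stdio.h>

    int main(void)
    {
        const double clock_hz = 33e6;      /* PCI clock */
        const int bus_bytes = 4;           /* 32-bit bus: 4 bytes/cycle */
        const int setup_cycles = 10;       /* ASSUMED per-burst overhead */
        const int sizes[] = { 64, 256, 1518 };

        for (int i = 0; i < 3; i++) {
            int pkt = sizes[i];
            int data_cycles = (pkt + bus_bytes - 1) / bus_bytes;
            double pps = clock_hz / (setup_cycles + data_cycles);
            printf("%4d-byte packets: ~%.0f Mb/s effective\n",
                   pkt, pkt * 8 * pps / 1e6);
        }
        return 0;
    }

Even this optimistic model loses over a third of the
bus at 64-byte packets, and it gets worse once you
count the descriptor traffic.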

The em parts are superior feature-wise; if you have
a choice, there's really no reason to go with the
older parts just to save a few pennies.

You shouldn't ever use polling for either one of
these parts, as they have interrupt moderation built
in, so you're only adding overhead and latency. You
can tune interrupt moderation in the em controller
to do anything that polling can do, without the
added clock-tick overhead. I suspect that context
switching is so bad in FreeBSD 5 that you see more
of a difference than you should with fewer
interrupts (which is why we don't use FreeBSD 5),
but in a router or network appliance you can't
realistically use polling unless you set HZ to 5000
or more, which is just stupid, and certainly not
necessary.
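
A minimal sketch of what that tuning looks like
programmatically, via sysctlbyname(3). I'm assuming
the dev.em.0.rx_int_delay OID here; the exact names
vary by driver version, so check what sysctl shows
under dev.em.0 on your box first (and a plain
sysctl dev.em.0.rx_int_delay=600 from the shell does
the same job):

    /* Read and raise an em interrupt-moderation knob.
     * ASSUMPTION: the dev.em.0.rx_int_delay OID; verify the name
     * with sysctl(8) for your driver version. Setting needs root. */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int main(void)
    {
        int delay;
        size_t len = sizeof(delay);

        if (sysctlbyname("dev.em.0.rx_int_delay",
                         &delay, &len, NULL, 0) == -1) {
            perror("sysctlbyname(get)");
            return 1;
        }
        printf("rx_int_delay: %d\n", delay);

        /* A larger delay batches more packets per interrupt,
         * which is the effect people chase with polling, minus
         * the clock-tick overhead. 600 is an illustrative value. */
        int newdelay = 600;
        if (sysctlbyname("dev.em.0.rx_int_delay", NULL, NULL,
                         &newdelay, sizeof(newdelay)) == -1)
            perror("sysctlbyname(set)");
        return 0;
    }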

DT
