Date:      Tue, 12 Nov 2002 14:53:18 -0800
From:      Terry Lambert <tlambert2@mindspring.com>
To:        David Gilbert <dgilbert@velocet.ca>
Cc:        dolemite@wuli.nu, freebsd-hackers@freebsd.org
Subject:   Re: [hackers] Re: Netgraph could be a router also.
Message-ID:  <3DD1865E.B9C72DF5@mindspring.com>
References:  <20021109180321.GA559@unknown.nycap.rr.com> <3DCD8761.5763AAB2@mindspring.com> <15823.51640.68022.555852@canoe.velocet.net>

David Gilbert wrote:
> >>>>> "Terry" == Terry Lambert <tlambert2@mindspring.com> writes:
> Terry> By "it", I guess you mean "FreeBSD"?
> Terry> What are your performance goals?
> 
> Right now, I'd like to see 500 to 600 kpps.
> 
> Terry> Where is FreeBSD relative to those goals, right now, without
> Terry> you doing anything to it?
> 
> Without any work, we got 75 kpps.
> 
> Terry> Where is FreeBSD relative to those goals, right now, if you
> Terry> tune it very carefully, but don't hack any code?
> 
> With a few patches, including polling and some tuning, we got 150 to
> 200 kpps.
> 
> Note that we've been focusing on pps, not Mb/s.  With 100M cards (what
> we're currently using) we want to focus on getting the routing speed
> up.

These stats are moderately meaningless.

The problem is that they don't tell me where you are measuring your
packets-per-second rate, how it's being measured, whether the interrupt
or processing load is high enough to trigger livelock, or what the
packet size is.  And is that a unidirectional or bidirectional rate?
UDP?

I can only guess, starting from the 200kpps figure:

	100Mbit/s  /  200kp/s  =  500 bits  =  62.5 bytes per packet

...and that's an absolute top end, before framing overhead, which is
already down around the minimum Ethernet frame size.  So I have to
assume the packets are minimum-sized, or that the rate is an aggregate
over more than one interface.  Bidirectionally on a single port, not
FDX, the theoretical budget halves again, to roughly 31 bytes per
packet.
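
If it helps to see the budget at the other rates you mentioned, here
is a trivial back-of-the-envelope calculator (just a sketch; the
100Mbit link speed and the rates are the numbers from this thread, and
it ignores preamble/IFG/CRC overhead, so the real budgets are lower):

	#include <stdio.h>

	int
	main(void)
	{
		double link_bps = 100e6;		/* one 100Mbit/s port */
		double pps[] = { 75e3, 200e3, 600e3 };	/* rates from this thread */
		int i;

		for (i = 0; i < 3; i++) {
			double bits = link_bps / pps[i];	/* per-packet budget */
			printf("%4.0f kpps -> %7.1f bits = %5.1f bytes per packet, max\n",
			    pps[i] / 1e3, bits, bits / 8.0);
		}
		return (0);
	}

At 600kpps the budget is about 21 bytes, which is below the minimum
frame size, so that goal simply doesn't fit on a single 100Mbit port.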


> One of the largest problems we've found with GigE adapters on FreeBSD
> is that their pps ability (never mind the volume of data) is less than
> half that of the fxp driver.

I've never found this to be the case, using the right hardware and a
combination of hard and soft interrupt coalescing.  You'd have to tell
me what hardware you are using for me to be able to stare at the
driver.  My personal hardware recommendation in this regard would be
the Tigon III, assuming that the packet size is well below the MTU, as
your numbers imply.
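
In case the terminology is fuzzy: the point of the coalescing, hard in
the card's firmware or soft in the driver, is just to amortize one
interrupt over many frames.  A compilable toy sketch of the idea, not
the actual if_ti code (the softc and helper names are made-up
stand-ins):

	#include <stdio.h>

	struct nic_softc { int rx_pending; };	/* stand-in for the driver softc */

	static int
	nic_rx_ring_has_work(struct nic_softc *sc)
	{
		return (sc->rx_pending > 0);
	}

	static void
	nic_rx_one_frame(struct nic_softc *sc)
	{
		sc->rx_pending--;	/* real code would hand the frame to ether_input() */
	}

	#define RX_BUDGET	64	/* max frames drained per interrupt */

	static void
	nic_rx_intr(struct nic_softc *sc)
	{
		int n = 0;

		/* One interrupt drains many frames, instead of one each. */
		while (n < RX_BUDGET && nic_rx_ring_has_work(sc)) {
			nic_rx_one_frame(sc);
			n++;
		}
		/* If the budget was hit, the card (or a later poll) picks
		 * up the remainder, so we don't livelock in here. */
	}

	int
	main(void)
	{
		struct nic_softc sc = { 100 };	/* pretend 100 frames are queued */

		nic_rx_intr(&sc);
		printf("%d frames still queued after one coalesced interrupt\n",
		    sc.rx_pending);
		return (0);
	}

The real driver knobs bound both the number of descriptors and the
time the card will sit on them before interrupting.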

Personally, I would *NOT* use polling, particularly if you are doing
user space processing with Zebra, since any load at all would push you
to the point of starving the user space process for CPU time; it's not
really worth it (IMO) to do the work necessary to go to weighted fair
share queueing for scheduling, if it came to that.
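
To put rough numbers on the starvation concern: DEVICE_POLLING splits
each clock tick between the poll loop and userland (that's what the
kern.polling.user_frac sysctl is about), so under sustained load the
routing daemon is capped at roughly that fraction of the CPU.  A toy
model of the split, with illustrative numbers, not the real
kern.polling code:

	#include <stdio.h>

	int
	main(void)
	{
		int hz = 1000;		/* clock ticks per second */
		int user_frac = 50;	/* % of each tick reserved for userland */
		double tick_us = 1e6 / hz;
		double poll_us = tick_us * (100 - user_frac) / 100.0;

		printf("per tick: %.0f us polling, %.0f us left for user space\n",
		    poll_us, tick_us - poll_us);
		printf("under sustained load, zebra gets at most %d%% of the CPU\n",
		    user_frac);
		return (0);
	}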


> But we haven't tested every driver.  The Intel GigE cards were
> especially disappointing.

Have you tried the Tigon III, with Bill Paul's driver?

If so, did you include the polling patches that I made against
the if_ti driver, and posted to -net, when you tested it?

Do you have enough control over the load clients that you can
ramp the load up until *just before* the performance starts to
tank?  If so, what's the high point of the curve on the Gigabit,
before it tanks (and it will)?
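
If you need something lightweight to watch the rate while you ramp the
load, sampling the interface counters once a second is enough (this is
roughly what "netstat -w 1 -I <ifname>" shows).  A sketch using
getifaddrs(); "fxp0" is just an example interface name:

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <net/if.h>
	#include <ifaddrs.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static u_long
	ipackets(const char *ifname)
	{
		struct ifaddrs *ifap, *ifa;
		u_long n = 0;

		if (getifaddrs(&ifap) != 0)
			return (0);
		for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
			if (ifa->ifa_addr != NULL &&
			    ifa->ifa_addr->sa_family == AF_LINK &&
			    strcmp(ifa->ifa_name, ifname) == 0 &&
			    ifa->ifa_data != NULL)
				n = ((struct if_data *)ifa->ifa_data)->ifi_ipackets;
		}
		freeifaddrs(ifap);
		return (n);
	}

	int
	main(void)
	{
		u_long prev = ipackets("fxp0"), cur;

		for (;;) {
			sleep(1);
			cur = ipackets("fxp0");
			printf("%lu pps in\n", cur - prev);
			prev = cur;
		}
	}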


> Terry> If you are willing to significantly modify FreeBSD, and address
> Terry> all of the latency issues, a multiport Gigabit router is
> Terry> doable, but you haven't even mentioned the most important
> Terry> aspect of any high speed networking system, so it's not likely
> Terry> that you're going to be able to do this effectively, just
> Terry> approaching it blind.
> 
> We've been looking at the click stuff... and it seems interesting.  I
> like some aspects of the netgraph interface better and may be paying
> for an ng_route to be created shortly.

Frankly, I am not significantly impressed by Click and the other
code like it.  If all you are doing is routing, and everything runs
in a fixed amount of time at interrupt, it's fine, but it quickly
gets less fine as you move away from that setup.

If you are running Zebra, you really don't want Click.

If you can gather enough statistics to graph the drop-off curve, so
it's possible to see why the problems you are seeing are happening,
then I can probably provide you with some patches that will increase
performance for you.  It's important to know whether you are
livelocking, running out of mbufs, facing a latency issue, or paying
for context switch overhead instead, etc.
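
For a first pass at those statistics: the ipintrq drop counter climbing
while the interface input counters stall is the classic livelock
signature, and mbuf exhaustion shows up in "netstat -m".  A small
sketch that pulls a few of the relevant counters via sysctl (the names
are the 4.x-era ones; adjust if yours differ):

	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdio.h>

	static void
	show(const char *name)
	{
		int v;
		size_t len = sizeof(v);

		if (sysctlbyname(name, &v, &len, NULL, 0) == 0)
			printf("%-32s %d\n", name, v);
		else
			printf("%-32s (not present)\n", name);
	}

	int
	main(void)
	{
		show("net.inet.ip.intr_queue_drops");	/* ipintrq overflows */
		show("net.inet.ip.intr_queue_maxlen");	/* ipintrq depth */
		show("kern.ipc.nmbclusters");		/* mbuf cluster limit */
		return (0);
	}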

-- Terry




