Date:      Wed, 13 Nov 2002 09:13:30 -0500
From:      David Gilbert <dgilbert@velocet.ca>
To:        Terry Lambert <tlambert2@mindspring.com>
Cc:        David Gilbert <dgilbert@velocet.ca>, dolemite@wuli.nu, freebsd-hackers@freebsd.org
Subject:   Re: [hackers] Re: Netgraph could be a router also.
Message-ID:  <15826.24074.605709.966155@canoe.velocet.net>
In-Reply-To: <3DD1865E.B9C72DF5@mindspring.com>
References:  <20021109180321.GA559@unknown.nycap.rr.com> <3DCD8761.5763AAB2@mindspring.com> <15823.51640.68022.555852@canoe.velocet.net> <3DD1865E.B9C72DF5@mindspring.com>

>>>>> "Terry" == Terry Lambert <tlambert2@mindspring.com> writes:

Terry> These stats are moderately meaningless.

Terry> The problem is that they don't tell me about where you are
Terry> measuring your packets-per-second rate, or how it's being
Terry> measured, or whether the interrupt or processing load is high
Terry> enough to trigger livelock, or not, or the size of the packet.
Terry> And is that a unidirectional or bidirectional rate?  UDP?

Terry> I guess I could guess with 200kpps:

Terry> 	100mbit/s / 200kp/s = 500 bytes per packet

Terry> ...and that's an absolute top end.  Somehow, I think the packets
Terry> are smaller.  Bidirectionally, not FDX, we're talking 250 bytes
Terry> per packet maximum theoretical throughput.
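
(Just spelling that division out, a throwaway sanity check on my side;
the 100 Mbit/s and 200 kpps figures are the ones quoted above:)

    /* Bits/bytes available per packet at a given pps on a 100 Mbit/s link. */
    #include <stdio.h>

    int
    main(void)
    {
            const double link_bps = 100e6;  /* 100 Mbit/s */
            const double pps = 200e3;       /* 200 kpps */
            double bits_per_pkt = link_bps / pps;

            printf("one way: %.1f bits = %.1f bytes per packet\n",
                bits_per_pkt, bits_per_pkt / 8.0);
            printf("bidir:   %.1f bits = %.1f bytes per packet\n",
                bits_per_pkt / 2.0, bits_per_pkt / 16.0);
            return (0);
    }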

Well... I have all those stats, but I didn't want to type that much.
IIRC, we normally test with 80-byte packets ... they can be UDP or
TCP ... we're testing the routing.  The box has two interfaces, and we
measure the number of PPS that get to the box on the other side.
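
(If it helps to picture the load, here's a toy example of the sort of
traffic I mean -- not our actual test rig.  The sink address 192.0.2.1
and the discard port are placeholders, and whether "80 bytes" counts
headers is a detail I'm glossing over:)

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <err.h>
    #include <string.h>

    int
    main(void)
    {
            struct sockaddr_in sin;
            char payload[80];               /* small fixed-size packets */
            int s;

            memset(payload, 0xa5, sizeof(payload));
            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_port = htons(9);                        /* discard */
            sin.sin_addr.s_addr = inet_addr("192.0.2.1");   /* sink */

            if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
                    err(1, "socket");
            for (;;)                        /* blast as fast as we can */
                    if (sendto(s, payload, sizeof(payload), 0,
                        (struct sockaddr *)&sin, sizeof(sin)) == -1)
                            warn("sendto");
    }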

Without polling patches, the single-processor box definitely
experiences livelock.  Interestingly, the degree of livelock is
fairly motherboard-dependent.  We have tested many cards, and so far
fxp's are our best performers.
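
(I'm assuming the polling patches here are in the spirit of Luigi
Rizzo's stock DEVICE_POLLING code; if so, for reference it's enabled
roughly like this -- check the exact option/sysctl names against your
tree:)

    # kernel config
    options         DEVICE_POLLING
    options         HZ=1000         # polling wants a faster clock tick

    # at run time
    sysctl kern.polling.enable=1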

>> One of the largest problems we've found with GigE adapters on
>> FreeBSD is that their pps ability (never mind the volume of data)
>> is less than half that of the fxp driver.

Terry> I've never found this to be the case, using the right hardware,
Terry> and a combination of hard and soft interrupt coalescing.  You'd
Terry> have to tell me what hardware you are using for me to be able
Terry> to stare at the driver.  My personal hardware recommendation in
Terry> this regard would be the Tigon III, assuming that the packet
Terry> size was 1/3 to 1/6th the MTU, as you implied by your numbers.

We were using the Intel, which apparently was a mistake.  We had a
couple of others, too, but they were disappointing.  I can get their
driver names later.
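
(For anyone unfamiliar with the "soft coalescing" Terry mentions, my
understanding is that it just means draining the RX ring in a bounded
loop per interrupt rather than taking one interrupt per packet --
roughly the shape below.  All the names here are made up for
illustration; this is not the if_ti code:)

    struct mbuf;
    struct foo_softc;

    struct mbuf     *rx_ring_next(struct foo_softc *);  /* next completed RX buf */
    void             deliver_to_stack(struct foo_softc *, struct mbuf *);

    static void
    foo_rxintr(struct foo_softc *sc)
    {
            struct mbuf *m;
            int budget = 64;        /* bound the work done per interrupt */

            while (budget-- > 0 && (m = rx_ring_next(sc)) != NULL)
                    deliver_to_stack(sc, m);
            /* anything still on the ring waits for the next interrupt/poll */
    }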

Terry> Personally, I would *NOT* use polling, particularly if you
Terry> were using user space processing with Zebra, since any load at
Terry> all would push you to the point of starving the user space
Terry> process for CPU time; it's not really worth it (IMO) to do the
Terry> work necessary to go to weighted fair share queueing for
Terry> scheduling, if it came to that.

The polling patches made zebra happy, actually.  Under livelock, zebra
would stop sending BGP hello packets.  Under polling, we could pass
150k+ packets per second and still have user time to run BGP.
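
(Presumably that's the kern.polling.user_frac behaviour doing its job
-- if the patches keep that knob, it's the percentage of CPU that
polling leaves for userland under load:)

    sysctl kern.polling.user_frac=50        # raise to favour zebra/bgpd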

>> But we haven't tested every driver.  The Intel GigE cards were
>> especially disappointing.

Terry> Have you tried the Tigon III, with Bill Paul's driver?

Terry> If so, did you include the polling patches that I made against
Terry> the if_ti driver, and posted to -net, when you tested it?

Terry> Do you have enough control over the load clients that you can
Terry> ramp the load up until *just before* the performance starts to
Terry> tank?  If so, what's the high point of the curve on the
Terry> Gigabit, before it tanks (and it will)?

We need new switches, actually, but we'll be testing this soon.

Terry> If you are willing to significantly modify FreeBSD, and address
Terry> all of the latency issues, a multiport Gigabit router is
Terry> doable, but you haven't even mentioned the most important
Terry> aspect of any high speed networking system, so it's not likely
Terry> that you're going to be able to do this effectively, just
Terry> approaching it blind.

>>  We've been looking at the click stuff... and it seems interesting.
>> I like some aspects of the netgraph interface better and may be
>> paying for an ng_route to be created shortly.

Terry> Frankly, I am not significantly impressed by the Click and
Terry> other code.  If all you are doing is routing, and everything
Terry> runs in a fixed amount of time at interrupt, it's fine, but it
Terry> quickly gets less fine, as you move away from that setup.

Terry> If you are running Zebra, you really don't want Click.

I've had that feeling.  A lot of people seem to be working on Click,
but it seems to abstract things that I don't see as needing
abstraction.

Terry> If you can gather enough statistics to graph the drop-off
Terry> curve, so it's possible to see why the problems you are seeing
Terry> are happening, then I can probably provide you some patches
Terry> that will increase performance for you.  It's important to know
Terry> if you are livelocking, or if you are running out of mbufs, or
Terry> if it's a latency issue you are facing, or if we are talking
Terry> about context switch overhead, instead, etc..

We're definitely livelocking with the fxps.  I'd be interested in your
patches for the GigE drivers.
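
(In case it's useful, here's what I'd plan to collect while ramping
the load, all from the stock tools -- nothing exotic:)

    netstat -m              # mbuf/cluster usage and denied requests
    vmstat -i               # per-device interrupt rates (livelock shows here)
    netstat -w 1 -I fxp0    # in/out packets per second on the test interface
    vmstat 1                # user vs. system vs. idle CPU left for zebra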

Dave.

-- 
============================================================================
|David Gilbert, Velocet Communications.       | Two things can only be     |
|Mail:       dgilbert@velocet.net             |  equal if and only if they |
|http://daveg.ca                              |   are precisely opposite.  |
=========================================================GLO================




