Date:      Mon, 8 Oct 2001 23:10:46 -0600
From:      "Kenneth D. Merry" <ken@kdm.org>
To:        Terry Lambert <tlambert2@mindspring.com>
Cc:        current@FreeBSD.ORG
Subject:   Re: Why do soft interrupt coelescing?
Message-ID:  <20011008231046.A10472@panzer.kdm.org>
In-Reply-To: <3BC00ABC.20ECAAD8@mindspring.com>; from tlambert2@mindspring.com on Sun, Oct 07, 2001 at 12:56:44AM -0700
References:  <3BBF5E49.65AF9D8E@mindspring.com> <20011006144418.A6779@panzer.kdm.org> <3BC00ABC.20ECAAD8@mindspring.com>

On Sun, Oct 07, 2001 at 00:56:44 -0700, Terry Lambert wrote:
> "Kenneth D. Merry" wrote:
> > [ I don't particularly want to get involved in this thread...but... ]
> > 
> > Can you explain why the ti(4) driver needs a coalescing patch?  It already
> > has in-firmware coalescing parameters that are tuneable by the user.  It
> > also already processes all outstanding BDs in ti_rxeof() and ti_txeof().
> 
> 
> The answer to your question is that the card will continue to DMA
> into the ring buffer, even though you are in the middle of the
> interrupt service routine, and that the amount of time taken in
> ether input is long enough that you can have more packets come in
> while you are processing (this is actually a good thing).
> 
> This is even *more* likely with hardware interrupt coalescing,
> since the default setting is to coalesce 32 packets into a
> single interrupt, meaning that you have up to 32 iterations of
> ether input to call, and thus the amount of time spent processing
> them actually affords *more* time for additional packets to come
> in.
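[The dynamic described above -- the NIC keeps DMAing into the ring while the
ISR is draining it, so a single hardware interrupt can end up servicing more
packets than were pending when it fired -- can be sketched as a toy model.
This is purely illustrative Python with invented names, not ti(4) code:]

```python
def drain_ring(initial, arrival_schedule):
    """Toy model of one hardware interrupt's worth of RX processing.

    `initial` packets are in the ring when the interrupt fires.  While
    the ISR processes each packet, the hardware may DMA more in;
    `arrival_schedule[n]` is how many arrive during packet n.  The ISR
    re-polls the ring until the consumer catches the producer, instead
    of processing only the packets visible at entry.
    """
    ring = initial
    consumed = 0
    passes = 0
    sched = iter(arrival_schedule)
    while ring > 0:
        passes += 1
        batch = ring            # packets visible at the top of this pass
        ring = 0
        for _ in range(batch):
            consumed += 1
            ring += next(sched, 0)   # DMA'd in while we were busy
    return consumed, passes

# 32 packets pending at interrupt time, with one more arriving during
# each of the first 8: the re-polling ISR handles all 40 in one
# interrupt (two passes) instead of leaving 8 for a second interrupt.
print(drain_ring(32, [1] * 8))   # -> (40, 2)
```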

As you say above, this is actually a good thing.  I don't see how this ties
into the patch to introduce some sort of interrupt coalescing into the
ti(4) driver.  IMO, you should be able to tweak the coalescing parameters
on the board to do what you want.
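[As a rough illustration of what the firmware tunables buy you: the board
batches received BDs and interrupts the host when either a packet-count or a
tick threshold is reached.  The sketch below is a toy Python model in the
spirit of the Tigon's rx_coal_ticks / rx_max_coal_bds tunables; the names
and the exact firing rule here are simplifications, not the firmware's:]

```python
def count_interrupts(arrival_times, coal_ticks, max_coal_bds):
    """Count host interrupts for a stream of packet arrival times.

    The (simplified) board fires an interrupt when either
    `max_coal_bds` packets have accumulated, or `coal_ticks` have
    elapsed since the first still-pending packet.
    """
    interrupts = 0
    pending = 0
    first = None                 # arrival time of oldest pending packet
    for t in arrival_times:
        # Tick threshold: flush pending packets before accepting a
        # packet that arrives after the timeout would have fired.
        if pending and t - first >= coal_ticks:
            interrupts += 1
            pending, first = 0, None
        pending += 1
        if first is None:
            first = t
        # BD-count threshold reached: interrupt the host now.
        if pending >= max_coal_bds:
            interrupts += 1
            pending, first = 0, None
    if pending:
        interrupts += 1          # final timeout flush
    return interrupts

# 64 back-to-back packets: coalescing 32 BDs per interrupt yields 2
# interrupts where a threshold of 1 would have cost 64.
print(count_interrupts(list(range(64)), 1000, 32))   # -> 2
print(count_interrupts(list(range(64)), 1000, 1))    # -> 64
```

Raising max_coal_bds trades interrupt rate against latency: widely spaced
packets instead hit the tick threshold and each incur up to coal_ticks of
added delay.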

> In my own personal situation, I have also implemented Lazy
> Receiver Processing (per the research done by Rice University
> and in the "Click Router" project; no relation to "ClickArray"),
> which does all stack processing at the hardware interrupt, rather
> than queueing between the hardware interrupt and NETISR, so my
> processing path is actually longer; I get more benefit from the
> change than you would, but on a heavily loaded system, you would
> also get some benefit, if you were able to load the wire heavily
> enough.
> 
> The LRP implementation should be considered by FreeBSD as well,
> since it takes the connection rate from ~7,000/second up to
> ~23,000/second, by avoiding the NetISR.  Rice University did
> an implementation in 2.2.x, and then another one (using resource
> containers -- I recommend against this one, in part because
> of license issues with the second implementation) for 4.2; both
> sets of research were done in FreeBSD.  Unfortunately, neither
> implementation was production quality (among other things, they
> broke RFC 1323, and they have to run a complete duplicate stack
> as a different protocol family because some of their assumptions
> make it non-interoperable with other protocol stacks).
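[The structural difference LRP makes can be caricatured in a toy model (this
is illustrative Python, not FreeBSD code; the queue limit and packet counts
are made up).  The stock path enqueues at hard-interrupt time and drains the
bounded IP input queue later in NETISR, so bursts overflow it; LRP runs the
whole stack at hard-interrupt time, so there is no intermediate queue to
overflow:]

```python
def netisr_path(bursts, queue_limit, drain_per_softint):
    """Stock path: the hard interrupt only enqueues packets; the
    bounded queue is drained later by the NETISR software interrupt,
    so packets arriving while the queue is full are dropped."""
    queued = delivered = dropped = 0
    for burst in bursts:
        for _ in range(burst):            # hard interrupt: enqueue only
            if queued < queue_limit:
                queued += 1
            else:
                dropped += 1              # queue full: packet lost
        n = min(queued, drain_per_softint)  # NETISR runs afterwards
        queued -= n
        delivered += n
    delivered += queued                   # eventual drain when idle
    return delivered, dropped

def lrp_path(bursts):
    """LRP caricature: the stack runs to completion at hard-interrupt
    time, so nothing sits in a bounded queue between driver and IP."""
    return sum(bursts), 0

# Two bursts of 50 packets against a 32-entry queue drained 16 at a
# time: the queued path drops more than half; the inline path drops none.
print(netisr_path([50, 50], 32, 16))   # -> (48, 52)
print(lrp_path([50, 50]))              # -> (100, 0)
```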

That sounds cool, but I still don't see how this ties into the patch you
sent out.

> > It isn't terribly clear what you're doing in the patch, since it isn't a
> > context diff.
> 
> It's a "cvs diff" output.  You could always check out a sys
> tree, apply it, and then cvs diff -c (or -u or whatever your
> favorite option is) to get a diff more to your tastes.

As Peter Wemm pointed out, we can't use non-context diffs safely without
the exact time, date and branch of the source files.  This introduces an
additional burden for no real reason other than you neglected to use -c or
-u with cvs diff.
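[For what it's worth, the distinction is easy to demonstrate with any diff
tool: a context or unified diff carries the surrounding unchanged lines, so
patch(1) can anchor each hunk by content even if the target file has drifted,
whereas a plain ed-style diff identifies hunks only by line number and is
safe only against the exact revision it was generated from.  A small
illustration using Python's difflib, with invented file contents:]

```python
import difflib

old = ["static int ti_rxeof(sc)\n", "{\n", "    /* old body */\n", "}\n"]
new = ["static int ti_rxeof(sc)\n", "{\n", "    /* new body */\n", "}\n"]

# unified_diff emits hunk headers (@@ ... @@) plus leading-space context
# lines around each change; those context lines are what let a patch be
# located by content rather than by absolute line number.
udiff = list(difflib.unified_diff(old, new, fromfile="ti.c", tofile="ti.c"))
print("".join(udiff))
```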

> > You also never gave any details behind your statement last week:
> > "Because at the time the Tigon II was released, the jumbogram
> > wire format had not solidified.  Therefore cards built during
> > that time used different wire data for the jumbogram framing."
> > 
> > I asked, in response:
> > 
> > "Can you give more details?  Did someone decide on a different ethertype
> > than 0x8870 or something?
> > 
> > That's really the only thing that's different between a standard ethernet
> > frame and a jumbo frame.  (other than the size)"
> 
> I believe it was the implementation of the length field.  I
> would have to get more information from the person who did
> the interoperability testing for the autonegotiation (which
> failed between the Tigon II and the Intel Gigabit cards).  I
> can assure you anecdotally, however, that autonegotiation
> _did_ fail.

I would believe that autonegotiation (i.e. 10/100/1000) might fail,
especially if you're using 1000BaseT Tigon II boards.  However, I would
like more details on the failure.  It's entirely possible that it could be
fixed in the firmware, probably without too much trouble.

I find it somewhat hard to believe that Intel would ship a gigabit board
that didn't interoperate with what was, until recently, probably the
predominant gigabit board out there.

Ken
-- 
Kenneth Merry
ken@kdm.org

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-current" in the body of the message



