Date:      Tue, 18 Dec 2001 17:08:17 -0600
From:      Jonathan Lemon <jlemon@flugsvamp.com>
To:        Luigi Rizzo <rizzo@aciri.org>
Cc:        Bosko Milekic <bmilekic@technokratis.com>, Jonathan Lemon <jlemon@flugsvamp.com>, Bruce Evans <bde@zeta.org.au>, arch@FreeBSD.ORG
Subject:   Re: swi_net
Message-ID:  <20011218170817.P377@prism.flugsvamp.com>
In-Reply-To: <20011218145538.A89864@iguana.aciri.org>
References:  <20011213091957.B39991@iguana.aciri.org> <20011219010205.P4481-100000@gamplex.bde.org> <20011218104750.M377@prism.flugsvamp.com> <20011218134149.A89299@iguana.aciri.org> <20011218175421.A37567@technokratis.com> <20011218145538.A89864@iguana.aciri.org>

On Tue, Dec 18, 2001 at 02:55:38PM -0800, Luigi Rizzo wrote:
> On Tue, Dec 18, 2001 at 05:54:21PM -0500, Bosko Milekic wrote:
> ...
> >   While we're on the subject, running the stack in interrupt context
> > seems to be mainly an attempt to remedy the load problem where we
> > have so many interrupts that the softint thread never even gets a
> > chance to run... so we have a sort of livelock situation. For the cases you
> 
> yes, that was the part I was in favour of...
> 
> > describe, how effective do you think it would be to do as we presently
> > do: just schedule the softnet thread to run and return from the
> > interrupt, but under load figure out a way to bump up the priority of
> > the softnetisr thread enough that it actually gets a chance to run?
> 
> the polling code in -current tries to do exactly this, and it is
> reasonably self-adjusting:  it grabs a small batch of packets from
> the interfaces, then processes the netisr to completion, then grabs
> more packets, and so on.

Yes - this is essentially what the new code cleans up.  Instead of
grabbing packets from the interface, putting them on the intermediate
queue, and then running the netisr over that queue, it takes the
packet and does the protocol processing directly, in one step.
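
Roughly, the contrast looks like this (a compilable sketch using
made-up stand-ins, "pkt"/"pktq"/"proto_input", rather than the real
mbuf and netisr interfaces):

    #include <stddef.h>

    struct pkt { struct pkt *next; /* payload omitted */ };
    struct pktq { struct pkt *head, **tailp; };

    static void proto_input(struct pkt *p) { (void)p; /* think ip_input() */ }

    /* Old style: the driver appends to an intermediate queue; the
     * netisr drains it later, in a separate step. */
    static void enqueue_style(struct pktq *q, struct pkt *p)
    {
            p->next = NULL;
            *q->tailp = p;
            q->tailp = &p->next;
            /* ... then schedule the softnet thread to run the netisr */
    }

    /* New style: no intermediate queue; the packet goes straight to
     * the protocol, in one step. */
    static void direct_style(struct pkt *p)
    {
            proto_input(p);
    }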

So this removes the need for ether_pollmore(), which is really there
because of the intermediate queue limits, and because we want to try
to account for the time spent doing packet processing.


> Additionally, it guarantees that user tasks have a fraction of
> the CPU available, so if this traffic involves userland processing
> at least we avoid complete livelock.

Yes - with the direct dispatch loop, we can also measure how long it
takes to process each packet, because the "for (;;)" loop that
completely drained the protocol queue is gone; that work has moved up
into a central loop.  This makes it easier to account for processing
time.
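
The central loop then looks something like this (again with invented
names; the real code works on mbufs, and the kernel has its own
timekeeping rather than clock_gettime()):

    #include <time.h>

    struct pkt;
    extern struct pkt *central_dequeue(void);   /* one packet, or NULL */
    extern void proto_input(struct pkt *);      /* e.g. ip_input() */

    static void central_loop(void)
    {
            struct pkt *p;
            struct timespec t0, t1;

            while ((p = central_dequeue()) != NULL) {
                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    proto_input(p);             /* one packet, one step */
                    clock_gettime(CLOCK_MONOTONIC, &t1);
                    /* t1 - t0 is the cost of this packet; with the old
                     * inner drain loop there was no per-packet boundary
                     * to measure across. */
            }
    }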
-- 
Jonathan
