Date:      Wed, 17 May 2000 08:41:03 -0600
From:      Chuck Paterson <cp@bsdi.com>
To:        Doug Rabson <dfr@nlsystems.com>
Cc:        arch@freebsd.org
Subject:   Sparc & api for asynchronous task execution (2) 
Message-ID:  <200005171441.IAA24145@berserker.bsdi.com>


	First I should point out that this may really be outside
what this API was/is designed for.

	
	I am looking at putting the various things currently on the
BSD/OS softint onto several different threads. I looked at using
this code to gang together those items that want to be on the same
thread. This is what really got me interested in this discussion
to begin with.
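
	Just to show what I mean by ganging things onto one thread,
here is a rough userland sketch. The struct task layout and the
task_enqueue()/task_run_queue() names are made up for illustration,
not the interface being discussed, and locking is ignored entirely.

#include <sys/queue.h>
#include <stdio.h>
#include <string.h>

struct task {
	STAILQ_ENTRY(task)	t_link;
	void			(*t_func)(void *);
	void			*t_arg;
	int			t_pending;
};

STAILQ_HEAD(task_head, task);

/* One queue is one thread's worth of work; several tasks gang onto it. */
static struct task_head net_thread_queue =
    STAILQ_HEAD_INITIALIZER(net_thread_queue);

static void
task_enqueue(struct task_head *q, struct task *t)
{
	/* Collapse repeat requests; a task is queued at most once. */
	if (t->t_pending++ == 0)
		STAILQ_INSERT_TAIL(q, t, t_link);
}

static void
task_run_queue(struct task_head *q)
{
	struct task *t;

	/* What the dedicated thread would do: drain and dispatch. */
	while ((t = STAILQ_FIRST(q)) != NULL) {
		STAILQ_REMOVE_HEAD(q, t_link);
		t->t_pending = 0;
		t->t_func(t->t_arg);
	}
}

static void ip_work(void *arg)  { printf("ip input\n"); }
static void arp_work(void *arg) { printf("arp input\n"); }

int
main(void)
{
	struct task ip_task, arp_task;

	memset(&ip_task, 0, sizeof(ip_task));
	ip_task.t_func = ip_work;
	memset(&arp_task, 0, sizeof(arp_task));
	arp_task.t_func = arp_work;

	/* Both items want the same thread, so they share one queue. */
	task_enqueue(&net_thread_queue, &ip_task);
	task_enqueue(&net_thread_queue, &arp_task);
	task_run_queue(&net_thread_queue);
	return (0);
}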

	Here's the rub. Currently the stuff for queueing a particular
software interrupt is strung together with macros, whereas with this
interface it gets handled by function calls. What really causes a
problem is that the queueing often occurs at the very deepest point
in the call stack; the worst case is something like ether_input() on
an IP packet. This means that using this interface costs two more
stack (register window) overflow faults and two more underflow faults
per packet on sparc. With FreeBSD on sparc there will only be one
additional level, because the ip_fastforward() check already takes
the stack down one. Actually that probably wants to be fixed also;
the very top of ip_fastforward() is short and could be a macro.
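
	Something like this is what I have in mind for the top of
ip_fastforward(); the test shown and the ip_fastforward_slow() name
are only stand-ins for whatever the real early-exit check and the
out-of-line remainder would be:

/*
 * The cheap early-exit test runs inline in the caller, so the call
 * (and its register window on sparc) only happens when there is
 * actually fast-forward work to do.
 */
struct mbuf;					/* opaque here */
extern int ipforwarding;			/* stand-in for the real check */
int	ip_fastforward_slow(struct mbuf *);	/* hypothetical out-of-line body */

#define	IP_FASTFORWARD(m)						\
	(ipforwarding == 0 ? 0 : ip_fastforward_slow(m))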


	I'm hoping there is no need to have ip input share a thread
with anything else. If it does have to, I'm not sure what the right
answer is. I really like the queue-of-tasks-to-run scheme rather than
dedicated bits. I'm thinking of maybe trying to put together a macro
version of the enqueueing to go with this stuff.
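
	A macro version could look roughly like the following; the
task layout and the TASK_ENQUEUE() name are again made up, and
locking is ignored. The point is that the pending check and the list
append run in the caller's frame, so the deepest point of the receive
path doesn't pay an extra call level (and the matching sparc window
traps) per packet.

#include <stdio.h>
#include <string.h>

struct task {
	struct task	*t_next;
	void		(*t_func)(void *);
	void		*t_arg;
	int		t_pending;
};

struct task_queue {
	struct task	*tq_head;
	struct task	**tq_tail;
};

#define	TASK_QUEUE_INIT(q)						\
	((q)->tq_head = NULL, (q)->tq_tail = &(q)->tq_head)

/* Inline enqueue: no function call at the point of use. */
#define	TASK_ENQUEUE(q, t) do {						\
	if ((t)->t_pending++ == 0) {					\
		(t)->t_next = NULL;					\
		*(q)->tq_tail = (t);					\
		(q)->tq_tail = &(t)->t_next;				\
	}								\
} while (0)

static void ip_work(void *arg) { printf("ip input runs\n"); }

int
main(void)
{
	struct task_queue q;
	struct task ipintr_task;
	struct task *t;

	TASK_QUEUE_INIT(&q);
	memset(&ipintr_task, 0, sizeof(ipintr_task));
	ipintr_task.t_func = ip_work;

	/* What the bottom of the receive path would do, inline: */
	TASK_ENQUEUE(&q, &ipintr_task);
	TASK_ENQUEUE(&q, &ipintr_task);	/* repeat request collapses */

	/* The servicing thread drains the queue later. */
	while ((t = q.tq_head) != NULL) {
		q.tq_head = t->t_next;
		if (q.tq_head == NULL)
			q.tq_tail = &q.tq_head;
		t->t_pending = 0;
		t->t_func(t->t_arg);
	}
	return (0);
}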


Chuck



