Date:      Sun, 28 Nov 1999 15:49:17 -0800 (PST)
From:      Julian Elischer <julian@whistle.com>
To:        "Daniel M. Eischen" <eischen@vigrid.com>
Cc:        arch@freebsd.org
Subject:   Re: Threads stuff
Message-ID:  <Pine.BSF.4.10.9911281515150.544-100000@current1.whistle.com>
In-Reply-To: <384132CD.91D3C180@vigrid.com>



On Sun, 28 Nov 1999, Daniel M. Eischen wrote:

> Julian Elischer wrote:
> > look at:
> > http://www.freebsd.org/~julian/threads
> 
> What is the typical pattern of processes blocked on I/O, especially
> in a loaded system?  Are there many tsleep/wakeups per I/O request, or
> are there usually just one or two tsleep/wakeup pairs?

On servers the usual blocking is for disk or file IO.

For disk IO you block while the DMA happens into your user space, and
then you return. There is a possibility that you might block prior to the
IO if you need to fault in some pages to put the data into.

For file IO, you don't block at all if the data is in the cache, and you
block once if it is not. Once again you MAY block prior to the IO if the
pages are not in your address space, but in both these cases that is
unlikely in a running server (though quite common at startup).
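
A toy sketch of that pattern, just to make it concrete (this is not kernel
code; cache_lookup(), start_disk_io() and sleep_until_done() are made-up
stand-ins for the buffer cache lookup, the driver strategy routine and
tsleep()/wakeup()) -- a read blocks zero times on a cache hit and exactly
once on a miss:

#include <stdio.h>
#include <string.h>

struct buf { char data[512]; int valid; };

static struct buf cache[16];            /* toy buffer cache, indexed by block */

static struct buf *cache_lookup(int blkno) {
    return cache[blkno % 16].valid ? &cache[blkno % 16] : NULL;
}

static void sleep_until_done(int blkno) {
    /* In the kernel this would be tsleep(); here we just note the block. */
    printf("blocked once waiting for block %d\n", blkno);
}

static void start_disk_io(int blkno) {
    struct buf *bp = &cache[blkno % 16];
    snprintf(bp->data, sizeof(bp->data), "contents of block %d", blkno);
    bp->valid = 1;                      /* the "DMA" fills the buffer */
}

/* One read: zero sleeps on a cache hit, exactly one on a miss. */
static void file_read(int blkno, char *out, size_t len) {
    struct buf *bp = cache_lookup(blkno);
    if (bp == NULL) {                   /* miss: issue the IO and block once */
        start_disk_io(blkno);
        sleep_until_done(blkno);
        bp = cache_lookup(blkno);
    }
    strncpy(out, bp->data, len);        /* hit: copy out without blocking */
}

int main(void) {
    char buf[512];
    file_read(7, buf, sizeof(buf));     /* first read: blocks once */
    file_read(7, buf, sizeof(buf));     /* re-read: cache hit, no block */
    printf("%s\n", buf);
    return 0;
}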

> 
> I can see that it would be advantageous to have the kernel automatically
> try to complete unblocked KSEs.  But it needs to track the time spent
> in the system for each KSE, so that its respective thread doesn't starve
> other threads.  Do we also want to place a limit on how much of the
> _process_ quantum is used to complete unblocked KSEs?

For disk IO, when the thread is awakened, the IO has already completed (DMA),
so there is not much to do. Letting it complete is just a matter of unwinding
back through all the layers and returning the status.
For file IO, it will (on a read) need to copy the buffer out to user space.
But that tends to be a pretty high priority item:
you are usually waiting on that data.
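
In sketch form, the completion half is not much more than this (hedged; the
names kse_finish_read() and ucopyout() are invented here, standing in for the
real completion path and copyout(9)):

#include <stdio.h>
#include <string.h>
#include <errno.h>

struct kse_io {
    const char *kbuf;   /* kernel buffer the IO (DMA) already filled */
    char       *ubuf;   /* destination in the user's address space   */
    size_t      resid;  /* bytes still to deliver                    */
    int         error;  /* error reported by the driver, if any      */
};

/* Stand-in for copyout(9); in the kernel this can fault and fail. */
static int ucopyout(const void *k, void *u, size_t len) {
    memcpy(u, k, len);
    return 0;
}

/* Complete an unblocked read: usually just one copy and a return value. */
static long kse_finish_read(struct kse_io *io) {
    if (io->error)
        return -io->error;              /* disk case: just hand back status */
    if (ucopyout(io->kbuf, io->ubuf, io->resid) != 0)
        return -EFAULT;                 /* the user page went away          */
    return (long)io->resid;             /* file case: one copy, then return */
}

int main(void) {
    static const char kbuf[32] = "data the DMA already delivered";
    char user_buf[32];
    struct kse_io io = { kbuf, user_buf, sizeof(user_buf), 0 };
    long n = kse_finish_read(&io);
    printf("completed: %ld bytes: %s\n", n, user_buf);
    return 0;
}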

> 
> What if we have the UTS dole out time to be used for completing
> unblocked KSEs?  If there are no runnable higher priority threads,
> the UTS can say "here's some time, try to complete as many of the
> unblocked KSEs that you can".  The kernel can use that time all at
> once, piecemeal, or until the UTS says "your time is revoked, I have
> higher priority threads".

Or what if we have a 'process priority'? KSEs below the given priority will
not be scheduled until the process priority drops below that point
(becoming idle is effectively dropping your priority). This would require
setting a priority in the IO completion block or somewhere.
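
Something like this toy sketch (every name here is invented for illustration,
not a proposed interface): each unblocked KSE carries the priority stamped at
IO completion, and only the ones at or above the current process priority get
to finish; going idle is modelled as dropping the gate to zero.

#include <stdio.h>

#define NKSE 4

struct kse {
    int pri;        /* priority stamped in the IO completion block */
    int unblocked;  /* woke up in the kernel, waiting to finish    */
};

struct proc {
    int gate;              /* current process priority             */
    struct kse kse[NKSE];
};

/* Run completion for every unblocked KSE the gate currently admits. */
static void run_eligible_kses(struct proc *p) {
    for (int i = 0; i < NKSE; i++) {
        struct kse *k = &p->kse[i];
        if (k->unblocked && k->pri >= p->gate) {
            printf("completing KSE %d (pri %d, gate %d)\n", i, k->pri, p->gate);
            k->unblocked = 0;
        }
    }
}

int main(void) {
    struct proc p = { .gate = 5, .kse = {
        { .pri = 7, .unblocked = 1 },   /* runs now                  */
        { .pri = 3, .unblocked = 1 },   /* held until the gate drops */
    }};
    run_eligible_kses(&p);
    p.gate = 0;                         /* the process went idle     */
    run_eligible_kses(&p);
    return 0;
}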



> > Here's where I get into difficulty.. should we notify the
> > UTS on unblocking, or on completion? or both?
> 
> Yeah, that's a tough question to answer.  Perhaps we should take a
> simple approach for now, and try to expand on it and optimize it
> later.  I think the simple solution is to notify the UTS and let
> it decide when to resume it.  Once that's working, we can look at
> optimizing it so that the kernel can somehow try to automatically
> complete unblocked KSEs.  Since the UTS knows which KSE is being
> run/resumed, tracking of time spent completing unblocked KSEs
> can also be added later.  My $.02, FWIW.

My default answer would be to let the kernel code do basically what it
does now: the next time the scheduler looks at the (sub)process
structure, i.e. is about to schedule the process, it would run through all
the waiting KSEs and let them complete their kernel work. Some may
simply block again, so nothing is lost, and others may wind up their stack back
to the user/kernel boundary. I'd stop them at the boundary to collect a
bunch of results, I think.
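
In sketch form (invented names, none of these are real kernel symbols): at the
point the scheduler picks the process, walk the KSEs that have woken up inside
the kernel, let each run until it either blocks again or reaches the
user/kernel boundary, and park the finished ones so their results can be
reported to the UTS in one batch.

#include <stdio.h>

enum kse_state { KSE_BLOCKED, KSE_UNBLOCKED, KSE_AT_BOUNDARY };

struct kse {
    enum kse_state state;
    int result;                     /* syscall return value, once known */
};

/* Stand-in for "run the KSE's kernel work": it may block again or finish. */
static enum kse_state run_kernel_work(struct kse *k) {
    k->result = 0;                  /* pretend the work succeeded...     */
    return KSE_AT_BOUNDARY;         /* ...and wound back to the boundary */
}

/* Called when the process is about to be scheduled. */
static void complete_waiting_kses(struct kse *kses, int n) {
    int done = 0;
    for (int i = 0; i < n; i++) {
        if (kses[i].state != KSE_UNBLOCKED)
            continue;
        kses[i].state = run_kernel_work(&kses[i]);
        if (kses[i].state == KSE_AT_BOUNDARY)
            done++;                 /* held here; nothing lost if it blocked */
    }
    if (done > 0)
        printf("reporting %d completed KSEs to the UTS in one batch\n", done);
}

int main(void) {
    struct kse kses[3] = { { KSE_UNBLOCKED, 0 }, { KSE_BLOCKED, 0 },
                           { KSE_UNBLOCKED, 0 } };
    complete_waiting_kses(kses, 3);
    return 0;
}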

What do we do when a KSE becomes unblocked while the process in question is
already running? Do we pre-empt? When? At the next syscall or
kernel crossing (like a signal)? Or can we literally bust in and let it
complete back to the kernel boundary? Certainly we could not do the latter if
the other KSE is active in the kernel, but if it's in userspace, it's a
viable alternative.




> 
> > > 
> > >   o At the request of the scheduler, the kernel schedules a timeout for
> > >     the new quantum and resumes the now unblocked thread.
> > 
> > define " the kernel schedules a timeout for
> >      the new quantum and resumes the now unblocked thread"
> 
> When the UTS is informed that a thread is now unblocked in the
> kernel (to the point that it can return to userland), and now
> wants to resume the thread, the UTS will compute the time in which
> a scheduling signal/upcall should be performed.  It makes a system
> call that both resumes the thread and schedules the signal.
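
A rough picture of the call being described here (it gets the name
thr_resume_with_quantum further down). Nothing in this sketch is a real
interface; it just models the single trap in userland with setitimer() and
setcontext() so the shape of it is visible:

#include <sys/time.h>
#include <time.h>
#include <ucontext.h>

/* Model of the proposed single syscall: arm the quantum, then resume. */
static int thr_resume_with_quantum(ucontext_t *ctx, const struct timespec *q) {
    struct itimerval it = { .it_value = { q->tv_sec, q->tv_nsec / 1000 } };
    if (setitimer(ITIMER_VIRTUAL, &it, NULL) != 0)  /* schedule the sched tick */
        return -1;
    return setcontext(ctx);                         /* resume the thread       */
}

/* How the UTS might use it when told a thread is unblocked in the kernel. */
static int uts_resume(ucontext_t *thread_ctx) {
    struct timespec quantum = { 0, 20 * 1000 * 1000 };  /* say, 20ms */
    return thr_resume_with_quantum(thread_ctx, &quantum);
}

int main(void) {
    static int resumed;
    ucontext_t here;
    getcontext(&here);          /* execution comes back here after setcontext */
    if (!resumed) {
        resumed = 1;
        uts_resume(&here);      /* arms the timer and "resumes" the thread    */
    }
    return 0;
}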

This assumes that you are doing pre-emptive multithreading in user
land, using signals as the scheduler tick.

> Under your different syscall gate, this would be a longjmp followed
> by a call to schedule a signal.  But if we're going to make a
> system call anyways, why not switch to the resumed thread and
> schedule the signal all at once?  If the point of a different
> syscall gate is to eliminate a system call to resume an unblocked
> thread, then my contention is that we still have to make a system
> call for a scheduling signal/upcall.  Combine the resumption of the

That's assuming you are using signals.
Most threads packages I've used have not used them.
The threads co-operate and either are of limited duration in activity
(after which they block on some event) or are IO driven.

> thread and the scheduling of the signal (thr_resume_with_quantum),
> and you don't need a different syscall gate ;-)

bleah.. signals.. :-(
If you are going to make signals compulsory then you might as well go the
whole way and let the kernel keep the userland contexts as well,
which is Matt's suggestion.

Most of the "mega-threads" projects I know of don't use signals.
Threads are just a way of getting co-processes.


> 
> Dan Eischen
> eischen@vigrid.com
> 
so, do the pictures help?


Julian





