Date:      Sun, 28 Nov 1999 19:58:28 -0500
From:      "Daniel M. Eischen" <eischen@vigrid.com>
To:        Julian Elischer <julian@whistle.com>
Cc:        arch@freebsd.org
Subject:   Re: Threads stuff
Message-ID:  <3841CFB4.F5B9A2BD@vigrid.com>
References:  <Pine.BSF.4.10.9911281515150.544-100000@current1.whistle.com>

Julian Elischer wrote:
> 
> On Sun, 28 Nov 1999, Daniel M. Eischen wrote:
> > What if we have the UTS dole out time to be used for completing
> > unblocked KSEs?  If there are no runnable higher priority threads,
> > the UTS can say "here's some time, try to complete as many of the
> > unblocked KSEs as you can".  The kernel can use that time all at
> > once, piecemeal, or until the UTS says "your time is revoked, I have
> > higher priority threads".
> 
> Or what if we have a 'process priority'?  KSEs below the given priority
> will not be scheduled until the priority is dropped below that point.
> (Becoming idle is effectively dropping your priority.)  This would require
> setting a priority in the IO completion block or somewhere.

Or when the thread was scheduled via a system call...
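Something like this is what I picture, purely as a sketch (every name
here is invented):

/*
 * Sketch only; none of these names exist.  A priority is recorded
 * when the KSE blocks (e.g. in the IO completion block), and the
 * kernel only completes unblocked KSEs at or above a floor that
 * the UTS sets.
 */
struct kse {
        int     kse_pri;        /* recorded at block time */
};

static int
kse_may_complete(const struct kse *k, int floor)
{
        return (k->kse_pri >= floor);
}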

> 
> > > Here's where I get into difficulty.. should we notify the
> > > UTS on unblocking, or on completion? or both?
> >
> > Yeah, that's a tough question to answer.  Perhaps we should take a
> > simple approach for now, and try to expand on it and optimize it
> > later.  I think the simple solution is to notify the UTS and let
> > it decide when to resume it.  Once that's working, we can look at
> > optimizing it so that the kernel can somehow try to automatically
> > complete unblocked KSEs.  Since the UTS knows which KSE is being
> > run/resumed, tracking of time spent completing unblocked KSEs
> > can also be added later.  My $.02, FWIW.
> 
> My default answer would be to let the kernel code do basically what it
> does now: the next time the scheduler looks at the (sub)process
> structure, i.e. is about to schedule the process, it would run through
> all the waiting KSEs and let them complete their kernel work.  Some may
> simply block again, so nothing is lost, and others may wind up their stack
> back to the user/kernel boundary.  I'd stop them at the boundary to
> collect a bunch of results, I think.

Yes, and notify the UTS of all completed KSEs at once.  We also need to
keep track of the system time accrued by each KSE.
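Roughly what I have in mind for the notification, as a sketch (the
names and the fixed-size batch are invented):

#include <sys/time.h>

/*
 * Sketch only.  One upcall hands the UTS every KSE that has
 * completed since the last notification, along with the system
 * time each one accrued while in the kernel.
 */
struct kse_completed {
        int             kc_id;          /* which KSE finished */
        struct timeval  kc_systime;     /* accrued system time */
};

struct kse_upcall_args {
        int                  ku_ncompleted;
        struct kse_completed ku_completed[16];  /* batched results */
};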

> 
> What do we do when a KSE becomes unblocked while the process in question
> is already running?  Do we pre-empt?  When?  At the next syscall or
> kernel crossing (like a signal)?  Or can we literally bust in and let it
> complete back to the kernel boundary?  Certainly we could not do the
> latter if the other KSE is active in the kernel, but if it's in
> userspace, it's a viable alternative.

If the kernel is going to automatically complete KSEs, then I would only
do it when new KSEs block or when the process is resumed.  The UTS is
timing the current thread, so you don't want the kernel doing work for
other threads that the UTS will count against the currently running
thread.  If the kernel is not automatically completing KSEs, then the UTS
can be notified of unblocked KSEs at any time.
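In other words, something like this (a sketch; the names and the
event list are made up):

/*
 * The kernel runs unblocked KSEs to the user/kernel boundary only
 * at points where the UTS isn't timing a user thread: when another
 * KSE blocks, and when the whole process is resumed.
 */
struct ksegrp;                  /* per-process KSE group (invented) */

static void
kse_run_unblocked(struct ksegrp *kg)
{
        (void)kg;               /* walk the unblocked list; not shown */
}

enum kse_event { KSE_BLOCKED, KSE_PROC_RESUMED, KSE_UNBLOCKED };

static void
kse_event_hook(struct ksegrp *kg, enum kse_event ev)
{
        if (ev == KSE_BLOCKED || ev == KSE_PROC_RESUMED)
                kse_run_unblocked(kg);
        /* an unblock alone never steals the running thread's time */
}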

> > When the UTS is informed that a thread is now unblocked in the
> > kernel (to the point that it can return to userland), and now
> > wants to resume the thread, the UTS will compute the time at which
> > a scheduling signal/upcall should be performed.  It makes a system
> > call that both resumes the thread and schedules the signal.
> 
> This assumes that you are doing pre-emptive multithreading in userland,
> using signals as the scheduler tick.

Well, I was hoping to get a scheduling upcall instead of a signal,
but yes.  That's the way libc_r works now, and you need some clock-tick
style interruption so that threads don't run forever.  Even if a thread
is SCHED_FIFO, you still need to check for other timeouts, like
pthread_cond_timedwait().
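Whatever form the interruption takes, the handler ends up doing
something like this (simplified sketch; the uts_* helpers are
invented stubs):

#include <stddef.h>
#include <sys/time.h>

static void uts_wake_expired_timedwaits(const struct timeval *now) { (void)now; }
static int  uts_current_is_fifo(void) { return (0); }
static void uts_preempt_current(void) { }

/* Periodic tick, whether delivered as a signal or an upcall. */
static void
uts_tick(void)
{
        struct timeval now;

        gettimeofday(&now, NULL);
        uts_wake_expired_timedwaits(&now);  /* pthread_cond_timedwait() etc. */
        if (!uts_current_is_fifo())
                uts_preempt_current();      /* round-robin quantum expired */
}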

> > Under your different syscall gate, this would be a longjmp followed
> > by a call to schedule a signal.  But if we're going to make a
> > system call anyway, why not switch to the resumed thread and
> > schedule the signal all at once?  If the point of a different
> > syscall gate is to eliminate a system call to resume an unblocked
> > thread, then my contention is that we still have to make a system
> > call for a scheduling signal/upcall.  Combine the resumption of the
> 
> That's assuming you are using signals.
> Most threads packages I've used have not used them.
> The threads co-operate and either are of limited duration in activity
> (after which they block on some event) or are IO driven.

I'm not assuming signals; upcalls would be preferred.  But we do need to
support the standards, and some form of scheduling signal or interruption
will be needed.  I think it will be difficult to remove all system calls
from a thread switch, but it should be very easy to limit thread
switches to one system call.
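Concretely, that one call per switch might be the
thr_resume_with_quantum I suggested; a hypothetical signature and
usage (nothing here exists yet):

#include <sys/time.h>

/*
 * Hypothetical combined call: resume the unblocked KSE and arm the
 * next scheduling upcall in a single kernel crossing.
 */
int thr_resume_with_quantum(int kse_id, const struct timeval *quantum);

/* UTS usage: exactly one syscall per thread switch. */
static void
uts_switch_to(int kse_id)
{
        struct timeval q = { 0, 20000 };        /* 20 ms quantum */

        thr_resume_with_quantum(kse_id, &q);    /* back at next upcall */
}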

> > thread and the scheduling of the signal (thr_resume_with_quantum),
> > and you don't need a different syscall gate ;-)
> 
> Bleah.. signals.. :-(
> If you are going to make signals compulsory then you might as well go
> the whole way and let the kernel keep the userland contexts as well,
> which is Matt's suggestion.

Like I said, I don't want signals; a scheduling upcall would be better.
For threads blocked in the kernel, I don't see much of a problem with
keeping the trapframe in the KSE since it's already on the kernel stack.
I don't want to keep contexts of threads not blocked in the kernel, though.
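That is, the KSE would just carry a pointer to what is already there.
A sketch, with an invented layout:

struct trapframe;       /* MD register frame, saved at kernel entry */

/*
 * A KSE blocked in the kernel already has the user trapframe
 * sitting on its kernel stack, so keeping a pointer to it is
 * free.  Threads running in userland keep their context in the
 * UTS, never here.
 */
struct kse_blocked_state {
        struct trapframe *kb_tf;        /* valid only while blocked */
        void             *kb_kstack;    /* kernel stack holding it */
};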

> so, do the pictures help?

Yes, I understood them just fine :)  I'm still not sold on the new
syscall gate and IOCB, because I think we have to make at least one
system call when threads are switched or resumed.

Dan Eischen
eischen@vigrid.com
