Date:      Fri, 26 Nov 1999 21:34:03 -0800 (PST)
From:      Julian Elischer <julian@whistle.com>
To:        "Daniel M. Eischen" <eischen@vigrid.com>
Cc:        arch@freebsd.org
Subject:   Re: Threads stuff
Message-ID:  <Pine.BSF.4.10.9911262111220.544-100000@current1.whistle.com>
In-Reply-To: <383F5982.CC9F132C@vigrid.com>



On Fri, 26 Nov 1999, Daniel M. Eischen wrote:

> Sorry about the last post; I accidentally hit send.
> 
> Julian Elischer wrote:
> >  You HAVE to sing this to get the full effect....
> 
> Thanks!
> 
> > Each thread has a permanently assigned IO status block and its errno
> > lives there. The contents of the IO status block are updated when the
> > KSE that was blocked returns. Since it was blocked, it then hangs itself
> > (or a proxy) off the sub-proc "unblocked" queue. Had it not blocked,
> > it would simply update the status block and return to the userland,
> > which would check the status and return to the caller.
> > 
> > code path in library for write():
> > 
> >         set IO status to 'incomplete'
> >         setjmp(mycontext)  /* save all the regs in thread context */
> >         set saved mycontext->eip to point to 'A'
> >         call kernel
> >         longjmp(mycontext)
> >         /* NOTREACHED */
> > 
> > B:
> >         while status == incomplete
> >                 wait on mutex in IO status block
> > A:
> >         check status
> >         return to calling code
> 
> This seems overly complicated just to perform a system call,
> but OK.  You were talking about putting the trapframe (or most of it)
> on the user stack, so why not just put the address of the trapframe
> (stack pointer) in the IO control block.  No need for setjmp/longjmp
> unless the system call blocks.

You have to save all your registers. If you didn't do it, then the kernel
would have to do it (which is what happens now). You can either save the
registers on the user stack, or elsewhere.
I prefer elsewhere because it reduces the chances of a pagefault
halfway through, but as you say, it's an implementation detail where
they are saved, as long as the UTS knows where to find them.
Saving to the stack is quicker; saving somewhere else has less chance of
pagefaults. Take your pick.


> 
> > 
> > when the thread is reported blocked:
> > the return mycontext->eip is changed from "A" to "B"
> > code is run to put the thread into whatever
> > structures the UTS maintains for mutexes. That completes all processing.
> > It then goes on to schedule another thread.
> > from the thread's point of view, it has just woken up from waiting
> > on a mutex. The IO is mysteriously complete.
> 
> Why mutexes?  It just needs to be marked blocked/suspended/whatever.
> The UTS will only run threads that are in the run state.  There is
> some overhead with mutexes.

It doesn't have to be a mutex. I was just trying to get the point across
that the thread ends up suspended in exactly the same way that it would be
if it had done a 'yield'. All that matters is that it's suspended, and can
be woken up at a later time by an external event. It can be done any way
that is quickest and easiest.

> 
> > > What I don't quite see yet is how you resume a thread at the kernel
> > > level.  You can't just "run the thread" after the IO control block
> > > was updated indicating that the thread unblocked.  You need to resume
> > > the thread within the kernel.  As long as you need a kernel call to
> > > resume/cancel a blocked thread, why try to hide it behind smoke and
> > > mirrors ;-)
> > 
> > No you don't need to go back to the kernel.
> > All kernel processing has completed.
> 
> This is what I don't see.  When KSEs are woken-up in the kernel,
> how are they resumed?

EXACTLY AS THEY ARE NOW. NO (zilch, zippo, nada) difference to what
happens now. The KSEs on the sleep queue that are sleeping are taken off
the sleep queue, and hung off the subproc (well, I guess that's a
difference), and when the subproc is scheduled (which it will be if it
has N > 0 KSEs scheduled), each of them is run to completion or blocks
again. When the last KSE has run and they have all done their IO and are
waiting up at the user/kernel boundary (having updated all their IO
status blocks), the upcall is performed, notifying the UTS of which
status blocks have been updated. The UTS schedules all appropriate
threads and starts the one with the highest priority.
All the KSEs that were blocked are freed and back in the KSE cache.

>  If you have 10 blocked KSEs hanging off a
> process, and they all become unblocked before the process runs again,
> what happens the next time the process runs?  KSEs can also hit
> tsleep more than once before leaving the kernel.

What happens now?
Before they are unblocked, they are sitting in the system's sleep queues.
When they are unblocked, they are transferred to the 'unblocked' queue on
the subprocess, and the subprocess is scheduled onto the run queue for
its processor.

When the processor is scheduled, all the waiting KSEs are run, one after
another, until they complete. When they are all done, control passes
to userland.

There is a possibility that, as an optimisation, you might report to the
UTS after each completion so that it might decide to let the completed
thread run on some other processor while we get on with more IO, but
that is an optimisation.


> 
> If the kernel automatically completes the KSEs, then the kernel is
> arbitrarily deciding the priority of the threads.  There could be
> runnable threads with higher priority than any of the threads blocked
> in the kernel.

That's Matt's argument.


If we had a thread that was super High priority, we should have assigned
it a subproc (scheduling class) that was high priority. Then it wouldn't
be competing with the others.


> 
> > All your upcall needs is the address of the IO completion block that contains
> > the mutex to release. All copyin()s and copyout()s have been completed, and the
> > IO status block has been updated, just as if the call would
> > have been synchronous. (in fact the code that did it didn't know that it
> > was not going to go back to the user.)
> > just before it was going to go back to the user, it checked a bit and
> > discovered that instead, it should hang itself (or a small struct holding the
> > address of the IO completion block) off the subproc. (If the latter, the
> > KSE can actually be freed now.) At some stage in the future (maybe immediately)
> > an upcall reports to the UTS the addresses of all completed IO status blocks,
> > and the UTS releases all the mutexes.
> 
> OK, I think this answers some of my questions.  KSEs are automatically
> completed by the kernel to the point where they would return control
> to the application.  I'm not sure I like that because the UTS can't
> decide which blocked KSEs are resumed - they are all resumed, possibly
> stealing time from other higher priority threads.

Yes, but that's why we can separate them out into separate subprocs: to get
predictable behaviour and parallelisation.


> 
> Dan Eischen
> eischen@vigrid.com
> 




