Date:      Fri, 17 Nov 2000 13:27:29 -0800
From:      Julian Elischer <julian@elischer.org>
To:        jasone@freebsd.org, arch@freebsd.org
Subject:   Threads (KSE etc) comments
Message-ID:  <3A15A2C1.1F3FB6CD@elischer.org>


Hi Jason.

I read your Nov 7 doc on the threading model.
I have a few comments..

I've been thinking about the scheduling queues, and how to make sure
that the process (KSEG actually) acts fairly with respect to other
processes. I was confused for a while by your description. I think
part of my confusion came from something that we specified in the
meeting but that has not been written down directly in your document.

Let me see if we agree on what we decided..

A KSEG can have at most N KSEs associated with it, where N is the
number of processors (unless artificially reduced by a lower
concurrency declaration). (You said this, but only indirectly.) In
general, KSEs are each assigned to a processor. They do not in general
move between processors unless some explicit adjustment is being
made(*), and as a general rule two KSEs will not be assigned to the
same processor (in some transitional moments this may briefly be
allowed to happen). Thus in general, if you run a KSEC on the same KSE
it was run on last time, you should be on the same processor (and get
any affinity advantages that might exist).

(*) I am inclined to make the binding of KSEs to processors a HARD
requirement, as this allows us to simplify some later decisions. For
example, if we hard-bind KSEs to processors, then since we assign a
different communications mailbox to each KSE we create, we can be
sure that different KSEs will never preempt each other while writing
to their mailboxes. This also means that, since there can only be one
UTS incarnation active per KSE (or one KSE per UTS incarnation), a
UTS cannot be preempted by another incarnation on the same processor.
We can therefore make sure that no locking on the mailboxes, or even
any checking, is needed.

I think this is what we decided.. is this correct? The binding is not
really 
mentioned in your document.

When we were talking about it (at least in my memory), each KSE had
a mailbox. My memory is that we made a KSE creation call with a
different argument for each KSE, so each KSE had a different return
stack frame when it made upcalls. In the version you have outlined
there is no KSE creation call, only KSEG creation calls. Thus all
upcalls have the same frame, and there is the danger of colliding
upcalls for different processors.

My memory (where is that photo of the whiteboard that Nicole was
supposed to send us?) is that each KSE is assigned a different
mailbox address in userland, which is associated with the frame on
which it will make upcalls. One of the fields of the mailbox contains
a pointer to a userland context structure, which contains a place
where the KSE should dump the user context should it need to, and a
pointer to other such structures. This structure is defined by the
kernel, but included in the UTS's 'per thread info'. Since there is
one per thread, there is never a problem of running out of them when
the kernel links them together into a linked list of completed
operations.
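
Roughly, I picture the structures looking something like this (a
sketch only; apart from kse_id, which I use again below, the type and
field names here are just mine for illustration):

#include <ucontext.h>	/* for ucontext_t; purely illustrative */

/*
 * Defined by the kernel, but embedded in the UTS's per-thread info,
 * so there is always exactly one per thread.
 */
struct thread_context {
	ucontext_t		 tc_context;	/* where the kernel dumps the user context */
	struct thread_context	*tc_next;	/* the kernel chains completed ones through here */
	void			*tc_uts_data;	/* whatever the UTS keeps per thread */
};

/* One of these per KSE, living in the frame of the KSE creation call. */
struct kse_mailbox {
	int			 kse_id;	/* identifies this KSE */
	struct thread_context	*curthread;	/* set by the UTS while a thread runs,
						   NULL while the UTS itself runs */
	struct thread_context	*completed;	/* list handed up by the kernel at upcall time */
};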

When the thread makes a system call, the KSE looks in the mailbox
for the context structure for this thread, and if the thread blocks
or resumes, it can save there any information it needs to tell the
UTS. The UTS sets the pointer in the mailbox when it schedules the
thread, so even involuntary blockages (e.g. page faults) have the
pointer available. When the UTS is running its own work, it ZEROs
this pointer, which lets the kernel know that it is not really in a
safe state for preempting. I think we decided that a page fault in
the UTS simply blocks until it is satisfied.
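
On the UTS side I imagine something roughly like this (again just a
sketch using the structures above; thread_switch() is a stand-in for
however the UTS actually resumes a user thread):

void
uts_run_thread(struct kse_mailbox *mb, struct thread_context *tc)
{
	/*
	 * Publish the thread's context structure before switching to
	 * it, so that even an involuntary blockage (e.g. a page fault)
	 * finds somewhere to save state for the UTS.
	 */
	mb->curthread = tc;

	thread_switch(tc);	/* hypothetical: resume the user thread */

	/*
	 * Back running the UTS's own work: zero the pointer so the
	 * kernel knows this is not a safe state for preempting.
	 */
	mb->curthread = NULL;
}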

When an upcall occurs, the stack frame it occurs on, and hence the
mailbox pointed to, are automatically correct, so the UTS doesn't
even have to look it up. (The mailbox is allocated as a local
variable in the frame of the KSE creation call, and is thus in the
local frame of the upcall.)

This is something like what I imagined the UTS would do to fire off
a new KSE. The reason I was thinking of it this way was so that each
KSE would have a UTS-supplied mailbox and (smallish) stack.


#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>

/*
 * Use make_new_kse() to do exactly that.
 * Returns -1 on failure and 1 on success.
 *
 * cookie allows the UTS to have its own way of identifying the KSE/thread.
 * This stack is effectively lost to us, so we first switch to a small
 * throw-away stack. It need only have enough space in it for the
 * upcalls to call the UTS, and whatever the UTS will need.
 * Some time after creation, there will be an upcall on the new KSE
 * looking for work.
 * I could imagine wiring this UTS stack down..
 */
void
start_new_kse(void *cookie, jmp_buf jb)	/* XXX correct definition for jb? */
{
	struct kse_mailbox kse_mailbox;
	int return_value;

	bzero(&kse_mailbox, sizeof(kse_mailbox));
	return_value = kse_new(&kse_mailbox);
	switch (return_value) {
	case -1:
		perror("kse_new() failed");
		_longjmp(jb, -1);
	case 0:
		printf("successfully created kse %d\n", kse_mailbox.kse_id);
		_longjmp(jb, 1);
		exit(1);	/* not reached */
	default:
		printf("An upcall of type %d occurred\n", return_value);
		uts_scheduler(cookie, &kse_mailbox, return_value); /* must never return */
		printf("it returned!\n");
		exit(1);
	}
}

int
make_new_kse(void *cookie)
{
	int retval;
	jmp_buf env;

	if ((retval = _setjmp(env)) == 0) {
		load_new_stack();	/* load a new, smaller stack, but copy
					   the top 100 bytes or so from the old
					   stack so that our local variables
					   appear to be the same. */
		start_new_kse(cookie, env);
		/* not reached */
	}
	return (retval);
}
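
Given the above, I would expect the UTS to fire these off at startup,
one per processor (illustration only; uts_ncpus and uts_cookie_for()
are made-up names):

void
uts_start_kses(int uts_ncpus)
{
	int i;

	/* one KSE per processor, unless a lower concurrency was requested */
	for (i = 0; i < uts_ncpus; i++) {
		if (make_new_kse(uts_cookie_for(i)) < 0) {
			/* live with whatever concurrency we actually got */
			break;
		}
	}
}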


When we have per-processor scheduling queues, there is at most ONE
KSE from any given KSEG in the scheduling queue for any given
processor. With the single scheduling queue we have now, do we allow
all N to be in the queue at once? (Or do we put the KSEG in instead?)
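
Just so we mean the same thing, the per-processor case would enforce
roughly this invariant (a kernel-side sketch only, all names made up):

/*
 * A runnable KSE goes only onto the queue of the processor it is
 * bound to, and no KSEG ever has two KSEs queued on the same
 * processor.
 */
void
kse_setrunqueue(struct kse *ke)
{
	struct runq *rq = &cpu_runq[ke->ke_cpu];	/* hard binding */

	if (kseg_has_kse_queued(ke->ke_kseg, ke->ke_cpu))
		panic("two KSEs from one KSEG queued on cpu %d", ke->ke_cpu);
	runq_add(rq, ke);
}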

The terms KSE etc. have probably served their useful life.
It's time to think of or find names that really describe them better.

KSE  ---- a per-process processor..  Scheduler slot? Opening? (a la CAM/SCSI)
KSEC ---- stack plus context... KSC.. it's trying to do something (a task?)
KSEG ---- a class of schedulable entities.. A slot cluster? :-)
PROC ---- probably needs to stay the same.

-- 
      __--_|\  Julian Elischer
     /       \ julian@elischer.org
    (   OZ    ) World tour 2000
---> X_.---._/  presently in:  Budapest
            v

