Date:      Mon, 27 Nov 2000 14:30:58 -0800
From:      Jason Evans <jasone@canonware.com>
To:        Julian Elischer <julian@elischer.org>
Cc:        arch@freebsd.org
Subject:   Re: Threads (KSE etc) comments
Message-ID:  <20001127143058.L4140@canonware.com>
In-Reply-To: <3A192821.13463950@elischer.org>; from julian@elischer.org on Mon, Nov 20, 2000 at 05:33:21AM -0800
References:  <3A15A2C1.1F3FB6CD@elischer.org> <3A192821.13463950@elischer.org>

On Mon, Nov 20, 2000 at 05:33:21AM -0800, Julian Elischer wrote:
> I've been thinking about the scheduling queues, and how to make sure
> that the process (KSEG actually) acts fairly with respect to other
> processes.  I was confused for a while by your description. I think part
> of my confusion came from something that we specified in the meeting but
> has not been written in your document directly. Let me see if we are
> agreed on what we decided.
> 
> A KSEG can have at most N KSEs associated with it, where N is
> the number of processors (unless artificially reduced by a lower
> concurrency declaration). (You said this, but only indirectly.)

There's no particular reason that we need to enforce a limit on the number
of KSEs within a KSEG (aside from resource limits), but in practice,
there's no reason that a program would want to create more KSEs within a
KSEG than there are processors.

> In
> general, KSEs are each assigned to a processor. They do not, in general,
> move between processors unless some explicit adjustment is being
> made(*), and as a general rule, two KSEs will not be assigned to the
> same processor. (In some transitional moments this may briefly be
> allowed to happen.) Thus, in general, if you run a KSEC on the same KSE
> it was run on last time, you should be on the same processor
> (and get any affinity advantages that might exist).

KSEs need to be able to float between processors in order to make use of
all the processors if there are fewer KSEs in a KSEG than there are
processors (in other words, KSEG concurrency less than the number of
processors).  In general practice, KSEs will tend to stay on the same
processor, but CPU load balancing may cause KSEs to migrate from time to
time.
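In rough code, that soft-affinity policy might look something like the sketch below. To be clear, `pick_cpu()` and its simple per-CPU load counts are invented here for illustration, not anything from the actual design: a KSE prefers the CPU it last ran on, and migrates only when the imbalance is worth losing cache affinity.

```c
#include <assert.h>

/* Illustrative sketch only: prefer the CPU this KSE last ran on,
 * and migrate only when another CPU's run queue is meaningfully
 * shorter.  All names and the load model are hypothetical. */
static int
pick_cpu(int last_cpu, const int load[], int ncpus)
{
	int best = 0;

	/* Find the least-loaded CPU. */
	for (int i = 1; i < ncpus; i++)
		if (load[i] < load[best])
			best = i;

	/* Stay put unless migrating buys a noticeably shorter queue. */
	if (load[last_cpu] - load[best] <= 1)
		return (last_cpu);
	return (best);
}
```

With a tie or a difference of one, the KSE stays where it was (keeping affinity); only a larger imbalance triggers migration.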

> (*) I am inclined to make the requirement of binding KSEs to processors
> HARD, as this allows us to simplify some later decisions.

I wanted the binding to be soft, in order to simplify things. =)

> For example, if
> we hard-bind KSEs to processors, then since we assign a different
> communications mailbox to each KSE we create, we can be sure that
> different KSEs will never preempt each other when writing out to their
> mailboxes. This also means that since there can only be one UTS
> incarnation active per KSE (or one KSE per UTS incarnation), we cannot
> have a UTS preempted by another incarnation on the same processor.
> We can therefore ensure that there needs to be no locking on
> mailboxes, or even any checking.

The case where a KSE is preempted, only to be replaced by another KSE
within the same KSEG, has no real meaning, and I expect we'd specifically
write the scheduler to avoid ever doing that.
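The single-writer property Julian describes can be sketched as below. Every structure and name here is invented for illustration (the real layout was still being designed): the point is that with one mailbox per KSE, and no two KSEs of a KSEG ever preempting each other, each mailbox has exactly one writer at a time, so plain stores suffice.

```c
#include <stddef.h>

/* Hypothetical sketch, not the actual design.  One mailbox per
 * KSE; only that KSE (and the one UTS incarnation running on it)
 * ever writes it, so no locking or checking is needed. */
struct kse_mailbox {
	void	*km_curthread;	/* KSEC currently running on this KSE */
	void	*km_completed;	/* last KSEC that blocked or completed */
};

struct kse {
	struct kse_mailbox *k_mbox;	/* private to this KSE */
};

/* Only this KSE ever writes its own mailbox: a plain store,
 * with no lock and no atomic operation. */
static void
kse_post_completed(struct kse *k, void *ksec)
{
	k->k_mbox->km_completed = ksec;
}
```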

> I think this is what we decided.. is this correct? The binding is not
> really mentioned in your document.

I made a number of minor changes to the design after our discussions.
Almost all of the changes were made in order to simplify implementation.
In this case, I felt that not binding KSEs to CPUs would make the scheduler
much simpler to implement, with no significant downsides.  If I'm missing
something that actually makes the changes more complex, please don't let
the issue drop; simplicity and efficiency are key.

> When we were talking about it (at least in my memory), each KSE had a
> mailbox. My memory of this was that we called a KSE creation call with a
> different argument, thus each KSE had a different return stack frame
> when it made upcalls. In the version you have outlined, there is no KSE
> creation call, only KSEG creation calls. Thus all upcalls have the same
> frame, and there is the danger of colliding upcalls for different
> processors. I think it works more naturally, with everything just
> 'falling into place', if we have calls to create KSEs rather than KSEGs.
> The "make KSEG" call is simply a version of the "make KSE" call that
> also puts it into a new, different group. You are left with the very
> first 'original' thread being different in my scheme, but my answer to
> this would be to simply make the first "make KSE" call reuse the current
> stack etc. and not return a new one.
>
> [...]

Yes, this is a shortcoming of the current paper.  I couldn't remember how
we had decided to do this, and was still working it out in my head.  Thanks
for the reminder.
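A minimal user-level sketch of that "make KSE" idea might look like the following. Every name here is hypothetical (the real interface hadn't been settled): each creation call sets up a distinct mailbox with its own upcall stack, so upcalls for different KSEs land on different frames and cannot collide.

```c
#include <stdlib.h>

/* Hypothetical sketch of per-KSE creation state (names invented):
 * each new KSE gets a private mailbox and upcall stack. */
struct kse_mailbox {
	char	*km_stack;	/* stack used for upcalls into the UTS */
	size_t	 km_stacksize;
};

/* Stands in for a kse_create()-style call: allocate a private
 * mailbox and upcall stack for the new KSE.  A real interface
 * would also register the mailbox with the kernel. */
static struct kse_mailbox *
make_kse(size_t stacksize)
{
	struct kse_mailbox *mb;

	mb = malloc(sizeof(*mb));
	if (mb == NULL)
		return (NULL);
	mb->km_stack = malloc(stacksize);
	mb->km_stacksize = stacksize;
	return (mb);
}
```

Under this scheme, a "make KSEG" call would be the same operation with an extra flag placing the new KSE in a fresh group, and the very first call would reuse the original thread's stack rather than allocating a new one, as Julian suggests.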

>  When we have per-processor scheduling queues, there is only at most ONE
>  KSE from any given KSEG in the scheduling queues for any given
>  processor.

As mentioned above, I don't think we need to enforce this.

>  With the single scheduling queue we have now do we allow N to be in the
>  queues at once? (or do we put the KSEG in instead?)

We would still put all the KSEs in the scheduling queue.  However, I think
we really need to do the scheduler overhaul close to the same time as the
KSE changes, so that we never have production releases of FreeBSD running
this way.

>  The terms KSE etc. have probably served their useful life.
>  It's time to think of or find names that really describe them better
>  
>  KSE  -- a per-process processor.. slot? opening? (a-la CAM/SCSI)
>  KSEC ---- stack plus context... KSC.. trying to do something (task?)
>  KSEG ---- a class of schedulable entities.. A slot cluster? :-)
>  PROC ---- probably needs to stay the same.

I'm not particularly attached to the names, but finding something better
may be hard. =)

Jason





