Date:      Sun, 18 Feb 2001 02:42:51 -0500
From:      Seth Leigh <seth@pengar.com>
To:        freebsd-smp@FreeBSD.ORG
Subject:   Well I read the stuff, and I get it now.
Message-ID:  <5.0.2.1.0.20010218021929.00aaef98@hobbiton.shire.net>

OK, so I have been reading the articles you guys recommended about 
scheduler activations, and I now understand what you are talking about.

I have one concern which I haven't seen addressed yet.  Granted, I am not
done reading the four articles I printed off, but I figured I'd post a
question here about it.

How are you going to implement thread time-slicing at the user level?  If 
you don't implement preemption, are threads in the user-level thread 
library simply going to run as long as they want until they make a thread 
library call, giving the threads library a chance to block them, or until 
they block in the kernel, resulting in a new activation and upcall into the 
threads library scheduler?

The gist of what I am saying is this.  Solaris, for example, by
default gets a timer interrupt 100 times per second which causes (unless
real-time threads are running) the kernel scheduler to run and
re-prioritize all the runnable kernel threads, and decide which of them
will run during the next tick.  Now, I assume that FreeBSD is going to
continue having its kernel get these timer interrupts, so that the kernel
can fairly divvy up the CPUs amongst the various processes running.  Now,
with a kernel-threads-backed thread library, all thread scheduling is
simply done in the kernel, and the user-level threads library doesn't have
to worry about it.  Threads get time-sliced, everyone gets some time, and
we're all happy.  I am talking about a purely one-to-one model here.  In a
many-to-many model, at least on Solaris (the only OS whose internals I am
familiar with), user-level threads run on a given LWP until they
give the threads library a chance to block them.  If they don't block in
the threads library, they hold onto the LWP as long as they want.  This can
easily result in performance which is unpredictable and undesirable.
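
To make that concrete, here is a minimal sketch (mine, not from the
papers) of the cooperative behavior I mean, built on the ucontext(3)
primitives a user-level library would typically use.  Control only
changes hands at the explicit swapcontext() calls; take out the one
inside worker() and it holds its execution context for as long as it
pleases:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, worker_ctx;

    /* A "thread" that cooperates: it voluntarily swaps back to the
     * library after each unit of work.  If it never did, no other
     * user-level thread could run in this execution context. */
    static void
    worker(void)
    {
            int i;

            for (i = 0; i < 3; i++) {
                    printf("worker runs, step %d\n", i);
                    swapcontext(&worker_ctx, &main_ctx);    /* yield */
            }
    }

    int
    main(void)
    {
            static char stack[64 * 1024];
            int i;

            getcontext(&worker_ctx);
            worker_ctx.uc_stack.ss_sp = stack;
            worker_ctx.uc_stack.ss_size = sizeof(stack);
            worker_ctx.uc_link = &main_ctx;
            makecontext(&worker_ctx, worker, 0);

            /* The "library scheduler": it only gets control back
             * when the worker chooses to yield. */
            for (i = 0; i < 3; i++)
                    swapcontext(&main_ctx, &worker_ctx);
            return (0);
    }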

Now, take this over to scheduler activations.  On a 4-way machine the
kernel would provide up to 4 scheduler activations (the Anderson paper
leaves the decision of how many to give a process to the kernel's
processor allocation code), which give the threads library four execution
contexts in which to run threads.  All scheduling of user-level threads is
done by the threads library onto the available scheduler activations.
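
Purely for the sake of discussion, I imagine the library's end of that
looking something like the sketch below.  Every name in it is invented
(it is not a real FreeBSD interface, nor the Anderson paper's); the
point is just that each upcall hands the library a fresh execution
context plus the reason it was delivered, and the library's own
scheduler picks which user thread runs next:

    /* Hypothetical upcall entry point; all names are made up. */
    struct uthread;                         /* opaque user thread */
    void runq_insert(struct uthread *);
    struct uthread *runq_choose(void);
    void uthread_dispatch(struct uthread *);

    enum upcall_why {
            UC_NEW_PROCESSOR,   /* kernel granted us an activation */
            UC_BLOCKED,         /* a thread blocked in the kernel */
            UC_UNBLOCKED,       /* a blocked thread is runnable again */
            UC_PREEMPTED        /* an activation was taken away */
    };

    void
    lib_upcall(enum upcall_why why, struct uthread *affected)
    {
            /* The affected thread goes back on the library's run
             * queue if it can still make progress... */
            if (why == UC_UNBLOCKED || why == UC_PREEMPTED)
                    runq_insert(affected);

            /* ...and this activation now runs whichever user-level
             * thread the library's policy chooses. */
            uthread_dispatch(runq_choose());
    }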

Now, how are you going to time-slice threads at user-level onto the 
available scheduler activations?

If the answer is "we're not, we're gonna let threads run until they block
in the kernel (thus resulting in a new activation being used to upcall into
the threads library, giving us a new execution context in which to run one
of the other runnable threads) or in the threads library (on, say, a
mutex), giving the threads library code a chance to context switch to some
other thread", then I personally don't think that's a good idea.  It would
allow far too much processor starvation for some of the
threads.  Basically, it isn't "fair".  Granted, nobody said life as a
thread would be fair, but still.  On the other hand, setting timers to
have the threads library interrupt into its scheduler 100 times per second
or whatever, for the purpose of time-slicing threads onto the available
scheduler activations, seems like a *horrible* waste of time.  You can lower
the number of times per second, but you still waste time, and the
granularity of scheduling gets coarser and coarser.
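
For concreteness, the timer scheme I'm dismissing would look roughly
like this; ITIMER_VIRTUAL and SIGVTALRM are the real interfaces, but
sched_preempt() is a hypothetical library routine (and a real library
would have to be careful about what it touches from a signal handler):

    #include <signal.h>
    #include <sys/time.h>

    extern void sched_preempt(void);  /* hypothetical: switch threads */

    static void
    tick_handler(int sig)
    {
            sched_preempt();
    }

    static void
    start_timeslicing(void)
    {
            struct sigaction sa;
            struct itimerval it;

            sa.sa_handler = tick_handler;
            sigemptyset(&sa.sa_mask);
            sa.sa_flags = 0;
            sigaction(SIGVTALRM, &sa, NULL);

            /* Fire 100 times per second of CPU time consumed. */
            it.it_interval.tv_sec = 0;
            it.it_interval.tv_usec = 10000;
            it.it_value = it.it_interval;
            setitimer(ITIMER_VIRTUAL, &it, NULL);
    }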

Inquiring minds (well, mind) want to know.  Please educate me.  How are you 
going to time-slice user-level threads?

Actually, I just thought of how it could be done while re-reading my post
prior to sending it off.  Is this what you guys have thought of too?  The
idea is that since the kernel is already getting interrupted 100 times per
second (or however many times FreeBSD does it) anyhow, the running
scheduler activation is *already* going to be preempted off the CPU for the
duration of that tick processing.  So, after the tick processing is done
and the kernel dispatcher decides that this particular scheduler
activation may continue running as it was doing before the timer interrupt
fired, rather than simply context switching back into that particular
scheduler activation, the kernel would use a *second* scheduler activation
to upcall into the threads library's scheduler.  This would basically
allow the threads library's scheduler to "piggyback" onto the kernel's
scheduler, without requiring any more crossings of the protection boundaries
than were going to happen anyhow.  Basically, this scheme would use twice
as many scheduler activations as it really wanted to have running,
using half of them to call up into the threads library after each tick to
decide whether to keep running the preempted thread or schedule a
different one.
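
In sketch form (again, every name here is invented; this is just the
shape I have in mind, reusing the hypothetical helpers from the earlier
sketch): instead of resuming the preempted activation directly, the
kernel enters the library on a spare one, and the library either hands
control straight back or time-slices:

    /* Hypothetical per-tick upcall; all names are made up. */
    struct activation {
            struct uthread *thread;     /* what it was running */
    };
    void activation_resume(struct activation *);

    void
    lib_tick_upcall(struct activation *preempted)
    {
            struct uthread *next;

            runq_insert(preempted->thread); /* it is still runnable */
            next = runq_choose();           /* library policy decides */

            if (next == preempted->thread)
                    activation_resume(preempted);   /* keep running it */
            else
                    uthread_dispatch(next);         /* switch threads */
    }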

What do you all think?  Or is this already the plan?

Seth


