Date:      Sun, 12 Dec 1999 12:48:37 -0800 (PST)
From:      Julian Elischer <julian@whistle.com>
To:        Arun Sharma <adsharma@sharmas.dhs.org>
Cc:        arch@freebsd.org
Subject:   Re: Recent face to face threads talks.
Message-ID:  <Pine.BSF.4.10.9912121101200.26823-100000@current1.whistle.com>
In-Reply-To: <19991212095553.A7995@sharmas.dhs.org>



On Sun, 12 Dec 1999, Arun Sharma wrote:

> On Sun, Dec 12, 1999 at 07:59:23AM -0500, Daniel M. Eischen wrote:
> > > A lot of time was spent discussing the mechanisms by which the UTS
> > > received upcalls and how KSEs would resume. It was agreed that
> > > thread classes could be implemented by a structure halfway between a
> > > process and a KSE on the linkage scale of things, one that would own KSEs
> > > and would be scheduled onto processors. This basically corresponds
> > > to what I've been calling a "subproc". In our diagrams it was designated
> > > "Q", as P was already taken (for Proc). This was actually an apt name, as
> > > this structure is what is put in the run queue. It was also agreed that to
> > > start off this would be a virtual structure and it would be part of the
> > > "proc" struct for the first revisions.
> > 
> > Wouldn't this be akin to what other systems call a kernel thread or LWP?
> 
> A kernel thread or an LWP has a 1-to-1 relationship with a kernel stack.
> In our discussion, a KSE has a 1-to-1 relationship with a kernel stack. The
> purpose of the Q structure was to enforce "scheduling classes" - to 
> represent a set of threads having the same priority and to ensure 
> fairness in the scheduler.

> 
> Apart from this distinction, KSE is the closest to a kernel thread/LWP
> and could possibly be used to implement blocking interrupt threads.
> 

Yes, we agreed that "kernel threads" would hopefully belong to a "Q" that
was hung off proc0 and which would have 'realtime' scheduling
characteristics. This brings up something that was mentioned but that I
don't remember an explicit agreement on, though I think everyone agreed.

There is a syscall that jumpstarts the threading support. It supplies the
kernel with the information needed to do upcalls. Until this call is made
there can be no upcalls and the process is effectively unthreaded. This
call is repeated for each 'thread class', and in fact we call it for each
CPU as well. Each time you call it you define another 'virtual CPU' and
supply it with upcall information. Each virtual CPU therefore has a
different UTS instantiation (read: stack), and these can be small. It can
therefore be stated that each "Q" structure effectively corresponds to a
UTS instance.
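
To make that a bit more concrete, here is a rough sketch of what the
registration call might look like. None of these names or fields were
agreed on; the struct and the kse_new() call are purely illustrative.
The point is only that each call hands the kernel an upcall entry point
plus a small UTS stack, and thereby creates one more virtual CPU ("Q"):

	/* Illustrative only -- nothing here was agreed on. */
	struct uts_upcall_info {
		void	(*ui_upcall)(void *cookie);	/* UTS entry point */
		void	*ui_stack;			/* small per-virtual-CPU UTS stack */
		size_t	ui_stacksize;
		void	*ui_cookie;			/* handed back on each upcall */
	};

	/* Hypothetical syscall: add one virtual CPU to this thread class. */
	int	kse_new(int thread_class, struct uts_upcall_info *info);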

To start with there will be one scheduler queue, but we should build
with an eye on the fact that we may at some time in the future want to
move to "per-CPU" scheduling queues.
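
Purely for illustration (constants and names made up), the difference is
only where the queue of runnable "Q"s lives:

	#include <sys/queue.h>

	struct qstruct;				/* the "Q" */

	/* Today: one global queue of runnable "Q"s. */
	TAILQ_HEAD(q_runq, qstruct);
	struct q_runq global_runq;

	/* Possibly later: one such queue per CPU. */
	#define	NQCPU	4			/* illustrative constant */
	struct q_runq percpu_runq[NQCPU];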

The initial implementation would leave the "Q" struct as part of the proc
struct, and we'd just do rfork() to get the functionality of virtual
machines. The syscall in fact would start life as a variation on the theme
of rfork().
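
For reference, the rfork-per-thread model looks roughly like this from
userland. This is only a sketch: I'm assuming an rfork_thread()-style
wrapper (if libc doesn't have one, it's a few lines of assembler around
rfork(RFPROC|RFMEM), since the child must run on its own stack), and
thread_main() and the stack handling are made up:

	#include <sys/types.h>
	#include <unistd.h>

	/* Body of one user "thread"; runs in a process sharing our VM. */
	static int
	thread_main(void *arg)
	{
		/* ... do the thread's work ... */
		return (0);
	}

	/* Illustrative: start one rfork-based thread on the given stack. */
	static pid_t
	start_one_thread(void *stack_top)
	{
		return (rfork_thread(RFPROC | RFMEM, stack_top,
		    thread_main, NULL));
	}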


> It was also agreed that it doesn't make sense to do everything at once,
> but do it in some sensible order, without precluding elements of the
> design. Some of the tasks - break the proc structure into a proc and KSE,
> per CPU scheduling queues, implementing SA. 
> 
> My personal opinion is that it might be easier to go in steps - 
> rfork/clone based implementation (aka linuxthreads), then a Solaris like
> m x n implementation and then scheduler activations. Implementation wise, 
> they're in the increasing order of difficulty. From what I heard from
> Terry, this is also the path that Solaris took. This also lets application
> writers pick the right model for their work.

I think this is what is happening.
Jason, Richard and Russell have been working on the linuxthreads stuff,
and that gives us a purely "rfork-per-thread" system.

The next step will be, while people are running that, to break out the
fields of the thread structure from the proc structure. We should be able
to do that relatively quickly. Fixing ps(1) and procfs(4) will be more
'fun'. Once we have the system with them separated, we can start
supporting "more than one" per process. We'd still be rfork()ing to get
our virtual machines, however.
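
For illustration only (no names or layout were agreed on), the split
might end up looking something like this, with the per-execution pieces
of today's proc pulled out into the KSE, and the "Q" still embedded in
the proc for the first revisions:

	#include <sys/queue.h>

	struct proc;
	struct trapframe;
	struct qstruct;

	/* Illustrative only -- not agreed-on names or layout. */
	struct kse {				/* one kernel-visible execution context */
		struct qstruct	*ke_q;		/* the "Q" that owns us */
		void		*ke_kstack;	/* its own kernel stack */
		struct trapframe *ke_frame;	/* saved user register state */
		TAILQ_ENTRY(kse) ke_link;	/* on the Q's KSE list */
	};

	struct qstruct {			/* the "Q": owns KSEs, sits on the run queue */
		struct proc	*q_proc;
		int		q_pri;		/* class/priority information */
		TAILQ_HEAD(, kse) q_kses;
		TAILQ_ENTRY(qstruct) q_runq;	/* run queue linkage */
	};

	struct proc {
		/* ... process-wide state: credentials, VM, files, ... */
		struct qstruct	p_q;		/* embedded for the first revisions */
	};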

At least in the earlier versions of the upcall variety, the unblocked
syscalls will be completed back as far as the user boundary as soon as
that process (Q) is next scheduled to run (which may be immediately). The
delivery of the status to the UTS and thread was discussed but needs more
work.
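
One possible shape for that delivery, purely as a strawman (none of this
was settled): when the Q next runs, the kernel makes a single upcall
carrying the set of contexts whose syscalls have now completed, and the
UTS marks those threads runnable again. All names below are made up:

	/* Strawman only -- the real interface needs more work. */
	struct completed_ctx {
		void	*cc_thread_cookie;	/* which user thread */
		int	cc_error;		/* errno value, 0 on success */
		long	cc_retval;		/* syscall return value */
	};

	/* Hypothetical UTS internals, defined elsewhere. */
	void	uts_mark_runnable(void *cookie, long retval, int error);
	void	uts_schedule_next(void);

	/* The upcall entry point the UTS registered earlier. */
	void
	uts_upcall(struct completed_ctx *done, int ndone)
	{
		int i;

		for (i = 0; i < ndone; i++)
			uts_mark_runnable(done[i].cc_thread_cookie,
			    done[i].cc_retval, done[i].cc_error);
		uts_schedule_next();	/* pick the next user thread to run */
	}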

> 
> If we stick to the POSIX threads interface strictly, we shouldn't be
> afraid of getting stuck with one of the implementations.


> 
> Other concerns that were expressed - crossing protection boundaries too
> often to pass messages about blocking and rescheduling. Matt Dillon 
> responded that these messages could be batched and made more efficient.
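
One way that batching could go (again, just a strawman with made-up
names): keep the events in a small ring in memory shared between the
kernel and the UTS, so a single upcall (or none at all, if the UTS polls
the ring) can deliver many of them:

	/* Strawman: a ring the kernel writes and the UTS drains. */
	#define	NKEVENTS	64		/* arbitrary */

	struct kern_event {
		int	ev_type;		/* blocked, unblocked, preempted, ... */
		void	*ev_thread_cookie;
	};

	struct event_ring {			/* mapped into both kernel and UTS */
		volatile unsigned int	er_head;	/* kernel appends here */
		volatile unsigned int	er_tail;	/* UTS consumes from here */
		struct kern_event	er_ev[NKEVENTS];
	};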

Julian





