Date:      Tue, 5 Dec 1995 19:01:20 -0700 (MST)
From:      Terry Lambert <>
To: (Garrett A. Wollman)
Subject:   Re: Threads?  C++ Task Library?
Message-ID:  <>
In-Reply-To: <> from "Garrett A. Wollman" at Dec 4, 95 11:50:28 am

> > At this time there are no KERNEL threads,
> [...]
> > at some stage more distant, there will be kernel support for
> > threaded programming. but that's a ways off still.
> It is strongly believed in some part of the research community that
> implementing threads in the kernel is a Really Bad Idea(tm).  What you
> actually want to do, these people say, is to implement threads in
> user-space with a few hooks in the kernel (``kernel threading
> assist'') to allow the user-mode thread scheduler to get control at
> appropriate times.

These people must be MACH people, if they love to cross protection
domains that frequently.  8-).

> This has the benefit that an individual program can easily specify
> and/or modify its own thread-scheduling policy without affecting any
> other program or requiring changes to the kernel.

The main benefits of kernel threads are:

1)	N kernel threads count as N scheduling entities in terms
	of competing for process quantum with the other M-N processes
	on the machine.  Thus a kernel-threaded app on a loaded
	machine competes as N:M instead of 1:M for processor time.
	This avoids having to write scheduling classes to prioritize
	a threaded app relative to the number of threads it is running.

2)	It scales under MP, since each thread is a separately
	schedulable entity from the kernel's perspective, and thus
	for O threads in a threaded application, concurrency goes
	up f(O)/N relative to N kernel threads, for f(O) describing
	the fraction of O threads that could possibly be executing
	concurrently, for N <= the number of processors on the machine.

The main drawbacks of kernel threads compared to user space threads
are (in typical implementations):

1)	You may only have as many blocking operations outstanding for
	all contexts in the application as there are kernel threads
	to map the operations onto (N-1 on good implementations that
	take scheduling into account).

2)	When a blocking operation occurs, the thread causing the
	blocking cedes the remainder of the quantum to the system
	in a voluntary context switch, even though there may be
	other runnable threads blocked on lack of CPU resources.

3)	The overhead in process context switch for a typical "all
	threads blocking" situation is as high as separate processes
	(unless the scheduler is smart and schedules threads in a
	thread group to run consecutively on processor resources
	to reduce LDT switches and register set flushes).
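For contrast, drawbacks (1) and (2) are exactly what a user space threads package dodges by converting would-be blocking I/O into non-blocking I/O plus select(), so the quantum stays with the process and another user thread runs.  A rough sketch of that conversion (the function names are illustrative, not taken from any particular library):

```c
/* Sketch of the I/O conversion a user-space threads package performs:
 * mark the descriptor non-blocking, and on EAGAIN let the (here,
 * trivialized) thread scheduler wait instead of ceding the quantum
 * to the kernel via a blocking read().  Illustrative names only. */
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <sys/select.h>

/* Placeholder for "run another user thread until fd is readable";
 * a real scheduler would select() over all blocked threads' fds. */
static void thread_yield_until_readable(int fd)
{
	fd_set rd;

	FD_ZERO(&rd);
	FD_SET(fd, &rd);
	select(fd + 1, &rd, NULL, NULL, NULL);
}

ssize_t threaded_read(int fd, void *buf, size_t len)
{
	ssize_t n;

	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
	for (;;) {
		n = read(fd, buf, len);
		if (n >= 0 || errno != EAGAIN)
			return n;
		thread_yield_until_readable(fd);
	}
}
```

The cost is that every potentially blocking call has to be wrapped this way, which is why the kernel-thread approach is simpler for the library writer even with the drawbacks above.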

Typically, what you *really* want is a hybridization of the two, so
that the full quantum of each kernel thread can be fully utilized
by each user space thread, and allow an n:m mapping of user to kernel
threads (for n >= m).  To do that requires creation of a "highest"
priority thread in the user space thread environment which is then
run on the kernel thread whenever it wants to run... or some similar
hybridization mechanism for scheduling user space threads using
I/O conversion to pick user space threads to run on remaining quanta
of kernel threads.

This is hard, which is probably why Sun and USL didn't do it, but
leaves the program able to easily specify and/or modify its own
thread-scheduling policy without affecting any other program.  At
least no more than any other program that will consume its full quantum,
given the chance.

					Terry Lambert
Any opinions in this posting are my own and not those of my present
or previous employers.
