From:      Rik van Riel <riel@conectiva.com.br>
To:        Julian Elischer <julian@elischer.org>
Cc:        Bill Huey <billh@gnuppy.monkey.org>, <freebsd-arch@freebsd.org>
Subject:   Re: New Linux threading model
Message-ID:  <Pine.LNX.4.44L.0209201749231.1857-100000@imladris.surriel.com>
In-Reply-To: <Pine.BSF.4.21.0209201300500.16925-100000@InterJet.elischer.org>

On Fri, 20 Sep 2002, Julian Elischer wrote:

> I didn't say it's not possible, it's just that the interaction between
> threads and KSEs on the run queue is very complicated in the current
> "interim" scheduler (compatible with the old process scheduler but with
> a huge "tumor" on the side of it to do something with threads)

> ie. You need to schedule threads in the kernel, while not allowing
> a process with a lot of threads to flood the system.

Interesting problem, but it might be better solved in a more
generic way: i.e. first build a thread scheduler, then add
support for generic resource containers that aren't tied to
the thread<->process relation.

Once you have that, you could substitute thread<->user for the
default relationship and prevent users with many threads from
flooding the CPU ;)
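
To make that concrete, here's a rough sketch of what such a decoupled
container could look like.  All names below are made up purely for
illustration; this is not the KSE code and not anything in Linux today:

	#include <stdint.h>

	struct resource_container {
		uint64_t cpu_ticks_used; /* CPU time charged to this container */
		uint32_t cpu_share;      /* weight relative to other containers */
		uint32_t nthreads;       /* threads currently attached */
	};

	struct thread {
		struct resource_container *container; /* per-process, per-user,
		                                       * per-jail, whatever */
		/* ... ordinary scheduler state ... */
	};

	/* Charge a scheduler tick to whatever container the thread is in. */
	static void
	charge_tick(struct thread *td)
	{
		td->container->cpu_ticks_used++;
	}

The point being that the scheduler only ever talks to the container the
thread points at; switching from per-process to per-user accounting is
then just a question of which container new threads get attached to.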

Adding resource containers to a scheduler can be hard, though; I
still haven't found a pretty way of adding per-container (in my
case I want to start with per-user) CPU time accounting to Ingo's
O(1) scheduler.  Sure, I've got several ugly ideas and one less
ugly idea, but I haven't found anything nice yet...
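
For what it's worth, the straightforward (and ugly) approach looks
something like the sketch below: charge ticks to the container and
scale a thread's timeslice down once its container is over its share.
The names and the scaling rule are my own invention for illustration,
not anything from the O(1) scheduler; the part that has no pretty
solution is doing this accounting without banging on a shared counter
from every CPU's runqueue on every tick:

	#include <stdint.h>

	#define BASE_TIMESLICE	100	/* ticks, arbitrary */

	struct container {
		uint64_t ticks_used;	/* CPU consumed by all threads in here */
		uint32_t share;		/* allotted share, in percent */
	};

	/* Called from the timer tick for the currently running thread. */
	static void
	account_tick(struct container *c)
	{
		c->ticks_used++;
	}

	/*
	 * Crude feedback: shrink the timeslice of threads whose container
	 * has used more than its share of the CPU time seen so far.
	 */
	static uint32_t
	timeslice_for(struct container *c, uint64_t total_ticks)
	{
		uint64_t fair = (total_ticks * c->share) / 100;
		uint32_t slice;

		if (fair == 0 || c->ticks_used <= fair)
			return BASE_TIMESLICE;

		slice = (uint32_t)((BASE_TIMESLICE * fair) / c->ticks_used);
		return slice ? slice : 1;	/* never hand out a zero slice */
	}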

regards,

Rik
-- 
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/		http://distro.conectiva.com/

Spamtraps of the month:  september@surriel.com trac@trac.org

