From owner-freebsd-arch Thu Nov 25 1:36:25 1999
Delivered-To: freebsd-arch@freebsd.org
Received: from ns1.yes.no (ns1.yes.no [195.204.136.10]) by hub.freebsd.org (Postfix) with ESMTP id 21D2214D87 for ; Thu, 25 Nov 1999 01:36:22 -0800 (PST) (envelope-from eivind@bitbox.follo.net)
Received: from bitbox.follo.net (bitbox.follo.net [195.204.143.218]) by ns1.yes.no (8.9.3/8.9.3) with ESMTP id KAA19088 for ; Thu, 25 Nov 1999 10:34:30 +0100 (CET)
Received: (from eivind@localhost) by bitbox.follo.net (8.8.8/8.8.6) id KAA38491 for freebsd-arch@freebsd.org; Thu, 25 Nov 1999 10:34:29 +0100 (MET)
Received: from alpo.whistle.com (alpo.whistle.com [207.76.204.38]) by hub.freebsd.org (Postfix) with ESMTP id 367FF14C4A for ; Thu, 25 Nov 1999 01:34:21 -0800 (PST) (envelope-from julian@whistle.com)
Received: from current1.whiste.com (current1.whistle.com [207.76.205.22]) by alpo.whistle.com (8.9.1a/8.9.1) with ESMTP id BAA65566; Thu, 25 Nov 1999 01:33:36 -0800 (PST)
Date: Thu, 25 Nov 1999 01:33:35 -0800 (PST)
From: Julian Elischer
To: Jason Evans
Cc: "Daniel M. Eischen" , freebsd-arch@freebsd.org
Subject: Re: Threads
In-Reply-To: <19991124220406.X301@sturm.canonware.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-freebsd-arch@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

On Wed, 24 Nov 1999, Jason Evans wrote:

> On Wed, Nov 24, 1999 at 09:03:29AM -0500, Daniel M. Eischen wrote:
> > Julian Elischer wrote:
> > >
> > > I think nearly all syscalls can block given page faults etc. and having
> > > all syscalls potentially return via the UTS is going to mean some change
> > > in the kernel/process protocol.
> >
> > I guess we just disagree on how the kernel is entered. I just don't see
> > why we need to change the method of entering the kernel. We just want
> > to switch to a new context (UTS) when a KSE blocks, and that can be done
> > from within the kernel without changing the method of entering the kernel.
> One of the main advantages I see to adding an asynchronous call gate (ACG)
> rather than changing the semantics of the current syscalls is that mixing
> traditional and asynchronous syscalls is very easy. I don't see this
> benefiting the threads effort much in the final product, but it does have
> the advantages of:
>
> 1) Asynchronous syscalls are usable by programs other than those that use
> threads.
>
> 2) The ability to mix and match traditional and asynchronous syscalls
> should make incremental development much less painful.

If you have a separate call gate you can start off by making all the
entries identical to the present call gate, and change them one by one as
you make them fit the new world. We continue to support the old call gate
untouched, and old programs continue to work. Mixing old and new libraries
also works, so we can start with a new threaded libc with nothing in it
and build it up a bit at a time, relying on libc to fill in the gaps.

> Toggling the style of syscalls (traditional versus asynchronous) via some
> per-process flag would be possible, but it doesn't seem as clean to me, and
> it forfeits functionality without reducing complexity.

YECH!

> > > If a blocked syscall returns, then when it returns the UTS needs to
> > > be able to decide whether it is the most important thread to continue
> > > or not. So it can't just 'return', but has to come back via the UTS.
> > > This requires that things be considerably different. At least this is
> > > how I see it.
> >
> > Right. And just because it woke up from a tsleep doesn't mean that it
> > will eventually be able to finish and return to userland. It may
> > encounter more tsleeps before leaving the kernel. The UTS needs
> > to enter the kernel in order to resume the thread. And it needs a
> > way of telling the kernel which blocked KSE to resume.
> >
> > The UTS is notified that a KSE has unblocked, but it doesn't have to
> > immediately resume it - other threads may have higher priority.
> > I think
> > we are in agreement here. I'm just advocating using the stack of the
> > UTS event handler context (in the form of parameters to the event
> > handlers) to tell the UTS that threads have blocked/unblocked in the
> > kernel. There doesn't have to be any magic/wizardry in the system
> > calling convention to do this. The kernel can return directly to the
> > predefined UTS event handlers (also on a predefined stack) and totally
> > bypass the original system call in which it entered the kernel. At some
> > point later, the UTS resumes the (now unblocked) KSE and returns the
> > same way it entered.
> >
> > You also want the ability to inform the UTS of _more_ than just one
> > event at a time. Several KSEs may unblock before a subprocess is run.
> > You should be able to notify the UTS of them all at once. How does
> > that work in your method?

The next time the subprocess (or maybe any subprocess) is run, all
returned KSEs (which are hung under the subproc, or maybe even the proc)
have their status and user context passed in to the UTS and, as they have
no purpose left, are freed. The UTS then starts up whatever thread it
wants to.

> This sounds similar to Solaris LWPs in that there are potentially KSEs
> blocked in the kernel, whereas with scheduler activations (SA), that
> doesn't happen under normal circumstances.

No, under SA KSEs block, and another KSE is generated to act as a
replacement. It is used to run the activation that is passed to the
scheduler, and this is the next thread the process runs.

> It sounds to me like the
> disagreement between you two (Daniel and Julian) is much more significant
> than what decisions are made by the UTS. Daniel, you say "The UTS is
> notified that a KSE has unblocked ...". However, if I understand the SA
> way of doing things, there is no KSE associated with a blocked syscall.

There must be. There is a stack that must be saved, and registers. This
is a KSE.
> The syscall context has some kernel context, but there is no bona fide
> context, such as with Solaris's LWP model. When the syscall completes, a
> new activation is created for the upcall to the UTS.

A KSE is basically saved processor state, including a stack, and some
linkage points that allow it to be hung off various other structures,
e.g. sleep queues, procs, subprocs, etc.

> That said, I disagree with the idea of the UTS having explicit control
> over scheduling of KSEs. I think that there should be exactly one KSE per
> processor (with the exception of PTHREAD_SCOPE_SYSTEM (bound) threads),
> and that threads should be multiplexed onto the KSEs. This lets the
> kernel schedule KSEs as it sees fit, and lets the UTS divide the runtime
> of the KSEs as it sees fit.

Exactly what we are saying, except make it exactly one RUNNING KSE per
(sub)process (and 0 or more blocked ones). You get your SMP support by
rforking subprocesses, one for each new processor/priority combination
you want to use. Each has one or more KSEs and 0 or 1 runnable KSEs.

> > I think you and I are in agreement, but having trouble saying that.
>
> I don't think you guys are in agreement, but one can hope. =)

I think this could all be sorted out with a few pictures. I'm going to
try to draw some.. :-) (gotta find a drawing tool.. hmm, need to rummage
around in ports..) Pity we can't use 'wb'.

Julian

> Jason

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message