From: Terry Lambert
Subject: Re: KSE threading support (first parts)
To: bright@wintelcom.net (Alfred Perlstein)
Date: Sat, 28 Apr 2001 03:37:50 +0000 (GMT)
Cc: tlambert@primenet.com (Terry Lambert), arch@FreeBSD.ORG, terry@lambert.org
In-Reply-To: <20010427160607.M18676@fw.wintelcom.net> from "Alfred Perlstein" at Apr 27, 2001 04:06:07 PM
Message-Id: <200104280337.UAA29358@usr08.primenet.com>

> The way I envision async call gates is something like each syscall
> could borrow a spare pcb and do an rfork back into the user
> application using the borrowed pcb, allowing the syscall to proceed
> as scheduled as another kernel thread; upon return it would somehow
> notify the process of completion.

Close.  Effectively, it uses the minimal amount of call context it
can get away with, and points the VM space and other stuff back to
the process control block, which is shared among all system calls.

Some calls, which will never block, return immediately without
grabbing a context... they just return the status into the per-call
status block in user space, as if they had completed asynchronously.

Other calls, which may block, run to the point where they would
block, allocate a context then, and return.  If they don't end up
blocking, they return like a non-blocking call.

Calls which will always block, or which may block and get to the
point where a return would be too complicated, allocate a context
and return.

The context is used by the kernel to continue processing.  It
contains the address of the user space status block, as well as a
copy of the stack of the returning program (think of the one that
continues as having done a "setjmp", with the one doing the return
as getting a "longjmp", where the code it would have run is skipped).

The final part is that the context runs to completion at the user
space boundary; since the call has already returned, it does not
return to user space.  Instead, it stops at the user/kernel boundary,
after copying out the completion status into the user space status
block.

The status block is a simplified version of the aioread/aiowrite
status block (a sketch appears below).  A program can just use these
calls directly; it can also set a flag to make a call synchronous
(as in an aiowait).  Finally, a user space threads scheduler can use
the completion notifications to make scheduling decisions.

For SMP, you can state that you have the ability to return into user
space (e.g. similar to vfork/sfork) multiple times.  Each of these
represents a "scheduler reservation", where you reserve the right to
compete for a quantum.

You can also easily implement "negafinity" (negative affinity) for
up to 32 processors with three 32-bit unsigned ints in the process
block: just don't reserve on a processor whose bit is already set,
until you have reserved on all available processors at least once
(see the second sketch below).
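To make the mechanism concrete, here is a rough sketch of what the
per-call status block and a wrapper for one call might look like.
Everything here is illustrative: "ascall_status", "ascall_read", and
"ASCALL_SYNC" are made-up names, and the layout is only a guess
modeled on a simplified aio control block, per the description above.

	#include <stddef.h>
	#include <stdint.h>

	/*
	 * Hypothetical per-call status block, a simplified version
	 * of the aioread/aiowrite status block.
	 */
	struct ascall_status {
		volatile int	as_done;	/* kernel sets nonzero at completion */
		int		as_error;	/* errno-style completion status */
		intptr_t	as_retval;	/* system call return value */
	};

	#define	ASCALL_SYNC	0x01	/* make the call synchronous (aiowait-style) */

	/*
	 * Imagined wrapper: issue a read through the async call gate.
	 * It returns immediately; if the call blocks in the kernel,
	 * the allocated context runs it to completion and copies the
	 * result into *st at the user/kernel boundary.
	 */
	int	ascall_read(int fd, void *buf, size_t len,
		    struct ascall_status *st, int flags);

A caller, or a user space threads scheduler acting on its behalf,
would issue the call, go do other useful work, and reap the status
block once as_done goes nonzero.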
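And a sketch of the negafinity rule.  Again, this is only a guess at
how the three 32-bit words might be laid out (the post above does not
say), and all names are made up:

	#include <stdint.h>

	/* Hypothetical per-process negafinity state: three 32-bit words. */
	struct negafinity {
		uint32_t	na_avail;	/* CPUs present, one bit each */
		uint32_t	na_used;	/* CPUs reserved this pass */
		uint32_t	na_last;	/* last CPU picked (hint) */
	};

	/*
	 * Pick a CPU for the next scheduler reservation, refusing any
	 * CPU whose bit is already set until all available CPUs have
	 * been reserved at least once.
	 */
	static int
	negafinity_pick(struct negafinity *na)
	{
		uint32_t free = na->na_avail & ~na->na_used;
		int cpu;

		if (free == 0) {
			/* Every available CPU used once; start a new pass. */
			na->na_used = 0;
			free = na->na_avail;
		}
		for (cpu = 0; cpu < 32; cpu++) {
			if (free & (1U << cpu)) {
				na->na_used |= 1U << cpu;
				na->na_last = (uint32_t)cpu;
				return (cpu);
			}
		}
		return (-1);	/* no CPUs available at all */
	}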
> > My ideal implementation would use async call gates.  In effect,
> > this would be the same as implementing VMS ASTs in all of FreeBSD.
>
> Actually, why not just have a syscall that turns on the async
> behavior?

Libc will break.  It does not expect to have to reap completed
system call status blocks to report completion status to the user
program.

> > In any case, you and Nate are getting upset at shortcuts that
> > people want to take in implementation, not at the design itself.
> >
> > Cut it out.
>
> Well, if we have an implementation where the implementers are
> unwilling or incapable (because of time constraints, or getting hit
> by a bus, etc.) of doing the more optimized version, then what's the
> point besides getting more I/O concurrency?  I don't know; it's just
> that if someone has a terrific idea that seems to have astounding
> complexity, and they don't feel like they want to or can take the
> final step with it, then it really should not be considered.

The point of threads was to reduce context switch overhead, and to
increase the useful work that actually gets done in any given time
period, as opposed to spending cycles on system overhead, or spinning
while waiting for a call to complete when you have other, better work
to do.

Somewhere along the way, it became corrupted into a tool to allow
people without very much clue to write programs with one thread per
connection, instead of building finite state automata; that
corruption has proceeded until now it's a tool to get SMP
scalability.

> btw, I've read some on scheduler activations; where are some
> references on async call gates?

You're talking to the originator of the idea.  See the -arch
archives.

					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.