Date:      Wed, 26 Aug 1998 14:06:42 +0200
From:      Martin Cracauer <cracauer@cons.org>
To:        Mike Smith <mike@smith.net.au>, Michael Hancock <michaelh@cet.co.jp>
Cc:        Gary Palmer <gpalmer@FreeBSD.ORG>, Chuck Robey <chuckr@glue.umd.edu>, freebsd-current@FreeBSD.ORG
Subject:   Re: Threads across processors
Message-ID:  <19980826140642.A20511@cons.org>
In-Reply-To: <199808251338.NAA02533@dingo.cdrom.com>; from Mike Smith on Tue, Aug 25, 1998 at 01:38:18PM +0000
References:  <Pine.SV4.3.95.980826000204.19157C-100000@parkplace.cet.co.jp> <199808251338.NAA02533@dingo.cdrom.com>

In <199808251338.NAA02533@dingo.cdrom.com>, Mike Smith wrote: 
> > On Tue, 25 Aug 1998, Gary Palmer wrote:
> > 
> > > Heck, SMI wrote `doors' for the very reason that IPC *blows* in all cases, and 
> > > that to pull off the speedups with NSCD that they wanted, they had to get the 
> > > IPC overhead reduced a lot. I think I even have slides somewhere comparing 
> > > pipes, SYSV SHM, etc times for message passing in terms of transit time.
> > 
> > Our pipes are very fast.  SYSV SHM's blunder is that it uses full blown
> > system calls for synchronization.

Ahem, and pipes don't require full-blown system calls to send and
receive notifications, and kernel rescheduling before anything happens
after a message is sent?
 
> Yes.  Anyone that thinks in terms of a context switch per transaction 
> between coprocesses is not designing properly.  

For your amusement, I have appended a message I once forwarded to
-hackers, regarding the mapping of userlevel threads to
kernel-schedulable entities.

But in a way this is like sendfile and other combined system calls:
has anyone actually gathered data on how much slower a
one-process-per-thread model is? For any application?

> Using a shared mmap() 
> region and datastructures that don't require locking is another 
> cost-effective technique.

I'm afraid I have to count this as a *very* cheap shot :-)
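
Cheap shot or not, the technique itself is sound, so here is a minimal
sketch of how I read it, assuming a single producer and a single
consumer. The code and all names in it are mine, not Mike's, and the
C11-style atomics stand in for whatever memory barriers a given
compiler and CPU actually require: one writer and one reader share a
lock-free ring buffer in an anonymous shared mmap() region, so the
fast path needs no locks and no system calls at all (this toy version
spins instead of sleeping when the ring is empty or full):

/* spsc.c - one writer, one reader sharing a lock-free ring buffer
 * in an anonymous shared mmap() region.  No locks, and no system
 * calls on the fast path. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <stdatomic.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define SLOTS  1024             /* must be a power of two */
#define ROUNDS 100000

struct ring {
    _Atomic unsigned head;      /* next slot the writer fills */
    _Atomic unsigned tail;      /* next slot the reader drains */
    int data[SLOTS];
};

int main(void)
{
    struct ring *r = mmap(NULL, sizeof *r, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANON, -1, 0);
    int i;

    if (r == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    if (fork() == 0) {          /* child: consumer */
        for (i = 0; i < ROUNDS; i++) {
            unsigned t = atomic_load(&r->tail);
            while (atomic_load(&r->head) == t)
                ;               /* ring empty: spin */
            if (r->data[t % SLOTS] != i)
                fprintf(stderr, "out of order at %d\n", i);
            atomic_store(&r->tail, t + 1);
        }
        printf("consumer read %d values in order\n", ROUNDS);
        exit(0);
    }
    for (i = 0; i < ROUNDS; i++) {  /* parent: producer */
        unsigned h = atomic_load(&r->head);
        while (h - atomic_load(&r->tail) == SLOTS)
            ;                   /* ring full: spin */
        r->data[h % SLOTS] = i;
        atomic_store(&r->head, h + 1);
    }
    wait(NULL);
    return 0;
}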

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <cracauer@cons.org> http://www.cons.org/cracauer
BSD User Group Hamburg, Germany     http://www.bsdhh.org/

----- Appended message -----

I wrote:

> > I still like the simplicity of a kernel-only thread solution. If
> > that way turns out to be too inefficient, the DEC way seems to be a
> > solution that doesn't need async system calls and has no efficiency
> > disadvantage I can see (compared to a system with async syscalls
> > only).
> > 
> > I hope to get further details on the DEC implementation.

Here's what Dave Butenhof <butenhof@zko.dec.com> told me about DEC's
interface. Dave implements the userlevel part of DEC's thread
interface. This answer is a bit out of context; if you need the
previous discussion, please let me know.

My original question was how blocking syscalls are handled. In Digital
Unix, the kernel reports a blocking syscall in one thread back to the
userlevel library, which reschedules another thread on that "kernel"
thread. I asked for further details on how a userlevel library can get
rid of an already-blocked syscall, and here's what I heard:

> ~Message-Id: <32A5797C.6231@zko.dec.com>
> ~Date: Wed, 04 Dec 1996 08:15:40 -0500
> ~From: Dave Butenhof <butenhof@zko.dec.com>
> ~Organization: Digital Equipment Corporation
> ~To: cracauer@wavehh.hanse.de
> ~Cc: "Butenhof, Dave" <butenhof@zko.dec.com>
> ~Subject: Re: Blocking syscall handling in one-to-many thread implementations
> 
> [...]
> 
> [This was my, Martin's, question]
> > But how exactly is rescheduling on your KECs done? If a KEC is waiting
> > in a blocking syscall, how can the userlevel scheduler reassign it? How
> > can the userlevel library free it from the syscall?
> > 
> > And what happens to the syscall? Is it translated into a non-blocking
> > version, with the kernel informing the userlevel scheduler when it
> > completes?
> 
> I was trying to describe what happens from YOUR perspective, more than
> what actually happens in the thread library & kernel. The internals are,
> as always, a little more complicated.
> 
> The thread library maintains a constant pool of (up to) <n> KECs, where
> <n> is normally set by the number of physical processors in the system.
> (It may be smaller, if your process is locked into a processor set.)
> These are the "virtual processors" (VPs). The thread library schedules
> user threads on the pool of VPs, trying to keep all <n> of them busy. If
> you don't have enough user threads to keep that many VPs busy, they may
> not all get started, or VPs already running may go into the "null
> thread" loop and idle -- which returns the KEC to the kernel's pool for
> reuse. We'll get it back if we need it later.
> 
> When a thread blocks in the kernel, the KEC stays in the kernel, but it
> gives us an upcall in a *new* (or recycled) KEC to replace the VP. When
> the blocking operation finishes, the kernel gives us a completion upcall
> in the original KEC. It's no longer a VP, so we just save the context
> and dismiss it.
> 
> The key is the distinction between "KEC" and "VP". There may be 100 KECs
> attached to a process, but, on a typical quad-CPU system, only (up to) 4
> of them at any time are "VPs". The rest are just holding kernel context
> across some blocking operation. Whereas, in a strict kernel-mode
> implementation, each user thread is a KEC, we have a KEC only for each
> running thread and each thread blocked in the kernel. The number of KECs
> will fluctuate -- and, if you hit your quota for KECs (Mach threads),
> any additional kernel blocking operations will start to lower
> concurrency. The most common blocking operations, mutexes & condition
> variables (and some others), however, are completely in user mode.
> 
> We're going to continue streamlining the process as we go (like moving
> some kernel blocking states out into user mode, and reducing the context
> that the kernel needs to keep below a full KEC), but, in general, it
> works very well. The kernel developer and I (I work mostly in the
> library) have kicked around the idea of doing a paper on the design.
> Mostly, I've been too busy with a book for a ridiculously long time, and
> the development requirements never stop. Maybe some day. Possibly once
> we've gone through a full release cycle and have the architecture
> stabilized better.
> 
> /---[ Dave Butenhof ]-----------------------[ butenhof@zko.dec.com ]---\
> | Digital Equipment Corporation           110 Spit Brook Rd ZKO2-3/Q18 |
> | 603.881.2218, FAX 603.881.0120                  Nashua NH 03062-2698 |
> \-----------------[ Better Living Through Concurrency ]----------------/

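To make Dave's description concrete, here is how the two upcalls might
look from the library's side. This is my own sketch, not DEC's real
interface; every name in it (vp_block_upcall, vp_unblock_upcall, and
the stub scheduler hooks) is invented for illustration:

/* upcalls.c - my sketch of the library side of the two upcalls Dave
 * describes.  Every name here is invented; this is NOT the real
 * Digital Unix interface, just the shape of the mechanism. */
#include <stdio.h>

struct thread { int id; int blocked; };  /* a user-level thread */
struct kec    { int id; };               /* kernel execution context */

/* Stub scheduler hooks; a real library keeps run queues here. */
static struct thread *sched_pick_next(void)
{
    static struct thread next = { 42, 0 };
    return &next;
}
static void run_on_current_kec(struct thread *t)
{
    printf("VP now runs thread %d\n", t->id);
}

/* Upcall 1: thread t just blocked in the kernel.  Its old KEC stays
 * in the kernel holding the syscall; we are called on a *new* KEC
 * that replaces the lost virtual processor, so we immediately pick
 * another thread to keep the CPU busy. */
void vp_block_upcall(struct thread *t)
{
    t->blocked = 1;
    run_on_current_kec(sched_pick_next());
}

/* Upcall 2: the blocking operation finished.  We are called on the
 * original KEC, which is no longer a virtual processor: save the
 * thread's context, mark it runnable, and dismiss the KEC back to
 * the kernel's pool. */
void vp_unblock_upcall(struct kec *k, struct thread *t)
{
    t->blocked = 0;             /* runnable again; a VP will pick it up */
    printf("KEC %d saved context of thread %d, dismissed\n",
           k->id, t->id);
}

int main(void)                  /* walk through one block/unblock cycle */
{
    struct thread t = { 7, 0 };
    struct kec    k = { 1 };

    vp_block_upcall(&t);        /* t blocked; new KEC keeps the VP busy */
    vp_unblock_upcall(&k, &t);  /* t's syscall finished; KEC retires */
    return 0;
}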
