From owner-freebsd-smp  Wed Jul 16 13:58:52 1997
Return-Path:
Received: (from root@localhost) by hub.freebsd.org (8.8.5/8.8.5)
	id NAA07333 for smp-outgoing; Wed, 16 Jul 1997 13:58:52 -0700 (PDT)
Received: from Ilsa.StevesCafe.com (Ilsa.StevesCafe.com [205.168.119.129])
	by hub.freebsd.org (8.8.5/8.8.5) with ESMTP id NAA07279;
	Wed, 16 Jul 1997 13:57:59 -0700 (PDT)
Received: from Ilsa.StevesCafe.com (localhost [127.0.0.1])
	by Ilsa.StevesCafe.com (8.8.6/8.8.5) with ESMTP id OAA09649;
	Wed, 16 Jul 1997 14:25:35 -0600 (MDT)
Message-Id: <199707162025.OAA09649@Ilsa.StevesCafe.com>
X-Mailer: exmh version 2.0gamma 1/27/96
From: Steve Passe
To: smp@freebsd.org
cc: Peter Wemm, dyson@freebsd.org
Subject: pushdown of "giant lock"
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Date: Wed, 16 Jul 1997 14:25:34 -0600
Sender: owner-smp@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

Hi,

DESIGN PROPOSAL:

The following documents a 1st draft proposal for achieving finer-grained
locking.  Please comment, paying particular attention to the pseudo-code
I propose for doing it.

----------------------------------- cut -----------------------------------

Proposed 1st cut at "giant lock" pushdown.

A design proposal for the 1st step in pushing down the "giant lock" (GL)
to achieve finer-grained locking.

---
Create 3 basic get/rel lock function pairs from the current get/rel_mplock(),
one for each of the 3 basic kernel entry types:

 - getISR_mplock/relISR_mplock:   interrupt routines
 - getSYS_mplock/relSYS_mplock:   system calls
 - getTRAP_mplock/relTRAP_mplock: exceptions, traps, etc.

For the 1st step, each of these 3 variations will behave identically.
Their purpose is to achieve a LOGICAL differentiation.  Eventually they
will diverge as their needs dictate.  (A sketch of these wrappers appears
below, before the pseudo-code.)

---
Modify the getISR_mplock/relISR_mplock routines to deal with "MP-safe" and
"MP-unsafe" ISRs:

The basic idea is to label each ISR as MP-unsafe by default, and let
individual ISRs declare themselves to be MP-safe if appropriate.  Modify
the getISR_mplock() routine to allow one-and-only-one MP-unsafe ISR into
the kernel at once, but all MP-safe ISRs in at the same time.  If any
MP-safe ISR is in the kernel, all MP-unsafe ISRs are blocked.  If any
MP-unsafe ISR (or SYS/TRAP routine) is in the kernel, all other
ISR/SYS/TRAP routines are blocked.

As SYS/TRAP routines are made MP-safe, they will be modified to allow them
to co-exist in the kernel in a similar manner.

---
The current code:

Review i386/i386/mplock.s for details of the current code.  The general
algorithm is:

 - a free lock is 0xffffffff
 - an 'owned' lock is 0xID00000n, where ID is the LOGICAL id of the CPU
   running the process and n is the lock count.  The count allows the lock
   to be recursive, eg. a process holding the lock can catch a page fault
   and reacquire the lock while handling it.
 - first, attempt to get a lock == current lock + 1 | cpuid (ie. the
   process's CPU id), the recursive case
 - failing that, attempt to get a free lock
 - failing that, wait for the lock to become free, then goto step 2
   (get free lock)

---
Modify the lock as follows:

Create a third value-class for the lock, the MP-safe lock.  It is
represented by the value 0xff0xxxxn, where the 1st 3 digits, ff0, mark the
MP-safe class type.  This pattern is distinct from both a free lock and an
MP-unsafe GL.
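For the 1st step the wrappers referred to above can be nothing more than
thin veneers over the existing routines.  A minimal sketch in C (the
void/void signatures are an assumption here; only get_mplock()/rel_mplock()
exist today, in i386/i386/mplock.s):

    /* existing giant-lock primitives */
    extern void get_mplock(void);
    extern void rel_mplock(void);

    /* 1st-step wrappers: logically distinct entry points that all map
       onto the same giant lock for now, and can diverge later */
    void getISR_mplock(void)  { get_mplock(); }  /* interrupt routines */
    void relISR_mplock(void)  { rel_mplock(); }

    void getSYS_mplock(void)  { get_mplock(); }  /* system calls */
    void relSYS_mplock(void)  { rel_mplock(); }

    void getTRAP_mplock(void) { get_mplock(); }  /* exceptions, traps, etc. */
    void relTRAP_mplock(void) { rel_mplock(); }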
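The value classes, and the isMPsafe() test that the pseudo-code below
depends on, could be sketched in C roughly as follows.  The MP_* names,
the MP_ISSAFECLASS() macro, the mpsafe_isrs bitmap and declareMPsafe()
are only illustrative here, not existing code.  Note that the class test
masks all 3 leading digits (0xfff00000) so that a free lock does not
match it:

    #define MP_FREELOCK   0xffffffffU  /* free lock                        */
    #define MP_SAFEMASK   0xfff00000U  /* top 3 digits select the class    */
    #define MP_SAFECLASS  0xff000000U  /* 0xff0xxxxn: held by MP-safe ISRs */
    #define MP_SAFECOUNT  0x000fffffU  /* count of MP-safe holders         */

    /* does a lock value belong to the MP-safe class? */
    #define MP_ISSAFECLASS(v)  (((v) & MP_SAFEMASK) == MP_SAFECLASS)

    /* one bit per interrupt source; every ISR is MP-unsafe by default */
    static unsigned int mpsafe_isrs;   /* assumes <= 32 interrupt sources */

    void
    declareMPsafe(int intnum)          /* an MP-safe driver calls this */
    {
            mpsafe_isrs |= (1U << intnum);
    }

    int
    isMPsafe(int intnum)
    {
            return ((mpsafe_isrs & (1U << intnum)) != 0);
    }

The matching relISR_mplock() for the MP-safe class would presumably just
decrement the count field and restore 0xffffffff when the last MP-safe ISR
leaves the kernel.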
getISR_mplock now does:

    if ( isMPsafe( intnum ) == FALSE ) {
        /* we are an MP-unsafe ISR */
        useOldStyleCode();                  /* this blocks if NOT free */
    }
    else {
        /* we are an MP-safe ISR */
        if ( (mp_lock & 0xfff00000) == 0xff000000 ) {
            /* lock held by MP-safe routine(s); a free lock (0xffffffff)
               does NOT match this mask and falls through below */
            newval = (mp_lock & 0x000fffff) + 1;
            newval |= 0xff000000;           /* add MP-safe class tag */
            /* compete for lock */
        }
        else if ( mp_lock == 0xffffffff ) {
            /* lock is free */
            newval = 0xff000000 + 1;        /* 1st MP-safe lock */
            /* compete for free lock */
        }
        else {
            /* lock must be held by an MP-unsafe routine */
            /* wait for the lock to become free, then compete for it */
        }
    }

---
Summary:

The object is to allow both MP-safe and MP-unsafe ISR (and SYS/TRAP)
routines to co-exist while we transition to a fully MP-safe system.  The
theory is that you can allow only 1 MP-unsafe routine in the kernel at any
one instant, but multiple MP-safe routines (IF no MP-unsafe routine is
already in the kernel).  The above modifications are a proposal for
achieving that goal.

---
Related issues:

We will probably always have some MP-unsafe drivers, as new drivers will
usually first be coded UP.  But once the majority of drivers are MP-safe,
we can change the build logic to cause a default SMP kernel to panic if an
MP-unsafe ISR is called (during probe, perhaps).  If a user needs to link
in an MP-unsafe ISR he can invoke a config option removing this behaviour.
Then we can remove the overhead of handling MP-unsafe routines from most
SMP kernels.

----------------------------------- cut -----------------------------------

--
Steve Passe                   | powered by
smp@csn.net                   |            Symmetric MultiProcessor FreeBSD