From owner-cvs-all Thu Oct  1 07:21:26 1998
Return-Path:
Received: (from daemon@localhost)
	by hub.freebsd.org (8.8.8/8.8.8) id HAA25475
	for cvs-all-outgoing; Thu, 1 Oct 1998 07:21:26 -0700 (PDT)
	(envelope-from owner-cvs-all)
Received: from Kitten.mcs.com (Kitten.mcs.com [192.160.127.90])
	by hub.freebsd.org (8.8.8/8.8.8) with ESMTP id HAA25468;
	Thu, 1 Oct 1998 07:21:20 -0700 (PDT)
	(envelope-from nash@Venus.mcs.net)
Received: from Venus.mcs.net (nash@Venus.mcs.net [192.160.127.92])
	by Kitten.mcs.com (8.8.7/8.8.2) with ESMTP id JAA15745;
	Thu, 1 Oct 1998 09:21:04 -0500 (CDT)
Received: (from nash@localhost)
	by Venus.mcs.net (8.8.7/8.8.2) id JAA08254;
	Thu, 1 Oct 1998 09:21:01 -0500 (CDT)
Message-ID: <19981001092101.B7057@mcs.net>
Date: Thu, 1 Oct 1998 09:21:01 -0500
From: Alex Nash
To: John Birrell
Cc: cvs-committers@FreeBSD.ORG, cvs-all@FreeBSD.ORG
Subject: Re: cvs commit: src/lib/libc_r/uthread uthread_gc.c Makefile.inc uthread_init.c uthread_find_thread.c uthread_kern.c uthread_cre
Mail-Followup-To: John Birrell, cvs-committers@FreeBSD.ORG, cvs-all@FreeBSD.ORG
References: <19980930192221.S9697@pr.mcs.net> <199810010202.MAA07069@cimlogic.com.au>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-Mailer: Mutt 0.93.2i
In-Reply-To: <199810010202.MAA07069@cimlogic.com.au>; from John Birrell on Thu, Oct 01, 1998 at 12:02:33PM +1000
Sender: owner-cvs-all@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

On Thu, Oct 01, 1998 at 12:02:33PM +1000, John Birrell wrote:
> > What are the semantics that allow these functions to obtain the spinlock
> > and not release it before returning?
>
> If that were to happen, it would show up a bug in either the malloc/realloc/
> free code or a function called from within that code. The check that phk has
> in his code simply allows him to find when his code is broken by other
> people's changes. If you ever see the recursion warning, someone has
> broken the code. So going to great pains to get the locking semantics
> right for broken code is a waste of time.

The only reason the lock-and-return-without-unlocking code is correct in
the recursive case is that a single unlock releases all previous locks.
This is dependent on the spinlock implementation (something malloc()
shouldn't need to know).

> > > The malloc/free/
> > > realloc functions check for recursion within the malloc code itself. In
> > > a thread-safe library, the single spinlock ensures that no two threads
> > > go inside the protected code at the same time.
> >
> > It doesn't prevent the *same* thread from being inside the protected
> > code more than once (e.g. due to a signal).
>
> The *same* thread will lock against itself. A deadlock. That's a bug.

I wish this predictable behavior were true.  The spinlock code allows
recursive locking by the same thread (libc_r/uthread/uthread_spinlock.c):

	while (_atomic_lock(&lck->access_lock)) {
		/* Give up the time slice: */
		sched_yield();

		/* Check if already locked by the running thread: */
		if (lck->lock_owner == (long) _thread_run)
			return;
	}

> The behaviour of malloc/realloc/free is the same whether threaded or not.
> The thread implementation has to behave in such a way that it doesn't
> allow the same thread to go back into the malloc/realloc/free code if it
> has already taken the lock. The only time this would occur is if the
> thread kernel was trying to go in there. I've moved that code so that it
> doesn't need to do that anymore.

There still appears to be a problem, regardless of the thread kernel.
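To make the concern concrete, here is a minimal stand-alone sketch (this is
not the libc_r or malloc code; every name in it is invented) of how a
non-counting, owner-aware lock lets a nested lock/unlock pair release the
lock out from under the outer critical section:

	#include <stdio.h>

	struct fake_spinlock {
		int	locked;		/* 0 = free, 1 = held */
		long	owner;		/* id of the holder, 0 if none */
	};

	static struct fake_spinlock lock;
	static long self = 1;		/* stand-in for _thread_run */

	static void
	fake_lock(struct fake_spinlock *l)
	{
		/* Recursive acquisition by the current owner is a silent no-op. */
		if (l->locked && l->owner == self)
			return;
		/* (A real spinlock would loop on an atomic test-and-set here.) */
		l->locked = 1;
		l->owner = self;
	}

	static void
	fake_unlock(struct fake_spinlock *l)
	{
		/* One unlock frees the lock, no matter how many lock calls preceded it. */
		l->locked = 0;
		l->owner = 0;
	}

	static void
	nested_malloc_like_call(void)
	{
		fake_lock(&lock);	/* same "thread": returns immediately */
		/* ... allocator work ... */
		fake_unlock(&lock);	/* releases the lock completely */
	}

	int
	main(void)
	{
		fake_lock(&lock);
		/* Imagine a signal arriving here that re-enters the allocator: */
		nested_malloc_like_call();
		/* Back in the outer critical section, but the lock is already gone. */
		printf("lock still held in outer section: %s\n",
		    lock.locked ? "yes" : "no");	/* prints "no" */
		fake_unlock(&lock);
		return 0;
	}

In other words, the outer caller only stays correct because nothing else can
run on this lock in the meantime, which is exactly the spinlock
implementation detail malloc() shouldn't have to rely on.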
I think we first need to agree/disagree on whether or not a single thread
can _SPINLOCK multiple times.

> > Because our spinlock allows a single thread to recursively obtain the
> > lock without counting, it's conceivable that the following scenario
> > might occur:
>
> It doesn't. A thread can't tell from the spinlock value if _it_ locked the
> lock or another thread did.

I don't follow.  In the threaded library, _SPINLOCK calls _spinlock, which
does check the lock owner and acts differently depending on which thread
(if any) currently owns the lock.

Alex
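For comparison, a counting variant (again just an illustrative sketch, not
the libc_r code and not a proposal for the tree; all names are made up) in
which each unlock only releases the lock when the nesting count drops back
to zero, so nested lock/unlock pairs balance without malloc() knowing
anything about the lock internals:

	#include <stdio.h>

	struct counted_spinlock {
		int	count;		/* nesting depth, 0 = free */
		long	owner;		/* id of the current holder */
	};

	static void
	counted_lock(struct counted_spinlock *l, long self)
	{
		if (l->count > 0 && l->owner == self) {
			l->count++;	/* recursive acquisition: just count it */
			return;
		}
		/* (A real lock would spin on an atomic test-and-set here.) */
		l->owner = self;
		l->count = 1;
	}

	static void
	counted_unlock(struct counted_spinlock *l)
	{
		if (--l->count == 0)
			l->owner = 0;	/* only now is the lock really free */
	}

	int
	main(void)
	{
		struct counted_spinlock l = { 0, 0 };

		counted_lock(&l, 1);	/* outer malloc-like entry */
		counted_lock(&l, 1);	/* nested entry, e.g. via a signal */
		counted_unlock(&l);	/* inner unlock: lock is still held */
		printf("after inner unlock, count = %d\n", l.count);	/* 1 */
		counted_unlock(&l);	/* outer unlock: now it is free */
		printf("after outer unlock, count = %d\n", l.count);	/* 0 */
		return 0;
	}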