Date:      Wed, 04 Jul 2001 11:49:01 -0700 (PDT)
From:      John Baldwin <jhb@FreeBSD.org>
To:        Matt Dillon <dillon@earth.backplane.com>
Cc:        cvs-all@FreeBSD.ORG, cvs-committers@FreeBSD.ORG, Jake Burkholder <jake@FreeBSD.ORG>, Matthew Jacob <mjacob@feral.com>
Subject:   Re: cvs commit: src/sys/sys systm.h condvar.h src/sys/kern kern_
Message-ID:  <XFMail.010704114901.jhb@FreeBSD.org>
In-Reply-To: <200107041838.f64Ic4V46525@earth.backplane.com>

On 04-Jul-01 Matt Dillon wrote:
>:Releasing the lock before the wakeup leaves a window open (interrupts can
>:make small windows into large ones) during which the state of the subsystem
>:can change before the wakeup is delivered, possibly resulting in a bogus
>:wakeup being sent.  However, I'm not sure that this window is actually a
>:problem, and I'm less convinced than when mwakeup() was first proposed.
> 
>     We use this trick (clear variable then wakeup and who cares if there is
>     a window there or not) all over the codebase.

If the window isn't a problem, then releasing the lock before
wakeup/cv_signal/cv_wakeup is fine.
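
For reference, the two orderings look roughly like this (a minimal sketch;
foo_mtx and foo_busy are made-up names standing in for whatever a given
subsystem actually uses):

    mtx_lock(&foo_mtx);
    foo_busy = 0;                   /* state change the sleeper tests */
    mtx_unlock(&foo_mtx);           /* the window opens here... */
    wakeup(&foo_busy);              /* ...and closes here */

versus holding the lock across the wakeup:

    mtx_lock(&foo_mtx);
    foo_busy = 0;
    wakeup(&foo_busy);              /* no window, but the woken thread */
    mtx_unlock(&foo_mtx);           /* may immediately block on foo_mtx */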

>:>     In the non-preemptive case, on an MP system, the scheduling overhead
>:>     will take far longer than it takes the original thread to release the
>:>     mutex, so it does not matter whether the mutex is released before or
>:>     after the wakeup.
>:
>:Unless you get an interrupt in between the wakeup and lock release.
> 
>     A window which has no real effect other than to cause one wakeup out
>     of several million (or billion) to delay another thread.  That, versus
>     the overhead of doing some sort of interlock on every single wakeup,
>     means you are actually making things worse with the interlock.

There is no overhead with the interlock, silly.  We have to grab the
sched_lock in wakeup() no matter what, and whether or not we hold a sleep
lock has no bearing on that at all.  We also have to release the other mutex
at some point, and releasing it takes just as long whether or not we hold
the sched_lock at the time.  What overhead?  All we are doing is changing
the location at which the lock is dropped.
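
To make that concrete, the proposed interlock amounts to something like the
following (a rough sketch only; mwakeup() doesn't exist yet, the shape is
illustrative, and it glosses over the hand-off of a contested mutex):

    void
    mwakeup(void *ident, struct mtx *m)
    {
            mtx_lock_spin(&sched_lock);     /* wakeup() takes this anyway */
            mtx_unlock(m);                  /* same unlock, new location */
            /*
             * Walk the sleep queue for ident and make the sleepers
             * runnable, exactly as wakeup() does today.
             */
            mtx_unlock_spin(&sched_lock);
    }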

>:>     My assertion is that mwakeup() is solving a problem created by
>:>     preemption in the first place, and that assertion still holds true.
>:
>:No.  If you must hold the lock across wakeup(), then you have the same
>:problem with contending on the lock in the SMP case as in the preemptive
>:case.  If you release the lock before wakeup(), then both cases have the
>:same window open during which the state of the subsystem can change between
>:the lock release and the wakeup being delivered.  Preemption is effectively
>:an SMP environment on a UP system.  The problems encountered by preemption
>:will be encountered in SMP systems anyway.
> 
>     No, if you wake up a thread and then release the mutex in the
>     non-preemptive case, even if you have multiple idle CPUs available, the
>     scheduling and switching overhead in the thread being woken up will be
>     vastly greater than the time it takes for the caller to release the
>     mutex.  So by the time the newly woken thread actually checks the
>     mutex, it will have already been released in the vast majority of
>     cases.  The cache mastership changes between the two CPUs alone are
>     probably sufficient to cover the time.

Imagine an interrupt between the wakeup and the lock release.  That would
give another CPU plenty of time to grab the woken process and then block on
the lock.
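
Spelled out (same hypothetical foo_mtx/foo_busy names as above):

    /*
     * CPU 0 (waker)                      CPU 1
     *
     * mtx_lock(&foo_mtx);
     * foo_busy = 0;
     * wakeup(&foo_busy);                 woken thread is put on a CPU,
     * <interrupt arrives on CPU 0>       calls mtx_lock(&foo_mtx), blocks
     * <handler runs for a while>
     * mtx_unlock(&foo_mtx);              woken thread finally gets the lock
     */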

>                                                   -Matt

-- 

John Baldwin <jhb@FreeBSD.org> -- http://www.FreeBSD.org/~jhb/
PGP Key: http://www.baldwin.cx/~john/pgpkey.asc
"Power Users Use the Power to Serve!"  -  http://www.FreeBSD.org/
