Date:      Wed, 02 Jan 2002 16:42:53 -0800 (PST)
From:      John Baldwin <jhb@FreeBSD.org>
To:        Matthew Dillon <dillon@apollo.backplane.com>
Cc:        arch@FreeBSD.ORG, Bernd Walter <ticso@cicely8.cicely.de>, Mike Smith <msmith@FreeBSD.ORG>, Bruce Evans <bde@zeta.org.au>, Michal Mertl <mime@traveller.cz>, Peter Jeremy <peter.jeremy@alcatel.com.au>
Subject:   Re: When to use atomic_ functions? (was: 64 bit counters)
Message-ID:  <XFMail.020102164253.jhb@FreeBSD.org>
In-Reply-To: <200201030024.g030Oip60860@apollo.backplane.com>


On 03-Jan-02 Matthew Dillon wrote:
>:Note that critical sections don't impose locking, right now they just disable
>:interrupts on the local CPU.  Eventually they will also prevent preemptions
>:for
>:any setrunqueue's done inside a critical section and defer the switches until
>:the critical section is exited.  If you pin processes/threads to CPU's when
>:they get interrupted so they resume on the same CPU and only migrate at
>:setrunqueue(), then you still might need to disable interrupts if your update
>:of a per-CPU variable isn't atomic since when you return to the thread, it
>:might do a modify-write of a stale variable.  Think of an interrupt handler
>:being interrupted by another interrupt.  Thus, I think it would still be wise
>:to disable interrupts for per-CPU stuff.  At least, for ones that can be
>:modified by interrupt handlers.  Also, per-thread counters don't need
>:locking.
> 
>     But if it is protected by a mutex, and an interrupt occurs while you
>     hold the mutex, the interrupt thread will not be able to run (or
>     at least will wind up blocking while getting the mutex) until you release
>     your mutex, at which point your modifications have been synchronized out
>     (releasing the mutex ensures this).

Yes, you can use a mutex and that will work fine.  If the data you are
protecting is per-CPU, then to prevent migration in the current model you
still need to use a critical section.

>     The critical section stuff would be more palatable if it weren't so
>     expensive.  Couldn't we just have a per-cpu critical section count
>     and defer the interrupt?  (e.g. like the deferred mechanism we used for
>     spl()s).  Then we would have an incredibly cheap mechanism for accessing
>     per-cpu caches (like per-cpu mbuf freelists, for example) which could
>     further be adapted for use by zalloc[i]() and malloc().

Err, it does maintain a count right now, and it only does the actual change of
interrupt state when entering and exiting critical sections at the top level.
sti and cli aren't very expensive, though.  Critical sections are more
expensive on other archs: for example, it's a PAL call on alpha (although
spl's were PAL calls on alpha as well), and on ia64 it's a single instruction,
but one that requires a stop.

Actually, we could allow interrupts in most critical sections.  The only
critical sections that really need to disable interrupts are those used in
spin locks shared with bottom-half code (namely, the locks in the sio family
of drivers and the scheduler lock used to schedule interrupt threads).  In
theory, most other critical sections need only tweak their per-thread nesting
count (which doesn't need a lock).  This has been sitting in the back of my
mind for a while, but I want to figure out how to do it cleanly.

One idea is to have spin locks use slightly different versions of the
critical enter/exit calls that disable interrupts when needed and enable them
again when appropriate.  Right now the critical_enter/exit code assumes that
it needs to enable interrupts (or disable them, for that matter) when crossing
nesting level 0.  What I could do is add a per-thread variable to mark the
first nesting level at which we were actually told to disable interrupts.  We
could then have that level default to -1 (as a special token meaning
interrupts haven't been disabled yet); when entering a critical section for a
spin lock, if the per-thread first-disabled nesting level (it needs a better
name) is -1, we disable interrupts, save the state returned by
cpu_critical_enter(), and restore it in the matching critical_exit().  This
would require that one not interlock critical_exit() with spin locks; i.e.,
critical_enter() / mtx_lock_spin() / critical_exit() would break under this
algorithm, though it doesn't break right now.  I'm ok with that limitation,
though.

This won't change the critical section API for consumers other than spin
locks; it just needs a different critical_enter() for spin locks.  Hmm, OTOH,
I could fix that last case by using a separate nesting count (rather than a
first nesting level) for the interrupt-disabled critical_enter/exits.  That's
a bit too complicated, though, I think.  Anyway, this can fit into the current
API under the covers as an optimization later on.

-- 

John Baldwin <jhb@FreeBSD.org>  <><  http://www.FreeBSD.org/~jhb/
"Power Users Use the Power to Serve!"  -  http://www.FreeBSD.org/
