Date:      Mon, 25 Jan 2010 14:56:32 -0500
From:      John Baldwin <jhb@freebsd.org>
To:        Attilio Rao <attilio@freebsd.org>
Cc:        svn-src-head@freebsd.org, svn-src-all@freebsd.org, src-committers@freebsd.org
Subject:   Re: svn commit: r202889 - head/sys/kern
Message-ID:  <201001251456.32459.jhb@freebsd.org>
In-Reply-To: <201001231554.o0NFsMbx049837@svn.freebsd.org>
References:  <201001231554.o0NFsMbx049837@svn.freebsd.org>

On Saturday 23 January 2010 10:54:22 am Attilio Rao wrote:
> Author: attilio
> Date: Sat Jan 23 15:54:21 2010
> New Revision: 202889
> URL: http://svn.freebsd.org/changeset/base/202889
> 
> Log:
>   - Fix a race in sched_switch() of sched_4bsd.
>     When the thread was on a sleepqueue or a turnstile, sched_lock was
>     acquired (without the aid of the td_lock interface) and td_lock was
>     dropped. This broke the locking rules for other threads wanting to
>     access the thread (via the td_lock interface) and modify its flags
>     (allowed as long as the container lock differs from the one used in
>     sched_switch).
>     To prevent this situation, td_lock is now blocked while sched_lock
>     is acquired there. [0]
>   - Merge ULE's internal function thread_block_switch() into the global
>     thread_lock_block() and make the former's semantics the default for
>     thread_lock_block(). This means that thread_lock_block() no longer
>     disables interrupts when called (and consequently
>     thread_unlock_block() no longer re-enables them). This should be
>     done manually when necessary.
>     Note, however, that ULE's thread_unblock_switch() is not reaped,
>     because it reflects a genuine semantic difference in ULE (td_lock
>     may not necessarily still be blocked_lock when it is called).
>     While asymmetric, it does mark a notable semantic difference that
>     is worth keeping in mind.
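The td_lock handoff the quoted log describes can be sketched in a simplified user-space model. The names mirror the kernel's (blocked_lock, thread_lock_block(), thread_lock_unblock()), but this is an illustrative approximation, not the real kern_mutex.c code:

```c
#include <stdatomic.h>

/* Simplified model: each thread carries a pointer to its current
 * "container" lock. While the thread is migrating between container
 * locks, td_lock is pointed at a global blocked_lock sentinel so
 * that thread_lock() callers spin until the new lock is installed. */
typedef struct mtx { int dummy; } mtx_t;

static mtx_t blocked_lock;              /* sentinel: lock in transition */

struct thread {
    _Atomic(mtx_t *) td_lock;           /* current container lock */
};

/* Point td_lock at the sentinel and return the old container lock.
 * Per the commit message, this no longer disables interrupts; the
 * caller does that itself when necessary. */
static mtx_t *
thread_lock_block(struct thread *td)
{
    mtx_t *lock;

    lock = atomic_load(&td->td_lock);
    atomic_store(&td->td_lock, &blocked_lock);
    return (lock);
}

/* Install the new container lock, ending the blocked window. */
static void
thread_lock_unblock(struct thread *td, mtx_t *new)
{
    atomic_store(&td->td_lock, new);
}
```

In this model, any other CPU that dereferences td_lock during the window sees &blocked_lock and must spin rather than operate on a lock that sched_switch() is about to abandon.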

Does this affect the various #ifdef's for handling the third argument to 
cpu_switch()?  E.g. does 4BSD need to spin if td_lock is &blocked_lock?

Also, BLOCK_SPIN() on x86 is non-optimal.  It should not do cmpxchg in a loop.  
Instead, it should do cmp in a loop, and if the cmp succeeds, then try 
cmpxchg.
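The pattern being suggested is the classic test-and-test-and-set: spin on a plain load (which keeps the cache line in shared state) and only issue the bus-locked cmpxchg once the load indicates it can succeed. A minimal user-space sketch in C11 atomics, with made-up names (this is not the kernel's BLOCK_SPIN):

```c
#include <stdatomic.h>

/* Test-and-test-and-set spinlock acquire: the inner read-only loop
 * avoids hammering the bus with failed locked operations; the
 * cmpxchg is attempted only after the plain compare succeeds. */
typedef struct { _Atomic int locked; } spinlock_t;

static void
spin_acquire(spinlock_t *sl)
{
    int expected;

    for (;;) {
        /* test: cheap read-only spin, no locked instruction */
        while (atomic_load_explicit(&sl->locked, memory_order_relaxed))
            ;       /* cpu_spinwait()/PAUSE would go here */
        /* test-and-set: one cmpxchg per plausible attempt */
        expected = 0;
        if (atomic_compare_exchange_weak_explicit(&sl->locked,
            &expected, 1, memory_order_acquire, memory_order_relaxed))
            return;
    }
}

static void
spin_release(spinlock_t *sl)
{
    atomic_store_explicit(&sl->locked, 0, memory_order_release);
}
```

Under contention this keeps the waiters reading a shared cache line instead of repeatedly taking it exclusive with failed cmpxchg attempts, which is the inefficiency being pointed out.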

-- 
John Baldwin


