From owner-freebsd-threads@FreeBSD.ORG Wed Dec 21 18:46:28 2011
From: John Baldwin <jhb@freebsd.org>
To: freebsd-threads@freebsd.org
Date: Wed, 21 Dec 2011 13:44:51 -0500
Subject: Re: [patch] Fix unchecked atomic_cmpset when unlocking mutex
Message-Id: <201112211344.51792.jhb@freebsd.org>
In-Reply-To: <20111220222617.GC75886@stack.nl>
References: <20111220222617.GC75886@stack.nl>

On Tuesday, December 20, 2011 5:26:17 pm Jilles Tjoelker wrote:
> When unlocking a contested mutex that is not PI or PP, the mutex is made
> available to new lockers sooner by unlocking it in userland and only
> calling into the kernel to wake sleeping threads. This seems to make sense.
>
> Although the atomic_cmpset here should never fail, because this thread
> owns the mutex and other threads may only set UMUTEX_CONTESTED, which is
> already set, it seems prudent to check for it anyway.
>
> If the atomic_cmpset fails, use the slow path that unlocks in the
> kernel.
>
> Index: lib/libthr/thread/thr_umtx.c
> ===================================================================
> --- lib/libthr/thread/thr_umtx.c	(revision 228504)
> +++ lib/libthr/thread/thr_umtx.c	(working copy)
> @@ -154,10 +154,9 @@
>  {
>  #ifndef __ia64__
>  	/* XXX this logic has a race-condition on ia64. */
> -	if ((mtx->m_flags & (UMUTEX_PRIO_PROTECT | UMUTEX_PRIO_INHERIT)) == 0) {
> -		atomic_cmpset_rel_32(&mtx->m_owner, id | UMUTEX_CONTESTED, UMUTEX_CONTESTED);
> +	if ((mtx->m_flags & (UMUTEX_PRIO_PROTECT | UMUTEX_PRIO_INHERIT)) == 0 &&
> +	    atomic_cmpset_rel_32(&mtx->m_owner, id | UMUTEX_CONTESTED, UMUTEX_CONTESTED))
>  		return _umtx_op_err(mtx, UMTX_OP_MUTEX_WAKE, 0, 0, 0);
> -	}
>  #endif /* __ia64__ */
>  	return _umtx_op_err(mtx, UMTX_OP_MUTEX_UNLOCK, 0, 0, 0);
>  }

Not checking for a failing atomic_cmpset seems inherently broken to me.
When I first saw this, I wondered if fixing this would fix the "race" on
ia64.

-- 
John Baldwin
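
For context, the unlock path after Jilles' change would look roughly like the
sketch below. The enclosing function is not part of the quoted hunk, so the
name __thr_umutex_unlock and its parameters are assumed here for illustration;
the types and helpers (struct umutex, atomic_cmpset_rel_32, _umtx_op_err) come
from FreeBSD's <sys/umtx.h>, <machine/atomic.h> and libthr's internal
_umtx_op_err() wrapper in lib/libthr/thread/thr_umtx.c.

	/* Sketch only; assumes the libthr/umtx declarations noted above. */
	int
	__thr_umutex_unlock(struct umutex *mtx, uint32_t id)
	{
	#ifndef __ia64__
		/* XXX this logic has a race-condition on ia64. */
		if ((mtx->m_flags & (UMUTEX_PRIO_PROTECT | UMUTEX_PRIO_INHERIT)) == 0 &&
		    atomic_cmpset_rel_32(&mtx->m_owner, id | UMUTEX_CONTESTED,
		    UMUTEX_CONTESTED))
			/*
			 * Fast path: the cmpset above already released the
			 * lock in userland (owner cleared, UMUTEX_CONTESTED
			 * kept), so only ask the kernel to wake any waiters.
			 */
			return _umtx_op_err(mtx, UMTX_OP_MUTEX_WAKE, 0, 0, 0);
	#endif /* __ia64__ */
		/*
		 * Slow path: PI/PP mutexes, ia64, or a failed cmpset.
		 * Let the kernel perform the full unlock.
		 */
		return _umtx_op_err(mtx, UMTX_OP_MUTEX_UNLOCK, 0, 0, 0);
	}

The point of the check is visible in the control flow: if the cmpset does not
succeed, the code now falls through to UMTX_OP_MUTEX_UNLOCK instead of waking
waiters on a mutex it may not actually have released.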