From: Julian Elischer <julian@freebsd.org>
To: Mateusz Guzik, src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: Re: svn commit: r313272 - in head/sys: kern sys
Date: Wed, 8 Feb 2017 16:42:48 +0800
In-Reply-To: <20170205120536.GA16310@dft-labs.eu>

On 5/2/17 8:05 pm, Mateusz Guzik wrote:
> On Sun, Feb 05, 2017 at 05:20:29AM +0000, Mateusz Guzik wrote:
>> Author: mjg
>> Date: Sun Feb  5 05:20:29 2017
>> New Revision: 313272
>> URL: https://svnweb.freebsd.org/changeset/base/313272
>>
>> Log:
>>   sx: uninline slock/sunlock
>>
>>   The shared locking routines explicitly read the lock value and test it.
>>   If the change attempt fails, they fall back to a regular function which
>>   retries in a loop.
>>
>>   The problem is that with many concurrent readers the risk of failure is
>>   pretty high, and even the value returned by fcmpset is very likely to be
>>   stale by the time the loop in the fallback routine is reached.
>>
>>   Uninline said primitives. This gives a throughput increase when doing
>>   concurrent slocks/sunlocks with 80 hardware threads, from ~50 mln/s to
>>   ~56 mln/s.
>>
>>   Interestingly, the rwlock primitives are already not inlined.
>>
> Note that each call to the "hard" primitives is somewhat expensive.
> In order to remedy part of the cost I intend to introduce uninlined
> variants which only know how to spin. I have a WIP patch for rwlocks and
> after I flesh it out I'll adapt it for sx.
>
Please remember to report back with some ministat plots when you are done,
so we can see the results. Thanks for doing the work.
>> Modified:
>>   head/sys/kern/kern_sx.c
>>   head/sys/sys/sx.h
>>
>> Modified: head/sys/kern/kern_sx.c
>> ==============================================================================
>> --- head/sys/kern/kern_sx.c	Sun Feb  5 04:54:20 2017	(r313271)
>> +++ head/sys/kern/kern_sx.c	Sun Feb  5 05:20:29 2017	(r313272)
>> @@ -276,29 +276,6 @@ sx_destroy(struct sx *sx)
>>  }
>>
>>  int
>> -_sx_slock(struct sx *sx, int opts, const char *file, int line)
>> -{
>> -	int error = 0;
>> -
>> -	if (SCHEDULER_STOPPED())
>> -		return (0);
>> -	KASSERT(kdb_active != 0 || !TD_IS_IDLETHREAD(curthread),
>> -	    ("sx_slock() by idle thread %p on sx %s @ %s:%d",
>> -	    curthread, sx->lock_object.lo_name, file, line));
>> -	KASSERT(sx->sx_lock != SX_LOCK_DESTROYED,
>> -	    ("sx_slock() of destroyed sx @ %s:%d", file, line));
>> -	WITNESS_CHECKORDER(&sx->lock_object, LOP_NEWORDER, file, line, NULL);
>> -	error = __sx_slock(sx, opts, file, line);
>> -	if (!error) {
>> -		LOCK_LOG_LOCK("SLOCK", &sx->lock_object, 0, 0, file, line);
>> -		WITNESS_LOCK(&sx->lock_object, 0, file, line);
>> -		TD_LOCKS_INC(curthread);
>> -	}
>> -
>> -	return (error);
>> -}
>> -
>> -int
>>  sx_try_slock_(struct sx *sx, const char *file, int line)
>>  {
>>  	uintptr_t x;
>> @@ -391,21 +368,6 @@ sx_try_xlock_(struct sx *sx, const char
>>  }
>>
>>  void
>> -_sx_sunlock(struct sx *sx, const char *file, int line)
>> -{
>> -
>> -	if (SCHEDULER_STOPPED())
>> -		return;
>> -	KASSERT(sx->sx_lock != SX_LOCK_DESTROYED,
>> -	    ("sx_sunlock() of destroyed sx @ %s:%d", file, line));
>> -	_sx_assert(sx, SA_SLOCKED, file, line);
>> -	WITNESS_UNLOCK(&sx->lock_object, 0, file, line);
>> -	LOCK_LOG_LOCK("SUNLOCK", &sx->lock_object, 0, 0, file, line);
>> -	__sx_sunlock(sx, file, line);
>> -	TD_LOCKS_DEC(curthread);
>> -}
>> -
>> -void
>>  _sx_xunlock(struct sx *sx, const char *file, int line)
>>  {
>>
>> @@ -840,14 +802,8 @@ _sx_xunlock_hard(struct sx *sx, uintptr_
>>  	kick_proc0();
>>  }
>>
>> -/*
>> - * This function represents the so-called 'hard case' for sx_slock
>> - * operation.  All 'easy case' failures are redirected to this.  Note
>> - * that ideally this would be a static function, but it needs to be
>> - * accessible from at least sx.h.
>> - */
>>  int
>> -_sx_slock_hard(struct sx *sx, int opts, const char *file, int line)
>> +_sx_slock(struct sx *sx, int opts, const char *file, int line)
>>  {
>>  	GIANT_DECLARE;
>>  #ifdef ADAPTIVE_SX
>> @@ -1051,14 +1007,8 @@ _sx_slock_hard(struct sx *sx, int opts,
>>  	return (error);
>>  }
>>
>> -/*
>> - * This function represents the so-called 'hard case' for sx_sunlock
>> - * operation.  All 'easy case' failures are redirected to this.  Note
>> - * that ideally this would be a static function, but it needs to be
>> - * accessible from at least sx.h.
>> - */
>>  void
>> -_sx_sunlock_hard(struct sx *sx, const char *file, int line)
>> +_sx_sunlock(struct sx *sx, const char *file, int line)
>>  {
>>  	uintptr_t x;
>>  	int wakeup_swapper;
>> @@ -1066,6 +1016,7 @@ _sx_sunlock_hard(struct sx *sx, const ch
>>  	if (SCHEDULER_STOPPED())
>>  		return;
>>
>> +	LOCKSTAT_PROFILE_RELEASE_RWLOCK(sx__release, sx, LOCKSTAT_READER);
>>  	x = SX_READ_VALUE(sx);
>>  	for (;;) {
>>  		/*
>>
>> Modified: head/sys/sys/sx.h
>> ==============================================================================
>> --- head/sys/sys/sx.h	Sun Feb  5 04:54:20 2017	(r313271)
>> +++ head/sys/sys/sx.h	Sun Feb  5 05:20:29 2017	(r313272)
>> @@ -111,10 +111,8 @@ void	_sx_sunlock(struct sx *sx, const ch
>>  void	_sx_xunlock(struct sx *sx, const char *file, int line);
>>  int	_sx_xlock_hard(struct sx *sx, uintptr_t v, uintptr_t tid, int opts,
>>  	    const char *file, int line);
>> -int	_sx_slock_hard(struct sx *sx, int opts, const char *file, int line);
>>  void	_sx_xunlock_hard(struct sx *sx, uintptr_t tid, const char *file, int
>>  	    line);
>> -void	_sx_sunlock_hard(struct sx *sx, const char *file, int line);
>>  #if defined(INVARIANTS) || defined(INVARIANT_SUPPORT)
>>  void	_sx_assert(const struct sx *sx, int what, const char *file, int line);
>>  #endif
>> @@ -179,41 +177,6 @@ __sx_xunlock(struct sx *sx, struct threa
>>  	_sx_xunlock_hard(sx, tid, file, line);
>>  }
>>
>> -/* Acquire a shared lock. */
>> -static __inline int
>> -__sx_slock(struct sx *sx, int opts, const char *file, int line)
>> -{
>> -	uintptr_t x = sx->sx_lock;
>> -	int error = 0;
>> -
>> -	if (!(x & SX_LOCK_SHARED) ||
>> -	    !atomic_cmpset_acq_ptr(&sx->sx_lock, x, x + SX_ONE_SHARER))
>> -		error = _sx_slock_hard(sx, opts, file, line);
>> -	else
>> -		LOCKSTAT_PROFILE_OBTAIN_RWLOCK_SUCCESS(sx__acquire, sx,
>> -		    0, 0, file, line, LOCKSTAT_READER);
>> -
>> -	return (error);
>> -}
>> -
>> -/*
>> - * Release a shared lock.  We can just drop a single shared lock so
>> - * long as we aren't trying to drop the last shared lock when other
>> - * threads are waiting for an exclusive lock.  This takes advantage of
>> - * the fact that an unlocked lock is encoded as a shared lock with a
>> - * count of 0.
>> - */
>> -static __inline void
>> -__sx_sunlock(struct sx *sx, const char *file, int line)
>> -{
>> -	uintptr_t x = sx->sx_lock;
>> -
>> -	LOCKSTAT_PROFILE_RELEASE_RWLOCK(sx__release, sx, LOCKSTAT_READER);
>> -	if (x == (SX_SHARERS_LOCK(1) | SX_LOCK_EXCLUSIVE_WAITERS) ||
>> -	    !atomic_cmpset_rel_ptr(&sx->sx_lock, x, x - SX_ONE_SHARER))
>> -		_sx_sunlock_hard(sx, file, line);
>> -}
>> -
>>  /*
>>   * Public interface for lock operations.
>>   */
>> @@ -227,12 +190,6 @@ __sx_sunlock(struct sx *sx, const char *
>>  	_sx_xlock((sx), SX_INTERRUPTIBLE, (file), (line))
>>  #define	sx_xunlock_(sx, file, line) \
>>  	_sx_xunlock((sx), (file), (line))
>> -#define	sx_slock_(sx, file, line) \
>> -	(void)_sx_slock((sx), 0, (file), (line))
>> -#define	sx_slock_sig_(sx, file, line) \
>> -	_sx_slock((sx), SX_INTERRUPTIBLE, (file) , (line))
>> -#define	sx_sunlock_(sx, file, line) \
>> -	_sx_sunlock((sx), (file), (line))
>>  #else
>>  #define	sx_xlock_(sx, file, line) \
>>  	(void)__sx_xlock((sx), curthread, 0, (file), (line))
>> @@ -240,13 +197,13 @@ __sx_sunlock(struct sx *sx, const char *
>>  	__sx_xlock((sx), curthread, SX_INTERRUPTIBLE, (file), (line))
>>  #define	sx_xunlock_(sx, file, line) \
>>  	__sx_xunlock((sx), curthread, (file), (line))
>> +#endif	/* LOCK_DEBUG > 0 || SX_NOINLINE */
>>  #define	sx_slock_(sx, file, line) \
>> -	(void)__sx_slock((sx), 0, (file), (line))
>> +	(void)_sx_slock((sx), 0, (file), (line))
>>  #define	sx_slock_sig_(sx, file, line) \
>> -	__sx_slock((sx), SX_INTERRUPTIBLE, (file), (line))
>> +	_sx_slock((sx), SX_INTERRUPTIBLE, (file) , (line))
>>  #define	sx_sunlock_(sx, file, line) \
>> -	__sx_sunlock((sx), (file), (line))
>> -#endif	/* LOCK_DEBUG > 0 || SX_NOINLINE */
>> +	_sx_sunlock((sx), (file), (line))
>>  #define	sx_try_slock(sx)	sx_try_slock_((sx), LOCK_FILE, LOCK_LINE)
>>  #define	sx_try_xlock(sx)	sx_try_xlock_((sx), LOCK_FILE, LOCK_LINE)
>>  #define	sx_try_upgrade(sx)	sx_try_upgrade_((sx), LOCK_FILE, LOCK_LINE)