Date:      Thu, 5 Nov 2015 13:45:19 -0800
From:      Adrian Chadd <adrian.chadd@gmail.com>
To:        Mateusz Guzik <mjguzik@gmail.com>, John Baldwin <jhb@freebsd.org>,  freebsd-current <freebsd-current@freebsd.org>, Konstantin Belousov <kostikbel@gmail.com>
Subject:   Re: [PATCH] microoptimize by trying to avoid locking a locked mutex
Message-ID:  <CAJ-VmonnH4JJg0XqX1SoBXBa+9Xfmk+HFv58ETaQ9v1-uAAhdQ@mail.gmail.com>
In-Reply-To: <20151105192623.GB27709@dft-labs.eu>
References:  <20151104233218.GA27709@dft-labs.eu> <20151105142628.GJ2257@kib.kiev.ua> <13871467.CBcqGMncpJ@ralph.baldwin.cx> <20151105192623.GB27709@dft-labs.eu>

On 5 November 2015 at 11:26, Mateusz Guzik <mjguzik@gmail.com> wrote:
> On Thu, Nov 05, 2015 at 11:04:13AM -0800, John Baldwin wrote:
>> On Thursday, November 05, 2015 04:26:28 PM Konstantin Belousov wrote:
>> > On Thu, Nov 05, 2015 at 12:32:18AM +0100, Mateusz Guzik wrote:
>> > > mtx_lock will unconditionally try to grab the lock and, if that fails,
>> > > will call __mtx_lock_sleep, which will immediately try the same
>> > > atomic op again.
>> > >
>> > > So, the obvious microoptimization is to check the state in
>> > > __mtx_lock_sleep and avoid the operation if the lock is not free.
>> > >
>> > > This gives me a ~40% speedup in a microbenchmark of 40 find processes
>> > > traversing tmpfs and contending on the mount mtx (the mount mtx is only
>> > > used here because it makes an easy benchmark; I have WIP patches to get
>> > > rid of it).
>> > >
>> > > The second part of the patch is optional: it just checks the state of
>> > > the lock prior to doing any atomic operations, and it gives only a very
>> > > modest speedup when applied on top of the __mtx_lock_sleep change. As
>> > > such, I'm not going to defend this part.
>> > Shouldn't the same consideration be applied to all spinning loops, i.e.
>> > also to the spin/thread mutexes and to the spinning parts of sx and
>> > lockmgr?
>>
>> I agree.  I think both changes are good and worth doing in our other
>> primitives.
>>
>
> I glanced over e.g. rw_rlock and it did not have the issue, but now that I
> see _sx_xlock_hard, it would indeed use fixing.
>
> Expect a patch in a few hours for all the primitives I find. I'll stress
> test the kernel, but it is unlikely I'll do microbenchmarks for the
> remaining primitives.
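
(For anyone following along: the pattern being discussed is the classic
test-and-test-and-set. Below is a minimal standalone sketch using C11
atomics with made-up names (toy_mtx, toy_mtx_try, ...), not the actual
kern_mutex.c code; it shows both the slow-path recheck and the optional
fast-path pre-check.)

/*
 * Test-and-test-and-set sketch.  Illustration only; not the FreeBSD
 * kernel primitives.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_UNOWNED	((uintptr_t)0)

struct toy_mtx {
	_Atomic uintptr_t owner;	/* 0 == unlocked, else owner's tid */
};

/* One CAS attempt, analogous to the inline fast path in mtx_lock(). */
static bool
toy_mtx_try(struct toy_mtx *m, uintptr_t tid)
{
	uintptr_t v = TOY_UNOWNED;

	return (atomic_compare_exchange_strong(&m->owner, &v, tid));
}

/*
 * Slow path, analogous to __mtx_lock_sleep(): instead of immediately
 * repeating the CAS, first do a plain load and only attempt the CAS
 * once the lock looks free.  A failed CAS acquires the cache line
 * exclusively; a plain read of a held lock does not.
 */
static void
toy_mtx_lock_slow(struct toy_mtx *m, uintptr_t tid)
{
	for (;;) {
		if (atomic_load_explicit(&m->owner,
		    memory_order_relaxed) != TOY_UNOWNED)
			continue;	/* still held: keep reading, no CAS */
		if (toy_mtx_try(m, tid))
			return;		/* got it */
	}
}

/*
 * Entry point.  The "optional second part" of the patch corresponds to
 * also peeking at the lock word here, before the very first CAS.
 */
static void
toy_mtx_lock(struct toy_mtx *m, uintptr_t tid)
{
	if (atomic_load_explicit(&m->owner,
	    memory_order_relaxed) == TOY_UNOWNED && toy_mtx_try(m, tid))
		return;
	toy_mtx_lock_slow(m, tid);
}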

Is this stuff you're proposing still valid for non-x86 platforms?



-adrian