Date:      Thu, 15 May 2008 12:27:34 +0300
From:      Andriy Gapon <avg@icyb.net.ua>
To:        David Xu <davidxu@freebsd.org>
Cc:        freebsd-stable@freebsd.org, Brent Casavant <b.j.casavant@ieee.org>, freebsd-threads@freebsd.org
Subject:   Re: thread scheduling at mutex unlock
Message-ID:  <482C0206.1050206@icyb.net.ua>
In-Reply-To: <482BFCE3.7080704@freebsd.org>
References:  <482B0297.2050300@icyb.net.ua> <482BBA77.8000704@freebsd.org> <482BF5EA.5010806@icyb.net.ua> <482BFCE3.7080704@freebsd.org>

on 15/05/2008 12:05 David Xu said the following:
> Andriy Gapon wrote:
> 
>> Brent, David,
>>
>> thank you for the responses.
>> I think I incorrectly formulated my original concern.
>> It is not about behavior at mutex unlock but about behavior at mutex
>> re-lock. You are right that waking waiters at unlock would hurt
>> performance. But I think that it is not "fair" that at re-lock the
>> former owner gets the lock immediately while the thread that has been
>> waiting on it longer doesn't get a chance.
>>
>> Here's a more realistic example than the mock-up code.
>> Say you have a worker thread that processes queued requests and the 
>> load is such that there is always something on the queue. Thus the 
>> worker thread doesn't ever have to block waiting on it.
>> And let's say that there is a GUI thread that wants to convey some 
>> information to the worker thread. And for that it needs to acquire 
>> some mutex and "do something".
>> With the current libthr behavior the GUI thread would never have a
>> chance to get the mutex, as the worker thread would always be the
>> winner (as my small program shows).
>> Or, even more realistically: suppose there is a feeder thread that
>> puts things on the queue; it would never be able to enqueue new items
>> until the queue becomes empty, if the worker thread's code looks like
>> the following:
>>
>> while (1)
>> {
>>     pthread_mutex_lock(&work_mutex);
>>     while (queue.is_empty())
>>         pthread_cond_wait(...);
>>     // dequeue item
>>     ...
>>     pthread_mutex_unlock(&work_mutex);
>>     // perform some short and non-blocking processing of the item
>>     ...
>> }
>>
>> This is because the worker thread (while the queue is not empty) would
>> never enter cond_wait and would always re-lock the mutex shortly after
>> unlocking it.
>>
>> So, while improving performance on a small scale, this mutex
>> re-acquisition unfairness may be hurting interactivity and thread
>> concurrency, and thus performance in general. E.g. in the above
>> example the queue would always effectively have a depth of 1.
>> Something about "lock starvation" comes to mind.
>>
>> So, yes, this is not about standards; it is about reasonable
>> expectations about thread concurrency behavior in a particular
>> implementation (libthr).
>> I see now that the performance advantage of libthr over libkse came at
>> a price. I think that something like queued locks is needed. They
>> would clearly reduce raw throughput performance, so maybe that should
>> be a new (non-portable?) mutex attribute.
>>
> 
> You forgot that the default scheduling policy is time-sharing. After
> thread #2 has been blocked on the mutex for a while, when thread #1
> unlocks the mutex and unblocks thread #2, thread #2's priority will be
> raised and it preempts thread #1; thread #2 then acquires the mutex.
> That's how it balances fairness and performance.

Maybe. But that's not what I see with my small example program. One 
thread releases and re-acquires a mutex 10 times in a row while the 
other doesn't get it a single time.
I think that there is a very slim chance of a blocked thread preempting 
a running thread in these circumstances, especially if the execution 
time between the unlock and the re-lock is very small.
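
For reference, the test is roughly the following (a sketch of the idea 
only, not the exact program; the names and the iteration count are 
illustrative):

/* Thread 1 unlocks and immediately re-locks the mutex ten times;
 * thread 2, already blocked on the mutex, counts how often it manages
 * to get it in between. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int other_got_it;    /* incremented by thread 2, read under m */

static void *contender(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&m);
        other_got_it++;
        pthread_mutex_unlock(&m);
    }
    return (NULL);
}

int main(void)
{
    pthread_t t;
    int i;

    pthread_mutex_lock(&m);
    pthread_create(&t, NULL, contender, NULL);
    sleep(1);    /* let thread 2 block on the mutex */

    for (i = 0; i < 10; i++) {
        pthread_mutex_unlock(&m);
        /* next to no work between the unlock and the re-lock */
        pthread_mutex_lock(&m);
    }
    printf("other thread got the mutex %d times out of 10\n",
        other_got_it);
    pthread_mutex_unlock(&m);
    return (0);
}
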
I would prefer to have an option for FIFO fairness in mutex locking, 
rather than always avoiding a context switch at all costs and depending 
on the scheduler to eventually do its priority magic.
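
Just to illustrate what I mean by FIFO fairness (only a sketch of the 
idea in terms of the existing pthread primitives, not a proposal for 
how libthr should implement it):

/* A FIFO ("ticket") lock: threads get the lock strictly in the order
 * in which they asked for it, at the cost of extra context switches. */
#include <pthread.h>

struct fifo_lock {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    unsigned long   next_ticket;    /* next ticket to hand out */
    unsigned long   now_serving;    /* ticket currently allowed in */
};

#define FIFO_LOCK_INITIALIZER \
    { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

static void fifo_lock(struct fifo_lock *l)
{
    unsigned long my_ticket;

    pthread_mutex_lock(&l->m);
    my_ticket = l->next_ticket++;
    while (my_ticket != l->now_serving)
        pthread_cond_wait(&l->cv, &l->m);
    pthread_mutex_unlock(&l->m);
}

static void fifo_unlock(struct fifo_lock *l)
{
    pthread_mutex_lock(&l->m);
    l->now_serving++;
    /* wake all waiters; each one re-checks its own ticket */
    pthread_cond_broadcast(&l->cv);
    pthread_mutex_unlock(&l->m);
}

With something like this the worker thread from the example above could 
not jump the queue ahead of a thread that has been waiting longer.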

-- 
Andriy Gapon


