From: Andriy Gapon <avg@icyb.net.ua>
Date: Thu, 15 May 2008 23:48:17 +0300
To: David Xu
Cc: freebsd-stable@freebsd.org, Brent Casavant, freebsd-threads@freebsd.org
Subject: Re: thread scheduling at mutex unlock
Message-ID: <482CA191.1030004@icyb.net.ua>
In-Reply-To: <482C3333.1070205@freebsd.org>

on 15/05/2008 15:57 David Xu said the following:
> Andriy Gapon wrote:
>>
>> Maybe. But that's not what I see with my small example program. One
>> thread releases and re-acquires a mutex 10 times in a row while the
>> other doesn't get it a single time.
>> I think that there is a very slim chance of a blocked thread
>> preempting a running thread in these circumstances, especially if
>> the execution time between unlock and re-lock is very small.
> It does not depend on how many times your thread acquires or
> re-acquires the mutex, or on how small the region the mutex is
> protecting. As long as the current thread runs for too long, other
> threads will get higher priorities and ownership will definitely be
> transferred, though with some extra context switches.

David, did you examine or try the small program that I sent before?
The "lucky" thread slept for 1 second each time it held the mutex, so
in total it spent about 8 seconds sleeping while holding it. The
"unlucky" thread, consequently, spent those 8 seconds blocked waiting
for the mutex, and it never got "lucky". Yes, technically the "lucky"
thread was not running while it held the mutex, which is probably why
the scheduling algorithm did not kick in immediately.

I did more testing and saw that the "unlucky" thread does eventually
get a chance (after a great many lock/unlock cycles), but I think it
is still penalized too much.

I wonder if, with the current code, it would be possible and easy to
make this behavior more deterministic. Maybe something like the
following:

	if (oldest_waiter.wait_time < X)
		do what we do now...
	else
		go into the kernel for a possible switch

I have very little idea about the unit and value of X.
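For reference, a minimal program in the spirit of that test (this is a
reconstruction, not the exact program I posted earlier; the 8
iterations, the 1-second hold time and all the names are assumptions).
One thread holds the mutex for about a second per iteration and
re-locks it immediately after unlocking; the other merely counts how
often it manages to acquire the same mutex:

	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
	static volatile int done;	/* simplified flag, fine for a demo */

	static void *
	lucky(void *arg)
	{
		int i;

		(void)arg;
		for (i = 0; i < 8; i++) {
			pthread_mutex_lock(&m);
			sleep(1);	/* hold the mutex for ~1 second */
			pthread_mutex_unlock(&m);
			/* no pause here: immediately try to re-acquire */
		}
		done = 1;
		return (NULL);
	}

	static void *
	unlucky(void *arg)
	{
		int n = 0;

		(void)arg;
		while (!done) {
			pthread_mutex_lock(&m);
			if (!done)
				n++;
			pthread_mutex_unlock(&m);
		}
		printf("unlucky: acquired the mutex %d times\n", n);
		return (NULL);
	}

	int
	main(void)
	{
		pthread_t t1, t2;

		pthread_create(&t1, NULL, lucky, NULL);
		usleep(100000);	/* crude: let "lucky" take the mutex first */
		pthread_create(&t2, NULL, unlucky, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		return (0);
	}

With a strictly FIFO hand-off the counter would advance roughly once
per iteration and print 8; with the behavior described above it stays
at or near 0.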
>> I'd rather prefer to have an option to have FIFO fairness in mutex
>> locking rather than always avoiding a context switch at all costs
>> and depending on the scheduler to eventually do its priority magic.
> It is better to implement this behavior in your application code; if
> it is implemented in the thread library, you still cannot control how
> many acquisitions and re-acquisitions are allowed for a thread
> without a context switch, and a simple FIFO, as you suggested, will
> cause dreadful performance problems.

I almost agree, but I still wouldn't take your last statement for a
fact. "Dreadful performance" - on the micro scale, maybe; not
necessarily on the macro scale. After all, never switching context
would give the best performance for a single CPU-bound task, but you
wouldn't call that the best performance for the system as a whole.

As a data point: the current Linux threading library does not seem to
perform significantly worse than libthr, yet my small test program
behaves as I would expect on Fedora 7.

-- 
Andriy Gapon
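P.S. For concreteness, here is a minimal sketch of the application-level
FIFO fairness discussed above: a classic "ticket lock" built on top of
a pthread mutex and condition variable. All the names here are invented
for illustration; this is not an existing API:

	#include <pthread.h>

	struct fifo_lock {
		pthread_mutex_t lock;		/* protects the counters */
		pthread_cond_t	cond;		/* waiters sleep here */
		unsigned long	next_ticket;	/* next ticket to hand out */
		unsigned long	now_serving;	/* ticket allowed to proceed */
	};

	#define FIFO_LOCK_INITIALIZER \
		{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

	static void
	fifo_lock_acquire(struct fifo_lock *f)
	{
		unsigned long my_ticket;

		pthread_mutex_lock(&f->lock);
		my_ticket = f->next_ticket++;
		while (my_ticket != f->now_serving)
			pthread_cond_wait(&f->cond, &f->lock);
		pthread_mutex_unlock(&f->lock);
	}

	static void
	fifo_lock_release(struct fifo_lock *f)
	{
		pthread_mutex_lock(&f->lock);
		f->now_serving++;
		/*
		 * Wake all waiters; only the one holding the next
		 * ticket proceeds.  A per-waiter condvar would avoid
		 * the thundering herd.
		 */
		pthread_cond_broadcast(&f->cond);
		pthread_mutex_unlock(&f->lock);
	}

This guarantees strict FIFO hand-off, but every contended release then
forces a wakeup and a context switch (and, with the broadcast, a
thundering herd) - which is exactly the cost David is warning about.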