Date:      Thu, 15 May 2008 13:54:09 -0400 (EDT)
From:      Daniel Eischen <deischen@freebsd.org>
To:        Andriy Gapon <avg@icyb.net.ua>
Cc:        freebsd-stable@freebsd.org, David Xu <davidxu@freebsd.org>, Brent Casavant <b.j.casavant@ieee.org>, freebsd-threads@freebsd.org
Subject:   Re: thread scheduling at mutex unlock
Message-ID:  <Pine.GSO.4.64.0805151329150.29431@sea.ntplx.net>
In-Reply-To: <Pine.GSO.4.64.0805150916400.28524@sea.ntplx.net>
References:  <482B0297.2050300@icyb.net.ua> <482BBA77.8000704@freebsd.org> <482BF5EA.5010806@icyb.net.ua> <Pine.GSO.4.64.0805150916400.28524@sea.ntplx.net>

On Thu, 15 May 2008, Daniel Eischen wrote:

> On Thu, 15 May 2008, Andriy Gapon wrote:
>
>> Or, even more realistically: suppose there is a feeder thread that puts
>> things on the queue; it would never be able to enqueue new items until the
>> queue becomes empty if the worker thread's code looks like the following:
>> 
>> while(1)
>> {
>> pthread_mutex_lock(&work_mutex);
>> while(queue.is_empty())
>> 	pthread_cond_wait(...);
>> //dequeue item
>> ...
>> pthread_mutex_unlock(&work_mutex);
>> //perform some short and non-blocking processing of the item
>> ...
>> }
>> 
>> Because the worker thread (while the queue is not empty) would never enter 
>> cond_wait and would always re-lock the mutex shortly after unlocking it.
>
> Well, in theory the kernel scheduler will let both threads run fairly
> with regard to their CPU usage, so this should even out the enqueueing
> and dequeueing threads.
>
> You could also optimize the above a little bit by dequeueing everything
> in the queue instead of one at a time.
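
For the "dequeue everything" variant mentioned above, here is a rough
(and equally untested) sketch -- work_cv, local_queue and
queue_move_all() are just made-up names for whatever your queue
actually provides:

 	while (1) {
 		pthread_mutex_lock(&work_mutex);
 		while (queue.is_empty())
 			pthread_cond_wait(&work_cv, &work_mutex);
 		/* Steal the whole queue while holding the mutex. */
 		queue_move_all(&queue, &local_queue);
 		pthread_mutex_unlock(&work_mutex);

 		/* Process the batch without the mutex held, so the
 		   feeder can enqueue more items in the meantime. */
 		while (!local_queue.is_empty()) {
 			/* dequeue one item from local_queue */
 			...
 			/* short, non-blocking processing of the item */
 			...
 		}
 	}

The point is that the worker takes the mutex only once per batch, so
the feeder gets a chance to grab it while the batch is being processed.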

I suppose you could also enforce your own scheduling with
something like the following:

 	pthread_cond_t	writer_cv;
 	pthread_cond_t	reader_cv;
 	pthread_mutex_t	q_mutex;
 	...
 	thingy_q_t	thingy_q;
 	int		writers_waiting = 0;
 	int		readers_waiting = 0;
 	...

 	void
 	enqueue(thingy_t *thingy)
 	{
 		pthread_mutex_lock(&q_mutex);
 		/* Insert into thingy q */
 		...
 		if (readers_waiting > 0) {
 			pthread_cond_broadcast(&reader_cv);
 			readers_waiting = 0;
 		}
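 		/* Throttle the writer while the queue is over the high-water mark. */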
 		while (thingy_q.size > ENQUEUE_THRESHOLD_HIGH) {
 			writers_waiting++;
 			pthread_cond_wait(&writer_cv, &q_mutex);
 		}
 		pthread_mutex_unlock(&q_mutex);
 	}

 	thingy_t *
 	dequeue(void)
 	{
 		thingy_t *thingy;

 		pthread_mutex_lock(&q_mutex);
 		while (thingy_q.size == 0) {
 			readers_waiting++;
 			pthread_cond_wait(&reader_cv, &q_mutex);
 		}
 		/* Dequeue thingy */
 		...

 		if ((writers_waiting > 0)
 		    && (thingy_q.size < ENQUEUE_THRESHOLD_LOW)) {
 			/* Wake up the writers. */
 			pthread_cond_broadcast(&writer_cv);
 			writers_waiting = 0;
 		}
 		pthread_mutex_unlock(&q_mutex);
 		return (thingy);
 	}

The above is completely untested and probably contains some
bugs ;-)

You probably shouldn't need anything like that if the kernel
scheduler is scheduling your threads fairly.

-- 
DE


