From owner-freebsd-bugs  Fri Oct 23 10:00:06 1998
Return-Path:
Received: (from majordom@localhost)
          by hub.freebsd.org (8.8.8/8.8.8) id KAA19855
          for freebsd-bugs-outgoing; Fri, 23 Oct 1998 10:00:06 -0700 (PDT)
          (envelope-from owner-freebsd-bugs@FreeBSD.ORG)
Received: from freefall.freebsd.org (freefall.FreeBSD.ORG [204.216.27.21])
          by hub.freebsd.org (8.8.8/8.8.8) with ESMTP id KAA19850
          for ; Fri, 23 Oct 1998 10:00:04 -0700 (PDT)
          (envelope-from gnats@FreeBSD.org)
Received: (from gnats@localhost)
          by freefall.freebsd.org (8.8.8/8.8.5) id KAA21223;
          Fri, 23 Oct 1998 10:00:00 -0700 (PDT)
Date: Fri, 23 Oct 1998 10:00:00 -0700 (PDT)
Message-Id: <199810231700.KAA21223@freefall.freebsd.org>
To: freebsd-bugs@FreeBSD.ORG
From: "Daniel M. Eischen"
Subject: Re: kern/8375: pthread_cond_wait() spins the CPU
Reply-To: "Daniel M. Eischen"
Sender: owner-freebsd-bugs@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

The following reply was made to PR kern/8375; it has been noted by GNATS.

From: "Daniel M. Eischen"
To: freebsd-gnats-submit@freebsd.org, info@highwind.com
Cc: jb@freebsd.org
Subject: Re: kern/8375: pthread_cond_wait() spins the CPU
Date: Fri, 23 Oct 1998 12:40:19 -0400

This is a multi-part message in MIME format.

--------------446B9B3D2781E494167EB0E7
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

The spinlock ordering of the cond_wait and mutex structures can lead to a
deadlock.  Also, kernel scheduling initiated by a SIGVTALRM can cause
improper mutex and condition variable operation.

Attached is a patch that fixes these problems and also makes
pthread_mutex_[try]lock and pthread_cond_[timed]wait return EDEADLK when
the caller has not locked the mutex.

Dan Eischen
eischen@vigrid.com

--------------446B9B3D2781E494167EB0E7
Content-Type: text/plain; charset=us-ascii; name="uthread.diffs"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="uthread.diffs"

*** pthread_private.h.orig	Thu Oct 22 12:15:26 1998
--- pthread_private.h	Thu Oct 22 12:16:04 1998
***************
*** 446,451 ****
--- 446,463 ----
      /* Signal number when in state PS_SIGWAIT: */
      int signo;

+     /*
+      * Set to non-zero when this thread has locked out thread
+      * scheduling. We allow this lock to be recursively taken.
+      */
+     int sched_lock;
+ 
+     /*
+      * Set to TRUE if this thread should yield after reenabling
+      * thread scheduling.
+      */
+     int yield_on_sched_unlock;
+ 
      /* Miscellaneous data. */
      char flags;
  #define PTHREAD_EXITING 0x0100
***************
*** 655,660 ****
--- 667,674 ----
  void    _thread_kern_sched(struct sigcontext *);
  void    _thread_kern_sched_state(enum pthread_state,char *fname,int lineno);
  void    _thread_kern_set_timeout(struct timespec *);
+ void    _thread_kern_sched_lock(void);
+ void    _thread_kern_sched_unlock(void);
  void    _thread_sig_handler(int, int, struct sigcontext *);
  void    _thread_start(void);
  void    _thread_start_sig_handler(void);
*** uthread_cond.c.orig	Sat Oct 17 19:46:57 1998
--- uthread_cond.c	Thu Oct 22 11:44:46 1998
***************
*** 137,142 ****
--- 137,150 ----
       */
      else if (*cond != NULL || (rval = pthread_cond_init(cond,NULL)) == 0) {
+         /*
+          * Disable thread scheduling. If the thread blocks
+          * in some way, thread scheduling is reenabled. When
+          * the thread wakes up, thread scheduling will again
+          * be disabled.
+          */
+         _thread_kern_sched_lock();
+ 
          /* Lock the condition variable structure: */
          _SPINLOCK(&(*cond)->lock);
***************
*** 150,184 ****
               */
              _thread_queue_enq(&(*cond)->c_queue, _thread_run);

-             /* Unlock the mutex: */
-             pthread_mutex_unlock(mutex);
- 
-             /* Wait forever: */
-             _thread_run->wakeup_time.tv_sec = -1;
- 
              /* Unlock the condition variable structure: */
              _SPINUNLOCK(&(*cond)->lock);

!             /* Schedule the next thread: */
!             _thread_kern_sched_state(PS_COND_WAIT,
!                 __FILE__, __LINE__);

!             /* Lock the condition variable structure: */
!             _SPINLOCK(&(*cond)->lock);

!             /* Lock the mutex: */
!             rval = pthread_mutex_lock(mutex);
              break;

          /* Trap invalid condition variable types: */
          default:
              /* Return an invalid argument error: */
              rval = EINVAL;
              break;
          }

!         /* Unlock the condition variable structure: */
!         _SPINUNLOCK(&(*cond)->lock);
      }

      /* Return the completion status: */
--- 158,193 ----
               */
              _thread_queue_enq(&(*cond)->c_queue, _thread_run);

              /* Unlock the condition variable structure: */
              _SPINUNLOCK(&(*cond)->lock);

!             /* Unlock the mutex: */
!             if ((rval = pthread_mutex_unlock(mutex)) == 0) {
! 
!                 /* Wait forever: */
!                 _thread_run->wakeup_time.tv_sec = -1;
!                 /* Schedule the next thread: */
!                 _thread_kern_sched_state(PS_COND_WAIT,
!                     __FILE__, __LINE__);
!                 /* Lock the mutex: */
!                 rval = pthread_mutex_lock(mutex);
!             }
              break;

          /* Trap invalid condition variable types: */
          default:
+             /* Unlock the condition variable structure: */
+             _SPINUNLOCK(&(*cond)->lock);
+ 
              /* Return an invalid argument error: */
              rval = EINVAL;
              break;
          }

!         /* Reenable thread scheduling. */
!         _thread_kern_sched_unlock();
      }

      /* Return the completion status: */
***************
*** 201,206 ****
--- 210,223 ----
       */
      else if (*cond != NULL || (rval = pthread_cond_init(cond,NULL)) == 0) {
+         /*
+          * Disable thread scheduling. If the thread blocks
+          * in some way, thread scheduling is reenabled. When
+          * the thread wakes up, thread scheduling will again
+          * be disabled.
+          */
+         _thread_kern_sched_lock();
+ 
          /* Lock the condition variable structure: */
          _SPINLOCK(&(*cond)->lock);
***************
*** 221,226 ****
--- 238,246 ----
               */
              _thread_queue_enq(&(*cond)->c_queue, _thread_run);

+             /* Unlock the condition structure: */
+             _SPINUNLOCK(&(*cond)->lock);
+ 
              /* Unlock the mutex: */
              if ((rval = pthread_mutex_unlock(mutex)) != 0) {
                  /*
***************
*** 230,245 ****
                   */
                  _thread_queue_deq(&(*cond)->c_queue);
              } else {
-                 /* Unlock the condition variable structure: */
-                 _SPINUNLOCK(&(*cond)->lock);
- 
                  /* Schedule the next thread: */
                  _thread_kern_sched_state(PS_COND_WAIT,
                      __FILE__, __LINE__);

-                 /* Lock the condition variable structure: */
-                 _SPINLOCK(&(*cond)->lock);
- 
                  /* Lock the mutex: */
                  if ((rval = pthread_mutex_lock(mutex)) != 0) {
                  }
--- 250,259 ----
***************
*** 253,265 ****

          /* Trap invalid condition variable types: */
          default:
              /* Return an invalid argument error: */
              rval = EINVAL;
              break;
          }

!         /* Unlock the condition variable structure: */
!         _SPINUNLOCK(&(*cond)->lock);
      }

      /* Return the completion status: */
--- 267,282 ----

          /* Trap invalid condition variable types: */
          default:
+             /* Unlock the condition structure: */
+             _SPINUNLOCK(&(*cond)->lock);
+ 
              /* Return an invalid argument error: */
              rval = EINVAL;
              break;
          }

!         /* Reenable thread scheduling. */
!         _thread_kern_sched_unlock();
      }

      /* Return the completion status: */
***************
*** 276,281 ****
--- 293,306 ----
      if (cond == NULL || *cond == NULL)
          rval = EINVAL;
      else {
+         /*
+          * Disable thread scheduling. If the thread blocks
+          * in some way, thread scheduling is reenabled. When
When + * the thread wakes up, thread scheduling will again + * be disabled. + */ + _thread_kern_sched_lock(); + /* Lock the condition variable structure: */ _SPINLOCK(&(*cond)->lock); *************** *** 299,304 **** --- 324,332 ---- /* Unlock the condition variable structure: */ _SPINUNLOCK(&(*cond)->lock); + + /* Reenable thread scheduling. */ + _thread_kern_sched_unlock(); } /* Return the completion status: */ *************** *** 315,320 **** --- 343,356 ---- if (cond == NULL || *cond == NULL) rval = EINVAL; else { + /* + * Disable thread scheduling. If the thread blocks + * in some way, thread scheduling is reenabled. When + * the thread wakes up, thread scheduling will again + * be disabled. + */ + _thread_kern_sched_lock(); + /* Lock the condition variable structure: */ _SPINLOCK(&(*cond)->lock); *************** *** 342,347 **** --- 378,386 ---- /* Unlock the condition variable structure: */ _SPINUNLOCK(&(*cond)->lock); + + /* Reenable thread scheduling. */ + _thread_kern_sched_unlock(); } /* Return the completion status: */ *** uthread_create.c.orig Sat Oct 17 19:46:57 1998 --- uthread_create.c Thu Oct 22 11:44:46 1998 *************** *** 90,95 **** --- 90,97 ---- new_thread->stack = stack; new_thread->start_routine = start_routine; new_thread->arg = arg; + new_thread->sched_lock = 0; + new_thread->yield_on_sched_unlock = 0; /* * Write a magic value to the thread structure *** uthread_kern.c.orig Thu Oct 22 12:20:14 1998 --- uthread_kern.c Thu Oct 22 23:44:55 1998 *************** *** 196,201 **** --- 196,206 ---- /* Check if there is a current thread: */ if (_thread_run != &_thread_kern_thread) { /* + * This thread no longer needs to yield the CPU. + */ + _thread_run->yield_on_sched_unlock = 0; + + /* * Save the current time as the time that the thread * became inactive: */ *************** *** 1303,1307 **** --- 1308,1333 ---- } } return; + } + + void + _thread_kern_sched_lock(void) + { + /* Allow the scheduling lock to be recursively set. */ + _thread_run->sched_lock++; + } + + void + _thread_kern_sched_unlock(void) + { + /* Decrement the scheduling lock count. */ + _thread_run->sched_lock--; + + /* Check to see if we need to yield. */ + if ((_thread_run->sched_lock == 0) && + (_thread_run->yield_on_sched_unlock != 0)) { + _thread_run->yield_on_sched_unlock = 0; + sched_yield(); + } } #endif *** uthread_mutex.c.orig Sat Oct 17 19:46:58 1998 --- uthread_mutex.c Thu Oct 22 11:44:46 1998 *************** *** 168,173 **** --- 168,181 ---- * initialization: */ else if (*mutex != NULL || (ret = init_static(mutex)) == 0) { + /* + * Disable thread scheduling. If the thread blocks + * in some way, thread scheduling is reenabled. When + * the thread wakes up, thread scheduling will again + * be disabled. + */ + _thread_kern_sched_lock(); + /* Lock the mutex structure: */ _SPINLOCK(&(*mutex)->lock); *************** *** 215,220 **** --- 223,231 ---- /* Unlock the mutex structure: */ _SPINUNLOCK(&(*mutex)->lock); + + /* Reenable thread scheduling. */ + _thread_kern_sched_unlock(); } /* Return the completion status: */ *************** *** 234,239 **** --- 245,258 ---- * initialization: */ else if (*mutex != NULL || (ret = init_static(mutex)) == 0) { + /* + * Disable thread scheduling. If the thread blocks + * in some way, thread scheduling is reenabled. When + * the thread wakes up, thread scheduling will again + * be disabled. 
+          */
+         _thread_kern_sched_lock();
+ 
          /* Lock the mutex structure: */
          _SPINLOCK(&(*mutex)->lock);
***************
*** 241,270 ****
      switch ((*mutex)->m_type) {
      /* Fast mutexes do not check for any error conditions: */
      case MUTEX_TYPE_FAST:
!         /*
!          * Enter a loop to wait for the mutex to be locked by the
!          * current thread:
!          */
!         while ((*mutex)->m_owner != _thread_run) {
!             /* Check if the mutex is not locked: */
!             if ((*mutex)->m_owner == NULL) {
!                 /* Lock the mutex for this thread: */
!                 (*mutex)->m_owner = _thread_run;
!             } else {
!                 /*
!                  * Join the queue of threads waiting to lock
!                  * the mutex:
!                  */
!                 _thread_queue_enq(&(*mutex)->m_queue, _thread_run);
! 
!                 /* Unlock the mutex structure: */
!                 _SPINUNLOCK(&(*mutex)->lock);
! 
!                 /* Block signals: */
!                 _thread_kern_sched_state(PS_MUTEX_WAIT, __FILE__, __LINE__);
! 
!                 /* Lock the mutex again: */
!                 _SPINLOCK(&(*mutex)->lock);
              }
          }
          break;
--- 260,295 ----
      switch ((*mutex)->m_type) {
      /* Fast mutexes do not check for any error conditions: */
      case MUTEX_TYPE_FAST:
!         if ((*mutex)->m_owner == _thread_run)
!             ret = EDEADLK;
!         else {
!             /*
!              * Enter a loop to wait for the mutex to be
!              * locked by the current thread:
!              */
!             while ((*mutex)->m_owner != _thread_run) {
!                 /* Check if the mutex is not locked: */
!                 if ((*mutex)->m_owner == NULL) {
!                     /* Lock the mutex for this thread: */
!                     (*mutex)->m_owner = _thread_run;
!                 } else {
!                     /*
!                      * Join the queue of threads
!                      * waiting to lock the mutex:
!                      */
!                     _thread_queue_enq(&(*mutex)->m_queue,
!                         _thread_run);
! 
!                     /* Unlock the mutex structure: */
!                     _SPINUNLOCK(&(*mutex)->lock);
! 
!                     /* Schedule the next thread: */
!                     _thread_kern_sched_state(PS_MUTEX_WAIT,
!                         __FILE__, __LINE__);
! 
!                     /* Lock the mutex structure again: */
!                     _SPINLOCK(&(*mutex)->lock);
!                 }
              }
          }
          break;
***************
*** 283,288 ****
--- 308,316 ----
              /* Reset the lock count for this mutex: */
              (*mutex)->m_data.m_count = 0;
+ 
+             /* Unlock the mutex structure: */
+             _SPINUNLOCK(&(*mutex)->lock);
          } else {
              /*
               * Join the queue of threads waiting to lock
***************
*** 293,302 ****
              /* Unlock the mutex structure: */
              _SPINUNLOCK(&(*mutex)->lock);

!             /* Block signals: */
              _thread_kern_sched_state(PS_MUTEX_WAIT, __FILE__, __LINE__);

!             /* Lock the mutex again: */
              _SPINLOCK(&(*mutex)->lock);
          }
      }
--- 321,330 ----
              /* Unlock the mutex structure: */
              _SPINUNLOCK(&(*mutex)->lock);

!             /* Schedule the next thread: */
              _thread_kern_sched_state(PS_MUTEX_WAIT, __FILE__, __LINE__);

!             /* Lock the mutex structure again: */
              _SPINLOCK(&(*mutex)->lock);
          }
      }
***************
*** 307,319 ****

      /* Trap invalid mutex types: */
      default:
          /* Return an invalid argument error: */
          ret = EINVAL;
          break;
      }

!     /* Unlock the mutex structure: */
!     _SPINUNLOCK(&(*mutex)->lock);
      }

      /* Return the completion status: */
--- 335,350 ----

      /* Trap invalid mutex types: */
      default:
+         /* Unlock the mutex structure: */
+         _SPINUNLOCK(&(*mutex)->lock);
+ 
          /* Return an invalid argument error: */
          ret = EINVAL;
          break;
      }

!     /* Reenable thread scheduling. */
!     _thread_kern_sched_unlock();
      }

      /* Return the completion status: */
***************
*** 328,333 ****
--- 359,372 ----
      if (mutex == NULL || *mutex == NULL) {
          ret = EINVAL;
      } else {
+         /*
+          * Disable thread scheduling. If the thread blocks
+          * in some way, thread scheduling is reenabled. When
+          * the thread wakes up, thread scheduling will again
+          * be disabled.
+          */
+         _thread_kern_sched_lock();
+ 
          /* Lock the mutex structure: */
          _SPINLOCK(&(*mutex)->lock);
***************
*** 381,386 ****
--- 420,428 ----
          /* Unlock the mutex structure: */
          _SPINUNLOCK(&(*mutex)->lock);
+ 
+         /* Reenable thread scheduling. */
+         _thread_kern_sched_unlock();
      }

      /* Return the completion status: */
*** uthread_sig.c.orig	Thu Oct 22 12:28:16 1998
--- uthread_sig.c	Thu Oct 22 12:14:24 1998
***************
*** 115,120 ****
--- 115,128 ----
          yield_on_unlock_thread = 1;

      /*
+      * Check if the scheduler interrupt has come when
+      * the currently running thread has disabled thread
+      * scheduling.
+      */
+     else if (_thread_run->sched_lock)
+         _thread_run->yield_on_sched_unlock = 1;
+ 
+     /*
       * Check if the kernel has not been interrupted while
       * executing scheduler code:
       */

--------------446B9B3D2781E494167EB0E7--

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-bugs" in the body of the message
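
The core of the patch is the pair _thread_kern_sched_lock()/_thread_kern_sched_unlock()
together with the per-thread yield_on_sched_unlock flag checked in _thread_sig_handler():
while a mutex or condition variable operation holds the scheduling lock, a SIGVTALRM only
records that a yield is wanted, and the yield happens when the outermost lock is released.
The following standalone sketch is not part of the patch; the fake_thread structure,
curthread, and the function names are illustrative stand-ins for the struct pthread
fields and the uthread routines, and printf() stands in for the real thread switch.

/*
 * Minimal sketch of the deferred-yield scheduling lock introduced by the
 * patch, under the assumptions stated above.
 */
#include <stdio.h>

struct fake_thread {
	int sched_lock;             /* > 0 while scheduling is locked out */
	int yield_on_sched_unlock;  /* a yield was requested while locked */
};

static struct fake_thread curthread;

/* Analogous to _thread_kern_sched_lock(): recursive, so nested
 * mutex/condvar operations can each take the lock safely. */
static void
sched_lock(void)
{
	curthread.sched_lock++;
}

/* Analogous to _thread_kern_sched_unlock(): only the outermost unlock,
 * and only if an interrupt was deferred, performs the yield. */
static void
sched_unlock(void)
{
	curthread.sched_lock--;
	if (curthread.sched_lock == 0 && curthread.yield_on_sched_unlock != 0) {
		curthread.yield_on_sched_unlock = 0;
		printf("outermost unlock: performing deferred yield\n");
	}
}

/* Stand-in for the SIGVTALRM handler check: instead of switching threads
 * in the middle of a mutex/condvar operation, just record the request. */
static void
scheduler_interrupt(void)
{
	if (curthread.sched_lock)
		curthread.yield_on_sched_unlock = 1;
	else
		printf("interrupt outside critical section: yield now\n");
}

int
main(void)
{
	sched_lock();           /* e.g. pthread_cond_wait() entry */
	sched_lock();           /* nested: pthread_mutex_unlock() inside it */
	scheduler_interrupt();  /* SIGVTALRM arrives during the operation */
	sched_unlock();         /* inner unlock: no yield yet */
	sched_unlock();         /* outer unlock: deferred yield runs here */

	scheduler_interrupt();  /* scheduling not locked: immediate yield */
	return (0);
}

Making the lock recursive matters because pthread_cond_wait() calls
pthread_mutex_unlock() and pthread_mutex_lock() while it already holds the
scheduling lock, as the uthread_cond.c hunks above show.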