From: Boris Samorodov
Date: Thu, 30 Mar 2006 02:27:50 +0400
To: Sam Leffler
Cc: freebsd-stable@freebsd.org, László Károly
Subject: Re: 6.1-PRERELEASE: freezing
Message-ID: <07561033@ho.ipt.ru>
In-Reply-To: <44285322.4020000@errno.com> (Sam Leffler's message of "Mon, 27 Mar 2006 13:03:30 -0800")

On Mon, 27 Mar 2006 13:03:30 -0800 Sam Leffler wrote:

> László Károly wrote:
> > Boris Samorodov wrote:
> >> I had 6.0-STABLE as of January 2006. Yesterday it was upgraded to
> >> the current 6.1-PRERELEASE (tag=RELENG_6). The hardware is an
> >> HP/Compaq nx6110 notebook.
> >>
> >> After upgrading, the machine freezes under load. After booting, the
> >> OS is fine for a couple of hours if nothing is done, but once I
> >> start make buildkernel the machine freezes.

It freezes (actually it is an interrupt storm) when the temperature
rises and the fan (cooler) speed should be increased.

> > I have the same box and I too made an upgrade yesterday (from a
> > two-week-old 6.1-PRERELEASE). The same experience: the system became
> > unusably slow; no problem without ACPI.
> >
> >> What type of debugging should I do to find out what's up?
> >
> > Good question ;-): how to debug a system which practically does not
> > react but "runs"?

> Are you running powerd? I've got an nx6125 (AMD CPU) that has
> numerous ACPI issues and also would lock up when idle. I found turning
> off powerd stopped the latter. Unfortunately there are still many
> other unresolved issues (and no time to pursue them).

I found where things broke: it is the patch(es) as of 2006-01-13. The
patch is attached. After applying the patch in reverse, ACPI works
properly again, and so does management of the fan (cooler) speed.

Sam, should I do any other testing?

László, can you apply the patch in reverse (cd /usr/src; patch -R -p0
_the_patch_), rebuild the kernel, and tell us whether the problem goes
away?

For me, the reversed patch worked on the current 6.1-PRERELEASE
(cvsupped a few hours ago).
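In more detail, the test sequence I have in mind is roughly the
following sketch (GENERIC is only an example kernel configuration,
and the patch path is just a placeholder for wherever you saved the
attached src.patch; adjust both to your setup):

    cd /usr/src
    patch -R -p0 < /path/to/src.patch       # reverse-apply the attached src.patch
    make buildkernel KERNCONF=GENERIC       # GENERIC is only an example config name
    make installkernel KERNCONF=GENERIC
    shutdown -r now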
WBR
-- 
Boris B. Samorodov, Research Engineer
InPharmTech Co, http://www.ipt.ru
Telephone & Internet Service Provider

[Attachment: src.patch (text/x-patch): the patch that breaks thermal management]

diff -ruN src.03.14/sys/dev/acpica/Osd/OsdSchedule.c src.03.15/sys/dev/acpica/Osd/OsdSchedule.c
--- sys/dev/acpica/Osd/OsdSchedule.c.orig  Wed Mar 29 17:46:29 2006
+++ sys/dev/acpica/Osd/OsdSchedule.c  Wed Mar 29 18:02:22 2006
@@ -30,7 +30,7 @@
  */

 #include
-__FBSDID("$FreeBSD: src/sys/dev/acpica/Osd/OsdSchedule.c,v 1.32.2.2 2005/11/07 09:53:23 obrien Exp $");
+__FBSDID("$FreeBSD: src/sys/dev/acpica/Osd/OsdSchedule.c,v 1.32.2.3 2006/03/14 23:28:30 sam Exp $");

 #include "opt_acpi.h"
 #include
@@ -65,31 +65,8 @@
     void *at_context;
 };

-/*
- * Private task queue definition for ACPI
- */
-static struct proc *
-acpi_task_start_threads(struct taskqueue **tqp)
-{
-    struct proc *acpi_kthread_proc;
-    int err, i;
-
-    KASSERT(*tqp != NULL, ("acpi taskqueue not created before threads"));
-
-    /* Start one or more threads to service our taskqueue. */
-    for (i = 0; i < acpi_max_threads; i++) {
-        err = kthread_create(taskqueue_thread_loop, tqp, &acpi_kthread_proc,
-            0, 0, "acpi_task%d", i);
-        if (err) {
-            printf("%s: kthread_create failed (%d)\n", __func__, err);
-            break;
-        }
-    }
-    return (acpi_kthread_proc);
-}
-
 TASKQUEUE_DEFINE(acpi, taskqueue_thread_enqueue, &taskqueue_acpi,
-    taskqueue_acpi_proc = acpi_task_start_threads(&taskqueue_acpi));
+    taskqueue_start_threads(&taskqueue_acpi, 3, PWAIT, "acpi_task"));

 /*
  * Bounce through this wrapper function since ACPI-CA doesn't understand
diff -ruN src.03.14/sys/kern/kern_synch.c src.03.15/sys/kern/kern_synch.c
--- sys/kern/kern_synch.c.orig  Wed Mar 29 17:46:44 2006
+++ sys/kern/kern_synch.c  Wed Mar 29 18:02:38 2006
@@ -35,7 +35,7 @@
  */

 #include
-__FBSDID("$FreeBSD: src/sys/kern/kern_synch.c,v 1.270.2.2 2006/02/27 00:19:40 davidxu Exp $");
+__FBSDID("$FreeBSD: src/sys/kern/kern_synch.c,v 1.270.2.3 2006/03/14 23:28:30 sam Exp $");

 #include "opt_ktrace.h"
@@ -218,6 +218,88 @@
         mtx_lock(mtx);
         WITNESS_RESTORE(&mtx->mtx_object, mtx);
     }
+    return (rval);
+}
+
+int
+msleep_spin(ident, mtx, wmesg, timo)
+    void *ident;
+    struct mtx *mtx;
+    const char *wmesg;
+    int timo;
+{
+    struct thread *td;
+    struct proc *p;
+    int rval;
+    WITNESS_SAVE_DECL(mtx);
+
+    td = curthread;
+    p = td->td_proc;
+    KASSERT(mtx != NULL, ("sleeping without a mutex"));
+    KASSERT(p != NULL, ("msleep1"));
+    KASSERT(ident != NULL && TD_IS_RUNNING(td), ("msleep"));
+
+    if (cold) {
+        /*
+         * During autoconfiguration, just return;
+         * don't run any other threads or panic below,
+         * in case this is the idle thread and already asleep.
+         * XXX: this used to do "s = splhigh(); splx(safepri);
+         * splx(s);" to give interrupts a chance, but there is
+         * no way to give interrupts a chance now.
+         */
+        return (0);
+    }
+
+    sleepq_lock(ident);
+    CTR5(KTR_PROC, "msleep_spin: thread %p (pid %ld, %s) on %s (%p)",
+        (void *)td, (long)p->p_pid, p->p_comm, wmesg, ident);
+
+    DROP_GIANT();
+    mtx_assert(mtx, MA_OWNED | MA_NOTRECURSED);
+    WITNESS_SAVE(&mtx->mtx_object, mtx);
+    mtx_unlock_spin(mtx);
+
+    /*
+     * We put ourselves on the sleep queue and start our timeout.
+     */
+    sleepq_add(ident, mtx, wmesg, SLEEPQ_MSLEEP);
+    if (timo)
+        sleepq_set_timeout(ident, timo);
+
+    /*
+     * Can't call ktrace with any spin locks held so it can lock the
+     * ktrace_mtx lock, and WITNESS_WARN considers it an error to hold
+     * any spin lock.  Thus, we have to drop the sleepq spin lock while
+     * we handle those requests.  This is safe since we have placed our
+     * thread on the sleep queue already.
+     */
+#ifdef KTRACE
+    if (KTRPOINT(td, KTR_CSW)) {
+        sleepq_release(ident);
+        ktrcsw(1, 0);
+        sleepq_lock(ident);
+    }
+#endif
+#ifdef WITNESS
+    sleepq_release(ident);
+    WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL, "Sleeping on \"%s\"",
+        wmesg);
+    sleepq_lock(ident);
+#endif
+    if (timo)
+        rval = sleepq_timedwait(ident);
+    else {
+        sleepq_wait(ident);
+        rval = 0;
+    }
+#ifdef KTRACE
+    if (KTRPOINT(td, KTR_CSW))
+        ktrcsw(0, 0);
+#endif
+    PICKUP_GIANT();
+    mtx_lock_spin(mtx);
+    WITNESS_RESTORE(&mtx->mtx_object, mtx);
     return (rval);
 }
diff -ruN src.03.14/sys/kern/subr_taskqueue.c src.03.15/sys/kern/subr_taskqueue.c
--- sys/kern/subr_taskqueue.c.orig  Wed Mar 29 17:46:44 2006
+++ sys/kern/subr_taskqueue.c  Wed Mar 29 18:02:38 2006
@@ -25,7 +25,7 @@
  */

 #include
-__FBSDID("$FreeBSD: src/sys/kern/subr_taskqueue.c,v 1.27.2.1 2006/01/30 07:51:10 scottl Exp $");
+__FBSDID("$FreeBSD: src/sys/kern/subr_taskqueue.c,v 1.27.2.2 2006/03/14 23:28:30 sam Exp $");

 #include
 #include
@@ -37,8 +37,10 @@
 #include
 #include
 #include
+#include
 #include
 #include
+#include

 static MALLOC_DEFINE(M_TASKQUEUE, "taskqueue", "Task Queues");
 static void *taskqueue_giant_ih;
@@ -55,10 +57,42 @@
     struct task *tq_running;
     struct mtx tq_mutex;
     struct proc **tq_pproc;
+    int tq_pcount;
+    int tq_spin;
+    int tq_flags;
 };

+#define TQ_FLAGS_ACTIVE (1 << 0)
+
+static __inline void
+TQ_LOCK(struct taskqueue *tq)
+{
+    if (tq->tq_spin)
+        mtx_lock_spin(&tq->tq_mutex);
+    else
+        mtx_lock(&tq->tq_mutex);
+}
+
+static __inline void
+TQ_UNLOCK(struct taskqueue *tq)
+{
+    if (tq->tq_spin)
+        mtx_unlock_spin(&tq->tq_mutex);
+    else
+        mtx_unlock(&tq->tq_mutex);
+}
+
 static void init_taskqueue_list(void *data);

+static __inline int
+TQ_SLEEP(struct taskqueue *tq, void *p, struct mtx *m, int pri, const char *wm,
+    int t)
+{
+    if (tq->tq_spin)
+        return (msleep_spin(p, m, wm, t));
+    return (msleep(p, m, pri, wm, t));
+}
+
 static void
 init_taskqueue_list(void *data __unused)
 {
@@ -69,10 +103,10 @@
 SYSINIT(taskqueue_list, SI_SUB_INTRINSIC, SI_ORDER_ANY, init_taskqueue_list,
     NULL);

-struct taskqueue *
-taskqueue_create(const char *name, int mflags,
+static struct taskqueue *
+_taskqueue_create(const char *name, int mflags,
          taskqueue_enqueue_fn enqueue, void *context,
-         struct proc **pp)
+         int mtxflags, const char *mtxname)
 {
     struct taskqueue *queue;
@@ -84,8 +118,9 @@
     queue->tq_name = name;
     queue->tq_enqueue = enqueue;
     queue->tq_context = context;
-    queue->tq_pproc = pp;
-    mtx_init(&queue->tq_mutex, "taskqueue", NULL, MTX_DEF);
+    queue->tq_spin = (mtxflags & MTX_SPIN) != 0;
+    queue->tq_flags |= TQ_FLAGS_ACTIVE;
+    mtx_init(&queue->tq_mutex, mtxname, NULL, mtxflags);

     mtx_lock(&taskqueue_queues_mutex);
     STAILQ_INSERT_TAIL(&taskqueue_queues, queue, tq_link);
@@ -94,23 +129,26 @@
     return queue;
 }

+struct taskqueue *
+taskqueue_create(const char *name, int mflags,
+         taskqueue_enqueue_fn enqueue, void *context,
+         struct proc **pp)
+{
+    (void) pp;
+    return _taskqueue_create(name, mflags, enqueue, context,
+        MTX_DEF, "taskqueue");
+}
+
 /*
  * Signal a taskqueue thread to terminate.
  */
 static void
 taskqueue_terminate(struct proc **pp, struct taskqueue *tq)
 {
-    struct proc *p;
-    p = *pp;
-    *pp = NULL;
-    if (p) {
-        wakeup_one(tq);
-        PROC_LOCK(p);              /* NB: insure we don't miss wakeup */
-        mtx_unlock(&tq->tq_mutex); /* let taskqueue thread run */
-        msleep(p, &p->p_mtx, PWAIT, "taskqueue_destroy", 0);
-        PROC_UNLOCK(p);
-        mtx_lock(&tq->tq_mutex);
+    while (tq->tq_pcount > 0) {
+        wakeup(tq);
+        TQ_SLEEP(tq, pp, &tq->tq_mutex, PWAIT, "taskqueue_destroy", 0);
     }
 }
@@ -122,10 +160,12 @@
     STAILQ_REMOVE(&taskqueue_queues, queue, taskqueue, tq_link);
     mtx_unlock(&taskqueue_queues_mutex);

-    mtx_lock(&queue->tq_mutex);
+    TQ_LOCK(queue);
+    queue->tq_flags &= ~TQ_FLAGS_ACTIVE;
     taskqueue_run(queue);
     taskqueue_terminate(queue->tq_pproc, queue);
     mtx_destroy(&queue->tq_mutex);
+    free(queue->tq_pproc, M_TASKQUEUE);
     free(queue, M_TASKQUEUE);
 }
@@ -140,7 +180,7 @@
     mtx_lock(&taskqueue_queues_mutex);
     STAILQ_FOREACH(queue, &taskqueue_queues, tq_link) {
         if (strcmp(queue->tq_name, name) == 0) {
-            mtx_lock(&queue->tq_mutex);
+            TQ_LOCK(queue);
             mtx_unlock(&taskqueue_queues_mutex);
             return queue;
         }
@@ -155,14 +195,14 @@
     struct task *ins;
     struct task *prev;

-    mtx_lock(&queue->tq_mutex);
+    TQ_LOCK(queue);

     /*
      * Count multiple enqueues.
      */
     if (task->ta_pending) {
         task->ta_pending++;
-        mtx_unlock(&queue->tq_mutex);
+        TQ_UNLOCK(queue);
         return 0;
     }
@@ -188,7 +228,7 @@
     task->ta_pending = 1;

     queue->tq_enqueue(queue->tq_context);
-    mtx_unlock(&queue->tq_mutex);
+    TQ_UNLOCK(queue);

     return 0;
 }
@@ -201,7 +241,7 @@
     owned = mtx_owned(&queue->tq_mutex);
     if (!owned)
-        mtx_lock(&queue->tq_mutex);
+        TQ_LOCK(queue);
     while (STAILQ_FIRST(&queue->tq_queue)) {
         /*
          * Carefully remove the first task from the queue and
@@ -212,11 +252,11 @@
         pending = task->ta_pending;
         task->ta_pending = 0;
         queue->tq_running = task;
-        mtx_unlock(&queue->tq_mutex);
+        TQ_UNLOCK(queue);

         task->ta_func(task->ta_context, pending);

-        mtx_lock(&queue->tq_mutex);
+        TQ_LOCK(queue);
         queue->tq_running = NULL;
         wakeup(task);
     }
@@ -226,18 +266,25 @@
      * on entry, although this opens a race window.
      */
     if (!owned)
-        mtx_unlock(&queue->tq_mutex);
+        TQ_UNLOCK(queue);
 }

 void
 taskqueue_drain(struct taskqueue *queue, struct task *task)
 {
-    WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL, "taskqueue_drain");
+    if (queue->tq_spin) {       /* XXX */
+        mtx_lock_spin(&queue->tq_mutex);
+        while (task->ta_pending != 0 || task == queue->tq_running)
+            msleep_spin(task, &queue->tq_mutex, "-", 0);
+        mtx_unlock_spin(&queue->tq_mutex);
+    } else {
+        WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL, __func__);

-    mtx_lock(&queue->tq_mutex);
-    while (task->ta_pending != 0 || task == queue->tq_running)
-        msleep(task, &queue->tq_mutex, PWAIT, "-", 0);
-    mtx_unlock(&queue->tq_mutex);
+        mtx_lock(&queue->tq_mutex);
+        while (task->ta_pending != 0 || task == queue->tq_running)
+            msleep(task, &queue->tq_mutex, PWAIT, "-", 0);
+        mtx_unlock(&queue->tq_mutex);
+    }
 }

 static void
@@ -264,6 +311,43 @@
     taskqueue_run(taskqueue_swi_giant);
 }

+int
+taskqueue_start_threads(struct taskqueue **tqp, int count, int pri,
+    const char *name, ...)
+{
+    va_list ap;
+    struct taskqueue *tq;
+    char ktname[MAXCOMLEN];
+    int i;
+
+    if (count <= 0)
+        return (EINVAL);
+    tq = *tqp;
+
+    if ((tq->tq_pproc = malloc(sizeof(struct proc *) * count, M_TASKQUEUE,
+        M_NOWAIT | M_ZERO)) == NULL)
+        return (ENOMEM);
+
+    va_start(ap, name);
+    vsnprintf(ktname, MAXCOMLEN, name, ap);
+    va_end(ap);
+
+    for (i = 0; i < count; i++) {
+        if (count == 1)
+            kthread_create(taskqueue_thread_loop, tqp,
+                &tq->tq_pproc[i], 0, 0, ktname);
+        else
+            kthread_create(taskqueue_thread_loop, tqp,
+                &tq->tq_pproc[i], 0, 0, "%s_%d", ktname, i);
+        mtx_lock_spin(&sched_lock);
+        sched_prio(FIRST_THREAD_IN_PROC(tq->tq_pproc[i]), pri);
+        mtx_unlock_spin(&sched_lock);
+        tq->tq_pcount++;
+    }
+
+    return (0);
+}
+
 void
 taskqueue_thread_loop(void *arg)
 {
@@ -271,15 +355,16 @@

     tqp = arg;
     tq = *tqp;
-    mtx_lock(&tq->tq_mutex);
+    TQ_LOCK(tq);
     do {
         taskqueue_run(tq);
-        msleep(tq, &tq->tq_mutex, PWAIT, "-", 0);
-    } while (*tq->tq_pproc != NULL);
+        TQ_SLEEP(tq, tq, &tq->tq_mutex, curthread->td_priority, "-", 0);
+    } while ((tq->tq_flags & TQ_FLAGS_ACTIVE) != 0);

     /* rendezvous with thread that asked us to terminate */
-    wakeup_one(tq);
-    mtx_unlock(&tq->tq_mutex);
+    tq->tq_pcount--;
+    wakeup_one(tq->tq_pproc);
+    TQ_UNLOCK(tq);
     kthread_exit(0);
 }
@@ -300,85 +385,30 @@
         INTR_MPSAFE, &taskqueue_ih));

 TASKQUEUE_DEFINE(swi_giant, taskqueue_swi_giant_enqueue, 0,
-        swi_add(NULL, "Giant task queue", taskqueue_swi_giant_run,
+        swi_add(NULL, "Giant taskq", taskqueue_swi_giant_run,
         NULL, SWI_TQ_GIANT, 0, &taskqueue_giant_ih));

 TASKQUEUE_DEFINE_THREAD(thread);

-int
-taskqueue_enqueue_fast(struct taskqueue *queue, struct task *task)
+struct taskqueue *
+taskqueue_create_fast(const char *name, int mflags,
+         taskqueue_enqueue_fn enqueue, void *context)
 {
-    struct task *ins;
-    struct task *prev;
-
-    mtx_lock_spin(&queue->tq_mutex);
-
-    /*
-     * Count multiple enqueues.
-     */
-    if (task->ta_pending) {
-        task->ta_pending++;
-        mtx_unlock_spin(&queue->tq_mutex);
-        return 0;
-    }
-
-    /*
-     * Optimise the case when all tasks have the same priority.
-     */
-    prev = STAILQ_LAST(&queue->tq_queue, task, ta_link);
-    if (!prev || prev->ta_priority >= task->ta_priority) {
-        STAILQ_INSERT_TAIL(&queue->tq_queue, task, ta_link);
-    } else {
-        prev = 0;
-        for (ins = STAILQ_FIRST(&queue->tq_queue); ins;
-             prev = ins, ins = STAILQ_NEXT(ins, ta_link))
-            if (ins->ta_priority < task->ta_priority)
-                break;
-
-        if (prev)
-            STAILQ_INSERT_AFTER(&queue->tq_queue, prev, task, ta_link);
-        else
-            STAILQ_INSERT_HEAD(&queue->tq_queue, task, ta_link);
-    }
-
-    task->ta_pending = 1;
-    queue->tq_enqueue(queue->tq_context);
-
-    mtx_unlock_spin(&queue->tq_mutex);
-
-    return 0;
+    return _taskqueue_create(name, mflags, enqueue, context,
+        MTX_SPIN, "fast_taskqueue");
 }

-static void
-taskqueue_run_fast(struct taskqueue *queue)
+/* NB: for backwards compatibility */
+int
+taskqueue_enqueue_fast(struct taskqueue *queue, struct task *task)
 {
-    struct task *task;
-    int pending;
-
-    mtx_lock_spin(&queue->tq_mutex);
-    while (STAILQ_FIRST(&queue->tq_queue)) {
-        /*
-         * Carefully remove the first task from the queue and
-         * zero its pending count.
-         */
-        task = STAILQ_FIRST(&queue->tq_queue);
-        STAILQ_REMOVE_HEAD(&queue->tq_queue, ta_link);
-        pending = task->ta_pending;
-        task->ta_pending = 0;
-        mtx_unlock_spin(&queue->tq_mutex);
-
-        task->ta_func(task->ta_context, pending);
-
-        mtx_lock_spin(&queue->tq_mutex);
-    }
-    mtx_unlock_spin(&queue->tq_mutex);
+    return taskqueue_enqueue(queue, task);
 }

-struct taskqueue *taskqueue_fast;
 static void *taskqueue_fast_ih;

 static void
-taskqueue_fast_schedule(void *context)
+taskqueue_fast_enqueue(void *context)
 {
     swi_sched(taskqueue_fast_ih, 0);
 }
@@ -386,31 +416,9 @@
 static void
 taskqueue_fast_run(void *dummy)
 {
-    taskqueue_run_fast(taskqueue_fast);
+    taskqueue_run(taskqueue_fast);
 }

-static void
-taskqueue_define_fast(void *arg)
-{
-
-    taskqueue_fast = malloc(sizeof(struct taskqueue), M_TASKQUEUE,
-        M_NOWAIT | M_ZERO);
-    if (!taskqueue_fast) {
-        printf("%s: Unable to allocate fast task queue!\n", __func__);
-        return;
-    }
-
-    STAILQ_INIT(&taskqueue_fast->tq_queue);
-    taskqueue_fast->tq_name = "fast";
-    taskqueue_fast->tq_enqueue = taskqueue_fast_schedule;
-    mtx_init(&taskqueue_fast->tq_mutex, "taskqueue_fast", NULL, MTX_SPIN);
-
-    mtx_lock(&taskqueue_queues_mutex);
-    STAILQ_INSERT_TAIL(&taskqueue_queues, taskqueue_fast, tq_link);
-    mtx_unlock(&taskqueue_queues_mutex);
-
-    swi_add(NULL, "Fast task queue", taskqueue_fast_run,
-        NULL, SWI_TQ_FAST, INTR_MPSAFE, &taskqueue_fast_ih);
-}
-SYSINIT(taskqueue_fast, SI_SUB_CONFIGURE, SI_ORDER_SECOND,
-    taskqueue_define_fast, NULL);
+TASKQUEUE_FAST_DEFINE(fast, taskqueue_fast_enqueue, 0,
+    swi_add(NULL, "Fast task queue", taskqueue_fast_run, NULL,
+    SWI_TQ_FAST, INTR_MPSAFE, &taskqueue_fast_ih));
diff -ruN src.03.14/sys/sys/systm.h src.03.15/sys/sys/systm.h
--- sys/sys/systm.h.orig  Wed Mar 29 17:46:55 2006
+++ sys/sys/systm.h  Wed Mar 29 18:02:50 2006
@@ -32,7 +32,7 @@
  * SUCH DAMAGE.
  *
  *  @(#)systm.h 8.7 (Berkeley) 3/29/95
- * $FreeBSD: src/sys/sys/systm.h,v 1.234.2.2 2006/03/13 03:07:23 jeff Exp $
+ * $FreeBSD: src/sys/sys/systm.h,v 1.234.2.3 2006/03/14 23:28:30 sam Exp $
  */

 #ifndef _SYS_SYSTM_H_
@@ -296,6 +296,7 @@
  */
 int msleep(void *chan, struct mtx *mtx, int pri, const char *wmesg,
         int timo);
+int msleep_spin(void *chan, struct mtx *mtx, const char *wmesg, int timo);
 #define tsleep(chan, pri, wmesg, timo) msleep(chan, NULL, pri, wmesg, timo)
 void wakeup(void *chan) __nonnull(1);
 void wakeup_one(void *chan) __nonnull(1);
diff -ruN src.03.14/sys/sys/taskqueue.h src.03.15/sys/sys/taskqueue.h
--- sys/sys/taskqueue.h.orig  Wed Mar 29 17:46:55 2006
+++ sys/sys/taskqueue.h  Wed Mar 29 18:02:50 2006
@@ -23,7 +23,7 @@
  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
  * SUCH DAMAGE.
  *
- * $FreeBSD: src/sys/sys/taskqueue.h,v 1.14 2005/05/01 00:38:11 sam Exp $
+ * $FreeBSD: src/sys/sys/taskqueue.h,v 1.14.2.1 2006/03/14 23:28:30 sam Exp $
  */

 #ifndef _SYS_TASKQUEUE_H_
@@ -51,6 +51,8 @@
 struct taskqueue *taskqueue_create(const char *name, int mflags,
                     taskqueue_enqueue_fn enqueue, void *context,
                     struct proc **);
+int taskqueue_start_threads(struct taskqueue **tqp, int count, int pri,
+                    const char *name, ...) __printflike(4, 5);
 int taskqueue_enqueue(struct taskqueue *queue, struct task *task);
 void taskqueue_drain(struct taskqueue *queue, struct task *task);
 struct taskqueue *taskqueue_find(const char *name);
@@ -80,7 +82,7 @@
 extern struct taskqueue *taskqueue_##name

 /*
- * Define and initialise a taskqueue.
+ * Define and initialise a global taskqueue that uses sleep mutexes.
  */
 #define TASKQUEUE_DEFINE(name, enqueue, context, init)      \
                                                             \
@@ -89,10 +91,8 @@
 static void                                                 \
 taskqueue_define_##name(void *arg)                          \
 {                                                           \
-   static struct proc *taskqueue_##name##_proc;             \
    taskqueue_##name =                                       \
-       taskqueue_create(#name, M_NOWAIT, (enqueue), (context), \
-       &taskqueue_##name##_proc);                           \
+       taskqueue_create(#name, M_NOWAIT, (enqueue), (context), NULL);\
    init;                                                    \
 }                                                           \
                                                             \
@@ -102,8 +102,33 @@
 struct __hack

 #define TASKQUEUE_DEFINE_THREAD(name)                       \
 TASKQUEUE_DEFINE(name, taskqueue_thread_enqueue, &taskqueue_##name, \
-   kthread_create(taskqueue_thread_loop, &taskqueue_##name, \
-       &taskqueue_##name##_proc, 0, 0, #name " taskq"))
+   taskqueue_start_threads(&taskqueue_##name, 1, PWAIT,     \
+       "%s taskq", #name))
+
+/*
+ * Define and initialise a global taskqueue that uses spin mutexes.
+ */
+#define TASKQUEUE_FAST_DEFINE(name, enqueue, context, init) \
+                                                            \
+struct taskqueue *taskqueue_##name;                         \
+                                                            \
+static void                                                 \
+taskqueue_define_##name(void *arg)                          \
+{                                                           \
+   taskqueue_##name =                                       \
+       taskqueue_create_fast(#name, M_NOWAIT, (enqueue),    \
+       (context));                                          \
+   init;                                                    \
+}                                                           \
+                                                            \
+SYSINIT(taskqueue_##name, SI_SUB_CONFIGURE, SI_ORDER_SECOND, \
+   taskqueue_define_##name, NULL)                           \
+                                                            \
+struct __hack
+
+#define TASKQUEUE_FAST_DEFINE_THREAD(name)                  \
+TASKQUEUE_FAST_DEFINE(name, taskqueue_thread_enqueue,       \
+   &taskqueue_##name, taskqueue_start_threads(&taskqueue_##name \
+   1, PWAIT, "%s taskq", #name))

 /*
  * These queues are serviced by software interrupt handlers.  To enqueue
@@ -127,5 +152,8 @@
  */
 TASKQUEUE_DECLARE(fast);
 int taskqueue_enqueue_fast(struct taskqueue *queue, struct task *task);
+struct taskqueue *taskqueue_create_fast(const char *name, int mflags,
+                    taskqueue_enqueue_fn enqueue,
+                    void *context);

 #endif /* !_SYS_TASKQUEUE_H_ */