From owner-svn-soc-all@FreeBSD.ORG Mon May 23 01:00:57 2011
Delivered-To: svn-soc-all@FreeBSD.org
Received: from socsvn.FreeBSD.org (unknown [IPv6:2001:4f8:fff6::2f])
	by hub.freebsd.org (Postfix) with SMTP id A69CC106566B for ;
	Mon, 23 May 2011 01:00:56 +0000 (UTC) (envelope-from aalvarez@FreeBSD.org)
Received: by socsvn.FreeBSD.org (sSMTP sendmail emulation);
	Mon, 23 May 2011 01:00:56 +0000
Date: Mon, 23 May 2011 01:00:56 +0000
From: aalvarez@FreeBSD.org
To: svn-soc-all@FreeBSD.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Message-Id: <20110523010056.A69CC106566B@hub.freebsd.org>
Subject: socsvn commit: r222291 - soc2011/aalvarez
X-BeenThere: svn-soc-all@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: SVN commit messages for the entire Summer of Code repository
X-List-Received-Date: Mon, 23 May 2011 01:00:57 -0000

Author: aalvarez
Date: Mon May 23 01:00:56 2011
New Revision: 222291

URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222291

Log:
  Create aalvarez directory

Added:
  soc2011/aalvarez/

From owner-svn-soc-all@FreeBSD.ORG Mon May 23 01:03:02 2011
Date: Mon, 23 May 2011 01:03:01 +0000
From: aalvarez@FreeBSD.org
To: svn-soc-all@FreeBSD.org
Message-Id: <20110523010301.545AC1065676@hub.freebsd.org>
Subject: socsvn commit: r222292 - soc2011/aalvarez/pbmac

Author: aalvarez
Date: Mon May 23 01:03:01 2011
New Revision: 222292

URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222292

Log:
  Initial import of head

Added:
  soc2011/aalvarez/pbmac/   (props changed)
     - copied from r222291, mirror/FreeBSD/head/

From owner-svn-soc-all@FreeBSD.ORG Mon May 23 10:39:08 2011
Date: Mon, 23 May 2011 10:39:07 +0000
From: lassi@FreeBSD.org
To: svn-soc-all@FreeBSD.org
Message-Id: <20110523103907.28C5D1065673@hub.freebsd.org>
Subject: socsvn commit: r222296 - soc2011/lassi

Author: lassi
Date: Mon May 23 10:39:06 2011
New Revision: 222296

URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222296

Log:
  - Initial commit

Added:
  soc2011/lassi/

From owner-svn-soc-all@FreeBSD.ORG Tue May 24 08:51:10 2011
Date: Tue, 24 May 2011 08:51:08 +0000
From:
	rudot@FreeBSD.org
To: svn-soc-all@FreeBSD.org
Message-Id: <20110524085108.75826106567E@hub.freebsd.org>
Subject: socsvn commit: r222332 - in soc2011/rudot: . kern

Author: rudot
Date: Tue May 24 08:51:08 2011
New Revision: 222332

URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222332

Log:
  Original 4.4BSD scheduler as a starting point

Added:
  soc2011/rudot/kern/
  soc2011/rudot/kern/sched_4bsd.c
Deleted:
  soc2011/rudot/test.c

Added: soc2011/rudot/kern/sched_4bsd.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ soc2011/rudot/kern/sched_4bsd.c	Tue May 24 08:51:08 2011	(r222332)
@@ -0,0 +1,1679 @@
+/*-
+ * Copyright (c) 1982, 1986, 1990, 1991, 1993
+ *	The Regents of the University of California.  All rights reserved.
+ * (c) UNIX System Laboratories, Inc.
+ * All or some portions of this file are derived from material licensed
+ * to the University of California by American Telephone and Telegraph
+ * Co. or Unix System Laboratories, Inc. and are reproduced herein with
+ * the permission of UNIX System Laboratories, Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 4. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include
+__FBSDID("$FreeBSD: src/sys/kern/sched_4bsd.c,v 1.131.2.7.2.1 2010/12/21 17:09:25 kensmith Exp $");
+
+#include "opt_hwpmc_hooks.h"
+#include "opt_sched.h"
+#include "opt_kdtrace.h"
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#ifdef HWPMC_HOOKS
+#include
+#endif
+
+#ifdef KDTRACE_HOOKS
+#include
+int		dtrace_vtime_active;
+dtrace_vtime_switch_func_t	dtrace_vtime_switch_func;
+#endif
+
+/*
+ * INVERSE_ESTCPU_WEIGHT is only suitable for statclock() frequencies in
+ * the range 100-256 Hz (approximately).
+ */
+#define	ESTCPULIM(e) \
+    min((e), INVERSE_ESTCPU_WEIGHT * (NICE_WEIGHT * (PRIO_MAX - PRIO_MIN) - \
+    RQ_PPQ) + INVERSE_ESTCPU_WEIGHT - 1)
+#ifdef SMP
+#define	INVERSE_ESTCPU_WEIGHT	(8 * smp_cpus)
+#else
+#define	INVERSE_ESTCPU_WEIGHT	8	/* 1 / (priorities per estcpu level). */
+#endif
+#define	NICE_WEIGHT	1		/* Priorities per nice level. */
+
+#define	TS_NAME_LEN (MAXCOMLEN + sizeof(" td ") + sizeof(__XSTRING(UINT_MAX)))
+
+/*
+ * The schedulable entity that runs a context.
+ * This is an extension to the thread structure and is tailored to
+ * the requirements of this scheduler
+ */
+struct td_sched {
+	fixpt_t		ts_pctcpu;	/* (j) %cpu during p_swtime. */
+	int		ts_cpticks;	/* (j) Ticks of cpu time. */
+	int		ts_slptime;	/* (j) Seconds !RUNNING. */
+	int		ts_flags;
+	struct runq	*ts_runq;	/* runq the thread is currently on */
+#ifdef KTR
+	char		ts_name[TS_NAME_LEN];
+#endif
+};
+
+/* flags kept in td_flags */
+#define	TDF_DIDRUN	TDF_SCHED0	/* thread actually ran. */
+#define	TDF_BOUND	TDF_SCHED1	/* Bound to one CPU. */
+
+/* flags kept in ts_flags */
+#define	TSF_AFFINITY	0x0001		/* Has a non-"full" CPU set. */
+
+#define	SKE_RUNQ_PCPU(ts)						\
+    ((ts)->ts_runq != 0 && (ts)->ts_runq != &runq)
+
+#define	THREAD_CAN_SCHED(td, cpu)	\
+    CPU_ISSET((cpu), &(td)->td_cpuset->cs_mask)
+
+static struct td_sched td_sched0;
+struct mtx sched_lock;
+
+static int	sched_tdcnt;	/* Total runnable threads in the system. */
+static int	sched_quantum;	/* Roundrobin scheduling quantum in ticks. */
+#define	SCHED_QUANTUM	(hz / 10)	/* Default sched quantum */
+
+static void	setup_runqs(void);
+static void	schedcpu(void);
+static void	schedcpu_thread(void);
+static void	sched_priority(struct thread *td, u_char prio);
+static void	sched_setup(void *dummy);
+static void	maybe_resched(struct thread *td);
+static void	updatepri(struct thread *td);
+static void	resetpriority(struct thread *td);
+static void	resetpriority_thread(struct thread *td);
+#ifdef SMP
+static int	sched_pickcpu(struct thread *td);
+static int	forward_wakeup(int cpunum);
+static void	kick_other_cpu(int pri, int cpuid);
+#endif
+
+static struct kproc_desc sched_kp = {
+	"schedcpu",
+	schedcpu_thread,
+	NULL
+};
+SYSINIT(schedcpu, SI_SUB_RUN_SCHEDULER, SI_ORDER_FIRST, kproc_start,
+    &sched_kp);
+SYSINIT(sched_setup, SI_SUB_RUN_QUEUE, SI_ORDER_FIRST, sched_setup, NULL);
+
+/*
+ * Global run queue.
+ */
+static struct runq runq;
+
+#ifdef SMP
+/*
+ * Per-CPU run queues
+ */
+static struct runq runq_pcpu[MAXCPU];
+long runq_length[MAXCPU];
+#endif
+
+static void
+setup_runqs(void)
+{
+#ifdef SMP
+	int i;
+
+	for (i = 0; i < MAXCPU; ++i)
+		runq_init(&runq_pcpu[i]);
+#endif
+
+	runq_init(&runq);
+}
+
+static int
+sysctl_kern_quantum(SYSCTL_HANDLER_ARGS)
+{
+	int error, new_val;
+
+	new_val = sched_quantum * tick;
+	error = sysctl_handle_int(oidp, &new_val, 0, req);
+	if (error != 0 || req->newptr == NULL)
+		return (error);
+	if (new_val < tick)
+		return (EINVAL);
+	sched_quantum = new_val / tick;
+	hogticks = 2 * sched_quantum;
+	return (0);
+}
+
+SYSCTL_NODE(_kern, OID_AUTO, sched, CTLFLAG_RD, 0, "Scheduler");
+
+SYSCTL_STRING(_kern_sched, OID_AUTO, name, CTLFLAG_RD, "4BSD", 0,
+    "Scheduler name");
+
+SYSCTL_PROC(_kern_sched, OID_AUTO, quantum, CTLTYPE_INT | CTLFLAG_RW,
+    0, sizeof sched_quantum, sysctl_kern_quantum, "I",
+    "Roundrobin scheduling quantum in microseconds");
+
+#ifdef SMP
+/* Enable forwarding of wakeups to all other cpus */
+SYSCTL_NODE(_kern_sched, OID_AUTO, ipiwakeup, CTLFLAG_RD, NULL, "Kernel SMP");
+
+static int runq_fuzz = 1;
+SYSCTL_INT(_kern_sched, OID_AUTO, runq_fuzz, CTLFLAG_RW, &runq_fuzz, 0, "");
+
+static int forward_wakeup_enabled = 1;
+SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, enabled, CTLFLAG_RW,
+	   &forward_wakeup_enabled, 0,
+	   "Forwarding of wakeup to idle CPUs");
+
+static int forward_wakeups_requested = 0;
+SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, requested, CTLFLAG_RD,
+	   &forward_wakeups_requested, 0,
+	   "Requests for Forwarding of wakeup to idle CPUs");
+
+static int forward_wakeups_delivered = 0;
+SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, delivered, CTLFLAG_RD,
+	   &forward_wakeups_delivered, 0,
+	   "Completed Forwarding of wakeup to idle CPUs");
+
+static int forward_wakeup_use_mask = 1;
+SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, usemask, CTLFLAG_RW,
+	   &forward_wakeup_use_mask, 0,
+	   "Use the mask of idle cpus");
+
+static int forward_wakeup_use_loop = 0;
+SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, useloop, CTLFLAG_RW,
+	   &forward_wakeup_use_loop, 0,
+	   "Use a loop to find idle cpus");
+
+static int forward_wakeup_use_single = 0;
+SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, onecpu, CTLFLAG_RW,
+	   &forward_wakeup_use_single, 0,
+	   "Only signal one idle cpu");
+
+static int forward_wakeup_use_htt = 0;
+SYSCTL_INT(_kern_sched_ipiwakeup, OID_AUTO, htt2, CTLFLAG_RW,
+	   &forward_wakeup_use_htt, 0,
+	   "account for htt");
+
+#endif
+#if 0
+static int sched_followon = 0;
+SYSCTL_INT(_kern_sched, OID_AUTO, followon, CTLFLAG_RW,
+	   &sched_followon, 0,
+	   "allow threads to share a quantum");
+#endif
+
+static __inline void
+sched_load_add(void)
+{
+
+	sched_tdcnt++;
+	KTR_COUNTER0(KTR_SCHED, "load", "global load", sched_tdcnt);
+}
+
+static __inline void
+sched_load_rem(void)
+{
+
+	sched_tdcnt--;
+	KTR_COUNTER0(KTR_SCHED, "load", "global load", sched_tdcnt);
+}
+/*
+ * Arrange to reschedule if necessary, taking the priorities and
+ * schedulers into account.
+ */
+static void
+maybe_resched(struct thread *td)
+{
+
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+	if (td->td_priority < curthread->td_priority)
+		curthread->td_flags |= TDF_NEEDRESCHED;
+}
+
+/*
+ * This function is called when a thread is about to be put on run queue
+ * because it has been made runnable or its priority has been adjusted.  It
+ * determines if the new thread should be immediately preempted to.  If so,
+ * it switches to it and eventually returns true.  If not, it returns false
+ * so that the caller may place the thread on an appropriate run queue.
+ */
+int
+maybe_preempt(struct thread *td)
+{
+#ifdef PREEMPTION
+	struct thread *ctd;
+	int cpri, pri;
+
+	/*
+	 * The new thread should not preempt the current thread if any of the
+	 * following conditions are true:
+	 *
+	 *  - The kernel is in the throes of crashing (panicstr).
+	 *  - The current thread has a higher (numerically lower) or
+	 *    equivalent priority.  Note that this prevents curthread from
+	 *    trying to preempt to itself.
+	 *  - It is too early in the boot for context switches (cold is set).
+	 *  - The current thread has an inhibitor set or is in the process of
+	 *    exiting.  In this case, the current thread is about to switch
+	 *    out anyways, so there's no point in preempting.  If we did,
+	 *    the current thread would not be properly resumed as well, so
+	 *    just avoid that whole landmine.
+	 *  - If the new thread's priority is not a realtime priority and
+	 *    the current thread's priority is not an idle priority and
+	 *    FULL_PREEMPTION is disabled.
+	 *
+	 * If all of these conditions are false, but the current thread is in
+	 * a nested critical section, then we have to defer the preemption
+	 * until we exit the critical section.  Otherwise, switch immediately
+	 * to the new thread.
+	 */
+	ctd = curthread;
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+	KASSERT((td->td_inhibitors == 0),
+	    ("maybe_preempt: trying to run inhibited thread"));
+	pri = td->td_priority;
+	cpri = ctd->td_priority;
+	if (panicstr != NULL || pri >= cpri || cold /* || dumping */ ||
+	    TD_IS_INHIBITED(ctd))
+		return (0);
+#ifndef FULL_PREEMPTION
+	if (pri > PRI_MAX_ITHD && cpri < PRI_MIN_IDLE)
+		return (0);
+#endif
+
+	if (ctd->td_critnest > 1) {
+		CTR1(KTR_PROC, "maybe_preempt: in critical section %d",
+		    ctd->td_critnest);
+		ctd->td_owepreempt = 1;
+		return (0);
+	}
+	/*
+	 * Thread is runnable but not yet put on system run queue.
+	 */
+	MPASS(ctd->td_lock == td->td_lock);
+	MPASS(TD_ON_RUNQ(td));
+	TD_SET_RUNNING(td);
+	CTR3(KTR_PROC, "preempting to thread %p (pid %d, %s)\n", td,
+	    td->td_proc->p_pid, td->td_name);
+	mi_switch(SW_INVOL | SW_PREEMPT | SWT_PREEMPT, td);
+	/*
+	 * td's lock pointer may have changed.  We have to return with it
+	 * locked.
+	 */
+	spinlock_enter();
+	thread_unlock(ctd);
+	thread_lock(td);
+	spinlock_exit();
+	return (1);
+#else
+	return (0);
+#endif
+}
+
+/*
+ * Constants for digital decay and forget:
+ *	90% of (td_estcpu) usage in 5 * loadav time
+ *	95% of (ts_pctcpu) usage in 60 seconds (load insensitive)
+ *	Note that, as ps(1) mentions, this can let percentages
+ *	total over 100% (I've seen 137.9% for 3 processes).
+ *
+ * Note that schedclock() updates td_estcpu and p_cpticks asynchronously.
+ *
+ * We wish to decay away 90% of td_estcpu in (5 * loadavg) seconds.
+ * That is, the system wants to compute a value of decay such
+ * that the following for loop:
+ *	for (i = 0; i < (5 * loadavg); i++)
+ *		td_estcpu *= decay;
+ * will compute
+ *	td_estcpu *= 0.1;
+ * for all values of loadavg:
+ *
+ * Mathematically this loop can be expressed by saying:
+ *	decay ** (5 * loadavg) ~= .1
+ *
+ * The system computes decay as:
+ *	decay = (2 * loadavg) / (2 * loadavg + 1)
+ *
+ * We wish to prove that the system's computation of decay
+ * will always fulfill the equation:
+ *	decay ** (5 * loadavg) ~= .1
+ *
+ * If we compute b as:
+ *	b = 2 * loadavg
+ * then
+ *	decay = b / (b + 1)
+ *
+ * We now need to prove two things:
+ *	1) Given factor ** (5 * loadavg) ~= .1, prove factor == b/(b+1)
+ *	2) Given b/(b+1) ** power ~= .1, prove power == (5 * loadavg)
+ *
+ * Facts:
+ *	For x close to zero, exp(x) =~ 1 + x, since
+ *		exp(x) = 0! + x**1/1! + x**2/2! + ... .
+ *		therefore exp(-1/b) =~ 1 - (1/b) = (b-1)/b.
+ *	For x close to zero, ln(1+x) =~ x, since
+ *		ln(1+x) = x - x**2/2 + x**3/3 - ...	-1 < x < 1
+ *		therefore ln(b/(b+1)) = ln(1 - 1/(b+1)) =~ -1/(b+1).
+ *	ln(.1) =~ -2.30
+ *
+ * Proof of (1):
+ *	Solve (factor)**(power) =~ .1 given power (5*loadav):
+ *	solving for factor,
+ *	ln(factor) =~ (-2.30/5*loadav), or
+ *	factor =~ exp(-1/((5/2.30)*loadav)) =~ exp(-1/(2*loadav)) =
+ *	    exp(-1/b) =~ (b-1)/b =~ b/(b+1).			QED
+ *
+ * Proof of (2):
+ *	Solve (factor)**(power) =~ .1 given factor == (b/(b+1)):
+ *	solving for power,
+ *	power*ln(b/(b+1)) =~ -2.30, or
+ *	power =~ 2.3 * (b + 1) = 4.6*loadav + 2.3 =~ 5*loadav.	QED
+ *
+ * Actual power values for the implemented algorithm are as follows:
+ *	loadav:	1	2	3	4
+ *	power:	5.68	10.32	14.94	19.55
+ */
+
+/* calculations for digital decay to forget 90% of usage in 5*loadav sec */
+#define	loadfactor(loadav)	(2 * (loadav))
+#define	decay_cpu(loadfac, cpu)	(((loadfac) * (cpu)) / ((loadfac) + FSCALE))
+
+/* decay 95% of `ts_pctcpu' in 60 seconds; see CCPU_SHIFT before changing */
+static fixpt_t	ccpu = 0.95122942450071400909 * FSCALE;	/* exp(-1/20) */
+SYSCTL_INT(_kern, OID_AUTO, ccpu, CTLFLAG_RD, &ccpu, 0, "");
+
+/*
+ * If `ccpu' is not equal to `exp(-1/20)' and you still want to use the
+ * faster/more-accurate formula, you'll have to estimate CCPU_SHIFT below
+ * and possibly adjust FSHIFT in "param.h" so that (FSHIFT >= CCPU_SHIFT).
+ *
+ * To estimate CCPU_SHIFT for exp(-1/20), the following formula was used:
+ *	1 - exp(-1/20) ~= 0.0487 ~= 0.0488 == 1 (fixed pt, *11* bits).
+ *
+ * If you don't want to bother with the faster/more-accurate formula, you
+ * can set CCPU_SHIFT to (FSHIFT + 1) which will use a slower/less-accurate
+ * (more general) method of calculating the %age of CPU used by a process.
+ */
+#define	CCPU_SHIFT	11
+
+/*
+ * Recompute process priorities, every hz ticks.
+ * MP-safe, called without the Giant mutex.
+ */
+/* ARGSUSED */
+static void
+schedcpu(void)
+{
+	register fixpt_t loadfac = loadfactor(averunnable.ldavg[0]);
+	struct thread *td;
+	struct proc *p;
+	struct td_sched *ts;
+	int awake, realstathz;
+
+	realstathz = stathz ? stathz : hz;
+	sx_slock(&allproc_lock);
+	FOREACH_PROC_IN_SYSTEM(p) {
+		PROC_LOCK(p);
+		FOREACH_THREAD_IN_PROC(p, td) {
+			awake = 0;
+			thread_lock(td);
+			ts = td->td_sched;
+			/*
+			 * Increment sleep time (if sleeping).  We
+			 * ignore overflow, as above.
+			 */
+			/*
+			 * The td_sched slptimes are not touched in wakeup
+			 * because the thread may not HAVE everything in
+			 * memory? XXX I think this is out of date.
+			 */
+			if (TD_ON_RUNQ(td)) {
+				awake = 1;
+				td->td_flags &= ~TDF_DIDRUN;
+			} else if (TD_IS_RUNNING(td)) {
+				awake = 1;
+				/* Do not clear TDF_DIDRUN */
+			} else if (td->td_flags & TDF_DIDRUN) {
+				awake = 1;
+				td->td_flags &= ~TDF_DIDRUN;
+			}
+
+			/*
+			 * ts_pctcpu is only for ps and ttyinfo().
+			 */
+			ts->ts_pctcpu = (ts->ts_pctcpu * ccpu) >> FSHIFT;
+			/*
+			 * If the td_sched has been idle the entire second,
+			 * stop recalculating its priority until
+			 * it wakes up.
+			 */
+			if (ts->ts_cpticks != 0) {
+#if	(FSHIFT >= CCPU_SHIFT)
+				ts->ts_pctcpu += (realstathz == 100)
+				    ? ((fixpt_t) ts->ts_cpticks) <<
+				    (FSHIFT - CCPU_SHIFT) :
+				    100 * (((fixpt_t) ts->ts_cpticks)
+				    << (FSHIFT - CCPU_SHIFT)) / realstathz;
+#else
+				ts->ts_pctcpu += ((FSCALE - ccpu) *
+				    (ts->ts_cpticks *
+				    FSCALE / realstathz)) >> FSHIFT;
+#endif
+				ts->ts_cpticks = 0;
+			}
+			/*
+			 * If there are ANY running threads in this process,
+			 * then don't count it as sleeping.
+			 * XXX: this is broken.
+			 */
+			if (awake) {
+				if (ts->ts_slptime > 1) {
+					/*
+					 * In an ideal world, this should not
+					 * happen, because whoever woke us
+					 * up from the long sleep should have
+					 * unwound the slptime and reset our
+					 * priority before we run at the stale
+					 * priority.  Should KASSERT at some
+					 * point when all the cases are fixed.
+					 */
+					updatepri(td);
+				}
+				ts->ts_slptime = 0;
+			} else
+				ts->ts_slptime++;
+			if (ts->ts_slptime > 1) {
+				thread_unlock(td);
+				continue;
+			}
+			td->td_estcpu = decay_cpu(loadfac, td->td_estcpu);
+			resetpriority(td);
+			resetpriority_thread(td);
+			thread_unlock(td);
+		}
+		PROC_UNLOCK(p);
+	}
+	sx_sunlock(&allproc_lock);
+}
+
+/*
+ * Main loop for a kthread that executes schedcpu once a second.
+ */
+static void
+schedcpu_thread(void)
+{
+
+	for (;;) {
+		schedcpu();
+		pause("-", hz);
+	}
+}
+
+/*
+ * Recalculate the priority of a process after it has slept for a while.
+ * For all load averages >= 1 and max td_estcpu of 255, sleeping for at
+ * least six times the loadfactor will decay td_estcpu to zero.
+ */
+static void
+updatepri(struct thread *td)
+{
+	struct td_sched *ts;
+	fixpt_t loadfac;
+	unsigned int newcpu;
+
+	ts = td->td_sched;
+	loadfac = loadfactor(averunnable.ldavg[0]);
+	if (ts->ts_slptime > 5 * loadfac)
+		td->td_estcpu = 0;
+	else {
+		newcpu = td->td_estcpu;
+		ts->ts_slptime--;	/* was incremented in schedcpu() */
+		while (newcpu && --ts->ts_slptime)
+			newcpu = decay_cpu(loadfac, newcpu);
+		td->td_estcpu = newcpu;
+	}
+}
+
+/*
+ * Compute the priority of a process when running in user mode.
+ * Arrange to reschedule if the resulting priority is better
+ * than that of the current process.
+ */
+static void
+resetpriority(struct thread *td)
+{
+	register unsigned int newpriority;
+
+	if (td->td_pri_class == PRI_TIMESHARE) {
+		newpriority = PUSER + td->td_estcpu / INVERSE_ESTCPU_WEIGHT +
+		    NICE_WEIGHT * (td->td_proc->p_nice - PRIO_MIN);
+		newpriority = min(max(newpriority, PRI_MIN_TIMESHARE),
+		    PRI_MAX_TIMESHARE);
+		sched_user_prio(td, newpriority);
+	}
+}
+
+/*
+ * Update the thread's priority when the associated process's user
+ * priority changes.
+ */
+static void
+resetpriority_thread(struct thread *td)
+{
+
+	/* Only change threads with a time sharing user priority. */
+	if (td->td_priority < PRI_MIN_TIMESHARE ||
+	    td->td_priority > PRI_MAX_TIMESHARE)
+		return;
+
+	/* XXX the whole needresched thing is broken, but not silly. */
+	maybe_resched(td);
+
+	sched_prio(td, td->td_user_pri);
+}
+
+/* ARGSUSED */
+static void
+sched_setup(void *dummy)
+{
+	setup_runqs();
+
+	if (sched_quantum == 0)
+		sched_quantum = SCHED_QUANTUM;
+	hogticks = 2 * sched_quantum;
+
+	/* Account for thread0. */
+	sched_load_add();
+}
+
+/* External interfaces start here */
+
+/*
+ * Very early in the boot some setup of scheduler-specific
+ * parts of proc0 and of some scheduler resources needs to be done.
+ * Called from:
+ *  proc0_init()
+ */
+void
+schedinit(void)
+{
+	/*
+	 * Set up the scheduler specific parts of proc0.
+	 */
+	proc0.p_sched = NULL; /* XXX */
+	thread0.td_sched = &td_sched0;
+	thread0.td_lock = &sched_lock;
+	mtx_init(&sched_lock, "sched lock", NULL, MTX_SPIN | MTX_RECURSE);
+}
+
+int
+sched_runnable(void)
+{
+#ifdef SMP
+	return runq_check(&runq) + runq_check(&runq_pcpu[PCPU_GET(cpuid)]);
+#else
+	return runq_check(&runq);
+#endif
+}
+
+int
+sched_rr_interval(void)
+{
+	if (sched_quantum == 0)
+		sched_quantum = SCHED_QUANTUM;
+	return (sched_quantum);
+}
+
+/*
+ * We adjust the priority of the current process.  The priority of
+ * a process gets worse as it accumulates CPU time.  The cpu usage
+ * estimator (td_estcpu) is increased here.  resetpriority() will
+ * compute a different priority each time td_estcpu increases by
+ * INVERSE_ESTCPU_WEIGHT
+ * (until MAXPRI is reached).  The cpu usage estimator ramps up
+ * quite quickly when the process is running (linearly), and decays
+ * away exponentially, at a rate which is proportionally slower when
+ * the system is busy.  The basic principle is that the system will
+ * 90% forget that the process used a lot of CPU time in 5 * loadav
+ * seconds.  This causes the system to favor processes which haven't
+ * run much recently, and to round-robin among other processes.
+ */
+void
+sched_clock(struct thread *td)
+{
+	struct td_sched *ts;
+
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+	ts = td->td_sched;
+
+	ts->ts_cpticks++;
+	td->td_estcpu = ESTCPULIM(td->td_estcpu + 1);
+	if ((td->td_estcpu % INVERSE_ESTCPU_WEIGHT) == 0) {
+		resetpriority(td);
+		resetpriority_thread(td);
+	}
+
+	/*
+	 * Force a context switch if the current thread has used up a full
+	 * quantum (default quantum is 100ms).
+	 */
+	if (!TD_IS_IDLETHREAD(td) &&
+	    ticks - PCPU_GET(switchticks) >= sched_quantum)
+		td->td_flags |= TDF_NEEDRESCHED;
+}
+
+/*
+ * Charge child's scheduling CPU usage to parent.
+ */
+void
+sched_exit(struct proc *p, struct thread *td)
+{
+
+	KTR_STATE1(KTR_SCHED, "thread", sched_tdname(td), "proc exit",
+	    "prio:%d", td->td_priority);
+
+	PROC_LOCK_ASSERT(p, MA_OWNED);
+	sched_exit_thread(FIRST_THREAD_IN_PROC(p), td);
+}
+
+void
+sched_exit_thread(struct thread *td, struct thread *child)
+{
+
+	KTR_STATE1(KTR_SCHED, "thread", sched_tdname(child), "exit",
+	    "prio:%d", child->td_priority);
+	thread_lock(td);
+	td->td_estcpu = ESTCPULIM(td->td_estcpu + child->td_estcpu);
+	thread_unlock(td);
+	mtx_lock_spin(&sched_lock);
+	if ((child->td_proc->p_flag & P_NOLOAD) == 0)
+		sched_load_rem();
+	mtx_unlock_spin(&sched_lock);
+}
+
+void
+sched_fork(struct thread *td, struct thread *childtd)
+{
+	sched_fork_thread(td, childtd);
+}
+
+void
+sched_fork_thread(struct thread *td, struct thread *childtd)
+{
+	struct td_sched *ts;
+
+	childtd->td_estcpu = td->td_estcpu;
+	childtd->td_lock = &sched_lock;
+	childtd->td_cpuset = cpuset_ref(td->td_cpuset);
+	ts = childtd->td_sched;
+	bzero(ts, sizeof(*ts));
+	ts->ts_flags |= (td->td_sched->ts_flags & TSF_AFFINITY);
+}
+
+void
+sched_nice(struct proc *p, int nice)
+{
+	struct thread *td;
+
+	PROC_LOCK_ASSERT(p, MA_OWNED);
+	p->p_nice = nice;
+	FOREACH_THREAD_IN_PROC(p, td) {
+		thread_lock(td);
+		resetpriority(td);
+		resetpriority_thread(td);
+		thread_unlock(td);
+	}
+}
+
+void
+sched_class(struct thread *td, int class)
+{
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+	td->td_pri_class = class;
+}
+
+/*
+ * Adjust the priority of a thread.
+ */
+static void
+sched_priority(struct thread *td, u_char prio)
+{
+
+
+	KTR_POINT3(KTR_SCHED, "thread", sched_tdname(td), "priority change",
+	    "prio:%d", td->td_priority, "new prio:%d", prio, KTR_ATTR_LINKED,
+	    sched_tdname(curthread));
+	if (td != curthread && prio > td->td_priority) {
+		KTR_POINT3(KTR_SCHED, "thread", sched_tdname(curthread),
+		    "lend prio", "prio:%d", td->td_priority, "new prio:%d",
+		    prio, KTR_ATTR_LINKED, sched_tdname(td));
+	}
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+	if (td->td_priority == prio)
+		return;
+	td->td_priority = prio;
+	if (TD_ON_RUNQ(td) && td->td_rqindex != (prio / RQ_PPQ)) {
+		sched_rem(td);
+		sched_add(td, SRQ_BORING);
+	}
+}
+
+/*
+ * Update a thread's priority when it is lent another thread's
+ * priority.
+ */
+void
+sched_lend_prio(struct thread *td, u_char prio)
+{
+
+	td->td_flags |= TDF_BORROWING;
+	sched_priority(td, prio);
+}
+
+/*
+ * Restore a thread's priority when priority propagation is
+ * over.  The prio argument is the minimum priority the thread
+ * needs to have to satisfy other possible priority lending
+ * requests.  If the thread's regulary priority is less
+ * important than prio the thread will keep a priority boost
+ * of prio.
+ */
+void
+sched_unlend_prio(struct thread *td, u_char prio)
+{
+	u_char base_pri;
+
+	if (td->td_base_pri >= PRI_MIN_TIMESHARE &&
+	    td->td_base_pri <= PRI_MAX_TIMESHARE)
+		base_pri = td->td_user_pri;
+	else
+		base_pri = td->td_base_pri;
+	if (prio >= base_pri) {
+		td->td_flags &= ~TDF_BORROWING;
+		sched_prio(td, base_pri);
+	} else
+		sched_lend_prio(td, prio);
+}
+
+void
+sched_prio(struct thread *td, u_char prio)
+{
+	u_char oldprio;
+
+	/* First, update the base priority. */
+	td->td_base_pri = prio;
+
+	/*
+	 * If the thread is borrowing another thread's priority, don't ever
+	 * lower the priority.
+	 */
+	if (td->td_flags & TDF_BORROWING && td->td_priority < prio)
+		return;
+
+	/* Change the real priority. */
+	oldprio = td->td_priority;
+	sched_priority(td, prio);
+
+	/*
+	 * If the thread is on a turnstile, then let the turnstile update
+	 * its state.
+	 */
+	if (TD_ON_LOCK(td) && oldprio != prio)
+		turnstile_adjust(td, oldprio);
+}
+
+void
+sched_user_prio(struct thread *td, u_char prio)
+{
+	u_char oldprio;
+
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+	td->td_base_user_pri = prio;
+	if (td->td_flags & TDF_UBORROWING && td->td_user_pri <= prio)
+		return;
+	oldprio = td->td_user_pri;
+	td->td_user_pri = prio;
+}
+
+void
+sched_lend_user_prio(struct thread *td, u_char prio)
+{
+	u_char oldprio;
+
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+	td->td_flags |= TDF_UBORROWING;
+	oldprio = td->td_user_pri;
+	td->td_user_pri = prio;
+}
+
+void
+sched_unlend_user_prio(struct thread *td, u_char prio)
+{
+	u_char base_pri;
+
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+	base_pri = td->td_base_user_pri;
+	if (prio >= base_pri) {
+		td->td_flags &= ~TDF_UBORROWING;
+		sched_user_prio(td, base_pri);
+	} else {
+		sched_lend_user_prio(td, prio);
+	}
+}
+
+void
+sched_sleep(struct thread *td, int pri)
+{
+
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+	td->td_slptick = ticks;
+	td->td_sched->ts_slptime = 0;
+	if (pri)
+		sched_prio(td, pri);
+	if (TD_IS_SUSPENDED(td) || pri >= PSOCK)
+		td->td_flags |= TDF_CANSWAP;
+}
+
+void
+sched_switch(struct thread *td, struct thread *newtd, int flags)
+{
+	struct mtx *tmtx;
+	struct td_sched *ts;
+	struct proc *p;
+
+	tmtx = NULL;
+	ts = td->td_sched;
+	p = td->td_proc;
+
+	THREAD_LOCK_ASSERT(td, MA_OWNED);
+
+	/*
+	 * Switch to the sched lock to fix things up and pick
+	 * a new thread.
+	 * Block the td_lock in order to avoid breaking the critical path.
+	 */
+	if (td->td_lock != &sched_lock) {
+		mtx_lock_spin(&sched_lock);
+		tmtx = thread_lock_block(td);
+	}
+
+	if ((p->p_flag & P_NOLOAD) == 0)
+		sched_load_rem();
+
+	if (newtd) {
+		MPASS(newtd->td_lock == &sched_lock);
+		newtd->td_flags |= (td->td_flags & TDF_NEEDRESCHED);
+	}
+
+	td->td_lastcpu = td->td_oncpu;
+	td->td_flags &= ~TDF_NEEDRESCHED;
+	td->td_owepreempt = 0;
+	td->td_oncpu = NOCPU;
+
+	/*
+	 * At the last moment, if this thread is still marked RUNNING,
+	 * then put it back on the run queue as it has not been suspended
+	 * or stopped or any thing else similar.  We never put the idle
+	 * threads on the run queue, however.
+	 */
+	if (td->td_flags & TDF_IDLETD) {
+		TD_SET_CAN_RUN(td);
+#ifdef SMP
+		idle_cpus_mask &= ~PCPU_GET(cpumask);
+#endif
+	} else {
+		if (TD_IS_RUNNING(td)) {
+			/* Put us back on the run queue. */
+			sched_add(td, (flags & SW_PREEMPT) ?
+			    SRQ_OURSELF|SRQ_YIELDING|SRQ_PREEMPTED :
+			    SRQ_OURSELF|SRQ_YIELDING);
+		}
+	}
+	if (newtd) {
+		/*
+		 * The thread we are about to run needs to be counted
+		 * as if it had been added to the run queue and selected.
+		 * It came from:
+		 *  * A preemption
+		 *  * An upcall
+		 *  * A followon
+		 */
+		KASSERT((newtd->td_inhibitors == 0),
+		    ("trying to run inhibited thread"));
+		newtd->td_flags |= TDF_DIDRUN;
+		TD_SET_RUNNING(newtd);
+		if ((newtd->td_proc->p_flag & P_NOLOAD) == 0)
+			sched_load_add();
+	} else {
+		newtd = choosethread();
+		MPASS(newtd->td_lock == &sched_lock);
+	}
+
+	if (td != newtd) {
+#ifdef HWPMC_HOOKS
+		if (PMC_PROC_IS_USING_PMCS(td->td_proc))

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

From owner-svn-soc-all@FreeBSD.ORG Wed May 25 08:28:24 2011
Date: Wed, 25 May 2011 08:28:22 +0000
From: rudot@FreeBSD.org
To: svn-soc-all@FreeBSD.org
Message-Id: <20110525082822.1B89A1065670@hub.freebsd.org>
Subject: socsvn commit: r222364 - soc2011/rudot/kern

Author: rudot
Date: Wed May 25 08:28:21 2011
New Revision: 222364

URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222364

Log:
  Single run-queue per system (even if it has more CPUs)

Modified:
  soc2011/rudot/kern/sched_4bsd.c

Modified: soc2011/rudot/kern/sched_4bsd.c
==============================================================================
--- soc2011/rudot/kern/sched_4bsd.c	Wed May 25 07:34:49 2011	(r222363)
+++ soc2011/rudot/kern/sched_4bsd.c	Wed May 25 08:28:21 2011	(r222364)
@@ -129,11 +129,6 @@
 static void	updatepri(struct thread *td);
 static void	resetpriority(struct thread *td);
 static void	resetpriority_thread(struct thread *td);
-#ifdef SMP
-static int	sched_pickcpu(struct thread *td);
-static int	forward_wakeup(int cpunum);
-static void	kick_other_cpu(int pri, int cpuid);
-#endif
 
 static struct kproc_desc sched_kp = {
 	"schedcpu",
@@ -149,24 +144,9 @@
  */
 static struct runq runq;
 
-#ifdef SMP
-/*
- * Per-CPU run queues
- */
-static struct runq runq_pcpu[MAXCPU];
-long runq_length[MAXCPU];
-#endif
-
 static void
 setup_runqs(void)
 {
-#ifdef SMP
-	int i;
-
-	for (i = 0; i < MAXCPU; ++i)
-		runq_init(&runq_pcpu[i]);
-#endif
-
 	runq_init(&runq);
 }
 
@@ -652,11 +632,7 @@
 int
 sched_runnable(void)
 {
-#ifdef SMP
-	return runq_check(&runq) + runq_check(&runq_pcpu[PCPU_GET(cpuid)]);
-#else
 	return runq_check(&runq);
-#endif
 }
 
 int
@@ -1060,247 +1036,8 @@
 	sched_add(td, SRQ_BORING);
 }
 
-#ifdef SMP
-static int
-forward_wakeup(int cpunum)
-{
-	struct pcpu *pc;
-	cpumask_t dontuse, id, map, map2, map3, me;
-
-	mtx_assert(&sched_lock, MA_OWNED);
-
-	CTR0(KTR_RUNQ, "forward_wakeup()");
-
-	if ((!forward_wakeup_enabled) ||
-	     (forward_wakeup_use_mask == 0 && forward_wakeup_use_loop == 0))
-		return (0);
-	if (!smp_started || cold || panicstr)
-		return (0);
-
-	forward_wakeups_requested++;
-
-	/*
-	 * Check the idle mask we received against what we calculated
-	 * before in the old version.
-	 */
-	me = PCPU_GET(cpumask);
-
-	/* Don't bother if we should be doing it ourself. */
-	if ((me & idle_cpus_mask) && (cpunum == NOCPU || me == (1 << cpunum)))
-		return (0);
-
-	dontuse = me | stopped_cpus | hlt_cpus_mask;
-	map3 = 0;
-	if (forward_wakeup_use_loop) {
-		SLIST_FOREACH(pc, &cpuhead, pc_allcpu) {
-			id = pc->pc_cpumask;
-			if ((id & dontuse) == 0 &&
-			    pc->pc_curthread == pc->pc_idlethread) {
-				map3 |= id;
-			}
-		}
-	}
-
-	if (forward_wakeup_use_mask) {
-		map = 0;
-		map = idle_cpus_mask & ~dontuse;
-
-		/* If they are both on, compare and use loop if different. */
-		if (forward_wakeup_use_loop) {
-			if (map != map3) {
-				printf("map (%02X) != map3 (%02X)\n", map,
-				    map3);
-				map = map3;
-			}
-		}
-	} else {
-		map = map3;
-	}
-
-	/* If we only allow a specific CPU, then mask off all the others. */
-	if (cpunum != NOCPU) {
-		KASSERT((cpunum <= mp_maxcpus),("forward_wakeup: bad cpunum."));
-		map &= (1 << cpunum);
-	} else {
-		/* Try choose an idle die. */
-		if (forward_wakeup_use_htt) {
-			map2 = (map & (map >> 1)) & 0x5555;
-			if (map2) {
-				map = map2;
-			}
-		}
-
-		/* Set only one bit. */
-		if (forward_wakeup_use_single) {
-			map = map & ((~map) + 1);
-		}
-	}
-	if (map) {
-		forward_wakeups_delivered++;
-		ipi_selected(map, IPI_AST);
-		return (1);
-	}
-	if (cpunum == NOCPU)
-		printf("forward_wakeup: Idle processor not found\n");
-	return (0);
-}
-
-static void
-kick_other_cpu(int pri, int cpuid)
-{
-	struct pcpu *pcpu;
-	int cpri;
-
-	pcpu = pcpu_find(cpuid);
-	if (idle_cpus_mask & pcpu->pc_cpumask) {
-		forward_wakeups_delivered++;
-		ipi_cpu(cpuid, IPI_AST);
-		return;
-	}
-
-	cpri = pcpu->pc_curthread->td_priority;
-	if (pri >= cpri)
-		return;
-
-#if defined(IPI_PREEMPTION) && defined(PREEMPTION)
-#if !defined(FULL_PREEMPTION)
-	if (pri <= PRI_MAX_ITHD)
-#endif /* !
FULL_PREEMPTION */ - { - ipi_cpu(cpuid, IPI_PREEMPT); - return; - } -#endif /* defined(IPI_PREEMPTION) && defined(PREEMPTION) */ - - pcpu->pc_curthread->td_flags |= TDF_NEEDRESCHED; - ipi_cpu(cpuid, IPI_AST); - return; -} -#endif /* SMP */ - -#ifdef SMP -static int -sched_pickcpu(struct thread *td) -{ - int best, cpu; - - mtx_assert(&sched_lock, MA_OWNED); - - if (THREAD_CAN_SCHED(td, td->td_lastcpu)) - best = td->td_lastcpu; - else - best = NOCPU; - for (cpu = 0; cpu <= mp_maxid; cpu++) { - if (CPU_ABSENT(cpu)) - continue; - if (!THREAD_CAN_SCHED(td, cpu)) - continue; - - if (best == NOCPU) - best = cpu; - else if (runq_length[cpu] < runq_length[best]) - best = cpu; - } - KASSERT(best != NOCPU, ("no valid CPUs")); - - return (best); -} -#endif - void sched_add(struct thread *td, int flags) -#ifdef SMP -{ - struct td_sched *ts; - int forwarded = 0; - int cpu; - int single_cpu = 0; - - ts = td->td_sched; - THREAD_LOCK_ASSERT(td, MA_OWNED); - KASSERT((td->td_inhibitors == 0), - ("sched_add: trying to run inhibited thread")); - KASSERT((TD_CAN_RUN(td) || TD_IS_RUNNING(td)), - ("sched_add: bad thread state")); - KASSERT(td->td_flags & TDF_INMEM, - ("sched_add: thread swapped out")); - - KTR_STATE2(KTR_SCHED, "thread", sched_tdname(td), "runq add", - "prio:%d", td->td_priority, KTR_ATTR_LINKED, - sched_tdname(curthread)); - KTR_POINT1(KTR_SCHED, "thread", sched_tdname(curthread), "wokeup", - KTR_ATTR_LINKED, sched_tdname(td)); - - - /* - * Now that the thread is moving to the run-queue, set the lock - * to the scheduler's lock. - */ - if (td->td_lock != &sched_lock) { - mtx_lock_spin(&sched_lock); - thread_lock_set(td, &sched_lock); - } - TD_SET_RUNQ(td); - - if (td->td_pinned != 0) { - cpu = td->td_lastcpu; - ts->ts_runq = &runq_pcpu[cpu]; - single_cpu = 1; - CTR3(KTR_RUNQ, - "sched_add: Put td_sched:%p(td:%p) on cpu%d runq", ts, td, - cpu); - } else if (td->td_flags & TDF_BOUND) { - /* Find CPU from bound runq. 
*/ - KASSERT(SKE_RUNQ_PCPU(ts), - ("sched_add: bound td_sched not on cpu runq")); - cpu = ts->ts_runq - &runq_pcpu[0]; - single_cpu = 1; - CTR3(KTR_RUNQ, - "sched_add: Put td_sched:%p(td:%p) on cpu%d runq", ts, td, - cpu); - } else if (ts->ts_flags & TSF_AFFINITY) { - /* Find a valid CPU for our cpuset */ - cpu = sched_pickcpu(td); - ts->ts_runq = &runq_pcpu[cpu]; - single_cpu = 1; - CTR3(KTR_RUNQ, - "sched_add: Put td_sched:%p(td:%p) on cpu%d runq", ts, td, - cpu); - } else { - CTR2(KTR_RUNQ, - "sched_add: adding td_sched:%p (td:%p) to gbl runq", ts, - td); - cpu = NOCPU; - ts->ts_runq = &runq; - } - - if (single_cpu && (cpu != PCPU_GET(cpuid))) { - kick_other_cpu(td->td_priority, cpu); - } else { - if (!single_cpu) { - cpumask_t me = PCPU_GET(cpumask); - cpumask_t idle = idle_cpus_mask & me; - - if (!idle && ((flags & SRQ_INTR) == 0) && - (idle_cpus_mask & ~(hlt_cpus_mask | me))) - forwarded = forward_wakeup(cpu); - } - - if (!forwarded) { - if ((flags & SRQ_YIELDING) == 0 && maybe_preempt(td)) - return; - else - maybe_resched(td); - } - } - - if ((td->td_proc->p_flag & P_NOLOAD) == 0) - sched_load_add(); - runq_add(ts->ts_runq, td, flags); - if (cpu != NOCPU) - runq_length[cpu]++; -} -#else /* SMP */ { struct td_sched *ts; @@ -1348,7 +1085,6 @@ runq_add(ts->ts_runq, td, flags); maybe_resched(td); } -#endif /* SMP */ void sched_rem(struct thread *td) @@ -1367,10 +1103,6 @@ if ((td->td_proc->p_flag & P_NOLOAD) == 0) sched_load_rem(); -#ifdef SMP - if (ts->ts_runq != &runq) - runq_length[ts->ts_runq - runq_pcpu]--; -#endif runq_remove(ts->ts_runq, td); TD_SET_CAN_RUN(td); } @@ -1386,34 +1118,11 @@ struct runq *rq; mtx_assert(&sched_lock, MA_OWNED); -#ifdef SMP - struct thread *tdcpu; rq = &runq; - td = runq_choose_fuzz(&runq, runq_fuzz); - tdcpu = runq_choose(&runq_pcpu[PCPU_GET(cpuid)]); - - if (td == NULL || - (tdcpu != NULL && - tdcpu->td_priority < td->td_priority)) { - CTR2(KTR_RUNQ, "choosing td %p from pcpu runq %d", tdcpu, - PCPU_GET(cpuid)); - td = tdcpu; 
- rq = &runq_pcpu[PCPU_GET(cpuid)]; - } else { - CTR1(KTR_RUNQ, "choosing td_sched %p from main runq", td); - } - -#else - rq = &runq; td = runq_choose(&runq); -#endif if (td) { -#ifdef SMP - if (td == tdcpu) - runq_length[PCPU_GET(cpuid)]--; -#endif runq_remove(rq, td); td->td_flags |= TDF_DIDRUN; @@ -1469,7 +1178,6 @@ td->td_flags |= TDF_BOUND; #ifdef SMP - ts->ts_runq = &runq_pcpu[cpu]; if (PCPU_GET(cpuid) == cpu) return; @@ -1647,19 +1355,6 @@ return; switch (td->td_state) { - case TDS_RUNQ: - /* - * If we are on a per-CPU runqueue that is in the set, - * then nothing needs to be done. - */ - if (ts->ts_runq != &runq && - THREAD_CAN_SCHED(td, ts->ts_runq - runq_pcpu)) - return; - - /* Put this thread on a valid per-CPU runqueue. */ - sched_rem(td); - sched_add(td, SRQ_BORING); - break; case TDS_RUNNING: /* * See if our current CPU is in the set. If not, force a From owner-svn-soc-all@FreeBSD.ORG Wed May 25 15:57:57 2011 Return-Path: Delivered-To: svn-soc-all@FreeBSD.org Received: from socsvn.FreeBSD.org (unknown [IPv6:2001:4f8:fff6::2f]) by hub.freebsd.org (Postfix) with SMTP id 68C88106566C for ; Wed, 25 May 2011 15:57:56 +0000 (UTC) (envelope-from xxp@FreeBSD.org) Received: by socsvn.FreeBSD.org (sSMTP sendmail emulation); Wed, 25 May 2011 15:57:56 +0000 Date: Wed, 25 May 2011 15:57:56 +0000 From: xxp@FreeBSD.org To: svn-soc-all@FreeBSD.org MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Message-Id: <20110525155756.68C88106566C@hub.freebsd.org> Cc: Subject: socsvn commit: r222374 - soc2011/xxp X-BeenThere: svn-soc-all@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: SVN commit messages for the entire Summer of Code repository List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 25 May 2011 15:57:57 -0000 Author: xxp Date: Wed May 25 15:57:56 2011 New Revision: 222374 URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222374 Log: XXP's working directory 
Added: soc2011/xxp/

socsvn commit: r222375 - soc2011/xxp/dwarf-libc

Author: xxp
Date: Wed May 25 15:59:29 2011
New Revision: 222375
URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222375

Log:
  Libc branch with DWARF

Added:
  soc2011/xxp/dwarf-libc/   (props changed)
     - copied from r222374, mirror/FreeBSD/release/8.2.0/lib/libc/

socsvn commit: r222377 - soc2011/rudot/kern
X-Mailman-Version: 2.1.5 Precedence: list List-Id: SVN commit messages for the entire Summer of Code repository List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 25 May 2011 19:54:03 -0000 Author: rudot Date: Wed May 25 19:54:00 2011 New Revision: 222377 URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222377 Log: Removed periodical recalculating of threads' priorities Modified: soc2011/rudot/kern/sched_4bsd.c Modified: soc2011/rudot/kern/sched_4bsd.c ============================================================================== --- soc2011/rudot/kern/sched_4bsd.c Wed May 25 18:04:11 2011 (r222376) +++ soc2011/rudot/kern/sched_4bsd.c Wed May 25 19:54:00 2011 (r222377) @@ -121,22 +121,9 @@ #define SCHED_QUANTUM (hz / 10) /* Default sched quantum */ static void setup_runqs(void); -static void schedcpu(void); -static void schedcpu_thread(void); static void sched_priority(struct thread *td, u_char prio); static void sched_setup(void *dummy); -static void maybe_resched(struct thread *td); -static void updatepri(struct thread *td); -static void resetpriority(struct thread *td); -static void resetpriority_thread(struct thread *td); - -static struct kproc_desc sched_kp = { - "schedcpu", - schedcpu_thread, - NULL -}; -SYSINIT(schedcpu, SI_SUB_RUN_SCHEDULER, SI_ORDER_FIRST, kproc_start, - &sched_kp); + SYSINIT(sched_setup, SI_SUB_RUN_QUEUE, SI_ORDER_FIRST, sched_setup, NULL); /* @@ -240,18 +227,6 @@ sched_tdcnt--; KTR_COUNTER0(KTR_SCHED, "load", "global load", sched_tdcnt); } -/* - * Arrange to reschedule if necessary, taking the priorities and - * schedulers into account. 
- */ -static void -maybe_resched(struct thread *td) -{ - - THREAD_LOCK_ASSERT(td, MA_OWNED); - if (td->td_priority < curthread->td_priority) - curthread->td_flags |= TDF_NEEDRESCHED; -} /* * This function is called when a thread is about to be put on run queue @@ -419,181 +394,6 @@ */ #define CCPU_SHIFT 11 -/* - * Recompute process priorities, every hz ticks. - * MP-safe, called without the Giant mutex. - */ -/* ARGSUSED */ -static void -schedcpu(void) -{ - register fixpt_t loadfac = loadfactor(averunnable.ldavg[0]); - struct thread *td; - struct proc *p; - struct td_sched *ts; - int awake, realstathz; - - realstathz = stathz ? stathz : hz; - sx_slock(&allproc_lock); - FOREACH_PROC_IN_SYSTEM(p) { - PROC_LOCK(p); - FOREACH_THREAD_IN_PROC(p, td) { - awake = 0; - thread_lock(td); - ts = td->td_sched; - /* - * Increment sleep time (if sleeping). We - * ignore overflow, as above. - */ - /* - * The td_sched slptimes are not touched in wakeup - * because the thread may not HAVE everything in - * memory? XXX I think this is out of date. - */ - if (TD_ON_RUNQ(td)) { - awake = 1; - td->td_flags &= ~TDF_DIDRUN; - } else if (TD_IS_RUNNING(td)) { - awake = 1; - /* Do not clear TDF_DIDRUN */ - } else if (td->td_flags & TDF_DIDRUN) { - awake = 1; - td->td_flags &= ~TDF_DIDRUN; - } - - /* - * ts_pctcpu is only for ps and ttyinfo(). - */ - ts->ts_pctcpu = (ts->ts_pctcpu * ccpu) >> FSHIFT; - /* - * If the td_sched has been idle the entire second, - * stop recalculating its priority until - * it wakes up. - */ - if (ts->ts_cpticks != 0) { -#if (FSHIFT >= CCPU_SHIFT) - ts->ts_pctcpu += (realstathz == 100) - ? ((fixpt_t) ts->ts_cpticks) << - (FSHIFT - CCPU_SHIFT) : - 100 * (((fixpt_t) ts->ts_cpticks) - << (FSHIFT - CCPU_SHIFT)) / realstathz; -#else - ts->ts_pctcpu += ((FSCALE - ccpu) * - (ts->ts_cpticks * - FSCALE / realstathz)) >> FSHIFT; -#endif - ts->ts_cpticks = 0; - } - /* - * If there are ANY running threads in this process, - * then don't count it as sleeping. 
- * XXX: this is broken. - */ - if (awake) { - if (ts->ts_slptime > 1) { - /* - * In an ideal world, this should not - * happen, because whoever woke us - * up from the long sleep should have - * unwound the slptime and reset our - * priority before we run at the stale - * priority. Should KASSERT at some - * point when all the cases are fixed. - */ - updatepri(td); - } - ts->ts_slptime = 0; - } else - ts->ts_slptime++; - if (ts->ts_slptime > 1) { - thread_unlock(td); - continue; - } - td->td_estcpu = decay_cpu(loadfac, td->td_estcpu); - resetpriority(td); - resetpriority_thread(td); - thread_unlock(td); - } - PROC_UNLOCK(p); - } - sx_sunlock(&allproc_lock); -} - -/* - * Main loop for a kthread that executes schedcpu once a second. - */ -static void -schedcpu_thread(void) -{ - - for (;;) { - schedcpu(); - pause("-", hz); - } -} - -/* - * Recalculate the priority of a process after it has slept for a while. - * For all load averages >= 1 and max td_estcpu of 255, sleeping for at - * least six times the loadfactor will decay td_estcpu to zero. - */ -static void -updatepri(struct thread *td) -{ - struct td_sched *ts; - fixpt_t loadfac; - unsigned int newcpu; - - ts = td->td_sched; - loadfac = loadfactor(averunnable.ldavg[0]); - if (ts->ts_slptime > 5 * loadfac) - td->td_estcpu = 0; - else { - newcpu = td->td_estcpu; - ts->ts_slptime--; /* was incremented in schedcpu() */ - while (newcpu && --ts->ts_slptime) - newcpu = decay_cpu(loadfac, newcpu); - td->td_estcpu = newcpu; - } -} - -/* - * Compute the priority of a process when running in user mode. - * Arrange to reschedule if the resulting priority is better - * than that of the current process. 
- */ -static void -resetpriority(struct thread *td) -{ - register unsigned int newpriority; - - if (td->td_pri_class == PRI_TIMESHARE) { - newpriority = PUSER + td->td_estcpu / INVERSE_ESTCPU_WEIGHT + - NICE_WEIGHT * (td->td_proc->p_nice - PRIO_MIN); - newpriority = min(max(newpriority, PRI_MIN_TIMESHARE), - PRI_MAX_TIMESHARE); - sched_user_prio(td, newpriority); - } -} - -/* - * Update the thread's priority when the associated process's user - * priority changes. - */ -static void -resetpriority_thread(struct thread *td) -{ - - /* Only change threads with a time sharing user priority. */ - if (td->td_priority < PRI_MIN_TIMESHARE || - td->td_priority > PRI_MAX_TIMESHARE) - return; - - /* XXX the whole needresched thing is broken, but not silly. */ - maybe_resched(td); - - sched_prio(td, td->td_user_pri); -} /* ARGSUSED */ static void @@ -667,10 +467,6 @@ ts->ts_cpticks++; td->td_estcpu = ESTCPULIM(td->td_estcpu + 1); - if ((td->td_estcpu % INVERSE_ESTCPU_WEIGHT) == 0) { - resetpriority(td); - resetpriority_thread(td); - } /* * Force a context switch if the current thread has used up a full @@ -736,12 +532,6 @@ PROC_LOCK_ASSERT(p, MA_OWNED); p->p_nice = nice; - FOREACH_THREAD_IN_PROC(p, td) { - thread_lock(td); - resetpriority(td); - resetpriority_thread(td); - thread_unlock(td); - } } void @@ -844,13 +634,10 @@ void sched_user_prio(struct thread *td, u_char prio) { - u_char oldprio; - THREAD_LOCK_ASSERT(td, MA_OWNED); td->td_base_user_pri = prio; if (td->td_flags & TDF_UBORROWING && td->td_user_pri <= prio) return; - oldprio = td->td_user_pri; td->td_user_pri = prio; } @@ -1027,10 +814,6 @@ THREAD_LOCK_ASSERT(td, MA_OWNED); ts = td->td_sched; td->td_flags &= ~TDF_CANSWAP; - if (ts->ts_slptime > 1) { - updatepri(td); - resetpriority(td); - } td->td_slptick = 0; ts->ts_slptime = 0; sched_add(td, SRQ_BORING); @@ -1067,23 +850,9 @@ CTR2(KTR_RUNQ, "sched_add: adding td_sched:%p (td:%p) to runq", ts, td); ts->ts_runq = &runq; - /* - * If we are yielding (on the way out 
anyhow) or the thread - * being saved is US, then don't try be smart about preemption - * or kicking off another CPU as it won't help and may hinder. - * In the YIEDLING case, we are about to run whoever is being - * put in the queue anyhow, and in the OURSELF case, we are - * puting ourself on the run queue which also only happens - * when we are about to yield. - */ - if ((flags & SRQ_YIELDING) == 0) { - if (maybe_preempt(td)) - return; - } if ((td->td_proc->p_flag & P_NOLOAD) == 0) sched_load_add(); runq_add(ts->ts_runq, td, flags); - maybe_resched(td); } void From owner-svn-soc-all@FreeBSD.ORG Thu May 26 02:34:42 2011 Return-Path: Delivered-To: svn-soc-all@FreeBSD.org Received: from socsvn.FreeBSD.org (unknown [IPv6:2001:4f8:fff6::2f]) by hub.freebsd.org (Postfix) with SMTP id AF955106564A for ; Thu, 26 May 2011 02:34:41 +0000 (UTC) (envelope-from xxp@FreeBSD.org) Received: by socsvn.FreeBSD.org (sSMTP sendmail emulation); Thu, 26 May 2011 02:34:41 +0000 Date: Thu, 26 May 2011 02:34:41 +0000 From: xxp@FreeBSD.org To: svn-soc-all@FreeBSD.org MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Message-Id: <20110526023441.AF955106564A@hub.freebsd.org> Cc: Subject: socsvn commit: r222388 - soc2011/xxp/sys X-BeenThere: svn-soc-all@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: SVN commit messages for the entire Summer of Code repository List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 26 May 2011 02:34:42 -0000 Author: xxp Date: Thu May 26 02:34:41 2011 New Revision: 222388 URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222388 Log: copy of sys Added: soc2011/xxp/sys/ From owner-svn-soc-all@FreeBSD.ORG Thu May 26 02:35:51 2011 Return-Path: Delivered-To: svn-soc-all@FreeBSD.org Received: from socsvn.FreeBSD.org (unknown [IPv6:2001:4f8:fff6::2f]) by hub.freebsd.org (Postfix) with SMTP id F2D8D106564A for ; Thu, 26 May 2011 02:35:49 +0000 (UTC) (envelope-from 
xxp@FreeBSD.org)

socsvn commit: r222389 - soc2011/xxp/sys/i386

Author: xxp
Date: Thu May 26 02:35:49 2011
New Revision: 222389
URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222389

Log:
  making i386 dir

Added:
  soc2011/xxp/sys/i386/

socsvn commit: r222390 - soc2011/xxp/sys

Author: xxp
Date: Thu May 26 02:45:50 2011
New Revision: 222390
URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222390

Log:
  remove sys

Deleted:
  soc2011/xxp/sys/

socsvn commit: r222391 - soc2011/xxp/dwarf-libc

Author: xxp
Date: Thu May 26 02:46:27 2011
New Revision: 222391
URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222391

Log:
  remove dwarf-libc

Deleted:
  soc2011/xxp/dwarf-libc/

socsvn commit: r222392 - soc2011/xxp/xxp-head

Author: xxp
Date: Thu May 26 02:50:06 2011
New Revision: 222392
URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222392

Log:
  Creating a branch for DWARF

Added:
  soc2011/xxp/xxp-head/   (props changed)
     - copied from r222391, mirror/FreeBSD/head/

socsvn commit: r222402 - soc2011/rudot/kern

Author: rudot
Date: Thu May 26 13:31:05 2011
New Revision: 222402
URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222402

Log:
  For the sake of simplicity, I removed some optimization. I will deal with optimization later.
Added: soc2011/rudot/kern/proc.h Modified: soc2011/rudot/kern/sched_4bsd.c Added: soc2011/rudot/kern/proc.h ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ soc2011/rudot/kern/proc.h Thu May 26 13:31:05 2011 (r222402) @@ -0,0 +1,896 @@ +/*- + * Copyright (c) 1986, 1989, 1991, 1993 + * The Regents of the University of California. All rights reserved. + * (c) UNIX System Laboratories, Inc. + * All or some portions of this file are derived from material licensed + * to the University of California by American Telephone and Telegraph + * Co. or Unix System Laboratories, Inc. and are reproduced herein with + * the permission of UNIX System Laboratories, Inc. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 4. Neither the name of the University nor the names of its contributors + * may be used to endorse or promote products derived from this software + * without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + * + * @(#)proc.h 8.15 (Berkeley) 5/19/95 + * $FreeBSD: src/sys/sys/proc.h,v 1.535.2.14.2.1 2010/12/21 17:09:25 kensmith Exp $ + */ + +#ifndef _SYS_PROC_H_ +#define _SYS_PROC_H_ + +#include /* For struct callout. */ +#include /* For struct klist. */ +#include +#ifndef _KERNEL +#include +#endif +#include +#include +#include +#include +#include +#include +#include /* XXX. */ +#include +#include +#include +#include +#include +#ifndef _KERNEL +#include /* For structs itimerval, timeval. */ +#else +#include +#endif +#include +#include +#include /* Machine-dependent proc substruct. */ + +/* + * One structure allocated per session. + * + * List of locks + * (m) locked by s_mtx mtx + * (e) locked by proctree_lock sx + * (c) const until freeing + */ +struct session { + u_int s_count; /* Ref cnt; pgrps in session - atomic. */ + struct proc *s_leader; /* (m + e) Session leader. */ + struct vnode *s_ttyvp; /* (m) Vnode of controlling tty. */ + struct tty *s_ttyp; /* (e) Controlling tty. */ + pid_t s_sid; /* (c) Session ID. */ + /* (m) Setlogin() name: */ + char s_login[roundup(MAXLOGNAME, sizeof(long))]; + struct mtx s_mtx; /* Mutex to protect members. */ +}; + +/* + * One structure allocated per process group. + * + * List of locks + * (m) locked by pg_mtx mtx + * (e) locked by proctree_lock sx + * (c) const until freeing + */ +struct pgrp { + LIST_ENTRY(pgrp) pg_hash; /* (e) Hash chain. 
*/ + LIST_HEAD(, proc) pg_members; /* (m + e) Pointer to pgrp members. */ + struct session *pg_session; /* (c) Pointer to session. */ + struct sigiolst pg_sigiolst; /* (m) List of sigio sources. */ + pid_t pg_id; /* (c) Process group id. */ + int pg_jobc; /* (m) Job control process count. */ + struct mtx pg_mtx; /* Mutex to protect members */ +}; + +/* + * pargs, used to hold a copy of the command line, if it had a sane length. + */ +struct pargs { + u_int ar_ref; /* Reference count. */ + u_int ar_length; /* Length. */ + u_char ar_args[1]; /* Arguments. */ +}; + +/*- + * Description of a process. + * + * This structure contains the information needed to manage a thread of + * control, known in UN*X as a process; it has references to substructures + * containing descriptions of things that the process uses, but may share + * with related processes. The process structure and the substructures + * are always addressable except for those marked "(CPU)" below, + * which might be addressable only on a processor on which the process + * is running. + * + * Below is a key of locks used to protect each member of struct proc. The + * lock is indicated by a reference to a specific character in parens in the + * associated comment. 
+ * * - not yet protected + * a - only touched by curproc or parent during fork/wait + * b - created at fork, never changes + * (exception aiods switch vmspaces, but they are also + * marked 'P_SYSTEM' so hopefully it will be left alone) + * c - locked by proc mtx + * d - locked by allproc_lock lock + * e - locked by proctree_lock lock + * f - session mtx + * g - process group mtx + * h - callout_lock mtx + * i - by curproc or the master session mtx + * j - locked by proc slock + * k - only accessed by curthread + * k*- only accessed by curthread and from an interrupt + * l - the attaching proc or attaching proc parent + * m - Giant + * n - not locked, lazy + * o - ktrace lock + * q - td_contested lock + * r - p_peers lock + * t - thread lock + * x - created at fork, only changes during single threading in exec + * y - created at first aio, doesn't change until exit or exec at which + * point we are single-threaded and only curthread changes it + * z - zombie threads lock + * + * If the locking key specifies two identifiers (for example, p_pptr) then + * either lock is sufficient for read access, but both locks must be held + * for write access. + */ +struct kaudit_record; +struct td_sched; +struct nlminfo; +struct kaioinfo; +struct p_sched; +struct proc; +struct sleepqueue; +struct thread; +struct trapframe; +struct turnstile; +struct mqueue_notifier; +struct kdtrace_proc; +struct kdtrace_thread; +struct cpuset; + +/* + * XXX: Does this belong in resource.h or resourcevar.h instead? + * Resource usage extension. The times in rusage structs in the kernel are + * never up to date. The actual times are kept as runtimes and tick counts + * (with control info in the "previous" times), and are converted when + * userland asks for rusage info. Backwards compatibility prevents putting + * this directly in the user-visible rusage struct. + * + * Locking for p_rux: (cj) means (j) for p_rux and (c) for p_crux. + * Locking for td_rux: (t) for all fields. 
+ */ +struct rusage_ext { + u_int64_t rux_runtime; /* (cj) Real time. */ + u_int64_t rux_uticks; /* (cj) Statclock hits in user mode. */ + u_int64_t rux_sticks; /* (cj) Statclock hits in sys mode. */ + u_int64_t rux_iticks; /* (cj) Statclock hits in intr mode. */ + u_int64_t rux_uu; /* (c) Previous user time in usec. */ + u_int64_t rux_su; /* (c) Previous sys time in usec. */ + u_int64_t rux_tu; /* (c) Previous total time in usec. */ +}; + +/* + * Kernel runnable context (thread). + * This is what is put to sleep and reactivated. + * Thread context. Processes may have multiple threads. + */ +struct thread { + struct mtx *volatile td_lock; /* replaces sched lock */ + struct proc *td_proc; /* (*) Associated process. */ + TAILQ_ENTRY(thread) td_plist; /* (*) All threads in this proc. */ + TAILQ_ENTRY(thread) td_runq; /* (t) Run queue. */ + TAILQ_ENTRY(thread) td_slpq; /* (t) Sleep queue. */ + TAILQ_ENTRY(thread) td_lockq; /* (t) Lock queue. */ + struct cpuset *td_cpuset; /* (t) CPU affinity mask. */ + struct seltd *td_sel; /* Select queue/channel. */ + struct sleepqueue *td_sleepqueue; /* (k) Associated sleep queue. */ + struct turnstile *td_turnstile; /* (k) Associated turnstile. */ + struct umtx_q *td_umtxq; /* (c?) Link for when we're blocked. */ + lwpid_t td_tid; /* (b) Thread ID. */ + sigqueue_t td_sigqueue; /* (c) Sigs arrived, not delivered. */ +#define td_siglist td_sigqueue.sq_signals + +/* Cleared during fork1() */ +#define td_startzero td_flags + int td_flags; /* (t) TDF_* flags. */ + int td_inhibitors; /* (t) Why can not run. */ + int td_pflags; /* (k) Private thread (TDP_*) flags. */ + int td_dupfd; /* (k) Ret value from fdopen. XXX */ + int td_sqqueue; /* (t) Sleepqueue queue blocked on. */ + void *td_wchan; /* (t) Sleep address. */ + const char *td_wmesg; /* (t) Reason for sleep. */ + u_char td_lastcpu; /* (t) Last cpu we were on. */ + u_char td_oncpu; /* (t) Which cpu we are on. 
*/ + volatile u_char td_owepreempt; /* (k*) Preempt on last critical_exit */ + u_char td_tsqueue; /* (t) Turnstile queue blocked on. */ + short td_locks; /* (k) Count of non-spin locks. */ + short td_rw_rlocks; /* (k) Count of rwlock read locks. */ + short td_lk_slocks; /* (k) Count of lockmgr shared locks. */ + struct turnstile *td_blocked; /* (t) Lock thread is blocked on. */ + const char *td_lockname; /* (t) Name of lock blocked on. */ + LIST_HEAD(, turnstile) td_contested; /* (q) Contested locks. */ + struct lock_list_entry *td_sleeplocks; /* (k) Held sleep locks. */ + int td_intr_nesting_level; /* (k) Interrupt recursion. */ + int td_pinned; /* (k) Temporary cpu pin count. */ + struct ucred *td_ucred; /* (k) Reference to credentials. */ + u_int td_estcpu; /* (t) estimated cpu utilization */ + int td_slptick; /* (t) Time at sleep. */ + int td_blktick; /* (t) Time spent blocked. */ + struct rusage td_ru; /* (t) rusage information. */ + uint64_t td_incruntime; /* (t) Cpu ticks to transfer to proc. */ + uint64_t td_runtime; /* (t) How many cpu ticks we've run. */ + u_int td_pticks; /* (t) Statclock hits for profiling */ + u_int td_sticks; /* (t) Statclock hits in system mode. */ + u_int td_iticks; /* (t) Statclock hits in intr mode. */ + u_int td_uticks; /* (t) Statclock hits in user mode. */ + int td_intrval; /* (t) Return value for sleepq. */ + sigset_t td_oldsigmask; /* (k) Saved mask from pre sigpause. */ + sigset_t td_sigmask; /* (c) Current signal mask. */ + volatile u_int td_generation; /* (k) For detection of preemption */ + stack_t td_sigstk; /* (k) Stack ptr and on-stack flag. */ + int td_xsig; /* (c) Signal for ptrace */ + u_long td_profil_addr; /* (k) Temporary addr until AST. */ + u_int td_profil_ticks; /* (k) Temporary ticks until AST. */ + char td_name[MAXCOMLEN + 1]; /* (*) Thread name. 
*/ + struct file *td_fpop; /* (k) file referencing cdev under op */ + int td_dbgflags; /* (c) Userland debugger flags */ + struct ksiginfo td_dbgksi; /* (c) ksi reflected to debugger. */ + int td_ng_outbound; /* (k) Thread entered ng from above. */ + struct osd td_osd; /* (k) Object specific data. */ +#define td_endzero td_base_pri + +/* Copied during fork1() or thread_sched_upcall(). */ +#define td_startcopy td_endzero + u_char td_rqindex; /* (t) Run queue index. */ + u_char td_base_pri; /* (t) Thread base kernel priority. */ + u_char td_priority; /* (t) Thread active priority. */ + u_char td_pri_class; /* (t) Scheduling class. */ + u_char td_user_pri; /* (t) User pri from estcpu and nice. */ + u_char td_base_user_pri; /* (t) Base user pri */ +#define td_endcopy td_pcb + +/* + * Fields that must be manually set in fork1() or thread_sched_upcall() + * or already have been set in the allocator, constructor, etc. + */ + struct pcb *td_pcb; /* (k) Kernel VA of pcb and kstack. */ + enum { + TDS_INACTIVE = 0x0, + TDS_INHIBITED, + TDS_CAN_RUN, + TDS_RUNQ, + TDS_RUNNING + } td_state; /* (t) thread state */ + register_t td_retval[2]; /* (k) Syscall aux returns. */ + struct callout td_slpcallout; /* (h) Callout for sleep. */ + struct trapframe *td_frame; /* (k) */ + struct vm_object *td_kstack_obj;/* (a) Kstack object. */ + vm_offset_t td_kstack; /* (a) Kernel VA of kstack. */ + int td_kstack_pages; /* (a) Size of the kstack. */ + void *td_unused1; + vm_offset_t td_unused2; + int td_unused3; + volatile u_int td_critnest; /* (k*) Critical section nest level. */ + struct mdthread td_md; /* (k) Any machine-dependent fields. */ + struct td_sched *td_sched; /* (*) Scheduler-specific data. */ + struct kaudit_record *td_ar; /* (k) Active audit record, if any. */ + int td_syscalls; /* per-thread syscall count (used by NFS :)) */ + struct lpohead td_lprof[2]; /* (a) lock profiling objects. */ + struct kdtrace_thread *td_dtrace; /* (*) DTrace-specific data. 
*/ + int td_errno; /* Error returned by last syscall. */ + struct vnet *td_vnet; /* (k) Effective vnet. */ + const char *td_vnet_lpush; /* (k) Debugging vnet push / pop. */ + struct rusage_ext td_rux; /* (t) Internal rusage information. */ + struct vm_map_entry *td_map_def_user; /* (k) Deferred entries. */ +}; + +struct mtx *thread_lock_block(struct thread *); +void thread_lock_unblock(struct thread *, struct mtx *); +void thread_lock_set(struct thread *, struct mtx *); +#define THREAD_LOCK_ASSERT(td, type) \ +do { \ + struct mtx *__m = (td)->td_lock; \ + if (__m != &blocked_lock) \ + mtx_assert(__m, (type)); \ +} while (0) + +#ifdef INVARIANTS +#define THREAD_LOCKPTR_ASSERT(td, lock) \ +do { \ + struct mtx *__m = (td)->td_lock; \ + KASSERT((__m == &blocked_lock || __m == (lock)), \ + ("Thread %p lock %p does not match %p", td, __m, (lock))); \ +} while (0) +#else +#define THREAD_LOCKPTR_ASSERT(td, lock) +#endif + +#define CRITICAL_ASSERT(td) \ + KASSERT((td)->td_critnest >= 1, ("Not in critical section")); + +/* + * Flags kept in td_flags: + * To change these you MUST have the scheduler lock. + */ +#define TDF_BORROWING 0x00000001 /* Thread is borrowing pri from another. */ +#define TDF_INPANIC 0x00000002 /* Caused a panic, let it drive crashdump. */ +#define TDF_INMEM 0x00000004 /* Thread's stack is in memory. */ +#define TDF_SINTR 0x00000008 /* Sleep is interruptible. */ +#define TDF_TIMEOUT 0x00000010 /* Timing out during sleep. */ +#define TDF_IDLETD 0x00000020 /* This is a per-CPU idle thread. */ +#define TDF_CANSWAP 0x00000040 /* Thread can be swapped. */ +#define TDF_SLEEPABORT 0x00000080 /* sleepq_abort was called. */ +#define TDF_KTH_SUSP 0x00000100 /* kthread is suspended */ +#define TDF_UBORROWING 0x00000200 /* Thread is borrowing user pri. */ +#define TDF_BOUNDARY 0x00000400 /* Thread suspended at user boundary */ +#define TDF_ASTPENDING 0x00000800 /* Thread has some asynchronous events. 
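The `THREAD_LOCK_ASSERT`/`THREAD_LOCKPTR_ASSERT` macros above tolerate one special value: while a thread's lock pointer is being switched, `td_lock` temporarily points at the global `blocked_lock` sentinel, and assertions must accept that transient state. A minimal userland sketch of the same pattern (all names here are hypothetical models, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Userland model of td_lock indirection: a thread points at whichever
 * lock currently protects it, and a shared sentinel marks the window
 * while that pointer is being handed from one lock to another. */
struct lock { int owned; };
struct mthread { struct lock *td_lock; };

static struct lock blocked_lock;        /* sentinel, never really acquired */

/* Assert the thread is protected by `lk`, unless it is mid-transition. */
static void thread_lockptr_assert(struct mthread *td, struct lock *lk)
{
    assert(td->td_lock == &blocked_lock || td->td_lock == lk);
}

/* Model of thread_lock_block(): park the thread on the sentinel and
 * return the old lock so the caller can move the thread elsewhere. */
static struct lock *thread_lock_block_model(struct mthread *td)
{
    struct lock *old = td->td_lock;
    td->td_lock = &blocked_lock;
    return old;
}
```

The point of the sentinel is that code holding the *old* lock can still assert safely during the hand-off instead of racing on a half-updated pointer.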
*/ +#define TDF_TIMOFAIL 0x00001000 /* Timeout from sleep after we were awake. */ +#define TDF_SBDRY 0x00002000 /* Stop only on usermode boundary. */ +#define TDF_UPIBLOCKED 0x00004000 /* Thread blocked on user PI mutex. */ +#define TDF_NEEDSUSPCHK 0x00008000 /* Thread may need to suspend. */ +#define TDF_NEEDRESCHED 0x00010000 /* Thread needs to yield. */ +#define TDF_NEEDSIGCHK 0x00020000 /* Thread may need signal delivery. */ +#define TDF_UNUSED18 0x00040000 /* --available-- */ +#define TDF_UNUSED19 0x00080000 /* --available-- */ +#define TDF_THRWAKEUP 0x00100000 /* Libthr thread must not suspend itself. */ +#define TDF_UNUSED21 0x00200000 /* --available-- */ +#define TDF_SWAPINREQ 0x00400000 /* Swapin request due to wakeup. */ +#define TDF_UNUSED23 0x00800000 /* --available-- */ +#define TDF_SCHED0 0x01000000 /* Reserved for scheduler private use */ +#define TDF_SCHED1 0x02000000 /* Reserved for scheduler private use */ +#define TDF_SCHED2 0x04000000 /* Reserved for scheduler private use */ +#define TDF_SCHED3 0x08000000 /* Reserved for scheduler private use */ +#define TDF_ALRMPEND 0x10000000 /* Pending SIGVTALRM needs to be posted. */ +#define TDF_PROFPEND 0x20000000 /* Pending SIGPROF needs to be posted. */ +#define TDF_MACPEND 0x40000000 /* AST-based MAC event pending. */ + +/* Userland debug flags */ +#define TDB_SUSPEND 0x00000001 /* Thread is suspended by debugger */ +#define TDB_XSIG 0x00000002 /* Thread is exchanging signal under trace */ +#define TDB_USERWR 0x00000004 /* Debugger modified memory or registers */ +#define TDB_SCE 0x00000008 /* Thread performs syscall enter */ +#define TDB_SCX 0x00000010 /* Thread performs syscall exit */ +#define TDB_EXEC 0x00000020 /* TDB_SCX from exec(2) family */ + +/* + * "Private" flags kept in td_pflags: + * These are only written by curthread and thus need no locking. + */ +#define TDP_OLDMASK 0x00000001 /* Need to restore mask after suspend. 
*/ +#define TDP_INKTR 0x00000002 /* Thread is currently in KTR code. */ +#define TDP_INKTRACE 0x00000004 /* Thread is currently in KTRACE code. */ +#define TDP_BUFNEED 0x00000008 /* Do not recurse into the buf flush */ +#define TDP_COWINPROGRESS 0x00000010 /* Snapshot copy-on-write in progress. */ +#define TDP_ALTSTACK 0x00000020 /* Have alternate signal stack. */ +#define TDP_DEADLKTREAT 0x00000040 /* Lock acquisition - deadlock treatment. */ +#define TDP_UNUSED80 0x00000080 /* available. */ +#define TDP_NOSLEEPING 0x00000100 /* Thread is not allowed to sleep on a sq. */ +#define TDP_OWEUPC 0x00000200 /* Call addupc() at next AST. */ +#define TDP_ITHREAD 0x00000400 /* Thread is an interrupt thread. */ +#define TDP_UNUSED800 0x00000800 /* available. */ +#define TDP_SCHED1 0x00001000 /* Reserved for scheduler private use */ +#define TDP_SCHED2 0x00002000 /* Reserved for scheduler private use */ +#define TDP_SCHED3 0x00004000 /* Reserved for scheduler private use */ +#define TDP_SCHED4 0x00008000 /* Reserved for scheduler private use */ +#define TDP_GEOM 0x00010000 /* Settle GEOM before finishing syscall */ +#define TDP_SOFTDEP 0x00020000 /* Stuck processing softdep worklist */ +#define TDP_NORUNNINGBUF 0x00040000 /* Ignore runningbufspace check */ +#define TDP_WAKEUP 0x00080000 /* Don't sleep in umtx cond_wait */ +#define TDP_INBDFLUSH 0x00100000 /* Already in BO_BDFLUSH, do not recurse */ +#define TDP_KTHREAD 0x00200000 /* This is an official kernel thread */ +#define TDP_CALLCHAIN 0x00400000 /* Capture thread's callchain */ +#define TDP_IGNSUSP 0x00800000 /* Permission to ignore the MNTK_SUSPEND* */ +#define TDP_AUDITREC 0x01000000 /* Audit record pending on thread */ + +/* + * Reasons that the current thread can not be run yet. + * More than one may apply. + */ +#define TDI_SUSPENDED 0x0001 /* On suspension queue. */ +#define TDI_SLEEPING 0x0002 /* Actually asleep! (tricky). */ +#define TDI_SWAPPED 0x0004 /* Stack not in mem. Bad juju if run. 
*/ +#define TDI_LOCK 0x0008 /* Stopped on a lock. */ +#define TDI_IWAIT 0x0010 /* Awaiting interrupt. */ + +#define TD_IS_SLEEPING(td) ((td)->td_inhibitors & TDI_SLEEPING) +#define TD_ON_SLEEPQ(td) ((td)->td_wchan != NULL) +#define TD_IS_SUSPENDED(td) ((td)->td_inhibitors & TDI_SUSPENDED) +#define TD_IS_SWAPPED(td) ((td)->td_inhibitors & TDI_SWAPPED) +#define TD_ON_LOCK(td) ((td)->td_inhibitors & TDI_LOCK) +#define TD_AWAITING_INTR(td) ((td)->td_inhibitors & TDI_IWAIT) +#define TD_IS_RUNNING(td) ((td)->td_state == TDS_RUNNING) +#define TD_ON_RUNQ(td) ((td)->td_state == TDS_RUNQ) +#define TD_CAN_RUN(td) ((td)->td_state == TDS_CAN_RUN) +#define TD_IS_INHIBITED(td) ((td)->td_state == TDS_INHIBITED) +#define TD_ON_UPILOCK(td) ((td)->td_flags & TDF_UPIBLOCKED) +#define TD_IS_IDLETHREAD(td) ((td)->td_flags & TDF_IDLETD) + + +#define TD_SET_INHIB(td, inhib) do { \ + (td)->td_state = TDS_INHIBITED; \ + (td)->td_inhibitors |= (inhib); \ +} while (0) + +#define TD_CLR_INHIB(td, inhib) do { \ + if (((td)->td_inhibitors & (inhib)) && \ + (((td)->td_inhibitors &= ~(inhib)) == 0)) \ + (td)->td_state = TDS_CAN_RUN; \ +} while (0) + +#define TD_SET_SLEEPING(td) TD_SET_INHIB((td), TDI_SLEEPING) +#define TD_SET_SWAPPED(td) TD_SET_INHIB((td), TDI_SWAPPED) +#define TD_SET_LOCK(td) TD_SET_INHIB((td), TDI_LOCK) +#define TD_SET_SUSPENDED(td) TD_SET_INHIB((td), TDI_SUSPENDED) +#define TD_SET_IWAIT(td) TD_SET_INHIB((td), TDI_IWAIT) +#define TD_SET_EXITING(td) TD_SET_INHIB((td), TDI_EXITING) + +#define TD_CLR_SLEEPING(td) TD_CLR_INHIB((td), TDI_SLEEPING) +#define TD_CLR_SWAPPED(td) TD_CLR_INHIB((td), TDI_SWAPPED) +#define TD_CLR_LOCK(td) TD_CLR_INHIB((td), TDI_LOCK) +#define TD_CLR_SUSPENDED(td) TD_CLR_INHIB((td), TDI_SUSPENDED) +#define TD_CLR_IWAIT(td) TD_CLR_INHIB((td), TDI_IWAIT) + +#define TD_SET_RUNNING(td) (td)->td_state = TDS_RUNNING +#define TD_SET_RUNQ(td) (td)->td_state = TDS_RUNQ +#define TD_SET_CAN_RUN(td) (td)->td_state = TDS_CAN_RUN + +/* + * Process structure. 
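The `TD_SET_INHIB`/`TD_CLR_INHIB` macros above implement a small state machine: several `TDI_*` reasons may apply at once, and the thread returns to `TDS_CAN_RUN` only when the *last* inhibitor bit is cleared. A standalone userland sketch of that logic (a model, not the kernel code itself):

```c
#include <assert.h>

/* Userland model of the inhibitor state machine: a thread may be blocked
 * for several independent reasons at once, and becomes runnable only
 * when the last one is cleared. */
enum tstate { CAN_RUN, INHIBITED };

struct mtd {
    enum tstate state;
    int inhibitors;                 /* TDI_*-style bit mask */
};

static void set_inhib(struct mtd *td, int inhib)
{
    td->state = INHIBITED;
    td->inhibitors |= inhib;
}

static void clr_inhib(struct mtd *td, int inhib)
{
    /* Clear the bit; fall back to CAN_RUN only if no reasons remain. */
    if ((td->inhibitors & inhib) && (td->inhibitors &= ~inhib) == 0)
        td->state = CAN_RUN;
}
```

This mirrors why `TD_CLR_INHIB` tests the mask twice in one expression: the first term checks the bit was set, the second clears it and checks whether any other inhibitor is still pending.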
+ */ +struct proc { + LIST_ENTRY(proc) p_list; /* (d) List of all processes. */ + TAILQ_HEAD(, thread) p_threads; /* (c) all threads. */ + struct mtx p_slock; /* process spin lock */ + struct ucred *p_ucred; /* (c) Process owner's identity. */ + struct filedesc *p_fd; /* (b) Open files. */ + struct filedesc_to_leader *p_fdtol; /* (b) Tracking node */ + struct pstats *p_stats; /* (b) Accounting/statistics (CPU). */ + struct plimit *p_limit; /* (c) Process limits. */ + struct callout p_limco; /* (c) Limit callout handle */ + struct sigacts *p_sigacts; /* (x) Signal actions, state (CPU). */ + + /* + * The following don't make too much sense. + * See the td_ or ke_ versions of the same flags. + */ + int p_flag; /* (c) P_* flags. */ + enum { + PRS_NEW = 0, /* In creation */ + PRS_NORMAL, /* threads can be run. */ + PRS_ZOMBIE + } p_state; /* (j/c) S* process status. */ + pid_t p_pid; /* (b) Process identifier. */ + LIST_ENTRY(proc) p_hash; /* (d) Hash chain. */ + LIST_ENTRY(proc) p_pglist; /* (g + e) List of processes in pgrp. */ + struct proc *p_pptr; /* (c + e) Pointer to parent process. */ + LIST_ENTRY(proc) p_sibling; /* (e) List of sibling processes. */ + LIST_HEAD(, proc) p_children; /* (e) Pointer to list of children. */ + struct mtx p_mtx; /* (n) Lock for this struct. */ + struct ksiginfo *p_ksi; /* Locked by parent proc lock */ + sigqueue_t p_sigqueue; /* (c) Sigs not delivered to a td. */ +#define p_siglist p_sigqueue.sq_signals + +/* The following fields are all zeroed upon creation in fork. */ +#define p_startzero p_oppid + pid_t p_oppid; /* (c + e) Save ppid in ptrace. XXX */ + struct vmspace *p_vmspace; /* (b) Address space. */ + u_int p_swtick; /* (c) Tick when swapped in or out. */ + struct itimerval p_realtimer; /* (c) Alarm timer. */ + struct rusage p_ru; /* (a) Exit information. */ + struct rusage_ext p_rux; /* (cj) Internal resource usage. */ + struct rusage_ext p_crux; /* (c) Internal child resource usage. 
*/ + int p_profthreads; /* (c) Num threads in addupc_task. */ + volatile int p_exitthreads; /* (j) Number of threads exiting */ + int p_traceflag; /* (o) Kernel trace points. */ + struct vnode *p_tracevp; /* (c + o) Trace to vnode. */ + struct ucred *p_tracecred; /* (o) Credentials to trace with. */ + struct vnode *p_textvp; /* (b) Vnode of executable. */ + u_int p_lock; /* (c) Proclock (prevent swap) count. */ + struct sigiolst p_sigiolst; /* (c) List of sigio sources. */ + int p_sigparent; /* (c) Signal to parent on exit. */ + int p_sig; /* (n) For core dump/debugger XXX. */ + u_long p_code; /* (n) For core dump/debugger XXX. */ + u_int p_stops; /* (c) Stop event bitmask. */ + u_int p_stype; /* (c) Stop event type. */ + char p_step; /* (c) Process is stopped. */ + u_char p_pfsflags; /* (c) Procfs flags. */ + struct nlminfo *p_nlminfo; /* (?) Only used by/for lockd. */ + struct kaioinfo *p_aioinfo; /* (y) ASYNC I/O info. */ + struct thread *p_singlethread;/* (c + j) If single threading this is it */ + int p_suspcount; /* (j) Num threads in suspended mode. */ + struct thread *p_xthread; /* (c) Trap thread */ + int p_boundary_count;/* (c) Num threads at user boundary */ + int p_pendingcnt; /* how many signals are pending */ + struct itimers *p_itimers; /* (c) POSIX interval timers. */ +/* End area that is zeroed on creation. */ +#define p_endzero p_magic + +/* The following fields are all copied upon creation in fork. */ +#define p_startcopy p_endzero + u_int p_magic; /* (b) Magic number. */ + int p_osrel; /* (x) osreldate for the + binary (from ELF note, if any) */ + char p_comm[MAXCOMLEN + 1]; /* (b) Process name. */ + struct pgrp *p_pgrp; /* (c + e) Pointer to process group. */ + struct sysentvec *p_sysent; /* (b) Syscall dispatch info. */ + struct pargs *p_args; /* (c) Process arguments. */ + rlim_t p_cpulimit; /* (c) Current CPU limit in seconds. */ + signed char p_nice; /* (c) Process "nice" value. 
*/ + int p_fibnum; /* in this routing domain XXX MRT */ +/* End area that is copied on creation. */ +#define p_endcopy p_xstat + + u_short p_xstat; /* (c) Exit status; also stop sig. */ + struct knlist p_klist; /* (c) Knotes attached to this proc. */ + int p_numthreads; /* (c) Number of threads. */ + struct mdproc p_md; /* Any machine-dependent fields. */ + struct callout p_itcallout; /* (h + c) Interval timer callout. */ + u_short p_acflag; /* (c) Accounting flags. */ + struct proc *p_peers; /* (r) */ + struct proc *p_leader; /* (b) */ + void *p_emuldata; /* (c) Emulator state data. */ + struct label *p_label; /* (*) Proc (not subject) MAC label. */ + struct p_sched *p_sched; /* (*) Scheduler-specific data. */ + STAILQ_HEAD(, ktr_request) p_ktr; /* (o) KTR event queue. */ + LIST_HEAD(, mqueue_notifier) p_mqnotifier; /* (c) mqueue notifiers.*/ + struct kdtrace_proc *p_dtrace; /* (*) DTrace-specific data. */ + struct cv p_pwait; /* (*) wait cv for exit/exec */ +}; + +#define p_session p_pgrp->pg_session +#define p_pgid p_pgrp->pg_id + +#define NOCPU 0xff /* For when we aren't on a CPU. */ + +#define PROC_SLOCK(p) mtx_lock_spin(&(p)->p_slock) +#define PROC_SUNLOCK(p) mtx_unlock_spin(&(p)->p_slock) +#define PROC_SLOCK_ASSERT(p, type) mtx_assert(&(p)->p_slock, (type)) + +/* These flags are kept in p_flag. */ +#define P_ADVLOCK 0x00001 /* Process may hold a POSIX advisory lock. */ +#define P_CONTROLT 0x00002 /* Has a controlling terminal. */ +#define P_KTHREAD 0x00004 /* Kernel thread (*). */ +#define P_NOLOAD 0x00008 /* Ignore during load avg calculations. */ +#define P_PPWAIT 0x00010 /* Parent is waiting for child to exec/exit. */ +#define P_PROFIL 0x00020 /* Has started profiling. */ +#define P_STOPPROF 0x00040 /* Has thread requesting to stop profiling. */ +#define P_HADTHREADS 0x00080 /* Has had threads (no cleanup shortcuts) */ +#define P_SUGID 0x00100 /* Had set id privileges since last exec. 
*/ +#define P_SYSTEM 0x00200 /* System proc: no sigs, stats or swapping. */ +#define P_SINGLE_EXIT 0x00400 /* Threads suspending should exit, not wait. */ +#define P_TRACED 0x00800 /* Debugged process being traced. */ +#define P_WAITED 0x01000 /* Someone is waiting for us. */ +#define P_WEXIT 0x02000 /* Working on exiting. */ +#define P_EXEC 0x04000 /* Process called exec. */ +#define P_WKILLED 0x08000 /* Killed, go to kernel/user boundary ASAP. */ +#define P_CONTINUED 0x10000 /* Proc has continued from a stopped state. */ +#define P_STOPPED_SIG 0x20000 /* Stopped due to SIGSTOP/SIGTSTP. */ +#define P_STOPPED_TRACE 0x40000 /* Stopped because of tracing. */ +#define P_STOPPED_SINGLE 0x80000 /* Only 1 thread can continue (not to user). */ +#define P_PROTECTED 0x100000 /* Do not kill on memory overcommit. */ +#define P_SIGEVENT 0x200000 /* Process pending signals changed. */ +#define P_SINGLE_BOUNDARY 0x400000 /* Threads should suspend at user boundary. */ +#define P_HWPMC 0x800000 /* Process is using HWPMCs */ + +#define P_JAILED 0x1000000 /* Process is in jail. */ +#define P_INEXEC 0x4000000 /* Process is in execve(). */ +#define P_STATCHILD 0x8000000 /* Child process stopped or exited. */ +#define P_INMEM 0x10000000 /* Loaded into memory. */ +#define P_SWAPPINGOUT 0x20000000 /* Process is being swapped out. */ +#define P_SWAPPINGIN 0x40000000 /* Process is being swapped in. */ + +#define P_STOPPED (P_STOPPED_SIG|P_STOPPED_SINGLE|P_STOPPED_TRACE) +#define P_SHOULDSTOP(p) ((p)->p_flag & P_STOPPED) +#define P_KILLED(p) ((p)->p_flag & P_WKILLED) + +/* + * These were process status values (p_stat), now they are only used in + * legacy conversion code. + */ +#define SIDL 1 /* Process being created by fork. */ +#define SRUN 2 /* Currently runnable. */ +#define SSLEEP 3 /* Sleeping on an address. */ +#define SSTOP 4 /* Process debugging or suspension. */ +#define SZOMB 5 /* Awaiting collection by parent. */ +#define SWAIT 6 /* Waiting for interrupt. 
*/ +#define SLOCK 7 /* Blocked on a lock. */ + +#define P_MAGIC 0xbeefface + +#ifdef _KERNEL + +/* Types and flags for mi_switch(). */ +#define SW_TYPE_MASK 0xff /* First 8 bits are switch type */ +#define SWT_NONE 0 /* Unspecified switch. */ +#define SWT_PREEMPT 1 /* Switching due to preemption. */ +#define SWT_OWEPREEMPT 2 /* Switching due to owepreempt. */ +#define SWT_TURNSTILE 3 /* Turnstile contention. */ +#define SWT_SLEEPQ 4 /* Sleepq wait. */ +#define SWT_SLEEPQTIMO 5 /* Sleepq timeout wait. */ +#define SWT_RELINQUISH 6 /* yield call. */ +#define SWT_NEEDRESCHED 7 /* NEEDRESCHED was set. */ +#define SWT_IDLE 8 /* Switching from the idle thread. */ +#define SWT_IWAIT 9 /* Waiting for interrupts. */ +#define SWT_SUSPEND 10 /* Thread suspended. */ +#define SWT_REMOTEPREEMPT 11 /* Remote processor preempted. */ +#define SWT_REMOTEWAKEIDLE 12 /* Remote processor preempted idle. */ +#define SWT_COUNT 13 /* Number of switch types. */ +/* Flags */ +#define SW_VOL 0x0100 /* Voluntary switch. */ +#define SW_INVOL 0x0200 /* Involuntary switch. */ +#define SW_PREEMPT 0x0400 /* The invol switch is a preemption */ + +/* How values for thread_single(). */ +#define SINGLE_NO_EXIT 0 +#define SINGLE_EXIT 1 +#define SINGLE_BOUNDARY 2 + +#ifdef MALLOC_DECLARE +MALLOC_DECLARE(M_PARGS); +MALLOC_DECLARE(M_PGRP); +MALLOC_DECLARE(M_SESSION); +MALLOC_DECLARE(M_SUBPROC); +MALLOC_DECLARE(M_ZOMBIE); +#endif + +#define FOREACH_PROC_IN_SYSTEM(p) \ + LIST_FOREACH((p), &allproc, p_list) +#define FOREACH_THREAD_IN_PROC(p, td) \ + TAILQ_FOREACH((td), &(p)->p_threads, td_plist) + +#define FIRST_THREAD_IN_PROC(p) TAILQ_FIRST(&(p)->p_threads) + +/* + * We use process IDs <= PID_MAX; PID_MAX + 1 must also fit in a pid_t, + * as it is used to represent "no process group". 
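The `mi_switch()` argument above packs a switch *type* into the low eight bits (`SW_TYPE_MASK`) and OR-able *flags* above them, so a caller writes e.g. `SW_INVOL | SW_PREEMPT | SWT_PREEMPT`. A standalone sketch of that encoding, mirroring the header's values:

```c
#include <assert.h>

/* Mirror of the mi_switch() argument encoding: the switch type occupies
 * the low eight bits and the flags occupy distinct bits above them, so
 * one int carries both without ambiguity. */
#define SW_TYPE_MASK 0xff               /* first 8 bits are switch type */
#define SWT_PREEMPT  1
#define SW_VOL       0x0100
#define SW_INVOL     0x0200
#define SW_PREEMPT   0x0400

static int switch_type(int flags)
{
    return flags & SW_TYPE_MASK;        /* extract the type field */
}

static int is_involuntary(int flags)
{
    return (flags & SW_INVOL) != 0;
}
```

Because the type is a small enumeration rather than a bit per value, thirteen types fit in eight bits while the flag bits stay independently testable.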
+ */ +#define PID_MAX 99999 +#define NO_PID 100000 + +#define SESS_LEADER(p) ((p)->p_session->s_leader == (p)) + + +#define STOPEVENT(p, e, v) do { \ + if ((p)->p_stops & (e)) { \ + PROC_LOCK(p); \ + stopevent((p), (e), (v)); \ + PROC_UNLOCK(p); \ + } \ +} while (0) +#define _STOPEVENT(p, e, v) do { \ + PROC_LOCK_ASSERT(p, MA_OWNED); \ + WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, &p->p_mtx.lock_object, \ + "checking stopevent %d", (e)); \ + if ((p)->p_stops & (e)) \ + stopevent((p), (e), (v)); \ +} while (0) + +/* Lock and unlock a process. */ +#define PROC_LOCK(p) mtx_lock(&(p)->p_mtx) +#define PROC_TRYLOCK(p) mtx_trylock(&(p)->p_mtx) +#define PROC_UNLOCK(p) mtx_unlock(&(p)->p_mtx) +#define PROC_LOCKED(p) mtx_owned(&(p)->p_mtx) +#define PROC_LOCK_ASSERT(p, type) mtx_assert(&(p)->p_mtx, (type)) + +/* Lock and unlock a process group. */ +#define PGRP_LOCK(pg) mtx_lock(&(pg)->pg_mtx) +#define PGRP_UNLOCK(pg) mtx_unlock(&(pg)->pg_mtx) +#define PGRP_LOCKED(pg) mtx_owned(&(pg)->pg_mtx) +#define PGRP_LOCK_ASSERT(pg, type) mtx_assert(&(pg)->pg_mtx, (type)) + +#define PGRP_LOCK_PGSIGNAL(pg) do { \ + if ((pg) != NULL) \ + PGRP_LOCK(pg); \ +} while (0) +#define PGRP_UNLOCK_PGSIGNAL(pg) do { \ + if ((pg) != NULL) \ + PGRP_UNLOCK(pg); \ +} while (0) + +/* Lock and unlock a session. */ +#define SESS_LOCK(s) mtx_lock(&(s)->s_mtx) +#define SESS_UNLOCK(s) mtx_unlock(&(s)->s_mtx) +#define SESS_LOCKED(s) mtx_owned(&(s)->s_mtx) +#define SESS_LOCK_ASSERT(s, type) mtx_assert(&(s)->s_mtx, (type)) + +/* Hold process U-area in memory, normally for ptrace/procfs work. 
*/ +#define PHOLD(p) do { \ + PROC_LOCK(p); \ + _PHOLD(p); \ + PROC_UNLOCK(p); \ +} while (0) +#define _PHOLD(p) do { \ + PROC_LOCK_ASSERT((p), MA_OWNED); \ + KASSERT(!((p)->p_flag & P_WEXIT) || (p) == curproc, \ + ("PHOLD of exiting process")); \ + (p)->p_lock++; \ + if (((p)->p_flag & P_INMEM) == 0) \ + faultin((p)); \ +} while (0) +#define PROC_ASSERT_HELD(p) do { \ + KASSERT((p)->p_lock > 0, ("process not held")); \ +} while (0) + +#define PRELE(p) do { \ + PROC_LOCK((p)); \ + _PRELE((p)); \ + PROC_UNLOCK((p)); \ +} while (0) +#define _PRELE(p) do { \ + PROC_LOCK_ASSERT((p), MA_OWNED); \ + (--(p)->p_lock); \ + if (((p)->p_flag & P_WEXIT) && (p)->p_lock == 0) \ + wakeup(&(p)->p_lock); \ +} while (0) +#define PROC_ASSERT_NOT_HELD(p) do { \ + KASSERT((p)->p_lock == 0, ("process held")); \ +} while (0) + +/* Check whether a thread is safe to be swapped out. */ +#define thread_safetoswapout(td) ((td)->td_flags & TDF_CANSWAP) + +/* Control whether or not it is safe for curthread to sleep. */ +#define THREAD_NO_SLEEPING() do { \ + KASSERT(!(curthread->td_pflags & TDP_NOSLEEPING), \ + ("nested no sleeping")); \ + curthread->td_pflags |= TDP_NOSLEEPING; \ +} while (0) + +#define THREAD_SLEEPING_OK() do { \ + KASSERT((curthread->td_pflags & TDP_NOSLEEPING), \ + ("nested sleeping ok")); \ + curthread->td_pflags &= ~TDP_NOSLEEPING; \ +} while (0) + +#define PIDHASH(pid) (&pidhashtbl[(pid) & pidhash]) +extern LIST_HEAD(pidhashhead, proc) *pidhashtbl; +extern u_long pidhash; + +#define PGRPHASH(pgid) (&pgrphashtbl[(pgid) & pgrphash]) +extern LIST_HEAD(pgrphashhead, pgrp) *pgrphashtbl; +extern u_long pgrphash; + +extern struct sx allproc_lock; +extern struct sx proctree_lock; +extern struct mtx ppeers_lock; +extern struct proc proc0; /* Process slot for swapper. */ +extern struct thread thread0; /* Primary thread in proc0. */ +extern struct vmspace vmspace0; /* VM space for proc0. */ +extern int hogticks; /* Limit on kernel cpu hogs. 
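`PIDHASH(pid)` above selects a bucket with `pid & pidhash`, where `pidhash` is the (power-of-two) table size minus one, so the lookup is a single mask instead of a modulo. A small userland sketch of the same scheme (bucket contents simplified to an int; names hypothetical):

```c
#include <assert.h>

/* PIDHASH-style lookup: the table has a power-of-two number of buckets
 * and `mask` is (nbuckets - 1), so `pid & mask` picks a bucket in O(1).
 * The kernel stores LIST_HEADs of struct proc here; an int stands in. */
#define NBUCKETS 8                      /* must be a power of two */
static int buckets[NBUCKETS];
static const unsigned mask = NBUCKETS - 1;

static int *pid_bucket(unsigned pid)
{
    return &buckets[pid & mask];
}
```

PIDs that differ by a multiple of the table size share a bucket, which is why the kernel walks the bucket's hash chain (`p_hash`) after this index step.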
*/ +extern int lastpid; +extern int nprocs, maxproc; /* Current and max number of procs. */ +extern int maxprocperuid; /* Max procs per uid. */ +extern u_long ps_arg_cache_limit; + +LIST_HEAD(proclist, proc); +TAILQ_HEAD(procqueue, proc); +TAILQ_HEAD(threadqueue, thread); +extern struct proclist allproc; /* List of all processes. */ +extern struct proclist zombproc; /* List of zombie processes. */ +extern struct proc *initproc, *pageproc; /* Process slots for init, pager. */ + +extern struct uma_zone *proc_zone; + +struct proc *pfind(pid_t); /* Find process by id. */ +struct pgrp *pgfind(pid_t); /* Find process group by id. */ +struct proc *zpfind(pid_t); /* Find zombie process by id. */ + +void ast(struct trapframe *framep); +struct thread *choosethread(void); +int cr_cansignal(struct ucred *cred, struct proc *proc, int signum); +int enterpgrp(struct proc *p, pid_t pgid, struct pgrp *pgrp, + struct session *sess); +int enterthispgrp(struct proc *p, struct pgrp *pgrp); +void faultin(struct proc *p); +void fixjobc(struct proc *p, struct pgrp *pgrp, int entering); +int fork1(struct thread *, int, int, struct proc **); +void fork_exit(void (*)(void *, struct trapframe *), void *, + struct trapframe *); +void fork_return(struct thread *, struct trapframe *); +int inferior(struct proc *p); +void kick_proc0(void); +int leavepgrp(struct proc *p); +void mi_switch(int flags, struct thread *newtd); +int p_candebug(struct thread *td, struct proc *p); +int p_cansee(struct thread *td, struct proc *p); +int p_cansched(struct thread *td, struct proc *p); +int p_cansignal(struct thread *td, struct proc *p, int signum); +int p_canwait(struct thread *td, struct proc *p); +struct pargs *pargs_alloc(int len); +void pargs_drop(struct pargs *pa); +void pargs_hold(struct pargs *pa); +void procinit(void); +void proc_linkup0(struct proc *p, struct thread *td); +void proc_linkup(struct proc *p, struct thread *td); +void proc_reparent(struct proc *child, struct proc *newparent); +struct 
pstats *pstats_alloc(void); +void pstats_fork(struct pstats *src, struct pstats *dst); +void pstats_free(struct pstats *ps); +int securelevel_ge(struct ucred *cr, int level); +int securelevel_gt(struct ucred *cr, int level); +void sess_hold(struct session *); +void sess_release(struct session *); +int setrunnable(struct thread *); +void setsugid(struct proc *p); +int sigonstack(size_t sp); +void sleepinit(void); +void stopevent(struct proc *, u_int, u_int); +void threadinit(void); +void cpu_idle(int); +int cpu_idle_wakeup(int); +extern void (*cpu_idle_hook)(void); /* Hook to machdep CPU idler. */ +void cpu_switch(struct thread *, struct thread *, struct mtx *); +void cpu_throw(struct thread *, struct thread *) __dead2; +void unsleep(struct thread *); +void userret(struct thread *, struct trapframe *); +struct syscall_args; +int syscallenter(struct thread *, struct syscall_args *); +void syscallret(struct thread *, int, struct syscall_args *); + +void cpu_exit(struct thread *); +void exit1(struct thread *, int) __dead2; +struct syscall_args; +int cpu_fetch_syscall_args(struct thread *td, struct syscall_args *sa); +void cpu_fork(struct thread *, struct proc *, struct thread *, int); +void cpu_set_fork_handler(struct thread *, void (*)(void *), void *); +void cpu_set_syscall_retval(struct thread *, int); +void cpu_set_upcall(struct thread *td, struct thread *td0); +void cpu_set_upcall_kse(struct thread *, void (*)(void *), void *, + stack_t *); +int cpu_set_user_tls(struct thread *, void *tls_base); +void cpu_thread_alloc(struct thread *); +void cpu_thread_clean(struct thread *); +void cpu_thread_exit(struct thread *); +void cpu_thread_free(struct thread *); +void cpu_thread_swapin(struct thread *); +void cpu_thread_swapout(struct thread *); +struct thread *thread_alloc(int pages); +int thread_alloc_stack(struct thread *, int pages); +void thread_exit(void) __dead2; +void thread_free(struct thread *td); +void thread_link(struct thread *td, struct proc *p); +void 
thread_reap(void); +int thread_single(int how); +void thread_single_end(void); +void thread_stash(struct thread *td); +void thread_stopped(struct proc *p); +void childproc_stopped(struct proc *child, int reason); +void childproc_continued(struct proc *child); +void childproc_exited(struct proc *child); +int thread_suspend_check(int how); +void thread_suspend_switch(struct thread *); +void thread_suspend_one(struct thread *td); +void thread_unlink(struct thread *td); +void thread_unsuspend(struct proc *p); +int thread_unsuspend_one(struct thread *td); +void thread_unthread(struct thread *td); +void thread_wait(struct proc *p); +struct thread *thread_find(struct proc *p, lwpid_t tid); +void thr_exit1(void); + +#endif /* _KERNEL */ + +#endif /* !_SYS_PROC_H_ */ Modified: soc2011/rudot/kern/sched_4bsd.c ============================================================================== --- soc2011/rudot/kern/sched_4bsd.c Thu May 26 10:10:10 2011 (r222401) +++ soc2011/rudot/kern/sched_4bsd.c Thu May 26 13:31:05 2011 (r222402) @@ -229,86 +229,6 @@ } /* - * This function is called when a thread is about to be put on run queue - * because it has been made runnable or its priority has been adjusted. It - * determines if the new thread should be immediately preempted to. If so, - * it switches to it and eventually returns true. If not, it returns false - * so that the caller may place the thread on an appropriate run queue. - */ -int -maybe_preempt(struct thread *td) -{ -#ifdef PREEMPTION - struct thread *ctd; - int cpri, pri; - - /* - * The new thread should not preempt the current thread if any of the - * following conditions are true: - * - * - The kernel is in the throes of crashing (panicstr). - * - The current thread has a higher (numerically lower) or - * equivalent priority. Note that this prevents curthread from - * trying to preempt to itself. - * - It is too early in the boot for context switches (cold is set). 
-	 *  - The current thread has an inhibitor set or is in the process of
-	 *    exiting.  In this case, the current thread is about to switch
-	 *    out anyways, so there's no point in preempting.  If we did,
-	 *    the current thread would not be properly resumed as well, so
-	 *    just avoid that whole landmine.
-	 *  - If the new thread's priority is not a realtime priority and
-	 *    the current thread's priority is not an idle priority and
-	 *    FULL_PREEMPTION is disabled.
-	 *
-	 * If all of these conditions are false, but the current thread is in
-	 * a nested critical section, then we have to defer the preemption
-	 * until we exit the critical section.  Otherwise, switch immediately
-	 * to the new thread.
-	 */
-	ctd = curthread;
-	THREAD_LOCK_ASSERT(td, MA_OWNED);
-	KASSERT((td->td_inhibitors == 0),
-	    ("maybe_preempt: trying to run inhibited thread"));
-	pri = td->td_priority;
-	cpri = ctd->td_priority;
-	if (panicstr != NULL || pri >= cpri || cold /* || dumping */ ||
-	    TD_IS_INHIBITED(ctd))
-		return (0);
-#ifndef FULL_PREEMPTION
-	if (pri > PRI_MAX_ITHD && cpri < PRI_MIN_IDLE)
-		return (0);
-#endif
-
-	if (ctd->td_critnest > 1) {
-		CTR1(KTR_PROC, "maybe_preempt: in critical section %d",
-		    ctd->td_critnest);
-		ctd->td_owepreempt = 1;
-		return (0);
-	}
-	/*
-	 * Thread is runnable but not yet put on system run queue.
-	 */
-	MPASS(ctd->td_lock == td->td_lock);
-	MPASS(TD_ON_RUNQ(td));
-	TD_SET_RUNNING(td);
-	CTR3(KTR_PROC, "preempting to thread %p (pid %d, %s)\n", td,
-	    td->td_proc->p_pid, td->td_name);
-	mi_switch(SW_INVOL | SW_PREEMPT | SWT_PREEMPT, td);
-	/*
-	 * td's lock pointer may have changed.  We have to return with it
-	 * locked.
-	 */
-	spinlock_enter();
-	thread_unlock(ctd);
-	thread_lock(td);
-	spinlock_exit();
-	return (1);
-#else
-	return (0);
-#endif
-}
-
-/*
  * Constants for digital decay and forget:
  *	90% of (td_estcpu) usage in 5 * loadav time
  *	95% of (ts_pctcpu) usage in 60 seconds (load insensitive)
@@ -528,8 +448,6 @@
 void
 sched_nice(struct proc *p, int nice)
 {
-	struct thread *td;
-
 	PROC_LOCK_ASSERT(p, MA_OWNED);
 	p->p_nice = nice;
 }
@@ -547,8 +465,6 @@
 static void
 sched_priority(struct thread *td, u_char prio)

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

From owner-svn-soc-all@FreeBSD.ORG Sat May 28 03:13:11 2011
From: zy@FreeBSD.org
To: svn-soc-all@FreeBSD.org
Subject: socsvn commit: r222496 - soc2011/zy/nvi-iconv/head/usr.bin/nvi/cl
Date: Sat, 28 May 2011 03:13:09 +0000
Message-Id: <20110528031309.BAA251065670@hub.freebsd.org>

Author: zy
Date: Sat May 28 03:13:09 2011
New Revision: 222496
URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222496

Log:
  Use HAVE_TERM_H to handle the problem, as suggested by r170399.

Modified:
  soc2011/zy/nvi-iconv/head/usr.bin/nvi/cl/cl_screen.c

Modified: soc2011/zy/nvi-iconv/head/usr.bin/nvi/cl/cl_screen.c
==============================================================================
--- soc2011/zy/nvi-iconv/head/usr.bin/nvi/cl/cl_screen.c	Sat May 28 00:58:19 2011	(r222495)
+++ soc2011/zy/nvi-iconv/head/usr.bin/nvi/cl/cl_screen.c	Sat May 28 03:13:09 2011	(r222496)
@@ -25,8 +25,11 @@
 #include 
 #include 
 #include 
+#ifdef HAVE_TERM_H
 #include 
+#else
 #include 
+#endif
 #include 
 
 #include "../common/common.h"

From owner-svn-soc-all@FreeBSD.ORG Sat May 28 03:18:05 2011
From: zy@FreeBSD.org
To: svn-soc-all@FreeBSD.org
Subject: socsvn commit: r222497 - in soc2011/zy/nvi-iconv/head/usr.bin/nvi: common ex
Date: Sat, 28 May 2011 03:18:03 +0000
Message-Id: <20110528031803.7A27C106564A@hub.freebsd.org>

Author: zy
Date: Sat May 28 03:18:03 2011
New Revision: 222497
URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=222497

Log:
  Resolves two number overflow problems, based on the following facts:
  1. CHAR_T (unsigned) overflows to 0;
  2. recno_t (unsigned) is set to MAX_REC_NUMBER if strtoul() overflows.
Modified:
  soc2011/zy/nvi-iconv/head/usr.bin/nvi/common/key.c
  soc2011/zy/nvi-iconv/head/usr.bin/nvi/ex/ex_subst.c

Modified: soc2011/zy/nvi-iconv/head/usr.bin/nvi/common/key.c
==============================================================================
--- soc2011/zy/nvi-iconv/head/usr.bin/nvi/common/key.c	Sat May 28 03:13:09 2011	(r222496)
+++ soc2011/zy/nvi-iconv/head/usr.bin/nvi/common/key.c	Sat May 28 03:18:03 2011	(r222497)
@@ -145,7 +145,7 @@
 	}
 
 	/* Find a non-printable character to use as a message separator. */
-	for (ch = 1; ch <= MAX_CHAR_T; ++ch)
+	for (ch = 1; ch != 0; ++ch)	/* XXX quit if overflowed */
 		if (!isprint(ch)) {
 			gp->noprint = ch;
 			break;

Modified: soc2011/zy/nvi-iconv/head/usr.bin/nvi/ex/ex_subst.c
==============================================================================
--- soc2011/zy/nvi-iconv/head/usr.bin/nvi/ex/ex_subst.c	Sat May 28 03:13:09 2011	(r222496)
+++ soc2011/zy/nvi-iconv/head/usr.bin/nvi/ex/ex_subst.c	Sat May 28 03:18:03 2011	(r222497)
@@ -418,12 +418,10 @@
 		if (*s == '\0')		/* Loop increment correction. */
 			--s;
 		if (errno == ERANGE) {
-			if (lno == LONG_MAX)
+			if (lno == MAX_REC_NUMBER)
 				msgq(sp, M_ERR, "153|Count overflow");
-			else if (lno == LONG_MIN)
-				msgq(sp, M_ERR, "154|Count underflow");
 			else
-				msgq(sp, M_ERR, NULL);
+				msgq(sp, M_ERR, "154|Count underflow");
 			return (1);
 		}
 		/*