From owner-svn-src-stable-10@FreeBSD.ORG Fri Feb 13 00:30:00 2015
Return-Path: <owner-svn-src-stable-10@FreeBSD.ORG>
Delivered-To: svn-src-stable-10@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id 957E345F;
 Fri, 13 Feb 2015 00:30:00 +0000 (UTC)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 7EA883BB;
 Fri, 13 Feb 2015 00:30:00 +0000 (UTC)
Received: from svn.freebsd.org ([127.0.1.70])
 by svn.freebsd.org (8.14.9/8.14.9) with ESMTP id t1D0U0Cq002333;
 Fri, 13 Feb 2015 00:30:00 GMT (envelope-from sbruno@FreeBSD.org)
Received: (from sbruno@localhost)
 by svn.freebsd.org (8.14.9/8.14.9/Submit) id t1D0TwbR002255;
 Fri, 13 Feb 2015 00:29:58 GMT (envelope-from sbruno@FreeBSD.org)
Message-Id: <201502130029.t1D0TwbR002255@svn.freebsd.org>
X-Authentication-Warning: svn.freebsd.org: sbruno set sender to sbruno@FreeBSD.org using -f
From: Sean Bruno <sbruno@FreeBSD.org>
Date: Fri, 13 Feb 2015 00:29:58 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
 svn-src-stable@freebsd.org, svn-src-stable-10@freebsd.org
Subject: svn commit: r278650 - in stable/10: sys/kern tools/sched usr.bin/man
X-SVN-Group: stable-10
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-stable-10@freebsd.org
X-Mailman-Version: 2.1.18-1
Precedence: list
List-Id: SVN commit messages for only the 10-stable src tree
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 13 Feb 2015 00:30:00 -0000

Author: sbruno
Date: Fri Feb 13 00:29:57 2015
New Revision: 278650
URL: https://svnweb.freebsd.org/changeset/base/278650

Log:
  MFC 272315 272757 274091 274902

  r272315
  Explicitly return None for negative event indices. Prior to this,
  eventat(-1) would return the next-to-last event, causing the back
  button to cycle back to the end of an event source instead of
  stopping at the start.

  r272757
  Add schedgraph traces for callout handlers. Specifically, a callwheel
  logs a running event each time it executes a callout function. The
  event includes the function pointer, argument, and whether or not it
  was run from hardware interrupt context. The callwheel is marked idle
  when each handler completes. This effectively logs the duration of
  each callout routine in the graph.

  r274091
  Bind Ctrl-Q as a global hotkey to exit. Bind Ctrl-W as a hotkey to
  close dialogs.

  r274902
  Add a new thread state "spinning" to schedgraph and add tracepoints
  at the start and stop of spinning waits in lock primitives.

  Reviewed by:    jhb
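The changes below apply one pattern throughout: a thread emits a "spinning"
state event just before entering an adaptive-spin loop and a "running" event
once the loop exits, and the callwheel brackets each callout handler with
"running"/"idle" events. The two helpers in the following sketch are
hypothetical and only illustrate that pairing; they assume a kernel built
with options KTR and with KTR_SCHED enabled in the trace mask.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/ktr.h>
#include <sys/lock.h>
#include <sys/proc.h>
#include <sys/sched.h>
#include <machine/cpu.h>

/*
 * Hypothetical helper: log an adaptive spin on a lock as a state span.
 * This mirrors the KTR_STATE1()/KTR_STATE0() pairs added to lockmgr,
 * mutex, rwlock and sx in the diff below.
 */
static void
spin_trace_sketch(struct lock_object *lo, volatile int *owner_running)
{

        KTR_STATE1(KTR_SCHED, "thread", sched_tdname(curthread),
            "spinning", "lockname:\"%s\"", lo->lo_name);
        while (*owner_running)
                cpu_spinwait();         /* owner is running on another CPU */
        KTR_STATE0(KTR_SCHED, "thread", sched_tdname(curthread), "running");
}

/*
 * Hypothetical helper: bracket a callout handler with running/idle
 * events, as softclock_call_cc() does with cc_ktr_event_name below.
 */
static void
callout_trace_sketch(const char *wheel, void (*func)(void *), void *arg,
    int direct)
{

        KTR_STATE3(KTR_SCHED, "callout", wheel, "running",
            "func:%p", func, "arg:%p", arg, "direct:%d", direct);
        (*func)(arg);
        KTR_STATE0(KTR_SCHED, "callout", wheel, "idle");
}

A trace captured with ktrdump(8) (typically something like ktrdump -ct) can
then be loaded into tools/sched/schedgraph.py, where the new "spinning"
state is drawn in red per the eventcolors change below.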

Modified:
  stable/10/sys/kern/kern_lock.c
  stable/10/sys/kern/kern_mutex.c
  stable/10/sys/kern/kern_rwlock.c
  stable/10/sys/kern/kern_sx.c
  stable/10/sys/kern/kern_timeout.c
  stable/10/tools/sched/schedgraph.py
  stable/10/usr.bin/man/man.sh
Directory Properties:
  stable/10/   (props changed)

Modified: stable/10/sys/kern/kern_lock.c
==============================================================================
--- stable/10/sys/kern/kern_lock.c      Fri Feb 13 00:21:54 2015        (r278649)
+++ stable/10/sys/kern/kern_lock.c      Fri Feb 13 00:29:57 2015        (r278650)
@@ -583,6 +583,9 @@ __lockmgr_args(struct lock *lk, u_int fl
                                CTR3(KTR_LOCK,
                                    "%s: spinning on %p held by %p",
                                    __func__, lk, owner);
+                               KTR_STATE1(KTR_SCHED, "thread",
+                                   sched_tdname(td), "spinning",
+                                   "lockname:\"%s\"", lk->lock_object.lo_name);
 
                                /*
                                 * If we are holding also an interlock drop it
@@ -598,11 +601,16 @@ __lockmgr_args(struct lock *lk, u_int fl
                                while (LK_HOLDER(lk->lk_lock) ==
                                    (uintptr_t)owner && TD_IS_RUNNING(owner))
                                        cpu_spinwait();
+                               KTR_STATE0(KTR_SCHED, "thread",
+                                   sched_tdname(td), "running");
                                GIANT_RESTORE();
                                continue;
                        } else if (LK_CAN_ADAPT(lk, flags) &&
                            (x & LK_SHARE) != 0 && LK_SHARERS(x) &&
                            spintries < alk_retries) {
+                               KTR_STATE1(KTR_SCHED, "thread",
+                                   sched_tdname(td), "spinning",
+                                   "lockname:\"%s\"", lk->lock_object.lo_name);
                                if (flags & LK_INTERLOCK) {
                                        class->lc_unlock(ilk);
                                        flags &= ~LK_INTERLOCK;
@@ -620,6 +628,8 @@ __lockmgr_args(struct lock *lk, u_int fl
                                                break;
                                        cpu_spinwait();
                                }
+                               KTR_STATE0(KTR_SCHED, "thread",
+                                   sched_tdname(td), "running");
                                GIANT_RESTORE();
                                if (i != alk_loops)
                                        continue;
@@ -815,6 +825,9 @@ __lockmgr_args(struct lock *lk, u_int fl
                                CTR3(KTR_LOCK,
                                    "%s: spinning on %p held by %p",
                                    __func__, lk, owner);
+                               KTR_STATE1(KTR_SCHED, "thread",
+                                   sched_tdname(td), "spinning",
+                                   "lockname:\"%s\"", lk->lock_object.lo_name);
 
                                /*
                                 * If we are holding also an interlock drop it
@@ -830,6 +843,8 @@ __lockmgr_args(struct lock *lk, u_int fl
                                while (LK_HOLDER(lk->lk_lock) ==
                                    (uintptr_t)owner && TD_IS_RUNNING(owner))
                                        cpu_spinwait();
+                               KTR_STATE0(KTR_SCHED, "thread",
+                                   sched_tdname(td), "running");
                                GIANT_RESTORE();
                                continue;
                        } else if (LK_CAN_ADAPT(lk, flags) &&
@@ -839,6 +854,9 @@ __lockmgr_args(struct lock *lk, u_int fl
                            !atomic_cmpset_ptr(&lk->lk_lock, x,
                            x | LK_EXCLUSIVE_SPINNERS))
                                continue;
+                       KTR_STATE1(KTR_SCHED, "thread",
+                           sched_tdname(td), "spinning",
+                           "lockname:\"%s\"", lk->lock_object.lo_name);
                        if (flags & LK_INTERLOCK) {
                                class->lc_unlock(ilk);
                                flags &= ~LK_INTERLOCK;
@@ -855,6 +873,8 @@ __lockmgr_args(struct lock *lk, u_int fl
                                        break;
                                cpu_spinwait();
                        }
+                       KTR_STATE0(KTR_SCHED, "thread",
+                           sched_tdname(td), "running");
                        GIANT_RESTORE();
                        if (i != alk_loops)
                                continue;

Modified: stable/10/sys/kern/kern_mutex.c
==============================================================================
--- stable/10/sys/kern/kern_mutex.c     Fri Feb 13 00:21:54 2015        (r278649)
+++ stable/10/sys/kern/kern_mutex.c     Fri Feb 13 00:29:57 2015        (r278650)
@@ -436,6 +436,10 @@ __mtx_lock_sleep(volatile uintptr_t *c,
                                        CTR3(KTR_LOCK,
                                            "%s: spinning on %p held by %p",
                                            __func__, m, owner);
+                               KTR_STATE1(KTR_SCHED, "thread",
+                                   sched_tdname((struct thread *)tid),
+                                   "spinning", "lockname:\"%s\"",
+                                   m->lock_object.lo_name);
                                while (mtx_owner(m) == owner &&
                                    TD_IS_RUNNING(owner)) {
                                        cpu_spinwait();
@@ -443,6 +447,9 @@ __mtx_lock_sleep(volatile uintptr_t *c,
                                        spin_cnt++;
 #endif
                                }
+                               KTR_STATE0(KTR_SCHED, "thread",
+                                   sched_tdname((struct thread *)tid),
+                                   "running");
                                continue;
                        }
                }
@@ -579,6 +586,8 @@ _mtx_lock_spin_cookie(volatile uintptr_t
 
        if (LOCK_LOG_TEST(&m->lock_object, opts))
                CTR1(KTR_LOCK, "_mtx_lock_spin: %p spinning", m);
+       KTR_STATE1(KTR_SCHED, "thread", sched_tdname((struct thread *)tid),
+           "spinning", "lockname:\"%s\"", m->lock_object.lo_name);
 
 #ifdef HWPMC_HOOKS
        PMC_SOFT_CALL( , , lock, failed);
@@ -604,6 +613,8 @@ _mtx_lock_spin_cookie(volatile uintptr_t
 
        if (LOCK_LOG_TEST(&m->lock_object, opts))
                CTR1(KTR_LOCK, "_mtx_lock_spin: %p spin done", m);
+       KTR_STATE0(KTR_SCHED, "thread", sched_tdname((struct thread *)tid),
+           "running");
 
        LOCKSTAT_PROFILE_OBTAIN_LOCK_SUCCESS(LS_MTX_SPIN_LOCK_ACQUIRE, m,
            contested, waittime, (file), (line));

Modified: stable/10/sys/kern/kern_rwlock.c
==============================================================================
--- stable/10/sys/kern/kern_rwlock.c    Fri Feb 13 00:21:54 2015        (r278649)
+++ stable/10/sys/kern/kern_rwlock.c    Fri Feb 13 00:29:57 2015        (r278650)
@@ -44,6 +44,7 @@ __FBSDID("$FreeBSD$");
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -423,6 +424,9 @@ __rw_rlock(volatile uintptr_t *c, const
                                CTR3(KTR_LOCK,
                                    "%s: spinning on %p held by %p",
                                    __func__, rw, owner);
+                               KTR_STATE1(KTR_SCHED, "thread",
+                                   sched_tdname(curthread), "spinning",
+                                   "lockname:\"%s\"", rw->lock_object.lo_name);
                                while ((struct thread*)RW_OWNER(rw->rw_lock) ==
                                    owner && TD_IS_RUNNING(owner)) {
                                        cpu_spinwait();
@@ -430,16 +434,23 @@ __rw_rlock(volatile uintptr_t *c, const
                                        spin_cnt++;
 #endif
                                }
+                               KTR_STATE0(KTR_SCHED, "thread",
+                                   sched_tdname(curthread), "running");
                                continue;
                        }
                } else if (spintries < rowner_retries) {
                        spintries++;
+                       KTR_STATE1(KTR_SCHED, "thread", sched_tdname(curthread),
+                           "spinning", "lockname:\"%s\"",
+                           rw->lock_object.lo_name);
                        for (i = 0; i < rowner_loops; i++) {
                                v = rw->rw_lock;
                                if ((v & RW_LOCK_READ) == 0 || RW_CAN_READ(v))
                                        break;
                                cpu_spinwait();
                        }
+                       KTR_STATE0(KTR_SCHED, "thread", sched_tdname(curthread),
+                           "running");
                        if (i != rowner_loops)
                                continue;
                }
@@ -759,6 +770,9 @@ __rw_wlock_hard(volatile uintptr_t *c, u
                        if (LOCK_LOG_TEST(&rw->lock_object, 0))
                                CTR3(KTR_LOCK, "%s: spinning on %p held by %p",
                                    __func__, rw, owner);
+                       KTR_STATE1(KTR_SCHED, "thread", sched_tdname(curthread),
+                           "spinning", "lockname:\"%s\"",
+                           rw->lock_object.lo_name);
                        while ((struct thread*)RW_OWNER(rw->rw_lock) == owner &&
                            TD_IS_RUNNING(owner)) {
                                cpu_spinwait();
@@ -766,6 +780,8 @@ __rw_wlock_hard(volatile uintptr_t *c, u
                                spin_cnt++;
 #endif
                        }
+                       KTR_STATE0(KTR_SCHED, "thread", sched_tdname(curthread),
+                           "running");
                        continue;
                }
                if ((v & RW_LOCK_READ) && RW_READERS(v) &&
@@ -777,11 +793,16 @@ __rw_wlock_hard(volatile uintptr_t *c, u
                                }
                        }
                        spintries++;
+                       KTR_STATE1(KTR_SCHED, "thread", sched_tdname(curthread),
+                           "spinning", "lockname:\"%s\"",
+                           rw->lock_object.lo_name);
                        for (i = 0; i < rowner_loops; i++) {
                                if ((rw->rw_lock & RW_LOCK_WRITE_SPINNER) == 0)
                                        break;
                                cpu_spinwait();
                        }
+                       KTR_STATE0(KTR_SCHED, "thread", sched_tdname(curthread),
+                           "running");
 #ifdef KDTRACE_HOOKS
                        spin_cnt += rowner_loops - i;
 #endif

Modified: stable/10/sys/kern/kern_sx.c
==============================================================================
--- stable/10/sys/kern/kern_sx.c        Fri Feb 13 00:21:54 2015        (r278649)
+++ stable/10/sys/kern/kern_sx.c        Fri Feb 13 00:29:57 2015        (r278650)
@@ -51,6 +51,7 @@ __FBSDID("$FreeBSD$");
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -560,6 +561,10 @@ _sx_xlock_hard(struct sx *sx, uintptr_t
                                        CTR3(KTR_LOCK,
                                            "%s: spinning on %p held by %p",
                                            __func__, sx, owner);
+                               KTR_STATE1(KTR_SCHED, "thread",
+                                   sched_tdname(curthread), "spinning",
+                                   "lockname:\"%s\"",
+                                   sx->lock_object.lo_name);
                                GIANT_SAVE();
                                while (SX_OWNER(sx->sx_lock) == x &&
                                    TD_IS_RUNNING(owner)) {
@@ -568,9 +573,14 @@ _sx_xlock_hard(struct sx *sx, uintptr_t
                                        spin_cnt++;
 #endif
                                }
+                               KTR_STATE0(KTR_SCHED, "thread",
+                                   sched_tdname(curthread), "running");
                                continue;
                        }
                } else if (SX_SHARERS(x) && spintries < asx_retries) {
+                       KTR_STATE1(KTR_SCHED, "thread",
+                           sched_tdname(curthread), "spinning",
+                           "lockname:\"%s\"", sx->lock_object.lo_name);
                        GIANT_SAVE();
                        spintries++;
                        for (i = 0; i < asx_loops; i++) {
@@ -587,6 +597,8 @@ _sx_xlock_hard(struct sx *sx, uintptr_t
                                        spin_cnt++;
 #endif
                        }
+                       KTR_STATE0(KTR_SCHED, "thread",
+                           sched_tdname(curthread), "running");
                        if (i != asx_loops)
                                continue;
                }
@@ -849,6 +861,9 @@ _sx_slock_hard(struct sx *sx, int opts,
                                        CTR3(KTR_LOCK,
                                            "%s: spinning on %p held by %p",
                                            __func__, sx, owner);
+                               KTR_STATE1(KTR_SCHED, "thread",
+                                   sched_tdname(curthread), "spinning",
+                                   "lockname:\"%s\"", sx->lock_object.lo_name);
                                GIANT_SAVE();
                                while (SX_OWNER(sx->sx_lock) == x &&
                                    TD_IS_RUNNING(owner)) {
@@ -857,6 +872,8 @@ _sx_slock_hard(struct sx *sx, int opts,
                                        spin_cnt++;
 #endif
                                        cpu_spinwait();
                                }
+                               KTR_STATE0(KTR_SCHED, "thread",
+                                   sched_tdname(curthread), "running");
                                continue;
                        }
                }

Modified: stable/10/sys/kern/kern_timeout.c
==============================================================================
--- stable/10/sys/kern/kern_timeout.c   Fri Feb 13 00:21:54 2015        (r278649)
+++ stable/10/sys/kern/kern_timeout.c   Fri Feb 13 00:29:57 2015        (r278650)
@@ -150,6 +150,7 @@ struct callout_cpu {
        sbintime_t              cc_lastscan;
        void                    *cc_cookie;
        u_int                   cc_bucket;
+       char                    cc_ktr_event_name[20];
 };
 
 #define cc_exec_curr           cc_exec_entity[0].cc_curr
@@ -188,7 +189,7 @@ struct callout_cpu cc_cpu;
 static int timeout_cpu;
 
-static void    callout_cpu_init(struct callout_cpu *cc);
+static void    callout_cpu_init(struct callout_cpu *cc, int cpu);
 static void    softclock_call_cc(struct callout *c, struct callout_cpu *cc,
 #ifdef CALLOUT_PROFILING
                    int *mpcalls, int *lockcalls, int *gcalls,
@@ -283,7 +284,7 @@ callout_callwheel_init(void *dummy)
        cc = CC_CPU(timeout_cpu);
        cc->cc_callout = malloc(ncallout * sizeof(struct callout),
            M_CALLOUT, M_WAITOK);
-       callout_cpu_init(cc);
+       callout_cpu_init(cc, timeout_cpu);
 }
 
 SYSINIT(callwheel_init, SI_SUB_CPU, SI_ORDER_ANY, callout_callwheel_init, NULL);
@@ -291,7 +292,7 @@ SYSINIT(callwheel_init, SI_SUB_CPU, SI_O
  * Initialize the per-cpu callout structures.
  */
 static void
-callout_cpu_init(struct callout_cpu *cc)
+callout_cpu_init(struct callout_cpu *cc, int cpu)
 {
        struct callout *c;
        int i;
@@ -306,6 +307,8 @@ callout_cpu_init(struct callout_cpu *cc)
        cc->cc_firstevent = INT64_MAX;
        for (i = 0; i < 2; i++)
                cc_cce_cleanup(cc, i);
+       snprintf(cc->cc_ktr_event_name, sizeof(cc->cc_ktr_event_name),
+           "callwheel cpu %d", cpu);
        if (cc->cc_callout == NULL)     /* Only cpu0 handles timeout(9) */
                return;
        for (i = 0; i < ncallout; i++) {
@@ -367,7 +370,7 @@ start_softclock(void *dummy)
                        continue;
                cc = CC_CPU(cpu);
                cc->cc_callout = NULL;  /* Only cpu0 handles timeout(9). */
-               callout_cpu_init(cc);
+               callout_cpu_init(cc, cpu);
                if (swi_add(NULL, "clock", softclock, cc, SWI_CLOCK,
                    INTR_MPSAFE, &cc->cc_cookie))
                        panic("died while creating standard software ithreads");
@@ -674,6 +677,8 @@ softclock_call_cc(struct callout *c, str
                CTR3(KTR_CALLOUT, "callout %p func %p arg %p",
                    c, c_func, c_arg);
        }
+       KTR_STATE3(KTR_SCHED, "callout", cc->cc_ktr_event_name, "running",
+           "func:%p", c_func, "arg:%p", c_arg, "direct:%d", direct);
 #if defined(DIAGNOSTIC) || defined(CALLOUT_PROFILING)
        sbt1 = sbinuptime();
 #endif
@@ -696,6 +701,7 @@ softclock_call_cc(struct callout *c, str
                lastfunc = c_func;
        }
 #endif
+       KTR_STATE0(KTR_SCHED, "callout", cc->cc_ktr_event_name, "idle");
        CTR1(KTR_CALLOUT, "callout %p finished", c);
        if ((c_flags & CALLOUT_RETURNUNLOCKED) == 0)
                class->lc_unlock(c_lock);

Modified: stable/10/tools/sched/schedgraph.py
==============================================================================
--- stable/10/tools/sched/schedgraph.py Fri Feb 13 00:21:54 2015        (r278649)
+++ stable/10/tools/sched/schedgraph.py Fri Feb 13 00:29:57 2015        (r278650)
@@ -70,6 +70,7 @@ eventcolors = [
        ("count",       "red"),
        ("running",     "green"),
        ("idle",        "grey"),
+       ("spinning",    "red"),
        ("yielding",    "yellow"),
        ("swapped",     "violet"),
        ("suspended",   "purple"),
@@ -80,8 +81,6 @@ eventcolors = [
        ("runq rem",    "yellow"),
        ("thread exit", "grey"),
        ("proc exit",   "grey"),
-       ("callwheel idle",      "grey"),
-       ("callout running",     "green"),
        ("lock acquire",        "blue"),
        ("lock contest",        "purple"),
        ("failed lock try",     "red"),
@@ -856,7 +855,7 @@ class EventSource:
                return (Y_EVENTSOURCE)
 
        def eventat(self, i):
-               if (i >= len(self.events)):
+               if (i >= len(self.events) or i < 0):
                        return (None)
                event = self.events[i]
                return (event)
@@ -903,7 +902,6 @@ class KTRFile:
                self.timestamp_f = None
                self.timestamp_l = None
                self.locks = {}
-               self.callwheels = {}
                self.ticks = {}
                self.load = {}
                self.crit = {}

Modified: stable/10/usr.bin/man/man.sh
==============================================================================
--- stable/10/usr.bin/man/man.sh        Fri Feb 13 00:21:54 2015        (r278649)
+++ stable/10/usr.bin/man/man.sh        Fri Feb 13 00:29:57 2015        (r278650)
@@ -276,11 +276,8 @@ man_check_for_so() {
        return 0
 }
 
-# Usage: man_display_page
-# Display either the manpage or catpage depending on the use_cat variable
 man_display_page() {
-       local EQN NROFF PIC TBL TROFF REFER VGRIND
-       local IFS l nroff_dev pipeline preproc_arg tool
+       local IFS pipeline preconv_enc testline
 
        # We are called with IFS set to colon. This causes really weird
        # things to happen for the variables that have spaces in them.
@@ -312,6 +309,49 @@ man_display_page() {
                return
        fi
 
+       case "${manpage}" in
+       *.${man_charset}/*)
+               case "$man_charset" in
+               ISO8859-1)      preconv_enc="latin-1" ;;
+               ISO8859-15)     preconv_enc="latin-1" ;;
+               UTF-8)          preconv_enc="utf-8" ;;
+               esac
+               ;;
+       esac
+
+       if [ -n "$preconv_enc" ]; then
+               pipeline="preconv -e $preconv_enc |"
+       fi
+       testline="$pipeline mandoc -Tlint -Werror 2>/dev/null"
+       pipeline="$pipeline mandoc -Tlocale | $MANPAGER"
+
+       if ! eval "$cattool $manpage | $testline" ;then
+               if which -s groff; then
+                       man_display_page_groff
+               else
+                       echo "This manpage needs groff(1) to be rendered" >&2
+                       echo "First install groff(1): " >&2
+                       echo "pkg install groff " >&2
+                       ret=1
+               fi
+               return
+       fi
+
+       if [ $debug -gt 0 ]; then
+               decho "Command: $cattool $manpage | $pipeline"
+               ret=0
+       else
+               eval "$cattool $manpage | $pipeline"
+               ret=$?
+       fi
+}
+
+# Usage: man_display_page
+# Display either the manpage or catpage depending on the use_cat variable
+man_display_page_groff() {
+       local EQN NROFF PIC TBL TROFF REFER VGRIND
+       local IFS l nroff_dev pipeline preproc_arg tool
+
        # So, we really do need to parse the manpage. First, figure out the
        # device flag (-T) we have to pass to eqn(1) and groff(1). Then,
        # setup the pipeline of commands based on the user's request.
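
A note on the r272315 schedgraph fix merged above: Python list indexing
accepts negative values, with events[-1] counting from the end of the list,
so eventat(-1) quietly wrapped around instead of failing and the back button
looped from the first event back to the end. The added "or i < 0" test makes
an out-of-range index in either direction return None. A hypothetical C
rendering of the guarded accessor, for comparison only:

#include <stddef.h>

struct event;

/*
 * Hypothetical C counterpart of EventSource.eventat() after the fix:
 * reject an index on either side of the valid range up front.  C array
 * indexing has no wrap-around, but an unchecked negative index would
 * read out of bounds, so both checks are still needed.
 */
static struct event *
eventat(struct event **events, int nevents, int i)
{

        if (i < 0 || i >= nevents)
                return (NULL);
        return (events[i]);
}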