Date:      Wed, 17 Sep 2014 14:07:41 +0000 (UTC)
From:      Alexander Motin <mav@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-stable@freebsd.org, svn-src-stable-9@freebsd.org
Subject:   svn commit: r271708 - stable/9/sys/kern
Message-ID:  <201409171407.s8HE7f1N071682@svn.freebsd.org>

Author: mav
Date: Wed Sep 17 14:07:40 2014
New Revision: 271708
URL: http://svnweb.freebsd.org/changeset/base/271708

Log:
  MFC r271604, r271616:
  Add a couple of memory barriers to order tdq_cpu_idle and tdq_load accesses.
  
  This change fixes transient performance drops in some of my benchmarks,
  which vanished as soon as I tried to collect any stats from the
  scheduler.  It looks like reordered accesses to those variables
  sometimes caused an IPI_PREEMPT to be lost, delaying thread execution
  until some later interrupt.
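
  To illustrate the race, below is a minimal self-contained sketch of the
  two code paths, using C11 atomics in place of the kernel's mb().  All
  names here (run_queue_len, cpu_idle_flag, send_wakeup_ipi(),
  low_power_wait()) are hypothetical stand-ins for tdq_load, tdq_cpu_idle,
  IPI_PREEMPT delivery, and the MD cpu_idle() handler, not the actual
  FreeBSD code.  Without both fences, the notifier can read a stale
  cpu_idle_flag == 0 and skip the IPI while the idle CPU reads a stale
  run_queue_len == 0 and goes to sleep -- the lost wakeup described above.

    #include <stdatomic.h>

    static atomic_int run_queue_len;   /* stands in for tdq_load */
    static atomic_int cpu_idle_flag;   /* stands in for tdq_cpu_idle */

    /* Hypothetical stubs for IPI_PREEMPT delivery and the MD idle handler. */
    static void send_wakeup_ipi(void) { /* IPI would be sent here */ }
    static void low_power_wait(void)  { /* cpu_idle() would run here */ }

    /* Notifier side, analogous to tdq_notify() after tdq_load was bumped. */
    static void
    enqueue_and_notify(void)
    {
            atomic_fetch_add_explicit(&run_queue_len, 1, memory_order_relaxed);
            /*
             * Full fence, like the mb() in tdq_notify(): the store to
             * run_queue_len must be visible before cpu_idle_flag is read.
             */
            atomic_thread_fence(memory_order_seq_cst);
            if (atomic_load_explicit(&cpu_idle_flag, memory_order_relaxed) != 0)
                    send_wakeup_ipi();
    }

    /* Idle side, analogous to one iteration of sched_idletd(). */
    static void
    idle_loop_iteration(void)
    {
            atomic_store_explicit(&cpu_idle_flag, 1, memory_order_relaxed);
            /*
             * Full fence, like the mb() in sched_idletd(): the store to
             * cpu_idle_flag must be visible before run_queue_len is read.
             */
            atomic_thread_fence(memory_order_seq_cst);
            if (atomic_load_explicit(&run_queue_len, memory_order_relaxed) == 0)
                    low_power_wait();  /* the IPI wakes us if work raced in */
            atomic_store_explicit(&cpu_idle_flag, 0, memory_order_relaxed);
    }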

Modified:
  stable/9/sys/kern/sched_ule.c
Directory Properties:
  stable/9/   (props changed)
  stable/9/sys/   (props changed)

Modified: stable/9/sys/kern/sched_ule.c
==============================================================================
--- stable/9/sys/kern/sched_ule.c	Wed Sep 17 14:06:21 2014	(r271707)
+++ stable/9/sys/kern/sched_ule.c	Wed Sep 17 14:07:40 2014	(r271708)
@@ -1006,6 +1006,14 @@ tdq_notify(struct tdq *tdq, struct threa
 	ctd = pcpu_find(cpu)->pc_curthread;
 	if (!sched_shouldpreempt(pri, ctd->td_priority, 1))
 		return;
+
+	/*
+	 * Make sure the tdq_load update done before calling this
+	 * function is globally visible before we read tdq_cpu_idle.
+	 * The idle thread accesses both without locks, so order matters.
+	 */
+	mb();
+
 	if (TD_IS_IDLETHREAD(ctd)) {
 		/*
 		 * If the MD code has an idle wakeup routine try that before
@@ -2607,6 +2615,12 @@ sched_idletd(void *dummy)
 
 		/* Run main MD idle handler. */
 		tdq->tdq_cpu_idle = 1;
+		/*
+		 * Make sure the tdq_cpu_idle update is globally visible
+		 * before cpu_idle() reads tdq_load.  The ordering matters
+		 * to avoid a race with tdq_notify().
+		 */
+		mb();
 		cpu_idle(switchcnt * 4 > sched_idlespinthresh);
 		tdq->tdq_cpu_idle = 0;
 



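Note how the two barriers pair: tdq_notify() orders a store (tdq_load)
before a load (tdq_cpu_idle), while sched_idletd() orders a store
(tdq_cpu_idle) before a load (tdq_load).  Store-before-load is the one
ordering that even strongly ordered x86 (TSO) does not enforce, so a
full mb() is needed on both sides; wmb() or rmb() alone would not order
a store against a later load, and dropping either barrier reintroduces
the lost IPI_PREEMPT.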