Date:      Thu, 30 Jul 2015 15:47:54 +0000 (UTC)
From:      Konstantin Belousov <kib@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r286078 - in head/sys: amd64/include i386/include
Message-ID:  <201507301547.t6UFlsJx099636@repo.freebsd.org>

Author: kib
Date: Thu Jul 30 15:47:53 2015
New Revision: 286078
URL: https://svnweb.freebsd.org/changeset/base/286078

Log:
  Improve comments.
  
  Submitted by:	bde
  MFC after:	2 weeks

Modified:
  head/sys/amd64/include/atomic.h
  head/sys/i386/include/atomic.h

Modified: head/sys/amd64/include/atomic.h
==============================================================================
--- head/sys/amd64/include/atomic.h	Thu Jul 30 15:43:26 2015	(r286077)
+++ head/sys/amd64/include/atomic.h	Thu Jul 30 15:47:53 2015	(r286078)
@@ -272,10 +272,10 @@ atomic_testandset_long(volatile u_long *
  * addresses, so we need a Store/Load barrier for sequentially
  * consistent fences in SMP kernels.  We use "lock addl $0,mem" for a
  * Store/Load barrier, as recommended by the AMD Software Optimization
- * Guide, and not mfence.  In the kernel, we use a private per-cpu
- * cache line as the target for the locked addition, to avoid
- * introducing false data dependencies.  In user space, we use a word
- * in the stack's red zone (-8(%rsp)).
+ * Guide, and not mfence.  To avoid false data dependencies, we use a
+ * special address for "mem".  In the kernel, we use a private per-cpu
+ * cache line.  In user space, we use a word in the stack's red zone
+ * (-8(%rsp)).
  *
  * For UP kernels, however, the memory of the single processor is
  * always consistent, so we only need to stop the compiler from

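[Editor's note: as a hedged illustration of the comment above, and not part of the commit itself, the amd64 user-space Store/Load barrier it describes could be sketched as below. The function name is hypothetical; the real definition lives in head/sys/amd64/include/atomic.h.]

/*
 * Hypothetical sketch of the amd64 user-space barrier described
 * above: "lock addl $0,mem" with the red-zone word at -8(%rsp) as
 * "mem", a special address that carries no useful data, so the
 * locked addition introduces no false data dependencies.
 */
static __inline void
seq_cst_fence_amd64(void)
{

	__asm __volatile("lock; addl $0,-8(%%rsp)" : : : "memory", "cc");
}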
Modified: head/sys/i386/include/atomic.h
==============================================================================
--- head/sys/i386/include/atomic.h	Thu Jul 30 15:43:26 2015	(r286077)
+++ head/sys/i386/include/atomic.h	Thu Jul 30 15:47:53 2015	(r286078)
@@ -259,9 +259,9 @@ atomic_testandset_int(volatile u_int *p,
  * consistent fences in SMP kernels.  We use "lock addl $0,mem" for a
  * Store/Load barrier, as recommended by the AMD Software Optimization
  * Guide, and not mfence.  In the kernel, we use a private per-cpu
- * cache line as the target for the locked addition, to avoid
- * introducing false data dependencies.  In userspace, a word at the
- * top of the stack is utilized.
+ * cache line for "mem", to avoid introducing false data
+ * dependencies.  In user space, we use the word at the top of the
+ * stack.
  *
  * For UP kernels, however, the memory of the single processor is
  * always consistent, so we only need to stop the compiler from

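[Editor's note: a matching hedged sketch for the i386 comment above, plus the UP-kernel case both comments mention. Names are hypothetical; only the technique, "lock addl $0,mem" on a harmless address and a plain compiler barrier for UP, is taken from the commit.]

/*
 * Hypothetical i386 user-space variant: the word at the top of the
 * stack, (%esp), serves as "mem" for the locked addition.
 */
static __inline void
seq_cst_fence_i386(void)
{

	__asm __volatile("lock; addl $0,(%%esp)" : : : "memory", "cc");
}

/*
 * On UP kernels the single processor's memory is always consistent,
 * so only compiler reordering must be stopped; an empty asm with a
 * "memory" clobber is the usual compiler barrier.
 */
static __inline void
compiler_fence(void)
{

	__asm __volatile("" : : : "memory");
}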

