Date:      Wed, 29 Jul 2015 23:59:18 +0000 (UTC)
From:      Konstantin Belousov <kib@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r286050 - head/sys/i386/include
Message-ID:  <201507292359.t6TNxIBb009954@repo.freebsd.org>

Author: kib
Date: Wed Jul 29 23:59:17 2015
New Revision: 286050
URL: https://svnweb.freebsd.org/changeset/base/286050

Log:
  MFamd64 r285934: Remove store/load (= full) barrier from the i386
  atomic_load_acq_*().
  
  Noted by:	alc (long time ago)
  Reviewed by:	alc, bde
  Tested by:	pho
  Sponsored by:	The FreeBSD Foundation
  MFC after:	2 weeks

Modified:
  head/sys/i386/include/atomic.h
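
[Editor's note: the following is an illustrative hand-expansion of the
ATOMIC_LOAD() macro for the "int" type, showing what this commit changes;
it mirrors the names in the header but is a sketch, not the verbatim file
contents, and it assumes the header-internal __storeload_barrier() and
__compiler_membar() helpers.]

/* Before r286050: a full (Store/Load) barrier preceded the load in SMP kernels. */
static __inline u_int
atomic_load_acq_int_old(volatile u_int *p)
{
	u_int res;

	__storeload_barrier();		/* "lock addl $0,mem" on SMP */
	res = *p;
	__compiler_membar();		/* keep the compiler from reordering */
	return (res);
}

/* After r286050: a plain load already has acquire semantics under the x86
 * memory model, so only a compiler barrier is needed. */
static __inline u_int
atomic_load_acq_int(volatile u_int *p)
{
	u_int res;

	res = *p;
	__compiler_membar();
	return (res);
}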

Modified: head/sys/i386/include/atomic.h
==============================================================================
--- head/sys/i386/include/atomic.h	Wed Jul 29 23:37:15 2015	(r286049)
+++ head/sys/i386/include/atomic.h	Wed Jul 29 23:59:17 2015	(r286050)
@@ -233,13 +233,13 @@ atomic_testandset_int(volatile u_int *p,
  * IA32 memory model, a simple store guarantees release semantics.
  *
  * However, a load may pass a store if they are performed on distinct
- * addresses, so for atomic_load_acq we introduce a Store/Load barrier
- * before the load in SMP kernels.  We use "lock addl $0,mem", as
- * recommended by the AMD Software Optimization Guide, and not mfence.
- * In the kernel, we use a private per-cpu cache line as the target
- * for the locked addition, to avoid introducing false data
- * dependencies.  In userspace, a word at the top of the stack is
- * utilized.
+ * addresses, so we need a Store/Load barrier for sequentially
+ * consistent fences in SMP kernels.  We use "lock addl $0,mem" for a
+ * Store/Load barrier, as recommended by the AMD Software Optimization
+ * Guide, and not mfence.  In the kernel, we use a private per-cpu
+ * cache line as the target for the locked addition, to avoid
+ * introducing false data dependencies.  In userspace, a word at the
+ * top of the stack is utilized.
  *
  * For UP kernels, however, the memory of the single processor is
  * always consistent, so we only need to stop the compiler from
@@ -282,22 +282,12 @@ __storeload_barrier(void)
 }
 #endif /* _KERNEL*/
 
-/*
- * C11-standard acq/rel semantics only apply when the variable in the
- * call is the same for acq as it is for rel.  However, our previous
- * (x86) implementations provided much stronger ordering than required
- * (essentially what is called seq_cst order in C11).  This
- * implementation provides the historical strong ordering since some
- * callers depend on it.
- */
-
 #define	ATOMIC_LOAD(TYPE)					\
 static __inline u_##TYPE					\
 atomic_load_acq_##TYPE(volatile u_##TYPE *p)			\
 {								\
 	u_##TYPE res;						\
 								\
-	__storeload_barrier();					\
 	res = *p;						\
 	__compiler_membar();					\
 	return (res);						\


