Date:      Mon, 3 Jun 2013 23:02:48 GMT
From:      Oliver Pinter <oliver.pntr@gmail.com>
To:        freebsd-gnats-submit@FreeBSD.org
Subject:   amd64/179282: [PATCH] Intel SMAP for FreeBSD-CURRENT
Message-ID:  <201306032302.r53N2mLP006167@oldred.freebsd.org>
Resent-Message-ID: <201306032310.r53NA1wQ062341@freefall.freebsd.org>


>Number:         179282
>Category:       amd64
>Synopsis:       [PATCH] Intel SMAP for FreeBSD-CURRENT
>Confidential:   no
>Severity:       non-critical
>Priority:       low
>Responsible:    freebsd-amd64
>State:          open
>Quarter:        
>Keywords:       
>Date-Required:
>Class:          change-request
>Submitter-Id:   current-users
>Arrival-Date:   Mon Jun 03 23:10:01 UTC 2013
>Closed-Date:
>Last-Modified:
>Originator:     Oliver Pinter
>Release:        FreeBSD 10-CURRENT
>Organization:
>Environment:
>Description:
As part of my thesis, I implemented Intel SMAP[1] support for FreeBSD.
The current stable version of the patch (attached) has a compile-time
option to enable SMAP.

A feature-complete dynamic version is expected by the end of the
month; it will patch the kernel at boot time when the feature is
present in the CPU.

[1] http://lwn.net/Articles/517475/

patches: https://github.com/opntr/freebsd-patches-2013-tavasz
smap-test: https://github.com/opntr/freebsd-smap-tester
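
To test the compile-time option, add it to a custom kernel config and
rebuild. A minimal sketch (MYKERNEL is a placeholder name; the
INTEL_SMAP option itself is added by the attached patch):

	# /usr/src/sys/amd64/conf/MYKERNEL
	include GENERIC
	ident   MYKERNEL
	options INTEL_SMAP	# Supervisor Mode Access Prevention

	# build and install:
	# cd /usr/src && make buildkernel KERNCONF=MYKERNEL
	# make installkernel KERNCONF=MYKERNEL && shutdown -r now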

>How-To-Repeat:

>Fix:


Patch attached with submission follows:

>From ae18b374b38401f736e4e13a8ab5fab82985df2b Mon Sep 17 00:00:00 2001
From: Oliver Pinter <oliver.pntr@gmail.com>
Date: Tue, 16 Apr 2013 01:32:25 +0200
Subject: [PATCH] added SMAP support for FreeBSD against r250423

This patch implements support for Intel's new protection technology.

Supervisor Mode Access Prevention (SMAP) is the newest security feature
from Intel; it first appears in the Haswell line of processors.

When SMAP is enabled, the kernel cannot access pages that belong to
userspace. Because the kernel does have to access user pages in some
cases, the technology provides two instructions (STAC and CLAC) to
temporarily disable this protection.

When SMAP detects a protection violation, the kernel must call panic().
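
As an illustration of the resulting access-window pattern (a
simplified, hypothetical sketch only -- fetch_user_u32() is not part
of the patch, and real code must also arm pcb_onfault for fault
recovery; stac()/clac() are the inlines this patch adds to cpufunc.h):

	static int
	fetch_user_u32(const uint32_t *uaddr, uint32_t *val)
	{
		if ((uintptr_t)uaddr >= VM_MAXUSER_ADDRESS)
			return (EFAULT);
		stac();		/* set RFLAGS.AC: open the access window */
		*val = *uaddr;	/* supervisor access to a user page */
		clac();		/* clear RFLAGS.AC: re-arm SMAP */
		return (0);
	}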

Intel's SMAP documentation:
http://software.intel.com/sites/default/files/319433-014.pdf

Test case:
https://github.com/opntr/freebsd-smap-tester

Some parts of this patch were discussed with kib freebsd org and Hunger.

Signed-off-by: Oliver Pinter <oliver.pntr@gmail.com>

----------------------------------------------------------------------

* added	void clac(void) and	void stac(void) to cpufunc.h
* added STAC/CLAC instruction and added config options
* added basic support for SMAP
* added stac/clac in support.S around userspace memory access
* added RFLAGS.AC clearing to exception.S related to SMAP
* added RFLAGS.AC clearing to ia32_exception.S related to SMAP
* added RFLAGS.AC clearing to asmacros.h related to SMAP
* clac and stac functions depend on INTEL_SMAP
* added trap handler to SMAP

For security reasons, when a #PF is caused by SMAP, the kernel should panic.

" [...]

The above items imply that the error code delivered by a page-fault
exception due to SMAP is either 1 (for reads) or 3 (for writes).
Note that the only page-fault exceptions that deliver an error code
of 1 are those induced by SMAP. (If CR0.WP = 1, some page-fault
exceptions may deliver an error code of 3 even if CR4.SMAP = 0.)

[...]" - intel 319433-014.pdf 9.3.3
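
A page fault is therefore a candidate SMAP violation when its error
code, with the write bit masked off, equals the present bit alone
(i.e. the code is 1 or 3 and the U, RSV and I bits are clear). A
sketch of such a predicate using FreeBSD's PGEX_* error-code bits
(smap_errcode_p() is a hypothetical helper, not part of this patch):

	static __inline bool
	smap_errcode_p(u_long err)
	{
		/* true for error code 1 (read) or 3 (write) only */
		return ((err & ~PGEX_W) == PGEX_P);
	}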

* Clear RFLAGS.AC at the start of the NMI handler

suggested by kib@:
> I think that NMI handler should have CLAC executed unconditionally and
> much earlier than it is done in your patch. Since NMI could interrupt
> the copy*() functions, you would get some kernel code unnecessarily
> executing with SMAP off.

* added note to fault handlers related to SMAP

suggested by kib@:
> I believe that exception labels in the support.S, like copyout_fault
> etc
> deserve a comment describing that EFLAGS.AC bit gets cleared by the
> exception entry point before the control reaches the label.

* added AC flag checking and factored out SMAP checking in trap_pfault() to make it more readable

partially suggested by kib:
> The trap_pfault() fragment should check for the error code equal to 1 or
> 3, as described in the 9.3.3, instead of only checking for the present
> bit set. More, I suggest you to explicitly check that the #PF exception
> came from the kernel mode and that EFLAGS.AC was also set, before
> deciding to panic due to SMAP-detected failure.

* build fix for the case when INTEL_SMAP is not set in the kernel config

/usr/home/op/git/freebsd-base.git.http/sys/amd64/amd64/trap.c:889:1: error: unused function 'smap_access_violation' [-Werror,-Wunused-function]
smap_access_violation(struct trapframe *frame, int usermode)
^
1 error generated.
*** [trap.o] Error code 1
1 error
*** [buildkernel] Error code 2
1 error
*** [buildkernel] Error code 2
1 error

* fixed smap_access_violation(...), spotted by Hunger

* fixed smap_access_violation() when the CPU does not support SMAP

* use the CLAC and STAC macros instead of the .byte sequences

* added memory clobber to the clac and stac inline assembly

	clac and stac are sensitive instructions; a memory clobber was
	added to prevent instruction reordering

	spotted by Hunger, PaXTeam
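
For reference, the shape of the fix (illustrative only; the real
definitions are in the cpufunc.h hunk below). Without the clobber the
compiler is free to move user-memory loads and stores across the
instruction; the "memory" clobber turns it into a compiler barrier:

	/* unsafe: accesses may be reordered across the window boundary */
	__asm __volatile(__STRING(STAC));

	/* safe: the "memory" clobber keeps accesses inside the window */
	__asm __volatile(__STRING(STAC) : : : "memory");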

Signed-off-by: Oliver Pinter <oliver.pntr@gmail.com>
---
 sys/amd64/amd64/exception.S     |  6 ++++++
 sys/amd64/amd64/identcpu.c      | 28 +++++++++++++++++++++---
 sys/amd64/amd64/initcpu.c       | 12 +++++++----
 sys/amd64/amd64/pmap.c          | 13 +++++++++++
 sys/amd64/amd64/support.S       | 48 +++++++++++++++++++++++++++++++++++++++++
 sys/amd64/amd64/trap.c          | 24 +++++++++++++++++++++
 sys/amd64/ia32/ia32_exception.S |  1 +
 sys/amd64/include/asmacros.h    |  3 ++-
 sys/amd64/include/cpufunc.h     | 27 +++++++++++++++++++++++
 sys/amd64/include/smap_instr.h  | 14 ++++++++++++
 sys/conf/NOTES                  |  4 ++++
 sys/conf/options.amd64          |  3 +++
 sys/x86/include/psl.h           |  2 +-
 sys/x86/include/specialreg.h    |  1 +
 14 files changed, 177 insertions(+), 9 deletions(-)
 create mode 100644 sys/amd64/include/smap_instr.h

diff --git a/sys/amd64/amd64/exception.S b/sys/amd64/amd64/exception.S
index 89ad638..d7ed7e4 100644
--- a/sys/amd64/amd64/exception.S
+++ b/sys/amd64/amd64/exception.S
@@ -42,6 +42,7 @@
 #include <machine/asmacros.h>
 #include <machine/psl.h>
 #include <machine/trap.h>
+#include <machine/smap_instr.h>
 #include <machine/specialreg.h>
 
 #include "assym.s"
@@ -196,6 +197,7 @@ alltraps_pushregs_no_rdi:
 	movq	%r15,TF_R15(%rsp)
 	movl	$TF_HASSEGS,TF_FLAGS(%rsp)
 	cld
+	CLAC
 	FAKE_MCOUNT(TF_RIP(%rsp))
 #ifdef KDTRACE_HOOKS
 	/*
@@ -276,6 +278,7 @@ IDTVEC(dblfault)
 	movw	%ds,TF_DS(%rsp)
 	movl	$TF_HASSEGS,TF_FLAGS(%rsp)
 	cld
+	CLAC
 	testb	$SEL_RPL_MASK,TF_CS(%rsp) /* Did we come from kernel? */
 	jz	1f			/* already running with kernel GS.base */
 	swapgs
@@ -379,6 +382,7 @@ IDTVEC(fast_syscall)
 	movq	%r15,TF_R15(%rsp)	/* C preserved */
 	movl	$TF_HASSEGS,TF_FLAGS(%rsp)
 	cld
+	CLAC
 	FAKE_MCOUNT(TF_RIP(%rsp))
 	movq	PCPU(CURTHREAD),%rdi
 	movq	%rsp,TD_FRAME(%rdi)
@@ -449,6 +453,7 @@ IDTVEC(fast_syscall32)
  */
 
 IDTVEC(nmi)
+	CLAC
 	subq	$TF_RIP,%rsp
 	movl	$(T_NMI),TF_TRAPNO(%rsp)
 	movq	$0,TF_ADDR(%rsp)
@@ -533,6 +538,7 @@ nmi_calltrap:
 
 	shrq	$3,%rcx		/* trap frame size in long words */
 	cld
+	CLAC
 	rep
 	movsq			/* copy trapframe */
 
diff --git a/sys/amd64/amd64/identcpu.c b/sys/amd64/amd64/identcpu.c
index ec5a2aa..90495eb 100644
--- a/sys/amd64/amd64/identcpu.c
+++ b/sys/amd64/amd64/identcpu.c
@@ -391,12 +391,14 @@ printcpuinfo(void)
 				       /* RDFSBASE/RDGSBASE/WRFSBASE/WRGSBASE */
 				       "\001GSFSBASE"
 				       "\002TSCADJ"
+				       "\003<b2>"
 				       /* Bit Manipulation Instructions */
 				       "\004BMI1"
 				       /* Hardware Lock Elision */
 				       "\005HLE"
 				       /* Advanced Vector Instructions 2 */
 				       "\006AVX2"
+				       "\007<b6>"
 				       /* Supervisor Mode Execution Prot. */
 				       "\010SMEP"
 				       /* Bit Manipulation Instructions */
@@ -406,12 +408,29 @@ printcpuinfo(void)
 				       "\013INVPCID"
 				       /* Restricted Transactional Memory */
 				       "\014RTM"
+				       "\015<b12>"
+				       "\016<b13>"
+				       "\017<b14>"
+				       "\020<b15>"
+				       "\021<b16>"
+				       "\022<b17>"
 				       /* Enhanced NRBG */
-				       "\022RDSEED"
+				       "\023RDSEED"
 				       /* ADCX + ADOX */
-				       "\023ADX"
+				       "\024ADX"
 				       /* Supervisor Mode Access Prevention */
-				       "\024SMAP"
+				       "\025SMAP"
+				       "\026<b21>"
+				       "\027<b22>"
+				       "\030<b23>"
+				       "\031<b24>"
+				       "\032<b25>"
+				       "\033<b26>"
+				       "\034<b27>"
+				       "\035<b28>"
+				       "\036<b29>"
+				       "\037<b30>"
+				       "\040<b31>"
 				       );
 			}
 
@@ -545,6 +564,9 @@ identify_cpu(void)
 		if (cpu_feature2 & CPUID2_HV) {
 			cpu_stdext_disable = CPUID_STDEXT_FSGSBASE |
 			    CPUID_STDEXT_SMEP;
+#ifdef INTEL_SMAP
+			cpu_stdext_disable |= CPUID_STDEXT_SMAP;
+#endif
 		} else
 			cpu_stdext_disable = 0;
 		TUNABLE_INT_FETCH("hw.cpu_stdext_disable", &cpu_stdext_disable);
diff --git a/sys/amd64/amd64/initcpu.c b/sys/amd64/amd64/initcpu.c
index 4abed4c..fbfa7c3 100644
--- a/sys/amd64/amd64/initcpu.c
+++ b/sys/amd64/amd64/initcpu.c
@@ -165,13 +165,17 @@ initializecpu(void)
 		cr4 |= CR4_FSGSBASE;
 
 	/*
-	 * Postpone enabling the SMEP on the boot CPU until the page
-	 * tables are switched from the boot loader identity mapping
-	 * to the kernel tables.  The boot loader enables the U bit in
-	 * its tables.
+	 * Postpone enabling the SMEP and the SMAP on the boot CPU until
+	 * the page tables are switched from the boot loader identity
+	 * mapping to the kernel tables.
+	 * The boot loader enables the U bit in its tables.
 	 */
 	if (!IS_BSP() && (cpu_stdext_feature & CPUID_STDEXT_SMEP))
 		cr4 |= CR4_SMEP;
+#ifdef INTEL_SMAP
+	if (!IS_BSP() && (cpu_stdext_feature & CPUID_STDEXT_SMAP))
+		cr4 |= CR4_SMAP;
+#endif
 	load_cr4(cr4);
 	if ((amd_feature & AMDID_NX) != 0) {
 		msr = rdmsr(MSR_EFER) | EFER_NXE;
diff --git a/sys/amd64/amd64/pmap.c b/sys/amd64/amd64/pmap.c
index 1b1c86c..11e560d 100644
--- a/sys/amd64/amd64/pmap.c
+++ b/sys/amd64/amd64/pmap.c
@@ -98,6 +98,7 @@ __FBSDID("$FreeBSD$");
  *	and to when physical maps must be made correct.
  */
 
+#include "opt_cpu.h"
 #include "opt_pmap.h"
 #include "opt_vm.h"
 
@@ -665,6 +666,18 @@ pmap_bootstrap(vm_paddr_t *firstaddr)
 	if (cpu_stdext_feature & CPUID_STDEXT_SMEP)
 		load_cr4(rcr4() | CR4_SMEP);
 
+	if (cpu_stdext_feature & CPUID_STDEXT_SMAP)
+#ifdef INTEL_SMAP
+		load_cr4(rcr4() | CR4_SMAP);
+	else
+		panic("The kernel was compiled with \"options INTEL_SMAP\", "
+		    "but your CPU doesn't support SMAP!\n");
+#else
+		printf("Your CPU supports the SMAP security feature. "
+		    "You should recompile the kernel with "
+		    "\"options INTEL_SMAP\" to use it.\n");
+#endif
+
 	/*
 	 * Initialize the kernel pmap (which is statically allocated).
 	 */
diff --git a/sys/amd64/amd64/support.S b/sys/amd64/amd64/support.S
index 77dbf63..7ad8101 100644
--- a/sys/amd64/amd64/support.S
+++ b/sys/amd64/amd64/support.S
@@ -35,6 +35,7 @@
 #include <machine/asmacros.h>
 #include <machine/intr_machdep.h>
 #include <machine/pmap.h>
+#include <machine/smap_instr.h>
 
 #include "assym.s"
 
@@ -244,12 +245,16 @@ ENTRY(copyout)
 
 	shrq	$3,%rcx
 	cld
+	STAC
 	rep
 	movsq
+	CLAC
 	movb	%dl,%cl
 	andb	$7,%cl
+	STAC
 	rep
 	movsb
+	CLAC
 
 done_copyout:
 	xorl	%eax,%eax
@@ -258,6 +263,11 @@ done_copyout:
 	ret
 
 	ALIGN_TEXT
+/*
+ * note:
+ * When SMAP is enabled, the EFLAGS.AC bit gets cleared before control reaches
+ * the fault handler.
+ */ 
 copyout_fault:
 	movq	PCPU(CURPCB),%rdx
 	movq	$0,PCB_ONFAULT(%rdx)
@@ -290,12 +300,16 @@ ENTRY(copyin)
 	movb	%cl,%al
 	shrq	$3,%rcx				/* copy longword-wise */
 	cld
+	STAC
 	rep
 	movsq
+	CLAC
 	movb	%al,%cl
 	andb	$7,%cl				/* copy remaining bytes */
+	STAC
 	rep
 	movsb
+	CLAC
 
 done_copyin:
 	xorl	%eax,%eax
@@ -304,6 +318,11 @@ done_copyin:
 	ret
 
 	ALIGN_TEXT
+/*
+ * note:
+ * When SMAP is enabled, the EFLAGS.AC bit gets cleared before control reaches
+ * the fault handler.
+ */ 
 copyin_fault:
 	movq	PCPU(CURPCB),%rdx
 	movq	$0,PCB_ONFAULT(%rdx)
@@ -324,10 +343,12 @@ ENTRY(casuword32)
 	ja	fusufault
 
 	movl	%esi,%eax			/* old */
+	STAC
 #ifdef SMP
 	lock
 #endif
 	cmpxchgl %edx,(%rdi)			/* new = %edx */
+	CLAC
 
 	/*
 	 * The old value is in %eax.  If the store succeeded it will be the
@@ -353,10 +374,12 @@ ENTRY(casuword)
 	ja	fusufault
 
 	movq	%rsi,%rax			/* old */
+	STAC
 #ifdef SMP
 	lock
 #endif
 	cmpxchgq %rdx,(%rdi)			/* new = %rdx */
+	CLAC
 
 	/*
 	 * The old value is in %eax.  If the store succeeded it will be the
@@ -385,7 +408,9 @@ ENTRY(fuword)
 	cmpq	%rax,%rdi			/* verify address is valid */
 	ja	fusufault
 
+	STAC
 	movq	(%rdi),%rax
+	CLAC
 	movq	$0,PCB_ONFAULT(%rcx)
 	ret
 END(fuword64)	
@@ -399,7 +424,9 @@ ENTRY(fuword32)
 	cmpq	%rax,%rdi			/* verify address is valid */
 	ja	fusufault
 
+	STAC
 	movl	(%rdi),%eax
+	CLAC
 	movq	$0,PCB_ONFAULT(%rcx)
 	ret
 END(fuword32)
@@ -426,7 +453,9 @@ ENTRY(fuword16)
 	cmpq	%rax,%rdi
 	ja	fusufault
 
+	STAC
 	movzwl	(%rdi),%eax
+	CLAC
 	movq	$0,PCB_ONFAULT(%rcx)
 	ret
 END(fuword16)
@@ -439,12 +468,19 @@ ENTRY(fubyte)
 	cmpq	%rax,%rdi
 	ja	fusufault
 
+	STAC
 	movzbl	(%rdi),%eax
+	CLAC
 	movq	$0,PCB_ONFAULT(%rcx)
 	ret
 END(fubyte)
 
 	ALIGN_TEXT
+/*
+ * note:
+ * When SMAP is enabled, the EFLAGS.AC bit gets cleared before control reaches
+ * the fault handler.
+ */ 
 fusufault:
 	movq	PCPU(CURPCB),%rcx
 	xorl	%eax,%eax
@@ -466,7 +502,9 @@ ENTRY(suword)
 	cmpq	%rax,%rdi			/* verify address validity */
 	ja	fusufault
 
+	STAC
 	movq	%rsi,(%rdi)
+	CLAC
 	xorl	%eax,%eax
 	movq	PCPU(CURPCB),%rcx
 	movq	%rax,PCB_ONFAULT(%rcx)
@@ -482,7 +520,9 @@ ENTRY(suword32)
 	cmpq	%rax,%rdi			/* verify address validity */
 	ja	fusufault
 
+	STAC
 	movl	%esi,(%rdi)
+	CLAC
 	xorl	%eax,%eax
 	movq	PCPU(CURPCB),%rcx
 	movq	%rax,PCB_ONFAULT(%rcx)
@@ -497,7 +537,9 @@ ENTRY(suword16)
 	cmpq	%rax,%rdi			/* verify address validity */
 	ja	fusufault
 
+	STAC
 	movw	%si,(%rdi)
+	CLAC
 	xorl	%eax,%eax
 	movq	PCPU(CURPCB),%rcx		/* restore trashed register */
 	movq	%rax,PCB_ONFAULT(%rcx)
@@ -513,7 +555,9 @@ ENTRY(subyte)
 	ja	fusufault
 
 	movl	%esi,%eax
+	STAC
 	movb	%al,(%rdi)
+	CLAC
 	xorl	%eax,%eax
 	movq	PCPU(CURPCB),%rcx		/* restore trashed register */
 	movq	%rax,PCB_ONFAULT(%rcx)
@@ -555,7 +599,9 @@ ENTRY(copyinstr)
 	decq	%rdx
 	jz	3f
 
+	STAC
 	lodsb
+	CLAC
 	stosb
 	orb	%al,%al
 	jnz	2b
@@ -584,7 +630,9 @@ cpystrflt_x:
 	testq	%r9,%r9
 	jz	1f
 	subq	%rdx,%r8
+	STAC
 	movq	%r8,(%r9)
+	CLAC
 1:
 	ret
 END(copyinstr)
diff --git a/sys/amd64/amd64/trap.c b/sys/amd64/amd64/trap.c
index 6fcca81..d37949e 100644
--- a/sys/amd64/amd64/trap.c
+++ b/sys/amd64/amd64/trap.c
@@ -127,6 +127,9 @@ void dblfault_handler(struct trapframe *frame);
 
 static int trap_pfault(struct trapframe *, int);
 static void trap_fatal(struct trapframe *, vm_offset_t);
+#ifdef INTEL_SMAP
+static bool smap_access_violation(struct trapframe *, int usermode);
+#endif
 
 #define MAX_TRAP_MSG		33
 static char *trap_msg[] = {
@@ -718,6 +721,13 @@ trap_pfault(frame, usermode)
 
 		map = &vm->vm_map;
 
+#ifdef INTEL_SMAP
+		if (__predict_false(smap_access_violation(frame, usermode))) {
+			trap_fatal(frame, eva);
+			return (-1);
+		}
+#endif
+
 		/*
 		 * When accessing a usermode address, kernel must be
 		 * ready to accept the page fault, and provide a
@@ -874,6 +884,20 @@ trap_fatal(frame, eva)
 		panic("unknown/reserved trap");
 }
 
+#ifdef INTEL_SMAP
+static bool
+smap_access_violation(struct trapframe *frame, int usermode)
+{
+	if ((cpu_stdext_feature & CPUID_STDEXT_SMAP) == 0)
+		return (false);
+
+	if (usermode || (frame->tf_rflags & PSL_AC) != 0)
+		return (false);
+
+	return (true);
+}
+#endif
+
 /*
  * Double fault handler. Called when a fault occurs while writing
  * a frame for a trap/exception onto the stack. This usually occurs
diff --git a/sys/amd64/ia32/ia32_exception.S b/sys/amd64/ia32/ia32_exception.S
index fe1a676..9f13f2f 100644
--- a/sys/amd64/ia32/ia32_exception.S
+++ b/sys/amd64/ia32/ia32_exception.S
@@ -68,6 +68,7 @@ IDTVEC(int0x80_syscall)
 	movq	%r15,TF_R15(%rsp)
 	movl	$TF_HASSEGS,TF_FLAGS(%rsp)
 	cld
+	CLAC
 	FAKE_MCOUNT(TF_RIP(%rsp))
 	movq	%rsp, %rdi
 	call	ia32_syscall
diff --git a/sys/amd64/include/asmacros.h b/sys/amd64/include/asmacros.h
index 1fb592a..c985623 100644
--- a/sys/amd64/include/asmacros.h
+++ b/sys/amd64/include/asmacros.h
@@ -167,7 +167,8 @@
 	movw	%es,TF_ES(%rsp) ;					\
 	movw	%ds,TF_DS(%rsp) ;					\
 	movl	$TF_HASSEGS,TF_FLAGS(%rsp) ;				\
-	cld
+	cld ;								\
+	CLAC
 
 #define POP_FRAME							\
 	movq	TF_RDI(%rsp),%rdi ;					\
diff --git a/sys/amd64/include/cpufunc.h b/sys/amd64/include/cpufunc.h
index 881fcd2..53b2ce8 100644
--- a/sys/amd64/include/cpufunc.h
+++ b/sys/amd64/include/cpufunc.h
@@ -39,10 +39,16 @@
 #ifndef _MACHINE_CPUFUNC_H_
 #define	_MACHINE_CPUFUNC_H_
 
+#include "opt_cpu.h"
+
 #ifndef _SYS_CDEFS_H_
 #error this file needs sys/cdefs.h as a prerequisite
 #endif
 
+#ifdef INTEL_SMAP
+#include <machine/smap_instr.h>
+#endif
+
 struct region_descriptor;
 
 #define readb(va)	(*(volatile uint8_t *) (va))
@@ -711,11 +717,31 @@ intr_restore(register_t rflags)
 	write_rflags(rflags);
 }
 
+/*
+ * Intel SMAP related functions (clac and stac)
+ */
+static __inline void
+clac(void)
+{
+#ifdef INTEL_SMAP
+	__asm __volatile(__STRING(CLAC) : : : "memory");
+#endif
+}
+
+static __inline void
+stac(void)
+{
+#ifdef INTEL_SMAP
+	__asm __volatile(__STRING(STAC) : : : "memory");
+#endif
+}
+
 #else /* !(__GNUCLIKE_ASM && __CC_SUPPORTS___INLINE) */
 
 int	breakpoint(void);
 u_int	bsfl(u_int mask);
 u_int	bsrl(u_int mask);
+void	clac(void);
 void	clflush(u_long addr);
 void	clts(void);
 void	cpuid_count(u_int ax, u_int cx, u_int *p);
@@ -775,6 +801,7 @@ uint64_t rdtsc(void);
 u_long	read_rflags(void);
 u_int	rfs(void);
 u_int	rgs(void);
+void	stac(void);
 void	wbinvd(void);
 void	write_rflags(u_int rf);
 void	wrmsr(u_int msr, uint64_t newval);
diff --git a/sys/amd64/include/smap_instr.h b/sys/amd64/include/smap_instr.h
new file mode 100644
index 0000000..77926aa
--- /dev/null
+++ b/sys/amd64/include/smap_instr.h
@@ -0,0 +1,14 @@
+#ifndef	__SMAP_INSTRUCTION_H
+#define	__SMAP_INSTRUCTION_H
+
+#include "opt_cpu.h"
+
+#ifdef INTEL_SMAP
+#define	CLAC	.byte 0x0f,0x01,0xca
+#define	STAC	.byte 0x0f,0x01,0xcb
+#else
+#define	CLAC
+#define	STAC
+#endif
+
+#endif	/* __SMAP_INSTRUCTION_H */
diff --git a/sys/conf/NOTES b/sys/conf/NOTES
index 48dba77..af1cf71 100644
--- a/sys/conf/NOTES
+++ b/sys/conf/NOTES
@@ -2963,3 +2963,7 @@ options 	RCTL
 options 	BROOKTREE_ALLOC_PAGES=(217*4+1)
 options 	MAXFILES=999
 
+# Intel SMAP
+# This option is supported on Haswell and newer CPUs (June 2013 and later) and
+# makes the kernel unbootable on older CPUs.
+options 	INTEL_SMAP	# Intel's hw version of PaX UDEREF
diff --git a/sys/conf/options.amd64 b/sys/conf/options.amd64
index 90348b7..b861439 100644
--- a/sys/conf/options.amd64
+++ b/sys/conf/options.amd64
@@ -72,3 +72,6 @@ ISCI_LOGGING	opt_isci.h
 # hw random number generators for random(4)
 PADLOCK_RNG		opt_cpu.h
 RDRAND_RNG		opt_cpu.h
+
+# Intel Supervisor Mode Access Prevention (SMAP)
+INTEL_SMAP		opt_cpu.h
diff --git a/sys/x86/include/psl.h b/sys/x86/include/psl.h
index 12d05c5..ce97a26 100644
--- a/sys/x86/include/psl.h
+++ b/sys/x86/include/psl.h
@@ -52,7 +52,7 @@
 #define	PSL_NT		0x00004000	/* nested task bit */
 #define	PSL_RF		0x00010000	/* resume flag bit */
 #define	PSL_VM		0x00020000	/* virtual 8086 mode bit */
-#define	PSL_AC		0x00040000	/* alignment checking */
+#define	PSL_AC		0x00040000	/* alignment checking or SMAP status */
 #define	PSL_VIF		0x00080000	/* virtual interrupt enable */
 #define	PSL_VIP		0x00100000	/* virtual interrupt pending */
 #define	PSL_ID		0x00200000	/* identification bit */
diff --git a/sys/x86/include/specialreg.h b/sys/x86/include/specialreg.h
index bf1333f..6bffd43 100644
--- a/sys/x86/include/specialreg.h
+++ b/sys/x86/include/specialreg.h
@@ -73,6 +73,7 @@
 #define	CR4_PCIDE 0x00020000	/* Enable Context ID */
 #define	CR4_XSAVE 0x00040000	/* XSETBV/XGETBV */
 #define	CR4_SMEP 0x00100000	/* Supervisor-Mode Execution Prevention */
+#define	CR4_SMAP 0x00200000	/* Supervisor-Mode Access Prevention */
 
 /*
  * Bits in AMD64 special registers.  EFER is 64 bits wide.
-- 
1.8.2.2



>Release-Note:
>Audit-Trail:
>Unformatted:


