Date:      Fri, 18 Feb 2000 19:30:17 -0800 (PST)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Jason Evans <jasone@canonware.com>
Cc:        current@FreeBSD.ORG
Subject:   Re: tentative complete patch for MAP_GUARDED available
Message-ID:  <200002190330.TAA90856@apollo.backplane.com>
References:  <20000218122554.A2978@nonpc.cs.rice.edu> <200002181905.LAA80267@apollo.backplane.com> <20000218190256.E28177@sturm.canonware.com>


:In general, a given number of guard pages is insufficient for some (perhaps
:non-existent) applications.  The basic idea is to catch typical stack
:overflow.  Trying to always catch stack overflow is not practical.  Since
:this is a heuristic error detection technique, I'm not sure how much
:work/complexity it's worth to parameterize the number of guard pages for
:each mapping.
:
:Jason

    I think it's important.  Pthreads has calls that let the user set
    the stack size.  At the moment our uthreads library malloc()s the
    stack in that case.  Eventually we want to rewrite the code to use
    mmap() for all the stack types and then track each stack's size on
    the free list so it can be reused.
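
    (Side note, not part of the patch:  the stack-size knobs referred
    to above are the standard pthread_attr_setstacksize() and
    pthread_attr_setstackaddr() calls.  A caller asking for a
    non-default stack looks roughly like this:)

#include <pthread.h>
#include <stdio.h>

static void *
worker(void *arg)
{
	(void)arg;
	printf("running on a non-default stack\n");
	return (NULL);
}

int
main(void)
{
	pthread_attr_t attr;
	pthread_t td;

	pthread_attr_init(&attr);
	/* Ask for a 256K stack instead of the 64K default. */
	pthread_attr_setstacksize(&attr, 256 * 1024);
	pthread_create(&td, &attr, worker, NULL);
	pthread_join(td, NULL);
	pthread_attr_destroy(&attr);
	return (0);
}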

    I think the following patch, combined with guard-3.diff, optimizes
    our threads library for default thread stacks.  I did a very simple
    test and it appears to work:  with vm.map_entry_blk_opt set to
    524288 (8 x 64K = 512K = 524288 bytes), only one vm_map_entry is
    created for every eight 64K pthread stacks.
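
    (Illustration only:  vm.map_entry_blk_opt is the sysctl added by
    guard-3.diff, so it only exists with that patch applied.  It can be
    set with "sysctl -w vm.map_entry_blk_opt=524288" as root, or from a
    program roughly like the one below; I am assuming the tunable is an
    int:)

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int opt = 524288;	/* 8 x 64K default pthread stacks */

	/* Fails if the guard-3.diff sysctl is not present. */
	if (sysctlbyname("vm.map_entry_blk_opt", NULL, NULL,
	    &opt, sizeof(opt)) == -1) {
		perror("vm.map_entry_blk_opt");
		return (1);
	}
	return (0);
}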

    The native port of the Linux threads library can probably be patched
    fairly easily.

    The real Linux threads library (/usr/compat/linux/lib/libpthread)
    mmap()s its guard pages separately, so those mappings cannot be
    coalesced.  We would need a special FreeBSD build of the library to
    get the VM map optimizations.

    It may be possible to eventually get rid of vm.map_entry_blk_opt
    entirely, but that requires more work than I have time for right now
    (to do it right we basically have to support negative vm_object
    offsets so that an anonymous memory object can be grown downward).


					-Matt
					Matthew Dillon 
					<dillon@backplane.com>

Index: uthread/uthread_create.c
===================================================================
RCS file: /home/ncvs/src/lib/libc_r/uthread/uthread_create.c,v
retrieving revision 1.24
diff -u -r1.24 uthread_create.c
--- uthread/uthread_create.c	2000/01/19 07:04:46	1.24
+++ uthread/uthread_create.c	2000/02/19 03:04:01
@@ -137,12 +137,23 @@
 					PANIC("Cannot unlock gc mutex");
 
 				/* Stack: */
+#if	defined(__FreeBSD__)
+				if (mmap((char *)stack - PTHREAD_STACK_GUARD, 
+				    PTHREAD_STACK_DEFAULT + PTHREAD_STACK_GUARD,
+				    PROT_READ | PROT_WRITE, 
+				    MAP_STACK | MAP_GUARDED,
+				    -1, PTHREAD_STACK_GUARD) == MAP_FAILED) {
+					ret = EAGAIN;
+					free(new_thread);
+				}
+#else
 				if (mmap(stack, PTHREAD_STACK_DEFAULT,
 				    PROT_READ | PROT_WRITE, MAP_STACK,
 				    -1, 0) == MAP_FAILED) {
 					ret = EAGAIN;
 					free(new_thread);
 				}
+#endif
 			}
 		}
 		/*
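
    For comparison, roughly the same effect can be had today, without
    MAP_GUARDED, by over-allocating the region and revoking access to
    the low pages with mprotect().  A sketch only, with illustrative
    sizes and a made-up helper name; note that the mprotect() splits the
    mapping into separate vm_map_entries, one per stack plus one per
    guard, which is exactly the per-stack overhead the MAP_GUARDED
    approach avoids:

#include <sys/types.h>
#include <sys/mman.h>
#include <stddef.h>

#define	STACK_SIZE	(64 * 1024)	/* matches PTHREAD_STACK_DEFAULT */
#define	GUARD_SIZE	(4 * 1024)	/* one guard page */

/*
 * Map a thread stack with an inaccessible guard region below it, so
 * that running off the end of the stack faults instead of silently
 * scribbling on whatever is mapped underneath.
 */
void *
alloc_stack(void)
{
	char *base;

	base = mmap(NULL, STACK_SIZE + GUARD_SIZE, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (base == MAP_FAILED)
		return (NULL);
	/* Revoke access to the low pages so overflow traps. */
	if (mprotect(base, GUARD_SIZE, PROT_NONE) == -1) {
		(void)munmap(base, STACK_SIZE + GUARD_SIZE);
		return (NULL);
	}
	return (base + GUARD_SIZE);	/* usable stack begins above the guard */
}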

