Date:      Wed, 27 Feb 2002 14:46:42 -0500
From:      Bosko Milekic <bmilekic@unixdaemons.com>
To:        Matthew Dillon <dillon@apollo.backplane.com>
Cc:        Jeff Roberson <jroberson@chesapeake.net>, arch@FreeBSD.ORG
Subject:   Re: Slab allocator
Message-ID:  <20020227144642.A40638@unixdaemons.com>
In-Reply-To: <200202271926.g1RJQCm29905@apollo.backplane.com>; from dillon@apollo.backplane.com on Wed, Feb 27, 2002 at 11:26:12AM -0800
References:  <20020227005915.C17591-100000@mail.chesapeake.net> <200202271926.g1RJQCm29905@apollo.backplane.com>


On Wed, Feb 27, 2002 at 11:26:12AM -0800, Matthew Dillon wrote:
> 
> :...
> :
> :There are also per cpu queues of items, with a per cpu lock.  This allows
> :for very efficient allocation, and also it provides near linear
> :performance as the number of cpus increase.  I do still depend on giant to
> :talk to the back end page supplier (kmem_alloc, etc.).  Once the VM is
> :locked the allocator will not require giant at all.
> :...
> :
> :Since you've read this far, I'll let you know where the patch is. ;-)
> :
> :http://www.chesapeake.net/~jroberson/uma.tar
> :...
> :Any feedback is appreciated.  I'd like to know what people expect from
> :this before it is committable.
> :
> :Jeff
> :
> :PS Sorry for the long winded email. :-)
> 
>     Well, one thing I've noticed right off the bat is that the code
>     is trying to take advantage of per-cpu queues but is still
>     having to obtain a per-cpu mutex to lock the per-cpu queue.

  Yes, that's normal: the per-CPU lock is still needed because a thread
can be preempted while it is manipulating its CPU's queue.

>     Another thing I noticed is that the code appears to assume
>     that PCPU_GET(cpuid) is stable in certain places, and I don't
>     think that condition necessarily holds unless you explicitly
>     enter a critical section (critical_enter() and critical_exit()).
>     There are some cases where you obtain the per-cpu cache and lock 
>     it, which would be safe even if the cpu changed out from under 
>     you, and other cases such as in uma_zalloc_internal() where you 
>     assume that the cpuid is stable when it isn't.

  No, what he does is take PCPU_GET(cpuid) once and save it in a local
variable. If the thread is preempted (unlikely) and migrated to another
CPU, it still uses the old CPU's cache, which it has locked. That's fine
as long as it's done consistently.

>     I also noticed that cache_drain() appears to be the only
>     place where you iterate through the per-cpu mutexes.  All
>     the other places appear to use the current-cpu's mutex.

  That's normal: cache_drain() flushes all of the per-CPU caches, so it
is the one place that has to take every per-CPU lock in turn.

[...]
> 	* That you consider an alternative method for draining
> 	  the per-cpu caches.  For example, by having the
> 	  per-cpu code use a global, shared SX lock along
> 	  with the critical section to access their per-cpu
> 	  caches and then have the cache_drain code obtain
> 	  an exclusive SX lock in order to have full access
> 	  to all of the per-cpu caches.
> 
> 	* Documentation.  i.e. comment the code more, especially
> 	  areas where you have to special-case things like for
> 	  example when you unlock a cpu cache in order to
> 	  call uma_zfree_internal().
> 
> 					-Matt
> 					Matthew Dillon 
> 					<dillon@backplane.com>

-- 
Bosko Milekic
bmilekic@unixdaemons.com
bmilekic@FreeBSD.org





