Date:      Thu, 4 Jul 2002 22:06:06 -0400 (EDT)
From:      Jeff Roberson <jroberson@chesapeake.net>
To:        Julian Elischer <julian@elischer.org>
Cc:        FreeBSD current users <current@FreeBSD.ORG>
Subject:   Re: another UMA question.
Message-ID:  <20020704182736.Y25604-100000@mail.chesapeake.net>
In-Reply-To: <Pine.BSF.4.21.0207041307500.6975-100000@InterJet.elischer.org>


On Thu, 4 Jul 2002, Julian Elischer wrote:

>
> So I'm using UMA to store threads; however, UMA seems to be too eager to
> create new threads..
> for example: (my own version has more instrumentation)
>
> ref4# sysctl kern.threads
> kern.threads.active: 71	       <- number currently attached to processes
> kern.threads.cached: 76        <- number in the UMA pool
> kern.threads.allocated: 147    <- number presently allocated + cached
> kern.threads.freed: 0          <- number of times fini() called
> kern.threads.total: 147        <- number of times init() called
> kern.threads.max: 79           <- highest ever value of  'active'
>
>
> Given that the threads each have an 8k stack attached, this means that
> there are 68 x 8k stacks (147 allocated - 79 max active) that will never
> be used.. (557056 bytes of wasted RAM)
> How would I go about 'tuning' this?

This is probably due to the per-CPU buckets.  Even if this is on a
uniprocessor machine, there is a single bucket that caches a non-fixed
number of items.  I have a somewhat lame mechanism for limiting this right
now: initially I won't cache more than a slab's worth of items in a
bucket, although this can grow.  The problem is that UMA doesn't know that
there is a lot of other memory associated with each item, so it doesn't
realize it shouldn't cache so many.  UMA thinks it's only caching one page
per bucket.

I definitely need a better way to do the bucket allocation and sizing.
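
To make the current behavior concrete, here is a minimal userland sketch
of that per-bucket cap (all names are invented for illustration; this is
not the actual uma_core.c code):

#include <stdlib.h>

#define SLAB_SIZE 4096          /* UMA's view: one page of items per slab */

struct bucket {
        void  **items;          /* cached free items */
        int     count;          /* items currently in the bucket */
        int     limit;          /* initial cap: one slab's worth */
};

static struct bucket *
bucket_create(size_t item_size)
{
        struct bucket *b = malloc(sizeof(*b));

        if (b == NULL)
                return (NULL);
        /* Cache at most as many items as fit in a single slab. */
        b->limit = SLAB_SIZE / item_size;
        if (b->limit < 1)
                b->limit = 1;
        b->count = 0;
        b->items = calloc(b->limit, sizeof(void *));
        if (b->items == NULL) {
                free(b);
                return (NULL);
        }
        return (b);
}

/* On free, keep the item only while the bucket has room. */
static int
bucket_cache(struct bucket *b, void *item)
{
        if (b->count >= b->limit)
                return (0);     /* caller must release it to the slab */
        b->items[b->count++] = item;
        return (1);
}

The cap is computed from the item's own size alone, which is exactly the
blind spot described above: a thread item drags an 8k stack along with it,
so each cached item costs far more than the bucket accounting sees.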

>
> also:
>
> After a while UMA starts freeing and then reallocating
> these:
> e.g.
> ref4# sysctl kern.threads
> kern.threads.active: 63
> kern.threads.cached: 147
> kern.threads.allocated: 210
> kern.threads.freed: 231
> kern.threads.total: 441
> kern.threads.max: 84
>
>
> It is wasteful to allocate and deallocate 231 threads and stacks (with
> all the work involved) for no reason.  (After it freed them, it pretty
> quickly reallocated them; as you see, there are 147 presently cached,
> representing 1.2MB of RAM.)

There is an algorithm that tries to calculate demand over the last N
seconds (20 right now, I believe).  This is intended to prevent us from
freeing slabs that will be needed immediately.  I think this could be
revisited now that we have a zone that actually exhibits behavior
interesting enough to study.  Although, please remember that freeing all
of that memory now may save disk I/O later, so it is worth whatever
startup cost is necessary to save a page even for a few seconds.
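
As a rough illustration of that kind of demand check (hypothetical names;
the real logic lives in the zone timeout handler):

#define WSS_INTERVAL 20         /* seconds between trim passes */

struct zone_stats {
        int     cached;         /* items sitting in the free cache */
        int     peak_demand;    /* most items in use this interval */
};

/*
 * Run every WSS_INTERVAL seconds: anything cached beyond the peak
 * demand of the last interval was not needed recently, so it is a
 * candidate to be freed back to the VM system.
 */
static int
zone_trim(struct zone_stats *z)
{
        int excess = z->cached - z->peak_demand;

        z->peak_demand = 0;     /* begin measuring the next interval */
        return (excess > 0 ? excess : 0);
}

The churn you are seeing is what happens when demand spikes right after a
pass like this has freed the cache back down.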

>
> Can the algorithms be tuned to use a gentler hysteresis?
> Can the high and low watermarks be specified per type?
>
> Is there a chance you can add a uma_zadjust() or something that
> allows us to set the cache high and low water marks etc?
>
> For example, I really don't want it to start allocating new threads until
> I have maybe only 12 or so left in the cache. On the other hand
> I probably want to free them from the cache if I have more than say 40..
> This is why I originally used my own cache..
>
> Eventually I would like to be able to adjust the zone parameters
> according to recent history..
> I would like to calculate a running average and variance of thread
> usage and aim to keep the caches adjusted for
> AVERAGE + 3xStandard deviations or something.
> This suggests that I should be able to register another management
> method with UMA for that zone...
>
> thoughts?

I'd like to avoid any static thresholds and so on.  One of the reasons I
picked the Solaris slab design as a starting point was to avoid static
configuration.  I think UMA can be made aware enough of allocation history
and VM page needs to make good decisions.  I agree that it probably needs
some tuning, though, especially the per-CPU buckets.
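
For what it's worth, the AVERAGE + 3 x standard deviations target you
suggest falls out of a few lines of history tracking.  A sketch using an
exponentially weighted mean and variance (invented helper names; not an
existing UMA interface):

#include <math.h>

#define ALPHA 0.1               /* smoothing factor per demand sample */

struct demand_est {
        double  mean;           /* EWMA of the active-item count */
        double  var;            /* EWMA of the squared deviation */
};

static void
demand_sample(struct demand_est *d, double active)
{
        double diff = active - d->mean;

        d->mean += ALPHA * diff;
        d->var = (1.0 - ALPHA) * (d->var + ALPHA * diff * diff);
}

/* Cache size to aim for: mean demand plus three standard deviations. */
static int
demand_target(const struct demand_est *d)
{
        return ((int)(d->mean + 3.0 * sqrt(d->var)));
}

Something along these lines stays adaptive rather than static: the target
rises and falls with measured demand instead of with a hand-picked pair of
watermarks.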

Jeff

