Date: Thu, 4 Jul 2002 22:06:06 -0400 (EDT)
From: Jeff Roberson <jroberson@chesapeake.net>
To: Julian Elischer
Cc: FreeBSD current users
Subject: Re: another UMA question.
Message-ID: <20020704182736.Y25604-100000@mail.chesapeake.net>

On Thu, 4 Jul 2002, Julian Elischer wrote:

> So I'm using UMA to store threads; however, UMA seems to be too eager
> to create new threads. For example (my own version has more
> instrumentation):
>
> ref4# sysctl kern.threads
> kern.threads.active: 71      <- number currently attached to processes
> kern.threads.cached: 76      <- number in the UMA pool
> kern.threads.allocated: 147  <- number presently allocated + cached
> kern.threads.freed: 0        <- number of times fini() called
> kern.threads.total: 147      <- number of times init() called
> kern.threads.max: 79         <- highest ever value of 'active'
>
> Given that the threads each have an 8k stack attached, this means
> that there are 68 x 8k stacks that will never be used (557056 bytes
> of wasted RAM). How would I go about 'tuning' this?

This is probably due to the per-CPU buckets. Even if this is on a
uniprocessor machine, there is a single bucket that caches a non-fixed
number of items. I have a somewhat lame mechanism for limiting this
right now: initially I won't cache more than a slab's worth of items in
a bucket, although this can grow. The problem is that UMA doesn't know
that there is a lot of other memory associated with these items, so it
shouldn't cache so many; UMA thinks it's only caching one page per
bucket. I definitely need a better way to do the bucket allocation and
sizing.

> Also: after a while UMA starts freeing and then reallocating these,
> e.g.:
>
> ref4# sysctl kern.threads
> kern.threads.active: 63
> kern.threads.cached: 147
> kern.threads.allocated: 210
> kern.threads.freed: 231
> kern.threads.total: 441
> kern.threads.max: 84
>
> It is wasteful to allocate and deallocate (with all the work involved)
> 231 threads and stacks for no reason. (After it freed them, it pretty
> quickly reallocated them; as you see, there are 147 presently cached,
> representing 1.2MB of RAM.)

There is an algorithm that tries to calculate demand over the last N
seconds (20 right now, I believe). This is intended to prevent us from
freeing slabs that will be needed immediately. I think this could be
revisited now that we have a zone that actually exhibits behavior
interesting enough to study. Please remember, though, that freeing all
of that memory now may save disk I/O later, so it is worth whatever
startup cost is necessary to save a page even for a few seconds.
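To make the per-CPU bucket mechanism from the first answer concrete,
here is a minimal sketch of the free fast path being described. The
names are illustrative only, not the actual sys/vm/uma_core.c code:

    #define BUCKET_MAX  128     /* arbitrary cap for this sketch */

    struct uma_bucket {
        int   ub_cnt;       /* items currently in the bucket */
        int   ub_entries;   /* capacity: starts at one slab's worth */
        void *ub_items[BUCKET_MAX];
    };

    struct uma_cache {      /* one of these per CPU */
        struct uma_bucket *uc_freebucket;
    };

    /* Stand-in for the locked slow path that returns an item to its slab. */
    static void
    slab_free_item(void *item)
    {
        (void)item;
    }

    static void
    zone_free_item(struct uma_cache *cache, void *item)
    {
        struct uma_bucket *bucket = cache->uc_freebucket;

        /*
         * Fast path: park the freed item in this CPU's bucket.  The
         * limit counts items, not bytes, so a thread zone whose items
         * each reference an 8k stack pins far more memory than the
         * one page per bucket that UMA accounts for.
         */
        if (bucket != NULL && bucket->ub_cnt < bucket->ub_entries) {
            bucket->ub_items[bucket->ub_cnt++] = item;
            return;
        }

        /* Slow path: lock the zone and return the item to its slab. */
        slab_free_item(item);
    }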
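The demand calculation in the second answer might look roughly like the
following. This is one plausible shape only, with hypothetical names;
the real logic runs from a periodic timeout inside UMA:

    #define WSS_PERIOD  20  /* seconds of history, as mentioned above */

    struct zone_demand {
        int zd_hwm;     /* most items live during the current period */
        int zd_wss;     /* working-set estimate from the last period */
    };

    /* Called on every allocation so the high-water mark tracks demand. */
    static void
    zone_demand_record(struct zone_demand *zd, int items_live)
    {
        if (items_live > zd->zd_hwm)
            zd->zd_hwm = items_live;
    }

    /* Run once per WSS_PERIOD seconds from a timeout. */
    static void
    zone_demand_roll(struct zone_demand *zd)
    {
        zd->zd_wss = zd->zd_hwm;
        zd->zd_hwm = 0;
    }

    /*
     * Free a slab only if the cache would still cover the demand seen
     * in the last WSS_PERIOD seconds.
     */
    static int
    zone_may_reclaim(const struct zone_demand *zd, int items_cached,
        int items_per_slab)
    {
        return (items_cached - items_per_slab >= zd->zd_wss);
    }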
> Can the algorithms be tuned to use a more gentle hysteresis?
> Can the high and low watermarks be specified per type?
>
> Is there a chance you can add a uma_zadjust() or something that
> allows us to set the cache high and low watermarks etc.?
>
> For example, I really don't want it to start allocating new threads
> until I have maybe only 12 or so left in the cache. On the other
> hand, I probably want to free them from the cache if I have more
> than, say, 40. This is why I originally used my own cache.
>
> Eventually I would like to be able to adjust the zone parameters
> according to recent history. I would like to calculate a running
> average and variance of thread usage and aim to keep the caches
> adjusted for AVERAGE + 3 x standard deviations or something. This
> suggests that I should be able to register another management method
> with UMA for that zone...
>
> Thoughts?

I'd like to avoid any static thresholds and so on. One of the reasons I
picked the Solaris slab design as a starting point was to avoid any
static configuration. I think UMA can be aware enough of allocation
history and VM page needs to make good decisions. I agree that it
probably needs some tuning, though, especially the per-CPU buckets.

Jeff
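To illustrate the policy Julian sketches above: neither uma_zadjust()
nor these statistics exist in UMA, so every name below is hypothetical,
and it is written as userland C for clarity (kernel code would need
fixed-point arithmetic rather than floating point):

    #include <math.h>

    /* Hypothetical per-zone cache watermarks. */
    struct zone_tune {
        int zt_lowat;   /* refill from VM only below this many cached */
        int zt_hiwat;   /* release items back to VM above this many */
    };

    /* The proposed knob; a stub here, since no such interface exists. */
    static void
    uma_zadjust(void *zone, const struct zone_tune *zt)
    {
        (void)zone;     /* would install the watermarks on the zone */
        (void)zt;
    }

    /*
     * Track an exponentially weighted running mean and variance of
     * thread usage, then aim the high watermark at mean + 3 standard
     * deviations, as suggested in the message above.
     */
    static void
    zone_retune(void *zone, int active)
    {
        static double mean, var;
        const double alpha = 0.1;   /* smoothing factor */
        double diff = active - mean;
        struct zone_tune zt;

        mean += alpha * diff;
        var = (1.0 - alpha) * (var + alpha * diff * diff);

        zt.zt_lowat = 12;           /* the example number above */
        zt.zt_hiwat = (int)(mean + 3.0 * sqrt(var));
        uma_zadjust(zone, &zt);
    }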