From owner-freebsd-current@FreeBSD.ORG Fri Jun 17 10:00:25 2005
X-Original-To: freebsd-current@freebsd.org
Delivered-To: freebsd-current@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id 2E8B816A41C;
	Fri, 17 Jun 2005 10:00:25 +0000 (GMT)
	(envelope-from rwatson@FreeBSD.org)
Received: from cyrus.watson.org (cyrus.watson.org [204.156.12.53])
	by mx1.FreeBSD.org (Postfix) with ESMTP id F34E243D4C;
	Fri, 17 Jun 2005 10:00:24 +0000 (GMT)
	(envelope-from rwatson@FreeBSD.org)
Received: from fledge.watson.org (fledge.watson.org [204.156.12.50])
	by cyrus.watson.org (Postfix) with ESMTP id 0126546B3B;
	Fri, 17 Jun 2005 06:00:24 -0400 (EDT)
Date: Fri, 17 Jun 2005 11:02:43 +0100 (BST)
From: Robert Watson
X-X-Sender: robert@fledge.watson.org
To: Alexander Leidinger
In-Reply-To: <20050617113729.i78gx3wiokw48g8k@netchild.homeip.net>
Message-ID: <20050617110050.O56734@fledge.watson.org>
References: <42B18536.3080200@videotron.ca>
	<20050616151502.X27625@fledge.watson.org>
	<42B192D2.7000505@videotron.ca>
	<20050616181820.E27625@fledge.watson.org>
	<42B1B784.8010405@videotron.ca>
	<20050616184127.L27625@fledge.watson.org>
	<20050617113729.i78gx3wiokw48g8k@netchild.homeip.net>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Cc: alc@freebsd.org, freebsd-current@freebsd.org
Subject: Re: Reboot while booting with new per-CPU allocator
X-BeenThere: freebsd-current@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Discussions about the use of FreeBSD-current
X-List-Received-Date: Fri, 17 Jun 2005 10:00:25 -0000

On Fri, 17 Jun 2005, Alexander Leidinger wrote:

> Robert Watson wrote:
>
>> Looks like what basically happened is that these kern_malloc.c changes
>> increase the memory burden on UMA, since the statistics structures for
>> malloc types are now allocated from UMA.  It looks like, from your
>> dmesg, you have a fair number of modules loaded, so the storage for the
>> statistics comes out of the early UMA boot page pool, whereas before it
>> came out of BSS.  We'll see whether further tuning is required with
>> large numbers of modules.
>
> I try to load as many things as possible as modules.  Can you quantify
> "large number of modules"?  I could load some more modules for testing
> purposes at the weekend.

Well, it looked like 30 was enough to exceed the 40-page UMA threshold, but
that threshold has now been bumped to 48 in HEAD.  However, what actually
matters is the number of malloc types, not the number of modules, so I
think two routes would be productive: add a debugging printf to UMA that
reports how much of the boot page space is in use at the point it
transitions to non-boot pages, and try creating a module that registers
varying numbers of malloc types (a rough sketch of the latter is appended
below).

Robert N M Watson
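P.S. For the second route, here is the sort of throwaway module I have in
mind.  This is an untested sketch, everything in it is illustrative (no
such module exists in the tree), and the idea is simply to duplicate the
MALLOC_DEFINE lines, by hand or with a script, to vary how many malloc
types get registered when the module is preloaded from the loader:

	/*
	 * mtypetest.c -- illustrative sketch only, not in the tree.
	 * Defines a batch of malloc(9) types so that loading the module
	 * from the loader puts extra pressure on the UMA boot pages.
	 */
	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/errno.h>
	#include <sys/kernel.h>
	#include <sys/module.h>
	#include <sys/malloc.h>

	MALLOC_DEFINE(M_MTYPETEST0, "mtypetest0", "malloc type pressure test 0");
	MALLOC_DEFINE(M_MTYPETEST1, "mtypetest1", "malloc type pressure test 1");
	MALLOC_DEFINE(M_MTYPETEST2, "mtypetest2", "malloc type pressure test 2");
	/* ...repeat for however many types we want to test with... */

	static int
	mtypetest_modevent(module_t mod, int type, void *arg)
	{

		switch (type) {
		case MOD_LOAD:
		case MOD_UNLOAD:
			/* Type registration itself is handled by MALLOC_DEFINE. */
			return (0);
		default:
			return (EOPNOTSUPP);
		}
	}

	static moduledata_t mtypetest_mod = {
		"mtypetest",
		mtypetest_modevent,
		NULL
	};

	DECLARE_MODULE(mtypetest, mtypetest_mod, SI_SUB_DRIVERS, SI_ORDER_ANY);

Loading a few instances of something like this from loader.conf should make
it easy to see at what point the boot page pool runs out, without having to
juggle real drivers.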