From: Peter Wemm <peter@wemm.org>
To: Eitan Adler
Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org, Alfred Perlstein, src-committers@freebsd.org
Date: Sat, 10 Nov 2012 10:04:09 -0800
Subject: Re: svn commit: r242847 - in head/sys: i386/include kern

On Sat, Nov 10, 2012 at 9:48 AM, Eitan Adler wrote:
> On 10 November 2012 12:45, Peter Wemm wrote:
>> On Sat, Nov 10, 2012 at 9:33 AM, Eitan Adler wrote:
>>> On 10 November 2012 12:04, Alfred Perlstein wrote:
>>>> Sure, if you'd like you can help me craft that comment now?
>>>
>>> I think this is short and clear:
>>> ===
>>> Limit the amount of kernel address space used to a fixed cap.
>>> 384 is an arbitrarily chosen value that leaves 270 MB of KVA available
>>> of the 2 MB total. On systems with large amounts of memory, reduce
>>> the slope of the function in order to avoid exhausting KVA.
>>> ===
>>
>> That's actually completely 100% incorrect...
>
> okay. I'm going by the log messages posted so far. I have no idea how
> this works. Can you explain it better?

That's exactly my point..
You get 1 maxuser per 2MB of physical ram.  If you get more than 384
maxusers (ie: 192GB of ram) we scale it differently for the part past
192GB.  I have no idea how the hell to calculate that.

You get an unlimited number of regular mbufs.  You get 64 clusters per
maxuser (128k).  Unless I fubared the numbers, this currently works
out to be 6%, or 1/16.

Each MD backend gets to provide a cap for maxusers, which is in units
of 2MB.  For an i386 PAE machine you have a finite amount of KVA space
(1GB, but this is adjustable.. you can easily configure it for 3GB kva
with one compile option for the kernel).  The backends where
nmbclusters comes out of KVA should calculate the number of 2MB units
to avoid running out of KVA.

amd64 does a mixture of direct map and kva allocations, eg: mbufs and
clusters come from the direct map, the jumbo clusters come from kva.
So the side effects of nmbclusters for amd64 are more complicated:

 - 1/2 of nmbclusters (which are in physical ram) are allocated as
   jumbo frames (kva)
 - 1/4 of nmbclusters (physical) are 9k jumbo frames (kva)
 - 1/8 of nmbclusters (physical) are used to set the 16k kva-backed
   jumbo frame pool.

amd64 kva is "large enough" now, but my recollection is that sparc64
has a small kva plus a large direct map.  Tuning for amd64 isn't
relevant for sparc64.  mips has a direct map, but doesn't have a
"large" direct map, nor a "large" kva.

This is complicated, but we need a simple user-visible view of it.  It
really needs to be something like "nmbclusters defaults to 6% of
physical ram, with machine dependent limits".  The MD limits are bad
enough, and using bogo-units like "maxusers" just makes it worse.

-- 
Peter Wemm - peter@wemm.org; peter@FreeBSD.org; peter@yahoo-inc.com; KI6FJV
"All of this is for nothing if we don't go to the stars" - JMS/B5
"If Java had true garbage collection, most programs would delete
themselves upon execution." -- Robert Sewell
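PS: for what it's worth, here is a back-of-the-envelope sketch of that
arithmetic in plain C.  It is only my reading of the numbers described
above (1 maxuser per 2MB of ram, 64 2KB clusters per maxuser, and the
1/2 / 1/4 / 1/8 jumbo fractions), not the actual sys/kern/subr_param.c
code; the 16GB example figure is made up, and the different slope past
384 maxusers is deliberately ignored:

/*
 * Rough sketch only -- not the real kernel tuning code.  Constants
 * follow the description in this mail: 1 maxuser per 2MB of RAM,
 * 64 2KB clusters per maxuser, jumbo pools at 1/2, 1/4 and 1/8 of
 * nmbclusters.  The variable names mirror the kernel's for readability.
 */
#include <stdio.h>

int
main(void)
{
	unsigned long long physmem = 16ULL << 30;	/* assumed example: 16GB */

	unsigned long long maxusers = physmem / (2ULL << 20);	/* 1 per 2MB */
	unsigned long long nmbclusters = maxusers * 64;		/* 2KB clusters */

	/* 128KB of clusters per 2MB of RAM == 1/16, i.e. roughly 6%. */
	unsigned long long cluster_bytes = nmbclusters * 2048ULL;

	unsigned long long nmbjumbop = nmbclusters / 2;	/* jumbo frames (kva) */
	unsigned long long nmbjumbo9 = nmbclusters / 4;	/* 9k jumbo frames */
	unsigned long long nmbjumbo16 = nmbclusters / 8;	/* 16k jumbo frames */

	printf("maxusers:    %llu\n", maxusers);
	printf("nmbclusters: %llu (%llu MB, ~%.1f%% of RAM)\n", nmbclusters,
	    cluster_bytes >> 20, 100.0 * cluster_bytes / physmem);
	printf("jumbo pools: %llu / %llu / %llu\n",
	    nmbjumbop, nmbjumbo9, nmbjumbo16);
	return (0);
}

For the 16GB example that prints maxusers 8192 and nmbclusters 524288,
i.e. about 1GB of physical ram backing 2KB clusters, which is where the
"6%, or 1/16" above comes from.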