From: Gezeala M. Bacuño II <gezeala@gmail.com>
To: Alan Cox
Cc: alc@freebsd.org, freebsd-performance@freebsd.org, Andrey Zonov, kib@freebsd.org
Date: Mon, 20 Aug 2012 09:07:12 -0700
Subject: Re: vm.kmem_size_max and vm.kmem_size capped at 329853485875 (~307GB)
On Mon, Aug 20, 2012 at 8:22 AM, Alan Cox wrote:
> On 08/18/2012 19:57, Gezeala M. Bacuño II wrote:
>>
>> On Sat, Aug 18, 2012 at 12:14 PM, Alan Cox wrote:
>>>
>>> On 08/17/2012 17:08, Gezeala M. Bacuño II wrote:
>>>>
>>>> On Fri, Aug 17, 2012 at 1:58 PM, Alan Cox wrote:
>>>>>
>>>>> vm.kmem_size controls the maximum size of the kernel's heap, i.e., the
>>>>> region where the kernel's slab and malloc()-like memory allocators
>>>>> obtain their memory.  While this heap may occupy the largest portion
>>>>> of the kernel's virtual address space, it cannot occupy the entirety
>>>>> of the address space.  There are other things that must be given space
>>>>> within the kernel's address space, for example, the file system buffer
>>>>> map.
>>>>>
>>>>> ZFS does not, however, use the regular file system buffer cache.  The
>>>>> ARC takes its place, and the ARC abuses the kernel's heap like nothing
>>>>> else.  So, if you are running a machine that only makes trivial use of
>>>>> a non-ZFS file system, like you boot from UFS, but store all of your
>>>>> data in ZFS, then you can dramatically reduce the size of the buffer
>>>>> map via boot loader tuneables and proportionately increase
>>>>> vm.kmem_size.
>>>>>
>>>>> Any further increases in the kernel virtual address space size will,
>>>>> however, require code changes.  Small changes, but changes
>>>>> nonetheless.
>>>>>
>>>>> Alan
>>>>>
>>> Your objective should be to reduce the value of "sysctl vfs.maxbufspace".
>>> You can do this by setting the loader.conf tuneable "kern.maxbcache" to
>>> the desired value.
>>>
>>> What does your machine currently report for "sysctl vfs.maxbufspace"?
>>>
>> Here you go:
>> vfs.maxbufspace: 54967025664
>> kern.maxbcache: 0
>
> Try setting kern.maxbcache to two billion and adding 50 billion to the
> setting of vm.kmem_size{,_max}.
>

Thank you. We'll try this and post back results.

>> Other (probably) relevant values:
>> vfs.hirunningspace: 16777216
>> vfs.lorunningspace: 11206656
>> vfs.bufdefragcnt: 0
>> vfs.buffreekvacnt: 2
>> vfs.bufreusecnt: 320149
>> vfs.hibufspace: 54966370304
>> vfs.lobufspace: 54966304768
>> vfs.maxmallocbufspace: 2748318515
>> vfs.bufmallocspace: 0
>> vfs.bufspace: 10490478592
>> vfs.runningbufspace: 0
>>
>> Let me know if you need other tuneables or sysctl values. Thanks a lot
>> for looking into this.
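[Editor's sketch, not part of the original exchange: the suggestion above can be written out as a quick shell calculation. The 329853485875 starting figure is the ~307 GB cap from the subject line, and the two-billion / 50-billion numbers are the ones Alan suggests; the resulting loader.conf lines are illustrative and should be adapted to the machine.]

```shell
#!/bin/sh
# Current vm.kmem_size cap (~307 GB, from the subject line), in bytes:
kmem_size=329853485875

# Suggested cap for the file system buffer cache (shrinks vfs.maxbufspace):
maxbcache=2000000000

# Add the ~50 GB the buffer map gives back to the kernel heap:
new_kmem=$((kmem_size + 50000000000))

# Candidate lines for /boot/loader.conf (take effect at the next boot):
echo "kern.maxbcache=\"${maxbcache}\""
echo "vm.kmem_size=\"${new_kmem}\""
echo "vm.kmem_size_max=\"${new_kmem}\""
```

After a reboot with these tuneables in place, "sysctl vfs.maxbufspace" should report a value near kern.maxbcache rather than the ~55 GB shown above.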