From: Sven Willenberger <sven@dmv.com>
Date: Wed, 29 Jun 2005 23:46:07 -0400
To: Tom Lane
Cc: Vivek Khera, stable@freebsd.org, postgres general
Subject: Re: [GENERAL] PostgreSQL's vacuumdb fails to allocate memory for
Message-ID: <42C36AFF.6000405@dmv.com>
In-Reply-To: <165.1120086739@sss.pgh.pa.us>

Tom Lane presumably uttered the following on 06/29/05 19:12:
> Sven Willenberger writes:
>
>> I have found the answer/problem. On a hunch I increased maxdsiz to
>> 1.5G in the loader.conf file and rebooted. I ran vacuumdb and watched
>> top as the process proceeded. What I saw was SIZE sitting at 603MB
>> (which was 512MB plus another 91MB, corresponding nicely to the value
>> of RES for the process). A bit into the process I saw SIZE jump to
>> 1115 -- i.e. another 512MB of RAM was requested and this time
>> allocated. At one point SIZE dropped back to 603 and then back up to
>> 1115. I suspect the same type of issue was occurring with a regular
>> VACUUM run from the psql client connecting to the backend, though for
>> some reason not as frequently. I gather that maintenance work mem is
>> either not being recognized as having already been allocated, so
>> another malloc is made, or the process thinks the memory was released
>> and tries to grab a chunk of memory again.
>
> Hmm. It's probably a fragmentation issue. VACUUM will allocate a
> maintenance work mem-sized chunk during command startup, but that's
> likely not all that gets allocated, and if any stuff allocated after
> it is not freed at the same time, the process size won't go back down.
> Which wouldn't be a killer in itself, but unless the next iteration
> is able to fit that array in the same space, you'd see the above
> behavior.

So maintenance work mem is not a measure of the maximum that can be
allocated by a maintenance procedure, but rather the increment of memory
requested by a maintenance process (which currently means VACUUM and
index creation, no?), if my reading of the above is correct.
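For reference, the two knobs in play here, roughly as I have them set
(I'm quoting the syntax from memory, so treat this as a sketch rather
than a verified config; the loader tunable should also accept a plain
byte count):

    # /boot/loader.conf -- per-process data segment limit;
    # read at boot, so a reboot is needed for it to take effect
    kern.maxdsiz="1536M"

    # postgresql.conf -- on 8.0 the value is in kB
    maintenance_work_mem = 524288    # ~512MB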
> BTW, do you have any evidence that it's actually useful to set
> maintenance work mem that high for VACUUM? A quick and dirty solution
> would be to bound the dead-tuples array size at something more sane...

I was under the assumption that on systems with RAM to spare, it was
beneficial to set maintenance work mem high to make those processes more
efficient. Again, my thinking was that the value set for that variable
determined a *maximum* allocation by any given maintenance process, not
the size of each memory allocation request. If, as my tests indicate,
the process can request and receive more memory than specified in
maintenance work mem, then to play it safe I imagine I could drop that
value to 256MB or so.

Sven
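P.S. If I do drop it to 256MB, the change would be along these lines
(again only a sketch: the value is in kB on 8.0, and the data directory
path below is just an example -- use whatever your install actually
uses):

    # postgresql.conf
    maintenance_work_mem = 262144    # ~256MB

    # then, as the postgres user, reload the config:
    pg_ctl reload -D /usr/local/pgsql/data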