From owner-freebsd-current@FreeBSD.ORG Tue May 19 02:45:03 2009
From: Kip Macy
Date: Mon, 18 May 2009 19:45:02 -0700
To: Ben Kelly
Cc: Adam McDougall, current@freebsd.org, Larry Rosenman
Subject: Re: Fatal trap 12: page fault panic with recent kernel with ZFS

On Mon, May 18, 2009 at 7:34 PM, Ben Kelly wrote:
> On May 18, 2009, at 9:26 PM, Kip Macy wrote:
>>
>> On Mon, May 18, 2009 at 6:22 PM, Adam McDougall wrote:
>>>
>>> On Mon, May 18, 2009 at 07:06:57PM -0500, Larry Rosenman wrote:
>>>
>>>  On Mon, 18 May 2009, Kip Macy wrote:
>>>
>>>  > The ARC cache allocates wired memory. The ARC will grow until
>>>  > there is vm pressure.
>>>
>>>  My crash this AM was with 4G real, and the ARC seemed to grow and
>>>  grow, then we started paging, and then crashed.
>>>
>>>  Even with the VM pressure it seemed to grow out of control.
>>>
>>>  Ideas?
>>>
>>>
>>> Before that, but since r191902, I was having the opposite problem:
>>> my ARC (and thus Wired) would grow up to approx arc_max until my
>>> Inactive memory put pressure on the ARC, making it shrink back down
>>> to ~450M, where some aspects of performance degraded.
>>> A partial workaround was to add an arc_min, which isn't entirely
>>> successful, and I found I could restore ZFS performance by
>>> temporarily squeezing down Inactive memory by allocating a bunch of
>>> it myself; after freeing that, the ARC had no pressure and could
>>> grow towards arc_max again until Inactive eventually rose.
>>> Reported to Kip last night and to some cvs commit lists. I never
>>> did run into Swap.
>>>
>>
>>
>> That is a separate issue. I'm going to try adding a vm_lowmem event
>> handler to drive reclamation instead of the current paging target.
>> That shouldn't cause inactive pages to shrink the ARC.
>
> Isn't there already a vm_lowmem event for the arc that triggers
> reclamation?

You're right, there is. I had asked alc if there was a better way than
using the paging target and he suggested it. I hadn't looked to see if
it was already there because we've had such troubles.

> On the low memory front it seems like the arc needs a way to tell the
> pager to mark some vnodes inactive. I've seen many cases where the
> arc size greatly exceeded the target, but it couldn't evict any
> memory because all its buffers were still referenced. This seems to
> behave a little better with code that increments vm_pageout_deficit
> and signals the pageout daemon when the arc is too far above its
> target. The normal buffer cache seems to do this as well when it's
> low on memory.

Good point. Patches welcome. Otherwise I'll look into it when I get
the chance.

Cheers,
Kip
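
The vm_lowmem event discussed above is delivered through FreeBSD's
EVENTHANDLER(9) interface. What follows is a minimal sketch of how a
consumer such as the ARC can hook it, assuming a FreeBSD 8-era kernel;
the empty handler body and the init/fini wrapper names are
illustrative placeholders, not the actual arc.c code.

/* Sketch only: hooking the vm_lowmem event via EVENTHANDLER(9). */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/eventhandler.h>

static eventhandler_tag arc_event_lowmem;

/*
 * vm_lowmem callback.  The real ZFS handler signals the ARC reclaim
 * thread to start evicting buffers; the body here is a placeholder.
 */
static void
arc_lowmem(void *arg __unused, int howto __unused)
{
        /* Wake the reclaim thread and request eviction here. */
}

/* Hypothetical init hook: register for vm_lowmem notifications. */
static void
arc_lowmem_init(void)
{
        arc_event_lowmem = EVENTHANDLER_REGISTER(vm_lowmem, arc_lowmem,
            NULL, EVENTHANDLER_PRI_FIRST);
}

/* Hypothetical teardown hook: drop the registration. */
static void
arc_lowmem_fini(void)
{
        if (arc_event_lowmem != NULL)
                EVENTHANDLER_DEREGISTER(vm_lowmem, arc_event_lowmem);
}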
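
Likewise, a sketch of the approach Ben describes: when the ARC has
overshot its target, account for the pages wanted via
vm_pageout_deficit and wake the pageout daemon, much as the buffer
cache does when a page allocation fails. vm_pageout_deficit and
pagedaemon_wakeup() are real symbols from vm/vm_pageout.h (FreeBSD
8-era signatures assumed); the arc_kick_pagedaemon() helper and its
25% overshoot threshold are hypothetical.

/* Sketch only: nudge the pagedaemon when the ARC overshoots. */
#include <sys/param.h>
#include <sys/systm.h>
#include <machine/atomic.h>
#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/vm_pageout.h>

static void
arc_kick_pagedaemon(uint64_t arc_size, uint64_t arc_target)
{
        uint64_t excess;

        /* Tolerate up to 25% overshoot before asking for help. */
        if (arc_size <= arc_target + (arc_target >> 2))
                return;
        excess = arc_size - arc_target;
        /* Ask the pagedaemon to reclaim roughly the excess. */
        atomic_add_int(&vm_pageout_deficit,
            (u_int)howmany(excess, PAGE_SIZE));
        pagedaemon_wakeup();
}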