From: Jeremy Chadwick
To: Olaf Seibert
Cc: freebsd-stable@freebsd.org
Date: Tue, 3 May 2011 05:20:52 -0700
Subject: Re: Automatic reboot doesn't reboot
Message-ID: <20110503122052.GA13811@icarus.home.lan>
In-Reply-To: <20110503100854.GY6733@twoquid.cs.ru.nl>
References: <20110502143230.GW6733@twoquid.cs.ru.nl> <20110503092113.GA39704@icarus.home.lan> <20110503100854.GY6733@twoquid.cs.ru.nl>

On Tue, May 03, 2011 at 12:08:54PM +0200, Olaf Seibert wrote:
> On Tue 03 May 2011 at 02:21:13 -0700, Jeremy Chadwick wrote:
> > There are two things you might try fiddling with.
> > These are sysctls, so you can try them on the fly:
> >
> >   hw.acpi.disable_on_reboot
> >   hw.acpi.handle_reboot
>
> Thanks. For now I've set the second to 1 and we'll see if that affects
> matters.
>
> > Check out the thread Peter Jeremy provided. This is a near-sure
> > indicator of ZFS ARC exhaustion, and you seem to know of that. What's
> > very interesting to me is this part of your mail:
> ...
> > Is this box running i386 or amd64? If amd64, I can't explain why your
>
> It's amd64. I double-checked just now; you never know what stupid
> mistakes one might make :-)
>
> > /boot/loader.conf settings aren't taking -- they should be for sure.
> > Maybe provide us a full dmesg and XXX out things you consider
> > sensitive. If i386, I'm not too surprised that some automatic defaults
> > get chosen instead of what you ask.
>
> Based on one of your mails where setting vm.kmem_size to twice the real
> RAM size had adverse effects, I've taken the setting out to see if that
> improves matters. I'll have to wait until the next crash (or an
> opportunity to reboot without too much disturbance) to see the effect.

The ill effects are the result of an underlying change that I had
forgotten about but others remembered: vm.kmem_size_scale used to
default to something like "2", but it was changed to "1" prior to
8.2-RELEASE.

So here's the current situation and how all of our 8.2-STABLE machines
are tuned for ARC. We set only a single tunable for ARC "management":
vfs.zfs.arc_max. We don't touch vm.kmem_size. Here's literally what we
have in our /boot/loader.conf:

# Limit ZFS ARC maximum.
# NOTE #1: In 8.2-RELEASE and onward, vm.kmem_size_scale defaults to 1,
# which means vm.kmem_size should match the amount of RAM installed
# in the system. If using an earlier FreeBSD release, be sure to set
# vm.kmem_size manually to the amount of RAM you have.
# NOTE #2: Do not set vm.kmem_size to 2x that of physical RAM, otherwise
# vfs.zfs.arc_max effectively becomes halved.
# http://lists.freebsd.org/pipermail/freebsd-fs/2011-March/010875.html
vfs.zfs.arc_max="6144M"

The value specified here (6144 MBytes) is for a machine with 8GB of RAM.

Keep in mind that there is evidence that kmap/kmem exhaustion can still
happen even if you tune the ARC like this. Apparently memory
fragmentation plays a role, and there's some overhead as well, so
calculating a 100% stable value is a little difficult. I can point you
to that (very recent, as in last month) thread if you'd like.

To be on the safe side, pick something small at first, then work your
way up. You'll probably need 1+ weeks of heavy ZFS I/O between tests
(i.e. don't change the tunable, reboot, and then declare the new
(larger) value stable 4 hours later).

So for example, on an 8GB RAM machine I might recommend starting with
vfs.zfs.arc_max="4096M" and letting that run for a while. If you find
that the "Wired" value in top(1) remains fairly constant after a week or
so of heavy I/O, consider bumping the value up a bit more (say, to
4608M).

Sorry to make this long-winded; bad habit of mine that I've never
managed to break.

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.               PGP 4BD6C0CB |
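P.S. The "start at roughly half of RAM" arithmetic above can be sketched
as a small shell snippet. This is only an illustration of the heuristic
from this mail, not an official formula; the 8192 MB figure is an
assumption (an 8GB machine), and on a real FreeBSD box you would read
physical memory from `sysctl -n hw.physmem` instead of hard-coding it.

```shell
#!/bin/sh
# Sketch: compute a conservative starting vfs.zfs.arc_max as half of
# physical RAM, matching the 8GB -> 4096M example above.
ram_mb=8192                        # assumption: 8 GB of RAM, in MB
arc_max_mb=$((ram_mb / 2))         # conservative starting point
echo "vfs.zfs.arc_max=\"${arc_max_mb}M\""   # -> vfs.zfs.arc_max="4096M"
```

You would paste the printed line into /boot/loader.conf, run with it for
a week or so of heavy I/O while watching "Wired" in top(1), and only
then nudge the value upward.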