Date: Tue, 28 Feb 2012 05:14:37 +1100
From: Peter Jeremy <peterjeremy@acm.org>
To: Luke Marsden
Cc: freebsd-fs@freebsd.org, team@hybrid-logic.co.uk, freebsd-stable@freebsd.org
Subject: Re: Another ZFS ARC memory question
Message-ID: <20120227181436.GA49667@server.vk2pj.dyndns.org>
In-Reply-To: <1330081612.13430.39.camel@pow>

On 2012-Feb-24 11:06:52 +0000, Luke Marsden wrote:
> We're running 8.2-RELEASE v15 in production on 24GB RAM amd64 machines
> but have been having trouble with short spikes in application memory
> usage resulting in huge amounts of swapping, bringing the whole machine
> to its knees and crashing it hard.  I suspect this is because when there
> is a sudden spike in memory usage the zfs arc reclaim thread is unable
> to free system memory fast enough.

A large number of fairly serious ZFS bugs have been fixed since
8.2-RELEASE, so I would suggest you look at upgrading.  That said, I
haven't seen the specific problem you are reporting.

> * is this a known problem?

I'm unaware of it specifically as it relates to ZFS.  You don't mention
how big the memory usage spike is, but unless there is sufficient
free + cache memory available to cope with a usage spike, you will have
problems whether it's UFS or ZFS (though it's possibly worse with ZFS).
FreeBSD is known not to cope well with running out of memory.

> * what is the community's advice for production machines running
>   ZFS on FreeBSD, is manually limiting the ARC cache (to ensure
>   that there's enough actually free memory to handle a spike in
>   application memory usage) the best solution to this
>   spike-in-memory-means-crash problem?

Are you swapping onto a ZFS vdev?  If so, change back to a raw (or
geom) device - swapping onto ZFS is known to be problematic.  If you
have very spiky memory requirements, increasing vm.v_cache_min and/or
vm.v_free_reserved might give you better results.
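For reference, you can check where swap currently lives and what the
current thresholds are with something like the following (ada0p3 is
only a placeholder - substitute whatever your swap partition really
is):

  # Is swap on a zvol or on a raw partition?
  swapinfo
  zfs list -t volume

  # A swap partition on a raw device looks like this in /etc/fstab:
  # /dev/ada0p3   none   swap   sw   0   0

  # Current page-queue thresholds (values are in pages):
  sysctl vm.v_cache_min vm.v_free_reserved vm.v_free_min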
> * has FreeBSD 9.0 / ZFS v28 solved this problem?

The ZFS code is the same in 9.0 and 8.3.  Since 8.3 is less of a jump,
I'd recommend trying 8.3-PRERELEASE on a test box and seeing how it
handles your load.  Note that there's no need to upgrade your pools
from v15 to v28 unless you want the newer ZFS features - the ZFS code
itself is independent of the pool version.

> * rather than setting a hard limit on the ARC cache size, is it
>   possible to adjust the auto-tuning variables to leave more free
>   memory for spiky memory situations?  e.g. set the auto-tuning to
>   make arc eat 80% of memory instead of ~95% like it is at
>   present?

Memory spikes are absorbed by vm.v_cache_min and vm.v_free_reserved in
the first instance.  The current vfs.zfs.arc_max default may be a bit
high for some workloads, but at this point in time you will need to
tune it manually (a sketch follows below).

-- 
Peter Jeremy
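PS: a minimal sketch of capping the ARC at boot, in case it's useful.
vfs.zfs.arc_max is a boot-time tunable, and the "16G" below is only a
placeholder for a 24GB box rather than a recommendation - size it to
leave enough headroom for your application's spikes:

  # /boot/loader.conf
  vfs.zfs.arc_max="16G"

  # After a reboot, confirm the limit and watch the ARC size:
  sysctl vfs.zfs.arc_max
  sysctl kstat.zfs.misc.arcstats.size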