Date: Fri, 2 Dec 2011 06:49:29 -0800
From: Jeremy Chadwick
To: Matt Burke
Cc: freebsd-fs@freebsd.org
Subject: Re: Monitoring ZFS IO

On Fri, Dec 02, 2011 at 01:50:29PM +0000, Matt Burke wrote:
> Can someone enlighten me as to how to get 'iostat -Id' or 'iostat -Idx'
> style counters for zpools?
>
> I've read through the man pages, but all I can see is 'zpool iostat' which
> gives values which appear to be averaged over an unspecified time period.
>
> With a 30-disk zpool, I can't fathom out how to get any meaningful data
> from the individual disk stats, and keeping a daemon running 'zpool iostat
> N' just to parse its output seems hugely inefficient and hacky...

What exactly are you after? It sounds to me like what you want is
incrementing counters, not averages, but I've re-read your mail a few
times and still am not sure.

iostat -Id and iostat -Idx, without any interval argument (i.e. "iostat
-Id" rather than "iostat -Id 1"), will give you, according to the man
page:

    The first statistics that are printed are averaged over the system
    uptime.

...which is still an average, just taken over the entire uptime of the
system. IMO, that's not very helpful either; it's why most people use
iostat with an interval parameter. "zpool iostat" offers the latter
(interval-based output) but not the former (it does not show statistics
averaged over the system uptime).

If what you're after really is counters: sadly, that information is not
available through any means I know of. The only two "frameworks" I can
think of are libzfs and libzpool, but I can't find documentation for
either of them (probably my fault). Solaris is in the same boat as
FreeBSD here, just for the record.

The best you're going to get is either X-second averages for ZFS (e.g.
"zpool iostat -v 1" -- note that -v shows those averages on a per-pool
*and* per-vdev *and* per-device basis), or non-ZFS-specific counters
(i.e. pure device counters, which iostat gets from the devstat(9)
facility -- but you will not get any ZFS-level breakdown through those).
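If you do go the polling route, here is a rough sketch of what that can
look like in Perl. To be clear, this is just my illustration and not
anything ZFS ships: the pool name "tank", the 5-second interval, and the
exact column layout of "zpool iostat -v" are assumptions, so sanity-check
the field positions on your own system before trusting the numbers.

#!/usr/bin/perl
# Rough sketch only: poll "zpool iostat -v" and turn its per-interval
# averages into approximate running counters.  The pool name, interval,
# and 7-column layout are assumptions -- verify them on your system.
# Note: the very first sample zpool prints is averaged since pool import,
# so for cleaner numbers you may want to throw that one away.
use strict;
use warnings;

my $pool     = "tank";   # hypothetical pool name
my $interval = 5;        # seconds between samples

# Read zpool iostat's output through a pipe, the same model as the
# open(FH, "zpool iostat N |") approach mentioned below.
open(my $zfh, "-|", "zpool", "iostat", "-v", $pool, $interval)
    or die "cannot run zpool iostat: $!";

my %ops_total;   # name (pool/vdev/device) => estimated cumulative ops

while (my $line = <$zfh>) {
    chomp $line;

    # Skip blank lines, column headers, and dashed separator rows.
    next if $line =~ /^\s*$/;
    next if $line =~ /capacity|operations|bandwidth/;
    next if $line =~ /^[-\s]+$/;

    # Expect: name, alloc, free, read ops, write ops, read bw, write bw.
    my @f = split ' ', $line;
    next unless @f == 7;
    my ($name, $rops, $wops) = @f[0, 3, 4];
    next unless $rops =~ /^[\d.]+[KMG]?$/ && $wops =~ /^[\d.]+[KMG]?$/;

    # zpool reports averaged ops/sec; multiplying by the interval gives a
    # rough (not exact) incrementing counter per pool/vdev/device.
    $ops_total{$name} += (expand($rops) + expand($wops)) * $interval;
    printf "%-16s ~%.0f total ops\n", $name, $ops_total{$name};
}

# Turn zpool's K/M/G suffixes into plain numbers.
sub expand {
    my ($v) = @_;
    my %mult = (K => 1e3, M => 1e6, G => 1e9);
    return $v =~ /^([\d.]+)([KMG])$/ ? $1 * $mult{$2} : $v;
}

Accumulating ops/sec times the interval obviously drifts from the true
totals (anything between samples gets smeared into the average), but in
my experience that's close enough for graphing and alerting.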
Re: "keeping a daemon running 'zpool iostat N' to parse its output seems hackish" -- this is exactly what many programs on many OSes do, actually. E.g. a perl script that does open(FH, "| zpool iostat N") and has to handle things appropriately. We use this model at work on Solaris for parsing iostat and mpstat data and working it into a monitoring script that runs indefinitely, hooked into (sort of) Nagios. -- | Jeremy Chadwick jdc at parodius.com | | Parodius Networking http://www.parodius.com/ | | UNIX Systems Administrator Mountain View, CA, US | | Making life hard for others since 1977. PGP 4BD6C0CB |