From owner-freebsd-current@FreeBSD.ORG Sun Nov 4 11:05:55 2007
Date: Sun, 4 Nov 2007 11:05:21 +0000
From: David Taylor <davidt@yadt.co.uk>
To: freebsd-current@freebsd.org
Cc: Peter Schuller
Subject: Re: ZFS slowness (not using cache?) (was: ZFS hangs)
Message-ID: <20071104110521.GA12145@outcold.yadt.co.uk>
In-Reply-To: <200711040948.25732.peter.schuller@infidyne.com>
References: <200711021208.25913.Thomas.Sparrevohn@btinternet.com>
 <20071103164231.GB23714@outcold.yadt.co.uk>
 <200711040948.25732.peter.schuller@infidyne.com>

On Sun, 04 Nov 2007, Peter Schuller wrote:
> > For example, pkg_delete seems to be _extremely_ slow and ^T reports
> > that it is stuck waiting on zfs:(&zio->io_cv) for an unreasonable
> > (IMO) amount of time.
>
> FWIW, I have seen pkg_install (and possibly other pkg_* tools) being
> extremely slow, seemingly because the active set of files it touches
> exceeds the amount that is cached.  In particular I had this problem
> after converting to ZFS, but prior to switching to amd64 and more RAM.
>
> It would sit and churn on disk I/O forever, entirely seek-bound.
> Tracing the processes showed them traversing the package database over
> and over (presumably recursively following dependencies or some such),
> so the same files were touched any number of times.  As a result, with
> too little cached, runtime exploded: it took hours and hours to upgrade
> my desktop using *binary* pre-built packages, because the larger
> packages with many dependencies would take forever to install and
> delete.

Hmm.  That prompted me to have a look at the arcstats, and I'm now
rather confused.  There seems to be plenty of room in the cache, but it
isn't actually being used properly.
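(In case anyone wants to reproduce the observation: the ARC size is
exposed as kstat.zfs.misc.arcstats.size, so a trivial sh loop like the
sketch below, run while the box is doing disk I/O, is enough to watch
it.  The figures further down are just one-off sysctl snapshots.)

  # poll the current ARC size (in bytes) every few seconds
  while :; do
      sysctl -n kstat.zfs.misc.arcstats.size
      sleep 5
  done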
I'm running i386 for now (on an amd64 motherboard), with 4GB of RAM
(~3.5GB usable) and the following settings in loader.conf:

geom_stripe_load="YES"
geom_label_load="YES"
zfs_load="YES"
snd_driver_load="YES"
nvidia_load="YES"
hw.ata.atapi_dma="1"
kern.maxfiles="25000"
kern.ipc.shmmax=67108864
kern.ipc.shmall=32768
kern.maxdsiz="900M"
vm.kmem_size=1450M
vfs.zfs.arc_max=500M
vfs.root.mountfrom="zfs:tank/i386_root"

This results in (sysctl):

vm.kmem_size_scale: 3
vm.kmem_size_max: 335544320   # I have just noticed this is about 320MB,
                              # far lower than vm.kmem_size.
                              # Is that a problem?
vm.kmem_size_min: 0
vm.kmem_size: 1520435200
vfs.zfs.arc_min: 47513600
vfs.zfs.arc_max: 524288000
kstat.zfs.misc.arcstats.c_min: 47513600
kstat.zfs.misc.arcstats.c_max: 524288000
kstat.zfs.misc.arcstats.size: 86967808

Regardless of what I do, I can't seem to get arcstats.size above about
100MB.  It initially grows with disk usage, but then starts to drop
again, holding steady at around 70-90MB.  Something seems to be
aggressively pushing data out of the cache even though it is less than
20% full, which seems rather fishy.  Does anyone know what's going on?

-- 
David Taylor
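P.S. By "disk usage" above I just mean ordinary file I/O on the pool;
for example (the path here is purely illustrative), sequentially reading
a large file with something like:

  # generate a stream of cacheable reads off the ZFS pool
  dd if=/tank/some/large.file of=/dev/null bs=1m

is the sort of workload where I see the ARC grow briefly and then shrink
back as described above.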