Date:      Mon, 21 May 2007 16:40:31 +0100
From:      Vince <jhary@unsane.co.uk>
To:        Craig Boston <craig@xfoil.gank.org>, Kris Kennaway <kris@obsecurity.org>,  freebsd-current@FreeBSD.org, Pawel Jakub Dawidek <pjd@FreeBSD.org>, freebsd-fs@FreeBSD.org
Subject:   Re: ZFS committed to the FreeBSD base.
Message-ID:  <4651BD6F.5050301@unsane.co.uk>
In-Reply-To: <20070410014233.GD8189@nowhere>
References:  <20070407131353.GE63916@garage.freebsd.pl>	<4617A3A6.60804@kasimir.com>	<20070407165759.GG8831@cicely12.cicely.de>	<20070407180319.GH8831@cicely12.cicely.de>	<20070407191517.GN63916@garage.freebsd.pl>	<20070407212413.GK8831@cicely12.cicely.de>	<20070410003505.GA8189@nowhere> <20070410003837.GB8189@nowhere>	<20070410011125.GB38535@xor.obsecurity.org>	<20070410013034.GC8189@nowhere> <20070410014233.GD8189@nowhere>

Craig Boston wrote:
> On Mon, Apr 09, 2007 at 08:30:35PM -0500, Craig Boston wrote:
>> Even the vm.zone breakdown seems to be gone in current so apparently my
>> knowledge of such things is becoming obsolete :)
> 
> But vmstat -m still works
> 
> ...
> 
> solaris 145806 122884K       - 15319671 16,32,64,128,256,512,1024,2048,4096
> ...
> 
> Whoa!  That's a lot of kernel memory.  Meanwhile...
> 
> kstat.zfs.misc.arcstats.size: 33554944
> (which is just barely above vfs.zfs.arc_min)
> 
> So I don't think it's the arc cache (yeah I know that's redundant) that
> is the problem.  Seems like something elsewhere in zfs is allocating
> large amounts of memory and not letting it go, and even the cache is
> having to shrink to its minimum size due to the memory pressure.
> 
> It didn't panic this time, so when the tar finished I tried a "zfs
> unmount /usr/ports".  This caused the "solaris" entry to drop down to
> about 64MB, so it's not a leak.  It could just be that ZFS needs lots of
> memory to operate if it keeps a lot of metadata for each file in memory.
> 
> The sheer # of allocations still seems excessive though.  It was well
> over 20 million by the time the tar process exited.
> 
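
For anyone wanting to reproduce the numbers quoted above, they come from standard FreeBSD tools; on a kernel with ZFS loaded, something like the following should show the same counters (the "solaris" malloc type is where the ZFS allocations are accounted):

```shell
# Kernel memory by malloc type; the "solaris" line covers ZFS allocations
vmstat -m | grep solaris

# Current ARC size in bytes, plus its configured floor and ceiling
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_min vfs.zfs.arc_max

# Per-zone allocator breakdown (the replacement for the old vm.zone sysctl)
vmstat -z
```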

I don't suppose there are any other tunables people could suggest? I got
a shiny new (well, old, but new to me) dual Opteron board and two 250 GB
SATA drives, and thought I'd try putting it in as my home server with
everything but / on ZFS, since I've had /usr/ports on my laptop as
compressed ZFS since very shortly after it was committed.
	After a few "kmem_map: too small" panics I re-read this thread, put
vm.kmem_size_max and vm.kmem_size up to 512M, and brought vfs.zfs.arc_min
and vfs.zfs.arc_max down to 65 MB. This did get me past "portsnap extract",
but a make buildworld still got me the same panic. vmstat -z showed
steady growth. This is with a GENERIC -CURRENT from Friday. I'm happy to
provide any useful information once I get home and reboot it.
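
For reference, the tunables above are loader tunables, so they belong in /boot/loader.conf and take effect on the next boot. A sketch of the configuration described in this message (the values are just what was tried here, not recommendations, and the exact arc_min/arc_max figures are approximations of the "65 megs" mentioned above):

```shell
# /boot/loader.conf -- settings from the experiment described above
vm.kmem_size="512M"          # kernel virtual memory map size
vm.kmem_size_max="512M"      # upper bound on auto-sized kmem
vfs.zfs.arc_min="64M"        # approximation of the "65 megs" above
vfs.zfs.arc_max="64M"        # cap the ARC at the same value
```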


Thanks,
Vince



> Craig
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"



