Date:      Tue, 5 May 2009 09:48:44 -0400
From:      Ben Kelly <ben@wanderview.com>
To:        Jeff Roberson <jroberson@jroberson.net>
Cc:        current@freebsd.org
Subject:   Re: [patch] zfs kmem fragmentation
Message-ID:  <8FB38AF4-3464-45AA-A6B2-96308EC49407@wanderview.com>
In-Reply-To: <alpine.BSF.2.00.0905041207221.981@desktop>
References:  <E8BEB7E4-39C7-4BF8-8D58-F8739A0F435F@wanderview.com> <alpine.BSF.2.00.0905041207221.981@desktop>

On May 4, 2009, at 6:17 PM, Jeff Roberson wrote:
> On Sat, 2 May 2009, Ben Kelly wrote:
>> Hello all,
>>
>> Lately I've been looking into the "kmem too small" panics that  
>> often occur with zfs if you don't restrict the arc.  What I found  
>> in my test environment was that everything works well until the  
>> kmem usage hits the 75% limit set in arc.c.  At this point the arc  
>> is shrunk and slabs are reclaimed from uma. Unfortunately, every  
>> time this reclamation process runs the kmem space becomes more  
>> fragmented.  The vast majority of the time my machine hits the  
>> "kmem too small" panic it has over 200MB of kmem space available,  
>> but the largest fragment is less than 128KB.
>
> What consumers make requests of kmem for 128kb and over?  What  
> ultimately trips the panic?

ZFS buffers range from 512 bytes to 128KB.  I don't know of any  
allocations above 128KB at the moment.

In my workload the panic is usually caused by zfs attempting to
allocate a 128KB buffer, although sometimes it's only a 64KB buffer.

At one point I hacked in some instrumentation to print the kmem_map
vm_map_entry list when I touched a sysctl MIB.  Here's a capture I made
during my load test as the fragmentation was occurring:

   http://www.wanderview.com/svn/public/misc/zfs/fragmentation.txt

I also added some debugging later to show the consumers of the
allocations.  The vast majority of them were from the opensolaris
subsystem.  Unfortunately, I don't have a capture of that output handy.
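
For what it's worth, the instrumentation just walked the kmem_map
entries and summed the free gaps between them; something like the
sketch below.  This is reconstructed and simplified here, so treat the
names and sysctl placement as illustrative rather than the exact hack
I ran:

/*
 * Sketch only: report total free KVA in kmem_map and the largest
 * contiguous free chunk.  Field/iteration details match older vm_map
 * layouts; not meant as a polished patch.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <vm/vm.h>
#include <vm/vm_kern.h>
#include <vm/vm_map.h>

static int
sysctl_kmem_map_frag(SYSCTL_HANDLER_ARGS)
{
	vm_map_t map = kmem_map;
	vm_map_entry_t entry;
	vm_offset_t prev_end;
	vm_size_t gap, free_total, largest;
	int dummy = 0;

	free_total = largest = 0;
	vm_map_lock_read(map);
	prev_end = vm_map_min(map);
	/* Free KVA is the space between consecutive allocated entries. */
	for (entry = map->header.next; entry != &map->header;
	    entry = entry->next) {
		gap = entry->start - prev_end;
		free_total += gap;
		if (gap > largest)
			largest = gap;
		prev_end = entry->end;
	}
	gap = vm_map_max(map) - prev_end;	/* tail of the map */
	free_total += gap;
	if (gap > largest)
		largest = gap;
	vm_map_unlock_read(map);

	printf("kmem_map: %ju bytes free, largest contiguous chunk %ju\n",
	    (uintmax_t)free_total, (uintmax_t)largest);
	return (sysctl_handle_int(oidp, &dummy, 0, req));
}
SYSCTL_PROC(_debug, OID_AUTO, kmem_map_frag,
    CTLTYPE_INT | CTLFLAG_RD, NULL, 0,
    sysctl_kmem_map_frag, "I", "Print kmem_map fragmentation to console");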

>> Ideally things would be arranged to free memory without  
>> fragmentation.  I have tried a few things along those lines, but  
>> none of them have been successful so far.  I'm going to continue  
>> that work, but in the meantime I've put together a patch that tries  
>> to avoid fragmentation by slowing kmem growth before the aggressive  
>> reclamation process is required:
>>
>> http://www.wanderview.com/svn/public/misc/zfs/zfs_kmem_limit.diff
>>
>> It uses the following heuristics to do this:
>>
>> - Start arc_c at arc_c_min instead of arc_c_max.  This causes the  
>> system to warm up more slowly.
>> - Halve the rate at which arc_c grows when kmem usage exceeds
>> kmem_slow_growth_thresh
>> - Stop arc_c growth when kmem usage exceeds kmem_target
>> - Evict arc data when kmem usage exceeds kmem_target
>> - If kmem usage exceeds kmem_target then ask the pagedaemon to  
>> reclaim pages
>> - If the largest kmem fragment is less than kmem_fragment_target  
>> then ask the pagedaemon to reclaim pages
>> - If the largest kmem fragment is less than kmem_fragment_thresh then
>> force the aggressive kmem/arc reclamation process
>>
>> The defaults for the various targets and thresholds are:
>>
>> kmem_reclaim_threshold = 7/8 kmem
>> kmem_target = 3/4 kmem
>> kmem_slow_growth_threshold = 5/8 kmem
>> kmem_fragment_target = 1/8 kmem
>> kmem_fragment_thresh = 1/16 kmem
>>
>> With this patch I've been able to run my load tests with the  
>> default arc size with kmem values of 512MB to 700MB.  I tried one  
>> loaded run with a 300MB kmem, but it panic'ed due to legitimate,  
>> non-fragmented kmem exhaustion.
>>
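
(As an aside, the growth throttle in the patch above boils down to
roughly the shape below.  The names and helpers are simplified for
illustration; the real logic is in the diff.)

/*
 * Illustrative sketch of the growth throttle, not the literal patch.
 * kmem_used() stands for "bytes currently allocated from the kmem map";
 * the two thresholds correspond to the tunables described above.
 */
static uint64_t kmem_target;			/* default: 3/4 of kmem */
static uint64_t kmem_slow_growth_thresh;	/* default: 5/8 of kmem */

enum arc_growth { ARC_GROW_NORMAL, ARC_GROW_SLOW, ARC_GROW_NONE };

static enum arc_growth
arc_kmem_growth_policy(void)
{
	uint64_t used = kmem_used();

	if (used > kmem_target)
		return (ARC_GROW_NONE);		/* stop arc_c growth, evict */
	if (used > kmem_slow_growth_thresh)
		return (ARC_GROW_SLOW);		/* grow arc_c at half rate */
	return (ARC_GROW_NORMAL);
}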
>
> May I suggest an alternate approach: have you considered placing
> zfs in its own kernel submap?  If all of its allocations are of a
> like size, fragmentation won't be an issue and it can be constrained  
> to a fixed size without placing pressure on other kmem_map  
> consumers.  This is the approach taken for the buffer cache.  It  
> makes a good deal of sense.  If arc can be taught to handle  
> allocation failures we could eliminate the panic entirely by simply  
> causing arc to run out of space and flush more buffers.
>
> Do you believe this would also address the problem?

Using a separate submap might help.  It seems the fragmentation is
occurring due to the interaction of the smaller and larger buffers
within zfs.  I believe that in opensolaris, data buffers and meta-data
buffers are allocated from separate arenas.  We don't do this
currently, which may be the cause of some of the fragmentation.  It
also occurred to me that it might be nice if the arc could somehow
share the buffer cache directly.
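
If we went the submap route, I picture something like the sketch below,
along the lines of how the buffer cache gets its own buffer_map.  This
is just the shape of the idea, not a worked-out patch: the names are
made up and the kmem_suballoc() argument list differs between versions,
so check vm_extern.h before taking it literally.

/*
 * Sketch only: carve a dedicated submap out of kmem_map for ZFS
 * buffers, similar to what the buffer cache does with buffer_map.
 * Names are illustrative; kmem_suballoc()'s signature varies across
 * FreeBSD versions.
 */
#include <vm/vm.h>
#include <vm/vm_extern.h>
#include <vm/vm_kern.h>
#include <vm/vm_map.h>

static vm_map_t zfs_buf_map;			/* hypothetical ZFS submap */
static vm_offset_t zfs_buf_sva, zfs_buf_eva;	/* start/end of its KVA */
static vm_size_t zfs_buf_map_size = 512UL * 1024 * 1024;

static void
zfs_buf_map_init(void)
{
	/*
	 * All ZFS buffer allocations would then be served from
	 * zfs_buf_map, so fragmentation is confined to this region and
	 * an allocation failure here can be handled by evicting from
	 * the arc instead of exhausting the shared kmem_map.
	 */
	zfs_buf_map = kmem_suballoc(kmem_map, &zfs_buf_sva, &zfs_buf_eva,
	    zfs_buf_map_size);
}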

Unfortunately I am moving this Friday and probably will be unable to  
really look at this for the next couple weeks.

Thanks.

- Ben


