Date:      Fri, 09 Dec 2011 09:20:38 -0500
From:      Adam McDougall <mcdouga9@egr.msu.edu>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS hangs with 8.2-release
Message-ID:  <4EE21936.6020502@egr.msu.edu>
In-Reply-To: <4EE12632.4070309@internet2.edu>
References:  <4EE118C7.8030803@internet2.edu> <CAOjFWZ4kZfepsBdb0O9s3sivj2+oSkXhX1P_uyrbJW--Cp0CxQ@mail.gmail.com> <4EE12632.4070309@internet2.edu>

On 12/08/11 16:03, Dan Pritts wrote:
> Upgrading is our intent...IF we stay with FreeBSD. Thus my question
> about stability improvements in freebsd 9.
>
> Which I guess you've answered; we'll give it a go.
>
> thanks
> danno
>> Freddie Cash <mailto:fjwcash@gmail.com>
>> December 8, 2011 3:29 PM
>>
>> With a pool that big, you really should upgrade to 8-STABLE or
>> 9-STABLE. Both of those support ZFSv28. You don't need to upgrade
>> the pool/filesystems to ZFSv28, but the new code is much more stable
>> and speedy. Plus, there are a lot of nice extra features in ZFSv28
>> compared to ZFSv15.
>>
>

Some comments on your loader.conf, based on my experience keeping ZFS 
stable on my servers (I haven't used zfs send/recv in years and 
generally don't use snapshots):

- Ever since around the ZFS v13 timeframe and some memory-code 
improvements, I've been able to run ZFS stably as long as ZFS has enough 
ARC in non-fragmented kmem space.  Frequent ARC allocations can fragment 
the ARC inside the virtual kmem space, leading to slowness or, in 
extreme cases, stalls/panics.  Additionally, there is a bug in the code 
that adjusts the kmem_size based on the vm.kmem_size loader variable you 
are setting; chances are, if you inspect 'sysctl vm.kmem_size', you will 
find it considerably smaller than the 8G you set it to.  I suggest:
   - set vm.kmem_size to double your RAM to give the ARC plenty of elbow 
room against becoming fragmented within kmem (it's a virtual address 
space; it is not constrained to your RAM size).
   - to fix the bug in the kmem_size setting, edit
/usr/src/sys/kern/kern_malloc.c and change cnt.v_page_count to mem_size
in this line (the fix was recently committed to HEAD and is scheduled to 
be merged into other branches):
                vm_kmem_size = 2 * cnt.v_page_count * PAGE_SIZE;
so that it reads:
                vm_kmem_size = 2 * mem_size * PAGE_SIZE;
   - I would still recommend setting vfs.zfs.arc_max to 2-4G if you can 
spare the RAM, unless it starves the system of other necessary functions 
or causes it to swap.  I have seen ZFS speed CRAWL if the ARC is 
squeezed too small, say 600m or under.
   - I wouldn't bother setting the arc_min unless you are trying to 
nudge it to use more ram than whatever it has chosen to use.
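
Pulling the suggestions above together, a minimal /boot/loader.conf 
sketch for a machine with 8G of RAM might look like the following; the 
exact values are assumptions you should adapt to your own RAM and 
workload:

```
# /boot/loader.conf -- example for an 8G-RAM machine; adjust to taste
vm.kmem_size="16G"       # double physical RAM: kmem is virtual address
                         # space, so this just gives the ARC elbow room
vfs.zfs.arc_max="4G"     # cap the ARC; don't squeeze it much below ~1G
                         # or ZFS speed will crawl
# vfs.zfs.arc_min deliberately left unset (see above)
```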

- I wouldn't bother setting vm.kmem_size_max; it is HUGE by default.

- In my experience, running with prefetch disabled significantly hurts 
speed.  Once you are comfortable doing some performance testing, I would 
evaluate that and decide for yourself rather than relying on "some 
discussion suggests that the prefetch sucks".
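
If you want to run that comparison yourself, prefetch is controlled by a 
loader tunable; a sketch of how to check and toggle it (tunable name as 
in the 8.x/9.x ZFS code -- verify on your own system):

```sh
# Check whether prefetch is currently disabled (1 = disabled, 0 = enabled)
sysctl vfs.zfs.prefetch_disable

# To benchmark with prefetch off, set the tunable in /boot/loader.conf
# and reboot:
#   vfs.zfs.prefetch_disable="1"
```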

- Be wary of using dedupe in v28; it seems to impose a huge performance 
drag when working with files that were written while dedupe was enabled. 
I won't comment more on that except to suggest not adding that variable 
to your issue.
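
If you're not sure whether dedupe was ever enabled on a dataset, you can 
check the property; a sketch, assuming a pool named "tank" (substitute 
your own pool name):

```sh
# Show the dedup property for every dataset in the pool
zfs get -r dedup tank

# Turning it off stops new deduplicated writes, but note that blocks
# already written while dedup was on stay deduplicated until rewritten
zfs set dedup=off tank
```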

- These comments mostly relate to speed, but I had to give the ARC 
enough room to work without deadlocking the system, so they may help you 
there as well.


