Date:      Wed, 19 Nov 2014 04:13:40 +0000
From:      Steven Hartland <killing@multiplay.co.uk>
To:        freebsd-fs@freebsd.org
Subject:   Re: No more free space after upgrading to 10.1 and zpool upgrade
Message-ID:  <546C18F4.1090209@multiplay.co.uk>
In-Reply-To: <546C01C5.7080605@delphij.net>
References:  <CA+q+Tcqo2CL+00-4RTD1=WStOSYtawwsZbC1tpZ1G9CbiBp_Dw@mail.gmail.com> <20141116080128.GA20042@exhan.dylanleigh.net> <CA+q+TcoC4gTPqGc_V3xv+cWxJuB2r8YioH_NLfaj=5xwsaXW0w@mail.gmail.com> <20141118054443.GA40514@core.summit> <546B8203.5040607@platinum.linux.pl> <546B9754.4060906@delphij.net> <20141119013611.GA52102@core.summit> <546C01C5.7080605@delphij.net>


On 19/11/2014 02:34, Xin Li wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
>
> On 11/18/14 17:36, Emil Mikulic wrote:
>> On Tue, Nov 18, 2014 at 11:00:36AM -0800, Xin Li wrote:
>>> On 11/18/14 09:29, Adam Nowacki wrote:
>>>> This commit is to blame:
>>>> http://svnweb.freebsd.org/base?view=revision&revision=268455
>>>>
>>>> 3.125% of disk space is reserved.
>> This is the sort of thing I suspected, but I didn't spot this
>> commit.
>>
>>> Note that the reserved space is so that one can always delete
>>> files, etc. to get the pool back to a usable state.
>> What about the "truncate -s0" trick? That doesn't work reliably?
>>
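
For context on the trick: on a copy-on-write filesystem even rm(1) must
allocate space to record a deletion, so it can fail on a completely full
pool, whereas truncating a file in place releases its data blocks with far
less metadata churn.  A rough sketch of the sequence (the path is
hypothetical):

    # rm /tank/big.log
    rm: /tank/big.log: No space left on device
    # truncate -s 0 /tank/big.log
    # rm /tank/big.log
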
>>> I've added a new tunable/sysctl in r274674, but note that tuning
>>> it is not recommended.
>> Thanks!!
>>
>> Can you give us an example of how (and when) to tune the sysctl?
> sysctl vfs.zfs.spa_slop_shift=6 would tune down the reserved space to
> 1/(2^6) (=1.5625%).
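
To illustrate (the 8TB pool size here is hypothetical; sysctl prints the
old and new values when you set it):

    # sysctl vfs.zfs.spa_slop_shift
    vfs.zfs.spa_slop_shift: 5
    # sysctl vfs.zfs.spa_slop_shift=6
    vfs.zfs.spa_slop_shift: 5 -> 6

With the default shift of 5, an 8TB pool reserves 8TB / 2^5 = 256GB; a
shift of 6 halves that to 128GB.  Adding the same line to /etc/sysctl.conf
should make it persist across reboots.
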
>
> Personally I would never tune it.  At this level of space usage your
> pool is already running at degraded performance, by the way.  Don't do
> that.
>
>> Regarding r268455, this is kind of a gotcha for people who are
>> running their pools close to full - should this be mentioned in
>> UPDATING or in the release notes?
>>
>> I understand that ZFS needs free space to be able to free more
>> space, but 3% of a large pool is a lot of bytes.
> Well, if you look at UFS, the reservation ratio is about 7.5% (8/108).
>
> File systems need free space to do allocation efficiently; even with
> mostly static contents, performance would suffer because at high levels
> of space usage the file system spends more time searching for free
> space, and the resulting allocations are likely to be more fragmented.
> For ZFS, this means many essential operations like resilvering would be
> much slower, which is a real threat to data recoverability.
>
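As a data point, "zpool list" reports both how full a pool is and how
fragmented its free space is (the FRAG column requires the spacemap
histogram feature; the figures and exact column layout below are
illustrative, from memory, and may differ by version):

    # zpool list tank
    NAME   SIZE  ALLOC  FREE  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    tank  7.25T  6.75T  512G   58%  93%  1.00x  ONLINE  -
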
The new space map code should help with that, and a fixed 3.125% is a
large portion of a decent-sized pool.

On our event cache box, for example, that's 256GB, which feels like a
silly amount to reserve.
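
(Working backwards, 256GB reserved at 1/32 implies a pool of roughly 8TB:
8TB x 3.125% = 0.25TB.)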

Does anyone have any stats that back up the need for this amount of free
space on large pool arrays, specifically with space maps enabled?


