Date:      Tue, 14 Oct 2014 09:25:07 +0100
From:      "Steven Hartland" <killing@multiplay.co.uk>
To:        "K. Macy" <kmacy@freebsd.org>, "Mark Martinec" <Mark.Martinec+freebsd@ijs.si>
Cc:        "freebsd-fs@FreeBSD.org" <freebsd-fs@freebsd.org>, FreeBSD Stable <freebsd-stable@freebsd.org>
Subject:   Re: zpool import hangs when out of space - Was: zfs pool import hangs on [tx->tx_sync_done_cv]
Message-ID:  <B324C19FE1C04AEEB3119CB3D4891B91@multiplay.co.uk>
References:  <54372173.1010100@ijs.si> <644FA8299BF848E599B82D2C2C298EA7@multiplay.co.uk> <54372EBA.1000908@ijs.si> <DE7DD7A94E9B4F1FBB3AFF57EDB47C67@multiplay.co.uk> <543731F3.8090701@ijs.si> <543AE740.7000808@ijs.si> <A5BA41116A7F4B23A9C9E469C4146B99@multiplay.co.uk> <CAHM0Q_N+C=3qgUnyDkEugOFcL=J8gBjbTg8v45Vz3uT=e=Fn2g@mail.gmail.com> <6E01BBEDA9984CCDA14F290D26A8E14D@multiplay.co.uk> <CAHM0Q_OpV2sAQQAH6Cj_=yJWAOt8pTPWQ-m45JSiXDpBwT6WTA@mail.gmail.com> <E2E24A91B8B04C2DBBBC7E029A12BD05@multiplay.co.uk> <CAHM0Q_Oeka25-kdSDRC2evS1R8wuQ0_XgbcdZCjS09aXJ9_WWQ@mail.gmail.com> <14ADE02801754E028D9A0EAB4A16527E@multiplay.co.uk> <543C3C47.4010208@ijs.si> <E3C3C359999140B48943A0E1A04F83A9@multiplay.co.uk> <CAHM0Q_O7LNBiQAEjygANa+0rqm9cywjTPbNXabB4TePfEHAZsA@mail.gmail.com> <543C7B43.5020301@ijs.si> <CAHM0Q_N9pMreejE62LMJT+QX1+OVXBTMOQSbB73OzJfQqjdqzg@mail.gmail.com>

----- Original Message ----- 
From: "K. Macy" <kmacy@freebsd.org>


> On Mon, Oct 13, 2014 at 6:24 PM, Mark Martinec
> <Mark.Martinec+freebsd@ijs.si> wrote:
>> On 10/14/2014 03:15, K. Macy wrote:
>>>
>>> What is using the extra space in the pool? Is there an unmounted
>>> dataset or snapshot? Do you know of an easy way to tell? Unlike with
>>> txg and zio processing, I don't have the luxury of having just read
>>> that part of the codebase.
>>
>>
>> Most likely the snapshots (regular periodic snapshots).
>> Changes after upgrading the OS could perhaps take an additional 50%
>> of space (just guessing). Btw, ashift=12.
>> I still can't see how that would amount to 4 GiB, but it's possible.
>>
> 
> Disconcerting. Is this something that others are likely to hit? Should
> space accounting for writes fail with ENOSPC a bit earlier, so that we
> never reach a state like this? That is, non-metadata writes would fail
> at a lower threshold than metadata writes, or, if that is already the
> case, the threshold would be reduced further.

I thought I remembered seeing some recent changes in this area, but I
can't find them at the moment.
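
For what it's worth, the usual approach here is a reserved "slop" slice
of the pool that ordinary data writes may not consume, so that metadata
and recovery operations (destroying a snapshot, syncing a txg) always
have room to complete. A minimal sketch of that kind of check, using
hypothetical names and constants rather than the actual ZFS code:

    #include <stdint.h>
    #include <errno.h>

    #define POOL_SLOP_SHIFT 5   /* reserve 1/32 of the pool (assumed value) */

    /* Space held back for metadata and recovery operations. */
    static uint64_t
    pool_slop_space(uint64_t pool_size)
    {
            return (pool_size >> POOL_SLOP_SHIFT);
    }

    /*
     * Gate an ordinary (non-metadata) write: it must leave the slop
     * intact, failing with ENOSPC otherwise.  Metadata writes would
     * check against the full pool size instead, so freeing space by
     * destroying a snapshot can always make forward progress.
     */
    static int
    check_user_write(uint64_t pool_size, uint64_t used, uint64_t req)
    {
            uint64_t limit = pool_size - pool_slop_space(pool_size);

            if (used + req > limit)
                    return (ENOSPC);
            return (0);
    }

With a check along those lines, a pool could never be filled to the
point where the txg sync performed at import time has nowhere to write.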

Something to raise on the openzfs list.

    Regards
    Steve


