Date:      Mon, 13 Oct 2014 18:32:27 -0700
From:      "K. Macy" <kmacy@freebsd.org>
To:        Mark Martinec <Mark.Martinec+freebsd@ijs.si>
Cc:        "freebsd-fs@FreeBSD.org" <freebsd-fs@freebsd.org>, FreeBSD Stable <freebsd-stable@freebsd.org>
Subject:   Re: zpool import hangs when out of space - Was: zfs pool import hangs on [tx->tx_sync_done_cv]
Message-ID:  <CAHM0Q_N9pMreejE62LMJT+QX1+OVXBTMOQSbB73OzJfQqjdqzg@mail.gmail.com>
In-Reply-To: <543C7B43.5020301@ijs.si>
References:  <54372173.1010100@ijs.si> <644FA8299BF848E599B82D2C2C298EA7@multiplay.co.uk> <54372EBA.1000908@ijs.si> <DE7DD7A94E9B4F1FBB3AFF57EDB47C67@multiplay.co.uk> <543731F3.8090701@ijs.si> <543AE740.7000808@ijs.si> <A5BA41116A7F4B23A9C9E469C4146B99@multiplay.co.uk> <CAHM0Q_N+C=3qgUnyDkEugOFcL=J8gBjbTg8v45Vz3uT=e=Fn2g@mail.gmail.com> <6E01BBEDA9984CCDA14F290D26A8E14D@multiplay.co.uk> <CAHM0Q_OpV2sAQQAH6Cj_=yJWAOt8pTPWQ-m45JSiXDpBwT6WTA@mail.gmail.com> <E2E24A91B8B04C2DBBBC7E029A12BD05@multiplay.co.uk> <CAHM0Q_Oeka25-kdSDRC2evS1R8wuQ0_XgbcdZCjS09aXJ9_WWQ@mail.gmail.com> <14ADE02801754E028D9A0EAB4A16527E@multiplay.co.uk> <543C3C47.4010208@ijs.si> <E3C3C359999140B48943A0E1A04F83A9@multiplay.co.uk> <CAHM0Q_O7LNBiQAEjygANa+0rqm9cywjTPbNXabB4TePfEHAZsA@mail.gmail.com> <543C7B43.5020301@ijs.si>

On Mon, Oct 13, 2014 at 6:24 PM, Mark Martinec
<Mark.Martinec+freebsd@ijs.si> wrote:
> On 10/14/2014 03:15, K. Macy wrote:
>>
>> What is using the extra space in the pool? Is there an unmounted
>> dataset or a snapshot? Do you know of an easy way to tell? Unlike the
>> txg and zio processing, I don't have the luxury of having just read
>> that part of the codebase.
>
>
> Most likely the snapshots (regular periodic snapshots).
> The changes from an OS upgrade could perhaps take an additional 50%
> of space (just guessing). Btw, ashift=12.
> I still can't see how that would amount to 4 GiB, but it's possible.
>
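
(On the "how to tell" question: assuming a reasonably current zfs(8),
the breakdown can be read straight from zfs list; "tank" below is a
placeholder pool name.)

  # Per-dataset space breakdown; the USEDSNAP column is the snapshot share:
  zfs list -o space -r tank

  # The snapshots themselves, sorted by how much space each one pins:
  zfs list -r -t snapshot -o name,used,referenced -s used tank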

Disconcerting. Is this something that others are likely to hit? Should
space accounting fail writes with ENOSPC a bit earlier, so that we
never reach a state like this? I.e., ordinary data writes would fail at
a lower threshold than metadata writes, or, if that is already the
case, the threshold could be reduced further.
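
FWIW, more recent OpenZFS code appears to do roughly this with reserved
"slop" space: normal data writes start failing with ENOSPC while a
slice of the pool (1/32 by default) is still free, so metadata updates
and frees can go through. On builds that carry that change the
reservation is tunable (the sysctl name below assumes the FreeBSD
spelling; verify it exists on your build):

  # Slop space is pool_size >> spa_slop_shift; the default shift of 5
  # keeps about 3% of the pool off-limits to ordinary data writes.
  sysctl vfs.zfs.spa_slop_shift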

-K


