Date:      Mon, 29 Jun 2015 10:29:36 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        freebsd-questions <freebsd-questions@freebsd.org>
Subject:   Re: Corrupt GPT on ZFS full-disks that shouldn't be using GPT
Message-ID:  <A11C7102-5B07-4301-856A-908F6D2A7A65@kraus-haus.org>
In-Reply-To: <alpine.BSF.2.20.1506290815150.85919@wonkity.com>
References:  <CAPi0psvpvO4Kpbietpzyx1TjyB20hWV+CK-y3bWG4OARE1VMSg@mail.gmail.com> <alpine.BSF.2.20.1506280019400.14091@wonkity.com> <CAPi0psv7io6dhqbNxm6gp+W1npmNoU1agF+t=7aEteNmpzqJXQ@mail.gmail.com> <alpine.BSF.2.20.1506281526030.60581@wonkity.com> <5590A7AE.9040303@sneakertech.com> <alpine.BSF.2.20.1506290815150.85919@wonkity.com>

On Jun 29, 2015, at 10:19, Warren Block <wblock@wonkity.com> wrote:

> On Sun, 28 Jun 2015, Quartz wrote:
>
>>> Remember, ZFS
>>> leaves space unused at the end of a disk to allow for variations in
>>> nominal disk size.
>>
>> Holy what the heck, no it doesn't! One big issue with zfs is that you
>> CANNOT shrink a pool's size once it's been created, for any reason. You
>> can't remove vdevs, and any replacement disk must be bigger or exactly
>> equal in size; even a disk with one less sector and you're SOL. This is
>> my biggest gripe with zfs by far, and in fact I just asked freebsd-fs
>> about this less than a week ago wondering if it had been addressed
>> finally (it hasn't).

I do recall a change in ZFS behavior to leave a very small amount of
space unused at the very end of the drive to account for the differences
in real size between various vendors' drives that are nominally the same
size. This only applied if you handed ZFS the entire disk and did not use
any partitioning. The change was in both the Solaris and OpenSolaris
versions of ZFS, so it predates the fork of the ZFS code.
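
If you want to see this for yourself on a FreeBSD box, the rough idea is
to compare the raw size of a disk that was handed whole to ZFS with the
size the pool reports. Something along these lines (the pool and device
names are just examples, and -p may not exist on older zpool binaries):

    # raw capacity, in bytes, of the whole disk that was given to ZFS
    diskinfo -v ada1 | grep mediasize

    # exact sizes the pool reports (-p prints unrounded byte counts)
    zpool list -p tank

For a single-disk pool the SIZE that zpool reports should come out a bit
smaller than the mediasize of the disk; part of that difference is ZFS
labels and reserved metadata, and part is the slack left at the end.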

I have had no issues mixing disks from different manufacturers, and even
different models from the same manufacturer (which sometimes do vary in
size by a few blocks), as long as they were all the same nominal size
(1 TB or 500 GB in my case) and I had handed the entire disk to ZFS and
not a partition.
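
Checking how much two nominally identical disks really differ is easy
enough; the device names here are, again, just placeholders:

    # prints one summary line per disk: sector size, media size in
    # bytes, media size in sectors, etc. -- compare the size columns
    diskinfo ada1 ada2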

This is NOT an indication of any sort that you can shrink an existing
zpool, nor does it imply that a given zpool avoids writing to certain
blocks at the end of the disk. It means only that the space allocated by
zpool create, when it is given an entire disk, leaves a little bit of
wiggle room at the end that is NOT part of the zpool at all.
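
In other words, the size rules Quartz describes still apply. Trying to
replace a vdev with an even slightly smaller disk (hypothetical device
names again) is refused:

    # fails if ada3 is smaller than the vdev being replaced, with an
    # error along the lines of:
    #   cannot replace ada2 with ada3: device is too small
    zpool replace tank ada2 ada3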

I will see if I can dig up the documentation on this. Note that it is a
very small amount, as drives of the same nominal capacity vary very
little in real capacity.

--
Paul Kraus
paul@kraus-haus.org



