Date:      Thu, 28 Jan 2016 11:29:41 +0100
From:      Borja Marcos <borjam@sarenet.es>
To:        Marie Helene Kvello-Aune <marieheleneka@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS bug: zpool expandz says 16.0E, clearly wrong
Message-ID:  <0614ACAB-DFF0-4CBE-8AA1-4EAE4668DBA9@sarenet.es>
In-Reply-To: <CALXRTbdJi1QZW14sYqX7o7KoeX3ht9w_KqPQ-LO0GSzWi1m62g@mail.gmail.com>
References:  <CALXRTbdJi1QZW14sYqX7o7KoeX3ht9w_KqPQ-LO0GSzWi1m62g@mail.gmail.com>


> On 26 Jan 2016, at 17:17, Marie Helene Kvello-Aune <marieheleneka@gmail.com> wrote:
> 
> I've stumbled across a curiosity with my zpool. The command 'zpool list'
> states that the EXPANDSZ property/value is 16.0E. This is clearly incorrect.
> :)

Or not, maybe your encryption+compression has triggered a Shannon Singularity! ;)

> The pool consists of a single RaidZ2 vdev of 6 drives, and two cache drives.
> No log device. Executing 'zpool list -v' shows that each member of the
> RaidZ2 has an 'EXPANDSZ' value of '-', as expected. But the RaidZ2 itself,
> and the pool, have an EXPANDSZ value of 16.0E.

Now, seriously. I’ve seen odd size reports for cache drives before. Can you try running “zdb”
and see where the wrong size is reported?
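
By the way, 16.0E is suspicious in itself: it is exactly 2^64 bytes, which is the sort of value
you see when a size calculation wraps below zero in an unsigned 64-bit counter. Something like
this should show where the odd size lives ("yourpool" and the device path below are only
placeholders for your actual pool and cache drive):

    zdb -C yourpool        (dumps the cached pool configuration, including each vdev's asize)
    zdb -l /dev/da6        (prints the on-disk vdev labels, which also carry an asize)

If one of the asize values there is bogus, that should point at the culprit vdev.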

Maybe detaching and reattaching the cache drives from the pool might help, in case something
related to the cache drives is creating the confusion.
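
If you want to try that, something along these lines should work; da6 and da7 are placeholders
for whatever device names "zpool status" shows for your cache drives:

    zpool remove yourpool da6 da7
    zpool add yourpool cache da6 da7

Removing an L2ARC device is harmless: it only holds cached copies of data, so nothing is lost
and the cache simply warms up again after you re-add it.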





Borja.