Date:      Sat, 26 Sep 2015 08:42:56 +0300
From:      Pekka Järvinen <pekka.jarvinen@gmail.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: Missing free space from new raidz2 zpool
Message-ID:  <CABvnMcgVqp2uXXtUpEdKNPgrgDR8+mXNnG1PB5Tz-Q5WaYM-KA@mail.gmail.com>
In-Reply-To: <20150926035218.GF3478@server.rulingia.com>
References:  <CABvnMcj9JTgVCXcWhy4KEb8gPq5gSQgQ1ba10mQ_LpVkrxZpjg@mail.gmail.com> <20150926035218.GF3478@server.rulingia.com>

I installed FreeBSD-10.2-STABLE-amd64-20150917-r287929-memstick.img and it
shows 28.1T free, so this is possibly a bug in FreeBSD 10.2-RELEASE-p2 #0 r287260M.

I created bug report
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203330

I also tried OpenIndiana's ZFS implementation on the same machine; it
gives 28.5T free and lists the drives as 8001.49GB.


2015-09-26 6:52 GMT+03:00 Peter Jeremy <peter@rulingia.com>:

> On 2015-Sep-24 19:36:50 +0300, Pekka Järvinen <pekka.jarvinen@gmail.com>
> wrote:
> >I bought 6 new 8 TB drives and created new raidz2 pool with these drives.
> >
> >My question is: why are zfs list and df showing only 14.5T free? Shouldn't
> >it be closer to 30T? Or is old zfs metadata somehow lurking around and
> >zfs is reading that? Bug?
>
> Your pool seems to be sized correctly but the filesystem can't see all
> the space.  If you're concerned about old metadata, destroy your new
> pool and then dd zeroes over the first and last few MB of each drive -
> which is where ZFS stores its metadata:
>
> for i in /dev/ada{0,1,2,3,4,5};do
>   dd if=/dev/zero of=$i bs=64k count=32           # zero the first 2 MB
>   dd if=/dev/zero of=$i bs=64k oseek=122094000    # zero the last ~10 MB
> done
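[As a sanity check on that oseek value: assuming the 7814026584k drive size
used in the simulation below, a quick bit of shell arithmetic shows that
oseek=122094000 starts about 10 MiB before the end of the disk, which is
more than enough to cover the ZFS vdev labels stored at the device tail:]

```shell
# Sketch: how much of the drive tail does oseek=122094000 zero out?
# Assumes the 7814026584 KiB drive size from Peter's simulation below.
bytes=$((7814026584 * 1024))   # drive size in bytes
blocks=$((bytes / 65536))      # total 64 KiB blocks on the drive: 122094165
echo $(( (blocks - 122094000) * 64 ))   # KiB zeroed at the end: 10560 (~10 MiB)
```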
>
> ># zpool iostat storage2
> >               capacity     operations    bandwidth
> >pool        alloc   free   read  write   read  write
> >----------  -----  -----  -----  -----  -----  -----
> >storage2     900K  43.5T      0      1      0  7.84K
>
> The 43.5T looks correct.  "zpool list" would be a better command.
>
> ># zdb storage2
> >
> >Cached configuration:
> ...
> >                asize: 48009350479872
> ...
> >MOS Configuration:
> ...
> >                asize: 48009350479872
>
> That's also correct.
>
> As an experiment, I simulated your configuration:
> # cd /tmp
> # mkdir zfs
> # cd zfs
> # for i in 0 1 2 3 4 5;do truncate -s 7814026584k d$i;done
> # zpool create storage2 raidz2 /tmp/zfs/d?
> # zpool list storage2
> NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> storage2  43.5T   165K  43.5T         -     0%     0%  1.00x  ONLINE  -
> # zfs list storage2
> NAME       USED  AVAIL  REFER  MOUNTPOINT
> storage2  88.9K  28.1T  32.0K  /storage2
> # zfs get all storage2
> NAME      PROPERTY              VALUE                  SOURCE
> storage2  type                  filesystem             -
> storage2  creation              Sat Sep 26 13:31 2015  -
> storage2  used                  88.9K                  -
> storage2  available             28.1T                  -
> storage2  referenced            32.0K                  -
> storage2  compressratio         1.00x                  -
> storage2  mounted               yes                    -
> ...
>
> --
> Peter Jeremy
>



-- 
Pekka Järvinen


