Date:      Wed, 10 Feb 2021 16:37:18 +0000
From:      Mike Clarke <jmc-freebsd2@milibyte.co.uk>
To:        freebsd-questions@freebsd.org
Subject:   Re: Bootenv containing several filesystems
Message-ID:  <2068084.5gTYqTD1LS@curlew>
In-Reply-To: <CADqw_gLg8uG8jXTTFe=ZODwyxDSov6os44_P7No+QOtuwbapQg@mail.gmail.com>
References:  <CADqw_gKG6ovTuN7bZvYy7PCydfCXH4M2fw68YLmLvZhxi-g2xw@mail.gmail.com> <2476830.FrFBg55ix7@curlew> <CADqw_gLg8uG8jXTTFe=ZODwyxDSov6os44_P7No+QOtuwbapQg@mail.gmail.com>

On Tuesday, 9 February 2021 19:10:34 GMT Michael Schuster wrote:
> On Tue, Feb 9, 2021 at 5:30 PM Mike Clarke <jmc-freebsd2@milibyte.co.uk> 
> wrote:
> > On Tuesday, 9 February 2021 09:53:27 GMT Matthew Seaman wrote:
> > > There's an important difference between beadm and bectl which seems
> > > relevant here.  beadm defaults to accepting a tree of ZFSes as a boot
> > > environment, whereas bectl only applies to the ZFS at the top level of
> > > the boot environment unless you use the -r flag.
> > 
> > That probably accounts for a discrepancy that I always see between
> > beadm list and bectl list for my BE which has child datasets:
> > 
> > curlew:/tmp% beadm list
> > BE        Active Mountpoint  Space Created
> > fbsd12.1y -      -            1.9G 2020-12-20 20:52
> > fbsd12.2a -      -          133.0M 2020-12-24 11:20
> > fbsd12.2b -      -           18.5M 2021-01-02 09:50
> > fbsd12.2c -      -           11.7M 2021-01-12 09:55
> > fbsd12.2d NR     /           39.4G 2021-02-05 10:46
> > curlew:/tmp% bectl list
> > BE        Active Mountpoint Space Created
> > fbsd12.1y -      -          61.3M 2020-12-20 20:52
> > fbsd12.2a -      -          6.97M 2020-12-24 11:20
> > fbsd12.2b -      -          2.80M 2021-01-02 09:50
> > fbsd12.2c -      -          5.91M 2021-01-12 09:55
> > fbsd12.2d NR     /          39.5G 2021-02-05 10:46
> 
> strangely, I don't see such a difference:
> 
> bectl:
> BE_20210205_121021_CURRENT14   -      -          81.6M 2021-02-05 12:10
> BE_20210205_181224_CURRENT14   -      -          49.9M 2021-02-05 18:12
> BE_20210206_102540_CURRENT14   -      -          153M  2021-02-06 10:25
> BE_20210206_175312_CURRENT14   NR     /          30.9G 2021-02-06 17:53
> BE_20210208_204901_CURRENT_14  -      -          31.9M 2021-02-08 20:49
> 
> beadm:
> BE_20210205_121021_CURRENT14   -      -           81.6M 2021-02-05 12:10
> BE_20210205_181224_CURRENT14   -      -           49.9M 2021-02-05 18:12
> BE_20210206_102540_CURRENT14   -      -          152.3M 2021-02-06 10:25
> BE_20210206_175312_CURRENT14   NR     /           30.9G 2021-02-06 17:53
> BE_20210208_204901_CURRENT_14  -      -           31.9M 2021-02-08 20:49
> 
> as you can see, the difference is negligible ...
> 
> is there some zpool or zfs property I need to set so that be(ctl|adm) (with
> appropriate options if need be) will create a recursive boot environment?

A possible explanation for the lack of any discrepancy on your system is
that the changes between your BEs, which were all created within a few
days of each other, were relatively small. In my case there was a
significant change between fbsd12.1y and fbsd12.2a when I upgraded from
12.1-RELEASE to 12.2-RELEASE, and there would have been a big change in
the child dataset for /usr which I suspect bectl is not including in the
total.

All my BEs were created with beadm, which is always recursive.
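
To answer your question: as far as I know there's no pool or dataset
property to set. beadm recurses by default, while bectl only does so if
you ask for it when the BE is created, e.g.

  bectl create -r mynewBE

('mynewBE' is just a placeholder; without -e the new BE is cloned from
the currently booted one, if I'm reading the man page correctly.)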

Most of the options for the datasets in my BEs are the defaults, apart
from noatime and compression on all datasets and noexec and nosuid on a
few datasets.

Here are the filesystems for a typical BE:

NAME                            USED  AVAIL  REFER  MOUNTPOINT
ssd/ROOT/fbsd12.2d             39.5G  75.6G  1.54G  /
ssd/ROOT/fbsd12.2d/usr         31.2G  75.6G  10.1G  /usr
ssd/ROOT/fbsd12.2d/usr/ports   8.45G  75.6G  6.84G  /usr/ports
ssd/ROOT/fbsd12.2d/usr/src     3.52G  75.6G  1.46G  /usr/src
ssd/ROOT/fbsd12.2d/var         6.50G  75.6G  1.63G  /var
ssd/ROOT/fbsd12.2d/var/db      3.13G  75.6G  1.93G  /var/db
ssd/ROOT/fbsd12.2d/var/db/pkg   776M  75.6G  83.6M  /var/db/pkg
ssd/ROOT/fbsd12.2d/var/empty    104K  75.6G    96K  /var/empty
ssd/ROOT/fbsd12.2d/var/tmp     1.03G  75.6G   128K  /var/tmp

Some datasets are created outside the BE to preserve data when switching 
between BEs.

NAME                      USED  AVAIL  REFER  MOUNTPOINT
home/DATA/var            2.87G  65.5G    31K  none
home/DATA/var/cache       867M  65.5G    31K  none
home/DATA/var/cache/pkg   867M  65.5G   866M  /var/cache/pkg
home/DATA/var/db         1.76G  65.5G    31K  none
home/DATA/var/db/mysql   1.76G  65.5G  1.01G  /var/db/mysql
home/DATA/var/log         275M   749M   201M  /var/log
home/DATA/var/mail         52K  65.5G    32K  /var/mail
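
For anyone wanting a similar setup: these are ordinary datasets created
outside the ROOT hierarchy, roughly along these lines (the options are
reconstructed from the listing above; /var/log appears to carry a 1G
quota, hence the smaller AVAIL):

  zfs create -o mountpoint=none home/DATA/var
  zfs create -o mountpoint=/var/log -o quota=1g home/DATA/var/log
  zfs create -o mountpoint=none home/DATA/var/db
  zfs create -o mountpoint=/var/db/mysql home/DATA/var/db/mysql

Because they don't live under ssd/ROOT they are untouched when a BE is
snapshotted, cloned or destroyed.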

I've been experimenting by switching between BEs created with beadm and
bectl in an attempt to identify the reason for the discrepancies, but I
think that's just muddied the water rather than shedding light on
things.

After creating two BEs with 'bectl create -r' and booting into the
second one after activating it with bectl, I would have expected the
space for the active BE, bectl-test2, to be about 37.7G, but that space
is still attributed to the previously active BE, fbsd12.2d.
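
For the record, that sequence was essentially:

  bectl create -r bectl-test
  bectl create -r bectl-test2
  bectl activate bectl-test2
  shutdown -r now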

The BEs are no longer displayed in date order by bectl, probably because
beadm and bectl use different date formats when naming the snapshots
they create.
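
The underlying snapshots can be compared directly with zfs, which can
sort on the real creation time rather than the name:

  zfs list -t snapshot -o name,creation -s creation -r ssd/ROOT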

curlew:/home/mike% bectl list
BE          Active Mountpoint Space Created
bectl-test  -      -          1.93M 2021-02-10 09:54
bectl-test2 NR     /          1.82G 2021-02-10 10:10
fbsd12.1y   -      -          61.3M 2020-12-20 20:52
fbsd12.2a   -      -          6.97M 2020-12-24 11:20
fbsd12.2b   -      -          2.85M 2021-01-02 09:50
fbsd12.2c   -      -          5.91M 2021-01-12 09:55
fbsd12.2d   -      -          37.7G 2021-02-05 10:46
curlew:/home/mike% beadm list
BE          Active Mountpoint  Space Created
fbsd12.1y   -      -            1.9G 2020-12-20 20:52
fbsd12.2a   -      -          189.7M 2020-12-24 11:20
fbsd12.2b   -      -           84.5M 2021-01-02 09:50
fbsd12.2c   -      -           11.7M 2021-01-12 09:55
fbsd12.2d   -      -           37.7G 2021-02-05 10:46
bectl-test  -      -            2.2M 2021-02-10 09:54
bectl-test2 NR     /            1.8G 2021-02-10 10:10

I then rebooted after using beadm to create and activate beadm-test.
Both bectl and beadm then showed 39.5G for the space attributed to the
active BE, beadm-test.
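
That step was simply (give or take the exact reboot command):

  beadm create beadm-test
  beadm activate beadm-test
  shutdown -r now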

curlew:/home/mike% bectl list
BE          Active Mountpoint Space Created
beadm-test  NR     /          39.5G 2021-02-10 10:20
bectl-test  -      -          924K  2021-02-10 09:54
bectl-test2 -      -          1.07M 2021-02-10 10:10
fbsd12.1y   -      -          61.3M 2020-12-20 20:52
fbsd12.2a   -      -          6.97M 2020-12-24 11:20
fbsd12.2b   -      -          2.85M 2021-01-02 09:50
fbsd12.2c   -      -          5.91M 2021-01-12 09:55
fbsd12.2d   -      -          1.18M 2021-02-05 10:46
curlew:/home/mike% beadm list
BE          Active Mountpoint  Space Created
fbsd12.1y   -      -            1.9G 2020-12-20 20:52
fbsd12.2a   -      -          189.7M 2020-12-24 11:20
fbsd12.2b   -      -           84.5M 2021-01-02 09:50
fbsd12.2c   -      -           11.7M 2021-01-12 09:55
fbsd12.2d   -      -            1.2M 2021-02-05 10:46
bectl-test  -      -            1.8M 2021-02-10 09:54
bectl-test2 -      -            2.1M 2021-02-10 10:10
beadm-test  NR     /           39.5G 2021-02-10 10:20

The next step was to use bectl to reactivate bectl-test2 and reboot,
which then left 37.7G attributed to the now inactive beadm-test.

curlew:/home/mike% bectl list
BE          Active Mountpoint Space Created
beadm-test  -      -          37.7G 2021-02-10 10:20
bectl-test  -      -          924K  2021-02-10 09:54
bectl-test2 NR     /          1.82G 2021-02-10 10:10
fbsd12.1y   -      -          61.3M 2020-12-20 20:52
fbsd12.2a   -      -          6.97M 2020-12-24 11:20
fbsd12.2b   -      -          2.85M 2021-01-02 09:50
fbsd12.2c   -      -          5.91M 2021-01-12 09:55
fbsd12.2d   -      -          1.18M 2021-02-05 10:46
curlew:/home/mike% beadm list
BE          Active Mountpoint  Space Created
fbsd12.1y   -      -            1.9G 2020-12-20 20:52
fbsd12.2a   -      -          189.7M 2020-12-24 11:20
fbsd12.2b   -      -           84.5M 2021-01-02 09:50
fbsd12.2c   -      -           11.7M 2021-01-12 09:55
fbsd12.2d   -      -            1.2M 2021-02-05 10:46
bectl-test  -      -            1.8M 2021-02-10 09:54
bectl-test2 NR     /            1.8G 2021-02-10 10:10
beadm-test  -      -           37.7G 2021-02-10 10:20

I then used beadm to reactivate bectl-test2, running 'beadm activate
beadm-test' followed by 'beadm activate bectl-test2', and rebooted. The
result was that both bectl and beadm attributed 39.5G to the active BE,
bectl-test2.

curlew:/home/mike% bectl list
BE          Active Mountpoint Space Created
beadm-test  -      -          1.45M 2021-02-10 10:20
bectl-test  -      -          924K  2021-02-10 09:54
bectl-test2 NR     /          39.5G 2021-02-10 10:10
fbsd12.1y   -      -          61.3M 2020-12-20 20:52
fbsd12.2a   -      -          6.97M 2020-12-24 11:20
fbsd12.2b   -      -          2.85M 2021-01-02 09:50
fbsd12.2c   -      -          5.91M 2021-01-12 09:55
fbsd12.2d   -      -          1.18M 2021-02-05 10:46
curlew:/home/mike% beadm list
BE          Active Mountpoint  Space Created
fbsd12.1y   -      -            1.9G 2020-12-20 20:52
fbsd12.2a   -      -          189.7M 2020-12-24 11:20
fbsd12.2b   -      -           84.5M 2021-01-02 09:50
fbsd12.2c   -      -           11.7M 2021-01-12 09:55
fbsd12.2d   -      -            1.2M 2021-02-05 10:46
bectl-test  -      -            1.8M 2021-02-10 09:54
bectl-test2 NR     /           39.5G 2021-02-10 10:10
beadm-test  -      -            2.5M 2021-02-10 10:20

Surprisingly, using bectl to repeat the process with 'bectl activate
beadm-test' followed by 'bectl activate bectl-test2' resulted in 39.5G
still being attributed to bectl-test2.

Throughout all these tests bectl consistently showed the size of
fbsd12.1y as 61.3M while beadm showed 1.9G.

So there appears to be a significant difference in the way beadm and
bectl account for the space used by BEs with child datasets. Perhaps
this only shows up when the BEs are not all created with the same
program. I'll probably start using bectl to create the next few BEs and
see what happens to the space values after the older beadm BEs are
destroyed.
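
A useful cross-check would be to ask ZFS itself how the space breaks
down, for example:

  zfs list -o space -r ssd/ROOT/fbsd12.1y

which splits USED into snapshot, dataset and child portions (USEDSNAP,
USEDDS, USEDCHILD) and should show which of the two figures is nearer
the truth.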

Before long I'll be deleting fbsd12.1y, and it will be interesting to
see whether 1.9G or 61.3M gets freed if I use bectl to destroy it.
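
When the time comes that should just be:

  bectl destroy fbsd12.1y

possibly with the -o flag as well, which (if I'm reading the man page
correctly) also destroys the origin snapshot, and that may well be where
the difference is hiding.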

-- 
Mike Clarke




