Date:      Tue, 23 Mar 2021 11:40:04 -0400
From:      Waitman Gobble <gobble.wa@gmail.com>
To:        FreeBSD <freebsd-questions@freebsd.org>
Subject:   Re: Disappearing files in FreeBSD 13.0-R2
Message-ID:  <CAFuo_fwc0d1U6DHWH7c1fPsxjXhiCWoDfr=2=bx052kWO87YSw@mail.gmail.com>
In-Reply-To: <202103230747.12N7lnl0000125@sdf.org>
References:  <202103230747.12N7lnl0000125@sdf.org>

On Tue, Mar 23, 2021 at 3:47 AM Scott Bennett <bennett@sdf.org> wrote:
>
>      On Sun, 14 Mar 2021 23:44:08 -0700 David Christensen <dpchrist@holgerdanske.com>
> wrote:
>
> On 3/14/21 6:22 PM, David Christensen wrote:
> >> On Sun, Mar 14, 2021 at 9:20 PM Waitman Gobble <gobble.wa@gmail.com> wrote:
> >>>
> >>> On Sun, Mar 14, 2021 at 8:00 PM David Christensen
> >>> <dpchrist@holgerdanske.com> wrote:
> >>>>
> >>>> On 3/14/21 4:03 PM, Waitman Gobble wrote:
> >>>>> I did a fresh install using ZFS with encryption. I copied the files on a
> >>>>> second drive (UFS) to /usr/home/backup (ZFS). I reformatted the second
> >>>>> drive ZFS and created a new pool "home" for that drive. It decided to mount
> >>>>> the drive as /home. AFAIK i never told the system to do that. But /home and
> >>>>> /usr/home are different, there is no link.
>
>      So far, what Waitman describes is the system doing exactly what it was supposed
> to do, given his instructions.  It created a pool called home.  Because every
> pool has a name, that name is the default mountpoint for the native root file
> system of the pool; in this case, "home" gets mounted at /home.  "home" seems an odd
> choice for a pool that might have other file systems in it that do not contain home
> directories, but if that's the way he wants his system to look, then so be it.  No
> symlink exists because he has not created one.
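>      A minimal sketch of what happens (the device and pool names here are
> illustrative only, not taken from Waitman's setup):
>
>       # zpool create home ada1p1.eli
>       # zfs get -H -o value mountpoint home
>       /home
>
> Unless a different mountpoint is given at creation time (e.g., with
> "zpool create -m /t home ada1p1.eli"), the pool's root file system lands
> at /<poolname>.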
> >>>>>
> >>>>> I can only see /usr/home/backup if i boot into single user mode. If i mount
> >>>>> read write or boot normally then /usr/home is empty.
> >>>>>
>      Here Waitman appears to have forgotten the basics of UNIX.  When a file system
> is mounted onto a directory, the entries in that directory are no longer visible,
> except to any process that currently has an open file descriptor for the directory.
> Files in that directory also become inaccessible, except for files that are already
> held open by existing processes.  When the file system that has been mounted there,
> thus "covering up" the contents of the directory, gets unmounted, the directory and
> its contents become visible again.
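>      The effect is easy to reproduce with a throwaway file system (a
> hypothetical md(4)-backed sketch, nothing from Waitman's machine):
>
>       # mkdir /tmp/demo && touch /tmp/demo/precious
>       # mdconfig -a -t swap -s 32m          # say it returns md0
>       # newfs /dev/md0 > /dev/null
>       # mount /dev/md0 /tmp/demo
>       # ls /tmp/demo                        # 'precious' is now hidden
>       .snap
>       # umount /tmp/demo
>       # ls /tmp/demo                        # ...and visible again
>       precious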
>      In single-user mode in Waitman's example, the home pool has yet to be imported,
> so it is not visible yet.  In starting up multi-user mode, all pools in the cache
> get imported.  However, I do not know why "zpool status" with no pools specified
> does not cause all pools in the cache (except the boot pool, of course) to be
> imported before displaying the status.  Did he manually export home before rebooting
> the system or before shutting down to single-user mode?  I.e., was home not listed
> in the cache at the time due to having been manually exported?
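>      For reference, the relevant commands in single-user mode would look
> something like this (assuming any GELI providers underneath are already
> attached):
>
>       # zpool import               # lists pools found on disk but not yet imported
>       # zpool import home          # imports the pool and mounts its file systems
>       # zpool export home          # detaches it again and drops it from the cache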
>
> >>>>> I copied the files to a usb drive.
> >>>>>
> >>>>> How do i delete the backup? It's taking up 100 GB. I can see them read only
>
>      You have to do that when another file system is not mounted on top of it.
>
> >>>>> and copy to usb drive, but as soon as i mount read write they disappear. I
>
>      Of course, they do.  That is the expected UNIX behavior.
>
> >>>>> did not import the home pool, it does not show up in the status command.
> >>>>
> >>>>
> >>>> Please run the following commands and post your console session:
> >>>>
> >>>>   [much stuff deleted --SB]
> >>>>
> >
> >
> >It looks like 'ada1' is a 500 GB drive with the ZFS pool 'home'.  ZFS
> >mounts this at '/home' by default.  This mount overlays the root
>
>      It mounts a ZFS *file system* at /home.  A pool cannot be mounted.
> A pool is *not* a file system.  A pool contains a native root file system,
> just as a disk partition with a UFS file system has a top (a.k.a. root)
> directory.  Additional file systems subordinate to the root file system
> are usually created, and those file systems' default mount points within
> the full FreeBSD file system are as subdirectories of the pool's root
> file system.  If one wants it mounted elsewhere, the elsewhere must be
> set as the file system's mountpoint at "zfs create" time or by "zfs set".
> Setting a mountpoint to "legacy" leaves the mount information to /etc/fstab
> or the mount(8) command.
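>      As a sketch (the file system name "home/staging" is only an example):
>
>       # zfs create -o mountpoint=/t home/staging      # set at creation time
>       # zfs set mountpoint=/usr/home home/staging     # or change it later
>       # zfs set mountpoint=legacy home/staging        # hand control to fstab/mount(8)
>
> with a corresponding /etc/fstab line for the legacy case:
>
>       home/staging    /usr/home       zfs     rw      0       0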
>
> >filesystem symbolic link '/home', but I do not understand why
> >'/usr/home/backup' disappears.
> >
>      As noted above, it disappears when a file system is mounted on top
> of it, thus "covering it up".
> >
> >I would do the following as root in single-user mode:
> >
> >1.  Record your console session with script(1).  Exact details will be
> >useful later; sooner if something goes wrong.
> >
> >2.  Take a recursive snapshot of the 'zroot' and 'home' ZFS filesystems.
> >  Pick a meaningful SNAPNAME (I use date/time strings):
> >
> >       # zfs snapshot -r zroot@SNAPNAME home@SNAPNAME
> >
> >3.  Unmount the ZFS filesystem 'zroot/usr/home', make it read-only,
> >change its mountpoint, and mount it:
> >
> >       # zfs unmount zroot/usr/home
> >
> >       # zfs set readonly=on zroot/usr/home
> >
> >       # zfs set mountpoint=/usr/oldhome zroot/usr/home
> >
> >       # zfs mount zroot/usr/home
> >
> >     /usr/oldhome and /usr/oldhome/backup should now be visible.
> >
> >     (If you previously created a ZFS filesystem
> >'zroot/usr/home/backup', repeat the first, second, and fourth steps
> >above; adjusting the filesystem name.  The 'mountpoint' property should
> >be inherited.)
> >
> >4.  Set the mountpoint of the ZFS pool 'home' and mount it:
> >
> >       # zfs set mountpoint=/usr/home home
> >
> >       # zfs mount home
> >
> >     /usr/home should now be visible.
>
>      Yes.  Doing it with a symlink can work, but is likely to cause other
> inconveniences, so setting the mountpoint to put it where one wants is the
> better way to go.
> >
> >5.  It is recommended practice not to put files and directories into the
> >base filesystem of a ZFS pool (e.g. '/usr/home') -- it is better to
> >create ZFS filesystems at the base level of a pool and put files,
> >directories, and/or additional ZFS filesystems into those.  Assuming
> >'/usr/oldhome/backup' represents your old home directory, create a ZFS
> >filesystem for your new home directory:
> >
> >       # zfs create home/username
> >
> >     Do the same when adding more accounts in the future.
> >
> >6.  Assuming '/usr/oldhome/backup' represents one user account, copy its
> >contents to '/usr/home/username'.
> >
> >7.  Reboot and check everything.
> >
> >8.  Wait a while (hours, days, weeks).  When you are certain everything
> >is okay, destroy the old home filesystem:
> >
> >       # zfs destroy -r zroot/usr/home
> >
> >     This should reclaim space in the 'zroot' pool and filesystem.
> >
>      This is true, but can leave you without basics available in single-
> user mode.  It is better to leave a basic home directory there that belongs
> to the system administrator and is accessible from the single-user shell
> without mounting anything more than /usr.  IIRC, home is just a directory
> in /usr in a system as installed by the FreeBSD installer's ZFS option.
>      Here is what I've done as an example, using two 932 GB boot drives.
>
>
> [hellas] 101 % gpart show ada0 ada1
> =>        40  1953525088  ada0  GPT  (932G)
>           40        1024     1  freebsd-boot  (512K)
>         1064         984        - free -  (492K)
>         2048   637534208     2  freebsd-zfs  (304G)
>    637536256    25163776     4  freebsd-swap  (12G)
>    662700032        1064        - free -  (532K)
>    662701096  1048576000     6  freebsd-zfs  (500G)
>   1711277096     4194304     8  freebsd-ufs  (2.0G)
>   1715471400   184549376        - free -  (88G)
>   1900020776    52428800    13  freebsd-swap  (25G)
>   1952449576     1075552        - free -  (525M)
>
> =>        40  1953525088  ada1  GPT  (932G)
>           40        1024     1  freebsd-boot  (512K)
>         1064         984        - free -  (492K)
>         2048   637534208     2  freebsd-zfs  (304G)
>    637536256    25163776     4  freebsd-swap  (12G)
>    662700032        1064        - free -  (532K)
>    662701096  1048576000     6  freebsd-zfs  (500G)
>   1711277096     4194304     8  freebsd-ufs  (2.0G)
>   1715471400   184549376        - free -  (88G)
>   1900020776    52428800    13  freebsd-ufs  (25G)
>   1952449576     1075552        - free -  (525M)
>
> [hellas] 102 % gpart show -l ada0 ada1
> =>        40  1953525088  ada0  GPT  (932G)
>           40        1024     1  gptboot0  (512K)
>         1064         984        - free -  (492K)
>         2048   637534208     2  system0  (304G)
>    637536256    25163776     4  swap0  (12G)
>    662700032        1064        - free -  (532K)
>    662701096  1048576000     6  local1  (500G)
>   1711277096     4194304     8  dbtor0  (2.0G)
>   1715471400   184549376        - free -  (88G)
>   1900020776    52428800    13  crashdump  (25G)
>   1952449576     1075552        - free -  (525M)
>
> =>        40  1953525088  ada1  GPT  (932G)
>           40        1024     1  gptboot1  (512K)
>         1064         984        - free -  (492K)
>         2048   637534208     2  system1  (304G)
>    637536256    25163776     4  swap1  (12G)
>    662700032        1064        - free -  (532K)
>    662701096  1048576000     6  local0  (500G)
>   1711277096     4194304     8  dbtor1  (2.0G)
>   1715471400   184549376        - free -  (88G)
>   1900020776    52428800    13  varcrash  (25G)
>   1952449576     1075552        - free -  (525M)
>
> [hellas] 103 % zpool status local system
>   pool: local
>  state: ONLINE
>   scan: scrub repaired 0 in 0 days 01:34:26 with 0 errors on Mon Mar  1 16:41:52 2021
> config:
>
>         NAME            STATE     READ WRITE CKSUM
>         local           ONLINE       0     0     0
>           mirror-0      ONLINE       0     0     0
>             ada1p6.eli  ONLINE       0     0     0  (100% initialized, completed at Wed Jan 20 19:09:12 2021)
>             ada0p6.eli  ONLINE       0     0     0  (100% initialized, completed at Wed Jan 20 19:09:12 2021)
>
> errors: No known data errors
>
>   pool: system
>  state: ONLINE
>   scan: scrub repaired 0 in 0 days 02:07:46 with 0 errors on Mon Mar  1 07:16:52 2021
> config:
>
>         NAME            STATE     READ WRITE CKSUM
>         system          ONLINE       0     0     0
>           mirror-0      ONLINE       0     0     0
>             ada0p2.eli  ONLINE       0     0     0  (100% initialized, completed at Sun May  5 07:37:22 2019)
>             ada1p2.eli  ONLINE       0     0     0  (100% initialized, completed at Sun May  5 07:52:42 2019)
>
> errors: No known data errors
> [hellas] 104 % zfs list -r system local
> NAME                              USED  AVAIL  REFER  MOUNTPOINT
> local                             405G  75.7G    88K  legacy
> local/archives                    155G  75.7G   125G  legacy
> local/home                        250G  75.7G   245G  legacy
> local/ltmp                        200K  75.7G    88K  /ltmp
> system                            284G  8.75G    88K  /system
> system/ROOT                       161G  8.75G    88K  none
> system/ROOT/hellas.12.1.r361435   559M  8.75G  47.4G  /
> system/ROOT/hellas.r364474       1.05M  8.75G  59.5G  /
> system/ROOT/hellas.r367545        113M  8.75G  59.5G  /
> system/ROOT/hellas.r368269       19.5G  8.75G  59.7G  /
> system/ROOT/hellas.r369178         12K  8.75G  59.2G  /
> system/ROOT/hellas.r369409        140G  8.75G  59.6G  /
> system/tmp                        148K  8.75G   148K  /tmp
> system/usr                        123G  8.75G    88K  /usr
> system/usr/ports                  116G  8.75G  86.5G  /usr/ports
> system/usr/src                   2.61G  8.75G  1.93G  /usr/src
> system/usr/src12.2r              1.40G  8.75G  1.40G  /usr/src12.2r
> system/usr/src12s                3.21G  8.75G  3.21G  /usr/src12
> system/var                        150M  8.75G    88K  /var
> system/var/audit                   88K  8.75G    88K  /var/audit
> system/var/crash                   88K  8.75G    88K  /var/crash
> system/var/log                   21.7M  8.75G  7.55M  /var/log
> system/var/mail                  20.4M  8.75G  5.71M  /var/mail
> system/var/tmp                    108M  8.75G  1.13M  /var/tmp
>
>      One caveat about the above configuration is that the brain-dead FreeBSD
> ZFS installer option makes it damned nearly impossible to do this with just
> one disk drive.  Fortunately, I had a pair of drives to use to set this up.
> To do it on a single-drive system appears to require that another drive be
> attached at least temporarily somehow, e.g., by eSATA or USB or even FireWire.
>      However, it works well for my use case.  Both local and system are ZFS
> pools with one top-level mirror vdev apiece.  (There is no EFI partition
> because the machine is from 2008 and has a QX9650 CPU on a Dell proprietary
> motherboard.)  I keep only FreeBSD stuff, including both the base system and
> installed ports, in the system pool.  All other local stuff goes into a file
> system in the local pool or in pools or UFS2 file systems on other drives.  This
> structure not only allows easier management of system vs. other data, but allows
> placement of the swapping/paging partitions closer to the middle of the drives
> rather than at one end or the other.  I also like the pool names better because
> they are more meaningful to me.  The standalone dump partition and the UFS2 file
> system holding /var/crash have identical locations and sizes, one on each disk.
> An encrypted GEOM mirror device holds a UFS2 partition for tor's special data in
> order to allow tor to overwrite critical files (e.g., keys), which would not be
> possible in a COW file system like ZFS.  I do use legacy mountpoints for my own
> convenience in temporarily mounting a few file systems in abnormal places, but
> yes, it would be possible to use ZFS mountpoints in these cases, too.  Using
> legacy mountpoints for a few situations also makes it easier to see where
> certain things are with a quick glance at /etc/fstab.  Your and Waitman's use
> cases and habits will undoubtedly differ from mine.
>      Another thing I routinely do is to label UFS2 file systems when I create
> them with "newfs -L".  That way they can be mounted by name as /dev/ufs/${name},
> which may be an encrypted partition or another type of GEOM device.  At mount
> time one need not worry about knowing where it is physically or whether it is
> encrypted or at which level.  If I forget to label it when running newfs, I can
> fix it with "tunefs -L".  Anyway, I do most things, except FAT32 on flash
> drives, as ZFS now.  Only a few things with very special needs (e.g., allergy to
> COW, ccache trees, portmaster's WRKDIRPREFIX, system dump device) end up in some
> other form.
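>      As a sketch (the label and device names here are only examples):
>
>       # newfs -L scratch /dev/ada2p7.eli
>       # mount /dev/ufs/scratch /mnt
>
> and, if the label was forgotten at newfs time (with the file system
> unmounted):
>
>       # tunefs -L scratch /dev/ada2p7.eli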
>      Don't forget UNIX basics just because you're dealing with a newer type of
> file system.  mount(8) and umount(8) basically work the same way and do the same
> things as they always have.
>
>
>                                   Scott Bennett, Comm. ASMELG, CFIAG
> **********************************************************************
> * Internet:   bennett at sdf.org   *xor*   bennett at freeshell.org  *
> *--------------------------------------------------------------------*
> * "A well regulated and disciplined militia, is at all times a good  *
> * objection to the introduction of that bane of all free governments *
> * -- a standing army."                                               *
> *    -- Gov. John Hancock, New York Journal, 28 January 1790         *
> **********************************************************************


"Here Waitman appears to have forgotten the basics of UNIX." I'm not
clear about what you are talking about.

I created a second pool I named "home" that was encrypted with GELI
but it was not imported and was not mounted. Apparently merely naming
a pool "home" makes it /usr/home in ZFS. I presumed I could set the
mount point to whatever I wanted. My original plan was to mount it
somewhere like /t, copy the files, then change the mount point to
/usr/home. But before I got that far I noticed a problem with
/usr/home...

However, after adding the secondary drive (but before actually
importing or mounting it), the files in /usr/home were gone when the
primary drive was mounted rw, but the files were there when the drive
was mounted ro in single-user mode.

Deleting /home solved the problem. The secondary drive was not
"unlocked", imported, or mounted yet. My first problem was that the
files I put in /usr/home/backup were not there. When I noticed they
were there in single-user mode ("ro"), I copied them off to an
external drive but could not delete them. As soon as I mounted the
drive rw they vanished. After I deleted the /home directory, the files
in /usr/home appeared in rw and I could delete them.

So the weirdness was that /home was a directory I don't recall
creating (but who else would have done it, right?), and its presence
apparently made the files in /usr/home vanish after the drive was
mounted rw. The secondary drive did not seem to be causing the
problem.

What's basic? I've done this many times with UFS systems but maybe
this is the first time with ZFS. Maybe creating a pool named "home"
mucked things up for me. The solution was simply deleting /home, an
empty directory. After deleting the backup files I decrypted the
second drive, imported it, and it magically showed up at /usr/home
where I wanted it. Then I made a symlink from /usr/home to /home, just
because it's always been that way as far as I remember. When I have
time I'll check out that ZFS book somebody pointed out.
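In case it helps anyone following along, the link I mean is just the
usual installer-style one (paths are the common convention, adjust to
taste):

  # rmdir /home            # only if /home is an empty directory
  # ln -s usr/home /home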

-- 
Waitman Gobble


