Date:      Mon, 2 May 2016 22:42:47 +0200
From:      Sebastian Wolfgarten <sebastian@wolfgarten.com>
To:        Matthias Fechner <idefix@fechner.net>, freebsd-questions@freebsd.org
Subject:   Re: ZFS migration - New pool lost after reboot
Message-ID:  <6E1B2BCF-3B5C-4D18-9152-FE68711B2B43@wolfgarten.com>
In-Reply-To: <2D936447-34C1-471B-8787-8075B19F8B28@wolfgarten.com>
References:  <0A383C91-FCBA-4B9E-A95A-157A13708125@wolfgarten.com> <72087b33-53f9-e298-1441-4988c2a5ecb3@fechner.net> <2D936447-34C1-471B-8787-8075B19F8B28@wolfgarten.com>

Hi,

just to follow up on my own email from earlier: I managed to get the new
pool booting by amending /boot/loader.conf as follows:

root@vm:~ # cat /boot/loader.conf
vfs.root.mountfrom="zfs:newpool/ROOT/default"
kern.geom.label.gptid.enable="2"
zfs_load="YES"
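
For reference, the bootfs property can be re-checked at any time (I
assume it is still set to newpool/ROOT/default from step 4 in my earlier
mail below):

zpool get bootfs newpool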

However, when rebooting I can see that it is using the new pool, but I am
running into issues as it can't seem to find some essential files in
/usr:

Mounting local file systems
eval: zfs not found
eval: touch not found
/etc/rc: cannot create /dev/null: No such file or directory
/etc/rc: date: not found

Here is what "zfs list" looks like:

root@vm:~ # zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
newpool                385M  5.41G    19K  /mnt/zroot
newpool/ROOT           385M  5.41G    19K  /mnt
newpool/ROOT/default   385M  5.41G   385M  /mnt
newpool/tmp             21K  5.41G    21K  /mnt/tmp
newpool/usr             76K  5.41G    19K  /mnt/usr
newpool/usr/home        19K  5.41G    19K  /mnt/usr/home
newpool/usr/ports       19K  5.41G    19K  /mnt/usr/ports
newpool/usr/src         19K  5.41G    19K  /mnt/usr/src
newpool/var            139K  5.41G    19K  /mnt/var
newpool/var/audit       19K  5.41G    19K  /mnt/var/audit
newpool/var/crash       19K  5.41G    19K  /mnt/var/crash
newpool/var/log         44K  5.41G    44K  /mnt/var/log
newpool/var/mail        19K  5.41G    19K  /mnt/var/mail
newpool/var/tmp         19K  5.41G    19K  /mnt/var/tmp
zroot                  524M  26.4G    96K  /zroot
zroot/ROOT             522M  26.4G    96K  none
zroot/ROOT/default     522M  26.4G   522M  /
zroot/tmp             74.5K  26.4G  74.5K  /tmp
zroot/usr              384K  26.4G    96K  /usr
zroot/usr/home          96K  26.4G    96K  /usr/home
zroot/usr/ports         96K  26.4G    96K  /usr/ports
zroot/usr/src           96K  26.4G    96K  /usr/src
zroot/var              580K  26.4G    96K  /var
zroot/var/audit         96K  26.4G    96K  /var/audit
zroot/var/crash         96K  26.4G    96K  /var/crash
zroot/var/log          103K  26.4G   103K  /var/log
zroot/var/mail          96K  26.4G    96K  /var/mail
zroot/var/tmp         92.5K  26.4G  92.5K  /var/tmp

I am assuming I have to amend the ZFS parameters for the mount points
but I can't seem to figure out what's wrong. I tried things like:

zfs set mountpoint=/usr newpool/usr
zfs set mountpoint=/tmp newpool/tmp
zfs set mountpoint=/var newpool/var

Unfortunately this did not solve the issue. Any ideas?
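
In case it helps, here is the fuller sequence I would expect to be
needed. This is only a sketch and assumes the /mnt prefixes in the
listing above come from the -R /mnt altroot used at import time, so that
only the stored mountpoint properties and the cache file matter for the
next boot:

zfs set mountpoint=/ newpool/ROOT/default
zfs set mountpoint=/tmp newpool/tmp
zfs set mountpoint=/usr newpool/usr
zfs set mountpoint=/var newpool/var
zpool set cachefile=/boot/zfs/zpool.cache newpool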

Many thanks.

Best regards
Sebastian

> On 02.05.2016 at 21:43, Sebastian Wolfgarten <sebastian@wolfgarten.com> wrote:
>
> Hi Matthias,
> dear list,
>
> I have built a new VM to test this further without affecting my live
> machine. When doing all these steps (including the amendment of
> loader.conf on the new pool), my system still boots up with the old
> pool. Any ideas why?
>
> Here is what I did:
>
> 1) Create required partitions on temporary hard disk ada2
> gpart create -s GPT ada2
> gpart add -t freebsd-boot -s 128 ada2
> gpart add -t freebsd-swap -s 4G -l newswap ada2
> gpart add -t freebsd-zfs -l newdisk ada2
> gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
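>
> As an optional sanity check of the resulting layout at this point:
>
> gpart show ada2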
>
> 2) Create new pool (newpool)
>
> zpool create -o cachefile=/tmp/zpool.cache newpool gpt/newdisk
>
> 3) Create snapshot of existing zroot pool and copy it over to new pool
>
> zfs snapshot -r zroot@movedata
> zfs send -vR zroot@movedata | zfs receive -vFd newpool
> zfs destroy -r zroot@movedata
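>
> To confirm the copy arrived, a quick check at this point:
>
> zfs list -r newpool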
>
> 4) Make the new pool bootable
>
> zpool set bootfs=newpool/ROOT/default newpool
>
> 5) Mount new pool and prepare for reboot
>
> cp /tmp/zpool.cache /tmp/newpool.cache
> zpool export newpool
> zpool import -c /tmp/newpool.cache -R /mnt newpool
> cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
> in /mnt/boot/loader.conf, change kern.geom.label.gptid.enable="0" to "2"
>   (a scripted equivalent is sketched right after these steps)
> zfs set mountpoint=/ newpool/ROOT
> reboot
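>
> The loader.conf edit in step 5 was done by hand in an editor; purely
> for reproducibility, a rough one-liner equivalent would be:
>
> sed -i '' 's/^kern.geom.label.gptid.enable=.*/kern.geom.label.gptid.enable="2"/' /mnt/boot/loader.conf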
>
> After the reboot, the machine is still running off the old ZFS striped
> mirror, but I can mount the newpool without any problems:
>
> root@vm:~ # cat /boot/loader.conf
> kern.geom.label.gptid.enable="0"
> zfs_load="YES"
> root@vm:~ # zpool import -c /tmp/newpool.cache -R /mnt newpool
> root@vm:~ # cd /mnt
> root@vm:/mnt # ls -la
> total 50
> drwxr-xr-x  19 root  wheel    26 May  2 23:33 .
> drwxr-xr-x  18 root  wheel    25 May  2 23:37 ..
> -rw-r--r--   2 root  wheel   966 Mar 25 04:52 .cshrc
> -rw-r--r--   2 root  wheel   254 Mar 25 04:52 .profile
> -rw-------   1 root  wheel  1024 May  2 01:45 .rnd
> -r--r--r--   1 root  wheel  6197 Mar 25 04:52 COPYRIGHT
> drwxr-xr-x   2 root  wheel    47 Mar 25 04:51 bin
> -rw-r--r--   1 root  wheel     9 May  2 23:27 bla
> drwxr-xr-x   8 root  wheel    47 May  2 01:44 boot
> drwxr-xr-x   2 root  wheel     2 May  2 01:32 dev
> -rw-------   1 root  wheel  4096 May  2 23:21 entropy
> drwxr-xr-x  23 root  wheel   107 May  2 01:46 etc
> drwxr-xr-x   3 root  wheel    52 Mar 25 04:52 lib
> drwxr-xr-x   3 root  wheel     4 Mar 25 04:51 libexec
> drwxr-xr-x   2 root  wheel     2 Mar 25 04:51 media
> drwxr-xr-x   2 root  wheel     2 Mar 25 04:51 mnt
> drwxr-xr-x   2 root  wheel     2 May  2 23:33 newpool
> dr-xr-xr-x   2 root  wheel     2 Mar 25 04:51 proc
> drwxr-xr-x   2 root  wheel   147 Mar 25 04:52 rescue
> drwxr-xr-x   2 root  wheel     7 May  2 23:27 root
> drwxr-xr-x   2 root  wheel   133 Mar 25 04:52 sbin
> lrwxr-xr-x   1 root  wheel    11 Mar 25 04:52 sys -> usr/src/sys
> drwxrwxrwt   6 root  wheel     7 May  2 23:33 tmp
> drwxr-xr-x  16 root  wheel    16 Mar 25 04:52 usr
> drwxr-xr-x  24 root  wheel    24 May  2 23:21 var
> drwxr-xr-x   2 root  wheel     2 May  2 01:32 zroot
> root@vm:/mnt # cd boot
> root@vm:/mnt/boot # cat loader.conf
> kern.geom.label.gptid.enable="2"
> zfs_load="YES"
>
> My question is: how do I make my system permanently boot off the
> newpool so that I can destroy the existing zroot one?
>
> Many thanks for your help; it is really appreciated.
>
> Best regards
> Sebastian
>
>> On 29.04.2016 at 10:25, Matthias Fechner <idefix@fechner.net> wrote:
>>
>> On 28.04.2016 at 23:14, Sebastian Wolfgarten wrote:
>>> 5) Mount new pool and prepare for reboot
>>>
>>> cp /tmp/zpool.cache /tmp/newpool.cache
>>> zpool export newpool
>>> zpool import -c /tmp/newpool.cache -R /mnt newpool
>>> cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
>>> zfs set mountpoint=/ newpool/ROOT
>>> reboot
>>
>> I think you forgot to adapt vfs.zfs.mountfrom= in /boot/loader.conf
>> on the new pool?
>>
>>
>>
>> Regards
>> Matthias
>>
>> --
>>
>> "Programming today is a race between software engineers striving to
>> build bigger and better idiot-proof programs, and the universe trying to
>> produce bigger and better idiots. So far, the universe is winning." --
>> Rich Cook
>
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"



