Date:      Wed, 3 Jun 2009 15:42:08 +0200
From:      Lorenzo Perone <lopez.on.the.lists@yellowspace.net>
To:        Lorenzo Perone <lopez.on.the.lists@yellowspace.net>
Cc:        Mickael MAILLOT <mickael.maillot@gmail.com>, Adam McDougall <mcdouga9@egr.msu.edu>, FreeBSD Stable Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: ZFS booting without partitions
Message-ID:  <4CB9BD6A-EB09-4FF0-AC34-74CF36837381@yellowspace.net>
In-Reply-To: <EE9A198B-295F-470F-A3EE-AA2F9B651F79@yellowspace.net>
References:  <29579856-69F7-4CDC-A52A-B414A40180ED@yellowspace.net> <4A1B0B4F.1020106@h3q.com> <ea7b7b810905260226g29e8cbf5ic75a59b979f6cd42@mail.gmail.com> <alpine.BSF.2.00.0905261353140.8940@woozle.rinet.ru> <18972.5870.795005.186542@already.dhcp.gene.com> <4A1C18CC.7080902@icyb.net.ua> <18972.7173.216763.407615@already.dhcp.gene.com> <A1B19FAF-B574-484F-9434-17F5AF754B88@yellowspace.net> <ea7b7b810905281246l26e798a1h65100635c1b2cb5b@mail.gmail.com> <63548432-B73D-4A08-BA99-FEF5BCC1028A@yellowspace.net> <20090531071759.GA35763@egr.msu.edu> <EE9A198B-295F-470F-A3EE-AA2F9B651F79@yellowspace.net>

OK, so I've got my next little adventure here to share :-)

... after reading your posts I was very eager to give the
whole boot-ZFS-without-partitions thing a new try.

My starting situation was a ZFS mirror made up, as I wrote,
of two GPT partitions, so my pool looked like:

phaedrus# zpool status
   pool: tank
  state: ONLINE
  scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror    ONLINE       0     0     0
             ad6p4   ONLINE       0     0     0
             ad4p4   ONLINE       0     0     0

It was mounted as root and everything was seemingly working
fine, with the machine surviving several concurrent bonnie++,
sysbench, and super-smack runs for many hours (cool!).

So, to give it another try, my plan was to detach one
partition, clear the gmirror on the UFS boot partition,
make a new pool out of the freed disk, and start
the experiment over.

It looked roughly like this:

zpool offline tank ad4p4
zpool detach tank ad4p4

gmirror stop gmboot (made out of ad6p2 and ad4p2)
gmirror remove gmboot ad4p2
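
(For anyone retracing this: a quick way to double-check the mirror
state around those steps, plus wiping the stale metadata -- just a
sketch with my own labels, not necessarily the exact sequence I typed:)

gmirror status gmboot   # before: should list ad6p2 and ad4p2
gmirror clear ad4p2     # wipe leftover gmirror metadata off the freed partition
gmirror status gmboot   # after: should only show ad6p2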

Then I had to reboot because it wouldn't give up
the swap partition on the zpool.
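
(In hindsight, something like this might have spared me the reboot --
assuming the swap really was on a zvol, and tank/swap is just a guessed
name:)

swapinfo                       # see which device is actually swapping
swapoff /dev/zvol/tank/swap    # release it before detaching the disk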

That's where the first problem began: it wouldn't boot
anymore... just because I had removed a device?
I was stuck at the mountroot> prompt:
it wouldn't find the root filesystem on ZFS.
(This also happened when I physically detached ad4.)
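
(For next time: at that prompt a '?' should list the devices the kernel
can see, and typing the root spec by hand retries the mount -- a sketch,
I didn't think of it at the time:)

mountroot> ?          # list the root device candidates
mountroot> zfs:tank   # retry the ZFS root explicitly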

So I booted off a recent 8-CURRENT ISO DVD, and although
the mountroot stage comes, IIRC, later than
the loader, I suspected it could have something to do
with it, so I downloaded Adam's CURRENT/ZFS loader and put it in
the appropriate place on my UFS boot partition...

note:
From the CD, I had to import the pool with
zpool import -o altroot=/somewhere tank to avoid having
problems with the datasets being mounted on top
of the 8-fixit environment's /usr ...
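
(Roughly how that went from the fixit shell -- I'm reconstructing
rather than quoting my shell history, so treat the paths as a sketch:)

zpool import -o altroot=/tank tank   # keep the datasets out of /usr
mkdir -p /ufsboot
mount /dev/ad6p2 /ufsboot            # the UFS boot partition
# copy Adam's loader over the old one (wherever /boot lives on that partition)
cp /tmp/loader /ufsboot/boot/loader
umount /ufsboot
zpool export tank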

OK, rebooted, and whoops: it booted again into the previous
environment.

So... from there I started over with the creation of
a ZFS-only boot setup on ad4 (with the intention
of zpool-attaching ad6 later on).

dd if=/dev/zero bs=1m of=/dev/ad4 count=200
(just to be safe, some 'whitespace'..)

zpool create esso ad4

zfs snapshot -r tank@night
zfs send -R tank@night | zfs recv -d -F esso
(it did what it had to do - cool new v13 feature BTW!)
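
(A quick sanity check at this point, for the record:)

zfs list -r esso            # all the datasets should show up under esso
zfs get -r mountpoint esso  # the -R send should have carried the properties over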

zpool export esso

# stage 1: the first 512 bytes of zfsboot go into the disk's boot sector
dd if=/boot/zfsboot of=/dev/ad4 bs=512 count=1
# stage 2: the rest of zfsboot goes at sector 1024 (512 KB in), in the
# boot block area ZFS leaves free after its first two labels
dd if=/boot/zfsboot of=/dev/ad4 bs=512 skip=1 seek=1024

zpool import esso

zpool set bootfs=esso esso
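
(and, to make sure it stuck -- I don't remember whether I actually ran
this, so consider it a suggested check:)

zpool get bootfs esso       # should report bootfs = esso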

The mountpoints (legacy on the pool's root filesystem, esso,
and the corresponding ones on the datasets below) had been
correctly copied by the send -R.

I just briefly mounted esso somewhere else,
edited loader.conf and fstab, and set it back
to legacy.
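
(For completeness, the kind of entries needed there -- reconstructed
from the usual ZFS-root recipe rather than copied from my files:)

# /boot/loader.conf
zfs_load="YES"
vfs.root.mountfrom="zfs:esso"

# /etc/fstab: no UFS root line anymore; only swap and the like remain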

shutdown -r now.

Upon boot, it waited a while, didn't present
any F1/F5 prompt, and booted into the old environment
(the ad6p2 boot partition, which then mounted tank as root).

From there, a zfs list or zpool status just showed
the root pool (tank), but the new one (esso) was
not present.

A zpool import showed:

heidegger# zpool import
   pool: esso
     id: 865609520845688328
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
         devices and try again.
    see: http://www.sun.com/msg/ZFS-8000-3C
config:

         esso        UNAVAIL  insufficient replicas
           ad4       UNAVAIL  cannot open

zpool import -f esso did not succeed; instead, looking
at the console, I found:
ZFS: WARNING: could not open ad4 for writing

I repeated the steps above two more times, making sure
I had wiped everything off ad4 before trying... but it
always came up with that message. The disk is OK,
the cables too; I triple-checked. Besides, writing
to the disk by other means (such as dd or creating a new
pool) succeeded... (albeit after the usual
sysctl kern.geom.debugflags=16 ...)
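
(If anyone wants to chase the 'could not open ad4 for writing' thing,
this is roughly how I'd poke at it next time -- untested guesses:)

sysctl kern.geom.debugflags           # 16 allows writes to otherwise-protected providers
sysctl kern.geom.conftxt | grep ad4   # which GEOM classes are sitting on ad4?
gmirror status ; glabel status        # any stale metadata still being tasted?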

Well, for now I think I'll stick to the GPT + UFS boot +
ZFS root solution (I'm so happy this works seamlessly,
so this is a big THANX and not a complaint!), but I
thought I'd share the latest hiccups...

I won't be getting back to that machine for a few days to
restore the GPT/UFS-based mirror, so if someone would like
me to provide other info, I'll be happy to contribute it.

Big Regards!

Lorenzo


On 01.06.2009, at 19:09, Lorenzo Perone wrote:

> On 31.05.2009, at 09:18, Adam McDougall wrote:
>
>> I encountered the same symptoms today on both a 32bit and 64bit
>> brand new install using gptzfsboot.  It works for me when I use
>> a copy of loader from an 8-current box with zfs support compiled in.
>> I haven't looked into it much yet but it might help you.  If you
>> want, you can try the loader I am using from:
>> http://www.egr.msu.edu/~mcdouga9/loader
>
> Thanx for posting me your loader,  I'll try with this tomorrow night!
> (any hint, btw, on why the one in -STABLE seems to be
> broken, or whether it has actually been fixed by now?)




