Date:      Thu, 11 Feb 2021 21:46:27 -0500
From:      Dennis Clarke <dclarke@blastwave.org>
To:        freebsd-current@freebsd.org
Subject:   Re: upgrade stable/12 -> stable/13 zfs + boot partition Mediasize 64K
Message-ID:  <d787a293-abb6-533f-d6ac-ccce7ae647d7@blastwave.org>
In-Reply-To: <YCXgkRac2tBjvJJP@in-addr.com>
References:  <ccc9862a-f6f6-f0c1-abd7-fd3bdd5a481f@pinyon.org> <YCXgkRac2tBjvJJP@in-addr.com>

On 2/11/21 8:57 PM, Gary Palmer wrote:
> On Thu, Feb 11, 2021 at 05:34:40PM -0700, Russell L. Carter wrote:
>> Greetings,
>>
>> I really want to jump from stable/12 to stable/13 but one thing is
>> giving me pause.  My main raidz2 system has a system boot zfs
>> mirror pair whose boot partition size (Mediasize) is 64K, and when
>> I tried to zpool upgrade that pool a year or two ago I got some
>> scary message, something like "boot partition size is not large
>> enough".  I asked about this on the lists but never received an
>> answer.  So, laziness required me to ignore the problem and not
>> zpool upgrade any of my 15 or so zpools in the interim.
>>
>> A few weeks ago I tried a buildworld/installworld upgrade from
>> 12 to 13, but the boot failed in the filesystem-mounting phase
>> because it couldn't find a bootable target.  So after restoring 12
>> I decided to wait a bit.  In the interim I have upgraded every
>> zpool but that one system pool.  All the other freebsd-boot
>> partitions have a size of 512K.
>>
>> So what is the current advice?  Is a freebsd-boot partition size
>> of 64K laughably obsolete, meaning I should get with the program
>> and repartition those disks, or can I march blindly into the upgrade?
>>
>> I guess I just want to understand where these sizes are going in
>> the future.
> 
> Most layouts put a swap partition after the boot partition.  If
> that is the case for you as well, and you can disable swapping to
> that partition, you can probably grow the boot partition and shrink
> swap fairly easily.  Otherwise you're probably going to have to split
> the mirror, repartition one drive, rebuild the mirror, reboot onto
> that drive and then do the same to the other drive.  I've done it
> before on a headless system in a remote DC.  With planning it's
> perfectly doable.  I think I built a test VM in VirtualBox and
> made sure it all worked on that before trying it for real.
> 
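For the swap-shrink route Gary describes above, the steps would look
roughly like this; the device name (ada0), the partition indexes and
the assumption that swap sits in partition 2 right after freebsd-boot
are only examples, so check your own layout with "gpart show" first:

  # swapoff /dev/ada0p2                  # stop using the old swap
  # gpart delete -i 2 ada0               # remove swap, freeing space after boot
  # gpart resize -i 1 -s 512k ada0       # grow freebsd-boot into that space
  # gpart add -t freebsd-swap -i 2 ada0  # recreate swap in the remaining gap
  # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
  # swapon /dev/ada0p2

That sketch assumes a GPT disk booting via gptzfsboot (BIOS); an EFI
system boots from an efi partition instead.  Repeat on the second disk
of the mirror.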

The process is trivial with ZFS and a mirror setup. No need to reboot.
Think of the mirror as a "left" and "right" side. If you have a three
way mirror then you are singing in the rain. Regardless, just break the
mirror. Do whatever you want with the disks that are now free and clear
of the previous mirror config. Use gpart and set them up with whatever
you need. Then attach the disk(s) back onto the mirror and wait for the
thing to resilver. Run a scrub if you want; how long it takes depends on
the size. Just know that a large amount of storage (more than 64T) will
take a long time to scrub and, for that matter, a long time to resilver.
Maybe a day. Once everything is resynced as a mirror, just repeat the
process on the other side of the mirror. No need to reboot until you
feel like testing the whole show.
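Roughly, one side at a time, that comes down to something like the
following; the pool name (zroot), the device names and the partition
sizes are only examples, and it assumes BIOS booting via gptzfsboot:

  # zpool detach zroot ada1p3            # break off the "right" side
  # gpart destroy -F ada1                # wipe the old partition table
  # gpart create -s gpt ada1
  # gpart add -t freebsd-boot -s 512k ada1
  # gpart add -t freebsd-swap -s 4g ada1
  # gpart add -t freebsd-zfs ada1
  # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
  # zpool attach zroot ada0p3 ada1p3     # re-attach; the resilver starts
  # zpool status zroot                   # watch the resilver progress
  # zpool scrub zroot                    # optional, once resilvered

Then the same again with ada0 and ada1 swapped.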

This sort of situation is also a good reason to use three-way mirrors
with a hot spare pool, when possible. It makes the whole process
entirely worry-free and nothing more than a cup of coffee to ponder.
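For illustration, a third leg and a hot spare can be added to an
existing pool with something like this (names again purely examples):

  # zpool attach zroot ada0p3 ada2p3     # third side of the mirror
  # zpool add zroot spare ada3p3         # hot spare for the pool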

For the sake of detail, what does "gpart show" report?
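For comparison, a disk carrying the old 64K boot partition typically
shows up in "gpart show" something like this (device name and sizes
are made up for illustration):

  =>       40  976773088  ada0  GPT  (466G)
           40        128     1  freebsd-boot  (64K)
          168    8388608     2  freebsd-swap  (4.0G)
      8388776  968384352     3  freebsd-zfs  (462G)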


Dennis Clarke


