Date:      Thu, 11 Feb 2021 20:13:33 -0700
From:      "Russell L. Carter" <rcarter@pinyon.org>
To:        freebsd-current@freebsd.org
Subject:   Re: upgrade stable/12 -> stable/13 zfs + boot partition Mediasize 64K
Message-ID:  <6cd9f937-1bcc-3f50-87dc-fcbf038dff6a@pinyon.org>
In-Reply-To: <d787a293-abb6-533f-d6ac-ccce7ae647d7@blastwave.org>
References:  <ccc9862a-f6f6-f0c1-abd7-fd3bdd5a481f@pinyon.org> <YCXgkRac2tBjvJJP@in-addr.com> <d787a293-abb6-533f-d6ac-ccce7ae647d7@blastwave.org>

On 2/11/21 7:46 PM, Dennis Clarke via freebsd-current wrote:
> On 2/11/21 8:57 PM, Gary Palmer wrote:
>> On Thu, Feb 11, 2021 at 05:34:40PM -0700, Russell L. Carter wrote:
>>> Greetings,
>>>
>>> I really want to jump from stable/12 to stable/13, but one thing is
>>> causing hesitancy: my main raidz2 system has a system boot zfs
>>> mirror pair whose boot partitions have a size (Mediasize) of 64K,
>>> and when I tried to zpool upgrade that pool a year or two ago I got
>>> a scary message, something like "boot partition size is not large
>>> enough".  I asked about this on the lists but never received an
>>> answer.  So, laziness required me to ignore the problem and not
>>> zpool upgrade any of my 15 or so zpools in the interim.
>>>
>>> A few weeks ago I tried a buildworld/installworld upgrade from 12 to
>>> 13, but the boot failed in the filesystem-mounting phase because it
>>> couldn't find a bootable target.  So after restoring 12 I decided
>>> to wait a bit.  In the interim I have upgraded every zpool but that
>>> one system pool.  All the other freebsd-boot partitions have a size
>>> of 512K.
>>>
>>> So what is the current advice?  Is a freebsd-boot partition size
>>> of 64K laughably obsolete, so that I should get with the program
>>> and repartition those disks, or can I march blindly into the upgrade?
>>>
>>> I guess I just want to understand where these sizes are going in
>>> the future.
>>
>> Most layouts put a swap partition after the boot partition.  If
>> that is the case for you too, and you can disable swapping to the
>> swap partition, you can probably grow the boot partition and shrink
>> the swap pretty easily.  Otherwise you're probably going to have to
>> split the mirror, repartition one drive, rebuild the mirror, reboot
>> onto that drive and then do the same to the other drive.  I've done
>> it before on a headless system in a remote DC.  With planning it's
>> perfectly doable.  I think I built a test VM in VirtualBox and
>> made sure it all worked on that before trying it for real.
>>
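
If I follow, the in-place version of that would be roughly the
following on da0 (untested; boot is index 1 and swap is index 2 here,
per the gpart output below), assuming the box can live without that
swap partition for a few minutes:

   swapoff /dev/da0p2                  # stop swapping to this partition
   gpart delete -i 2 da0               # drop the old 4G swap
   gpart resize -i 1 -s 512k da0       # grow freebsd-boot 64K -> 512K
   gpart add -t freebsd-swap -i 2 da0  # re-add swap in the (slightly
                                       # smaller) gap that remains
   swapon /dev/da0p2                   # re-enable swap

then the same dance on da1, and rewrite the boot blocks afterwards
(see further down).
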
> 
> The process is trivial with ZFS and a mirror setup. No need to reboot.
> Think of the mirror as a "left" and "right" side. If you have a
> three-way mirror then you are singing in the rain. Regardless, just
> break the mirror. Do whatever you want with the disk(s) that are now
> free and clear of the previous mirror config. Use gpart and set them
> up with whatever you need. Then attach the disk(s) back onto the
> mirror and wait for the thing to resilver. Run a scrub if you want;
> it depends on the size. Just know that a large amount of storage
> (more than 64T) will take a long time to scrub and, for that matter,
> a long time to resilver. Maybe a day. Once everything is resynced as
> a mirror, just repeat the process on the other side of the mirror.
> No need to reboot until you feel like testing the whole show.
> 
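
So for one side of the boot mirror that would be roughly (untested;
assuming the root pool is named zroot, which is a guess, and the zfs
partitions are da0p3/da1p3 as in the gpart output below):

   zpool detach zroot da0p3        # break the mirror: drop one side
   (repartition da0 with gpart as needed)
   zpool attach zroot da1p3 da0p3  # reattach it to the surviving side
   zpool status zroot              # watch the resilver complete
   zpool scrub zroot               # optional, once it has resynced

and then the same with the roles swapped for da1.
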
> This sort of situation is also a good reason to use three-way mirrors
> with a hot spare pool, when possible. That makes the whole process
> entirely worry-free, nothing more than a cup of coffee to ponder.
> 
> For the sake of details what does "gpart show" report?

Here you go:

root@terpsichore> gpart show
=>       34  625142381  da0  GPT  (298G)
          34        128    1  freebsd-boot  (64K)
         162    8388608    2  freebsd-swap  (4.0G)
     8388770  616753645    3  freebsd-zfs  (294G)

=>       34  625142381  da1  GPT  (298G)
          34        128    1  freebsd-boot  (64K)
         162    8388608    2  freebsd-swap  (4.0G)
     8388770  616753645    3  freebsd-zfs  (294G)

=>        34  5860533101  da2  GPT  (2.7T)
           34           6       - free -  (3.0K)
           40  5860533088    1  freebsd-zfs  (2.7T)
   5860533128           7       - free -  (3.5K)

=>        40  5860533088  da3  GPT  (2.7T)
           40  5860533080    1  freebsd-zfs  (2.7T)
   5860533120           8       - free -  (4.0K)

=>        40  5860533088  da4  GPT  (2.7T)
           40  5860533088    1  freebsd-zfs  (2.7T)

=>        40  5860533088  da5  GPT  (2.7T)
           40  5860533088    1  freebsd-zfs  (2.7T)

=>        40  5860533088  da6  GPT  (2.7T)
           40  5860533088    1  freebsd-zfs  (2.7T)

=>        40  5860533088  da7  GPT  (2.7T)
           40  5860533088    1  freebsd-zfs  (2.7T)

root@terpsichore>
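
If I understand correctly, once those 64K boot partitions have been
enlarged (and again after installworld on 13) the boot blocks need to
be rewritten so the on-disk gptzfsboot matches the new world and any
upgraded pool features.  Assuming BIOS/GPT booting, which is what the
freebsd-boot partitions above suggest, roughly:

   gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
   gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
   gpart show da0 da1              # confirm the new partition sizes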

I'm interested in any comments, if appropriate.
This is a now 7(!!) year old system, with 6 drive replacements
over time on the raidz2; quite tiny and, I guess, entirely
obsolete.  But it's paid for and does its job.  These days
I might go with a 2- or 3-drive mirror.

Thanks,
Russell

> 
> 
> Dennis Clarke
> _______________________________________________
> freebsd-current@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
> 




Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?6cd9f937-1bcc-3f50-87dc-fcbf038dff6a>