Date:      Mon, 20 May 2019 19:36:57 +0700
From:      Eugene Grosbein <eugen@grosbein.net>
To:        Freddie Cash <fjwcash@gmail.com>, Paul Mather <paul@gromit.dlib.vt.edu>
Cc:        FreeBSD Stable <freebsd-stable@freebsd.org>, tech-lists <tech-lists@zyxst.net>
Subject:   Re: trying to expand a zvol-backed bhyve guest which is UFS
Message-ID:  <75903282-0efc-b622-771c-38914289b779@grosbein.net>
In-Reply-To: <CAOjFWZ6yyPapu+wv9ePhmPWPhDOBfE3g9tmj1PUi0vgsCp-O3Q@mail.gmail.com>
References:  <20190520014645.GC6971@rpi3.zyxst.net> <54BFD570-E9B9-49A5-8785-2D80D4FC05D1@gromit.dlib.vt.edu> <CAOjFWZ6yyPapu+wv9ePhmPWPhDOBfE3g9tmj1PUi0vgsCp-O3Q@mail.gmail.com>

20.05.2019 9:14, Freddie Cash wrote:

> On Sun, May 19, 2019, 6:59 PM Paul Mather, <paul@gromit.dlib.vt.edu> wrote:
> 
>> On May 19, 2019, at 9:46 PM, tech-lists <tech-lists@zyxst.net> wrote:
>>
>>> Hi,
>>>
>>> context is 12-stable, zfs, bhyve
>>>
>>> I have a zvol-backed bhyve guest. Its zvol size was initially 512GB.
>>> It needed to be expanded to 4TB. That worked fine.
>>>
>>> The problem is the freebsd guest is UFS and I can't seem to make it see
>>> the new size. But zfs list -o size on the host shows that as far as zfs is
>>> concerned, it's 4TB.
>>>
>>> On the guest, I've tried running growfs / but it says requested size is
>>> the same as the size it already is (508GB)
>>>
>>> gpart show on the guest has the following
>>>
>>> # gpart show
>>> =>        63  4294967232  vtbd0  MBR  (4.0T)
>>>           63           1         - free -  (512B)
>>>           64  4294967216      1  freebsd  [active]  (2.0T)
>>>   4294967280          15         - free -  (7.5K)
>>>
>>> =>         0  4294967216  vtbd0s1  BSD  (2.0T)
>>>            0  1065353216        1  freebsd-ufs  (508G)
>>>   1065353216     8388544        2  freebsd-swap  (4.0G)
>>>   1073741760  3221225456         - free -  (1.5T)
>>>
>>> I'm not understanding the double output, or why growfs hasn't worked on
>>> the guest ufs. Can anyone help please?
>>
>>
>> Given the above, the freebsd-ufs partition can't grow because there is a
>> freebsd-swap partition between it and the free space you've added at the
>> end of the volume.
>>
>> You'd need to delete the swap partition (or otherwise move it to the end of
>> the partition on the volume) before you could successfully growfs the
>> freebsd-ufs partition.
>>
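
Inside the guest that would be something along these lines (untested sketch;
the partition indices and device names are taken from the gpart output above,
the size left for swap is only illustrative, and depending on the FreeBSD
version this may need to be done from single-user mode):

swapoff /dev/vtbd0s1b
gpart delete -i 2 vtbd0s1            # drop the swap partition
gpart resize -i 1 -s 2044g vtbd0s1   # grow freebsd-ufs, leaving room for swap
gpart add -t freebsd-swap vtbd0s1    # re-create swap at the new end
swapon /dev/vtbd0s1b
growfs /                             # grow the filesystem into the bigger partition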
> 
> Even if you do all that, you won't be able to use more than 2 TB anyway, as
> that's all MBR supports.
> 
> If you need more than 2 TB, you'll need to backup, repartition with GPT,
> and restore from backups.

Strictly speaking, FreeBSD is capable of using a "disk" larger than 2TB with MBR,
and there are multiple ways to achieve that. The simplest one is to boot once
using another root file system (an mdconfig'ed image, iSCSI, or just another local medium)
and use "graid label -S" on the large media to create a GRAID "Promise" array with two SINGLE volumes.
The first volume should span the boot/root partition in the MBR; it will then
show up as /dev/raid/r0s1 instead of /dev/vtbd0s1. No existing data will be lost
as long as there are two free 512-byte blocks at the end of the media for the GRAID label.

The second volume should span the rest of the space and can be arbitrarily large,
as GRAID uses 64-bit numbers. It will then show up as /dev/raid/r1.
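
Roughly like this (untested sketch; the volume labels and the -S size are only
illustrative, and the array name to pass to "graid add" should be taken from
"graid list" after the first volume has been created):

graid load                                                # load geom_raid if not compiled in
graid label -S 2199023255552 Promise vol0 SINGLE vtbd0    # first volume, ~2TiB, covering the MBR slice
graid list                                                # note the name of the array geom just created
graid add <array> vol1 SINGLE                             # second volume; pass -S if it does not pick up the remaining space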

You may then just "newfs /dev/raid/r1", put a BSD label on it beforehand,
or use this "device" for a new ZFS pool, etc.
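
For example (the pool name is just an example):

newfs -U /dev/raid/r1
# or, instead, for ZFS:
zpool create data raid/r1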

There is also GEOM_MAP, which is capable of similar things, but it is less convenient.

But, if your boot environment supports GPT, it is easier to use GPT.
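
For reference, a fresh GPT layout would look roughly like this (untested sketch;
the device name, sizes and boot method are assumptions, and the data has to be
restored from backup afterwards):

gpart destroy -F vtbd0
gpart create -s gpt vtbd0
gpart add -t freebsd-boot -s 512k vtbd0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 vtbd0
gpart add -t freebsd-swap -s 4g vtbd0
gpart add -t freebsd-ufs vtbd0
newfs -U /dev/vtbd0p3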



