Date:      Sun, 3 Feb 2013 16:27:33 -0800
From:      Tim Kientzle <kientzle@FreeBSD.org>
To:        Ian Lepore <ian@FreeBSD.org>, Erich Dollansky <erichsfreebsdlist@alogt.com>
Cc:        freebsd-current Current <freebsd-current@FreeBSD.org>
Subject:   Re: gpart resize vs. cache?
Message-ID:  <EBF66A62-184B-488E-94B3-54E4357F776E@FreeBSD.org>
In-Reply-To: <1359925713.93359.440.camel@revolution.hippie.lan>
References:  <3D812191-2D6E-43B2-B9C1-F00FFA44C5F8@freebsd.org> <1359925713.93359.440.camel@revolution.hippie.lan>


On Feb 3, 2013, at 1:08 PM, Ian Lepore wrote:

> On Sun, 2013-02-03 at 12:06 -0800, Tim Kientzle wrote:
>> I'm tinkering with a disk image that automatically
>> fills whatever media you put it onto.  But I'm having
>> trouble with gpart resize failing.
>>
>> Disk layout:
>>   MBR with two slices  mmcsd0s1 and mmcsd0s2
>>   bsdlabel with one partition mmcsd0s2a
>>
>> Before I can use growfs, I have two gpart resize operations:
>>
>> 1)   gpart resize -i 2 mmcsd0
>>
>> 2)  gpart resize -i 1 mmcsd0s2
>>
>> Step 1 resizes mmcsd0s2 and always succeeds.
>>
>> Step 2 resizes mmcsd0s2a and always fails
>> with "No space on device."
>>
>> BUT if I reboot between these steps, step #2
>> always succeeds.
>>
>> I suspect that step #1 is updating the partition
>> information on disk but that step #2 is somehow
>> reading the old size of mmcsd0s2 and thus finding
>> that there is no available space to grow the partition.

BTW, I've added some debug messages to gpart
and the second resize is failing because the new
computed size is a little smaller than the old size
(maybe because of a different alignment?).  But
it's certainly not sizing to the new container size.
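
For what it's worth, here's a sketch of what I may try next.  The idea
is to look at which container size the bsdlabel geom is actually
working from, and to sidestep gpart's automatic size computation by
passing an explicit -s.  (The "3900M" below is just a placeholder, not
a real number; in practice it would come from the free-space figure
that "gpart show" reports.)

    # Slice table's view after step #1: the resized mmcsd0s2 provider
    gpart list mmcsd0 | grep -A 2 'Name: mmcsd0s2'
    # The bsdlabel geom's view of its container (Consumers section)
    gpart list mmcsd0s2 | grep Mediasize
    # Instead of letting gpart compute the new size from possibly stale
    # in-core metadata, hand it an explicit size
    gpart resize -i 1 -s 3900M mmcsd0s2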

>> gpart(1) doesn't say anything about caching of
>> disk partition info and "gpart list" does show the
>> updated information after step #1.
>>
>> Is there some trick that will force the partition
>> information in memory to be updated (short of
>> a reboot or unmount/remount the root filesystem)?
>
> This sounds like one of those situations where the "force re-taste"
> incantation may work... just open/close the parent geom for write.  From
> script, it's as easy as
>
>  : >/dev/mmcsd0s2
>
> If that doesn't work, try /dev/mmcsd0.
>
> The re-taste trick is usually only needed on things like a usb sdcard
> reader where it can't tell you changed media and tries to use the
> in-memory info from the prior card.  Since you're using a geom-aware
> tool to make a geom change, I wonder why it doesn't do the re-taste
> automatically?

That certainly changes things, but not in a good way.
Here's the key part of the script now:

    gpart resize -i 2 mmcsd0
    :> /dev/mmcsd0
    gpart resize -i 1 mmcsd0s2
    :> /dev/mmcsd0s2
    growfs -y /dev/mmcsd0s2a

And here's the result:

mmcsd0s2 resized
mmcsd0s2a resized
eval: growfs: Device not configured
… lots more "Device not configured", ultimately leading to…
vm_fault: pager read error, pid 1 (init)
vnode_pager_getpages: I/O read error
vm_fault: pager read error, pid 1 (init)
vnode_pager_getpages: I/O read error
… which keeps scrolling until I pull power.

Apparently this hosed the root mount (I've tried every combination
of one or both of the force re-tastes above, with the same effect).
The disk itself does not appear to be hosed, since I can reboot to
single user and everything is okay, but every time this code runs
the same errors occur.
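
When it gets into that state, a quick check after the single-user
reboot (just standard tools, nothing clever) at least confirms what
survived on disk:

    gpart show mmcsd0        # is the MBR slice table still intact?
    gpart show mmcsd0s2      # is the bsdlabel still there, and resized?
    ls /dev/mmcsd0s2*        # did the device nodes come back?
    fsck -n /dev/mmcsd0s2a   # read-only check of the root filesystem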

I also tried Erich Dollansky's suggestion of adding a
"gpart show" between the resize requests but that
seems to make no difference at all.

Tim



