Date:      Fri, 11 Sep 2009 12:21:39 +0200
From:      Ivan Voras <ivoras@freebsd.org>
To:        freebsd-stable@freebsd.org
Subject:   Re: 8.0-B4 gstripe / GEOM_PART_* upgrade woes
Message-ID:  <h8d8b5$b3q$1@ger.gmane.org>
In-Reply-To: <4AAA1296.2080705@sasktel.net>
References:  <4AAA1296.2080705@sasktel.net>

Stephen Hurd wrote:
> I've upgraded from 7.2-RELEASE-p2 to 8.0-BETA4, and using GEOM_PART_* 
> with my sliced gstripe array causes /dev/stripe/raid0a to disappear 
> and the rest of the /dev/stripe/raid0[a-z] file systems to be unmountable.
> 
> My gvinum array is still working fine and, after chasing down the ad* 
> slices, they can be mounted as well.  It's just the gstripe slices that 
> are corrupt/missing.

Ouch.

> GEOM_STRIPE: Device raid0 created (id=40432321).
> GEOM_STRIPE: Disk da0s2 attached to raid0.
> GEOM_STRIPE: Disk da1s2 attached to raid0.
> GEOM_STRIPE: Disk da2s2 attached to raid0.
> GEOM_STRIPE: Disk da3s2 attached to raid0.
> GEOM_STRIPE: Device raid0 activated.
> Trying to mount root from ufs:/dev/da0s1a
> <118>Can't stat /dev/ad0s1g: No such file or directory
> <118>Can't stat /dev/stripe/raid0a: No such file or directory
> =========== END OF dmesg ===========
> 
> =========== gstripe list ===========
> Geom name: raid0
> State: UP
> Status: Total=4, Online=4
> Type: AUTOMATIC
> Stripesize: 262144
> ID: 40432321
> Providers:
> 1. Name: stripe/raid0
>   Mediasize: 42934992896 (40G)
>   Sectorsize: 512
>   Mode: r0w0e0
> Consumers:
> 1. Name: da0s2
>   Mediasize: 10733990400 (10G)
>   Sectorsize: 512
>   Mode: r0w0e0
>   Number: 0
> 2. Name: da1s2
>   Mediasize: 10733990400 (10G)
>   Sectorsize: 512
>   Mode: r0w0e0
>   Number: 1
> 3. Name: da2s2
>   Mediasize: 10733990400 (10G)
>   Sectorsize: 512
>   Mode: r0w0e0
>   Number: 2
> 4. Name: da3s2
>   Mediasize: 10733990400 (10G)
>   Sectorsize: 512
>   Mode: r0w0e0
>   Number: 3
> =========== END OF gstripe list ===========
> 
> =========== gstripe status ===========
>        Name  Status  Components
> stripe/raid0      UP  da0s2
>                      da1s2
>                      da2s2
>                      da3s2
> =========== END OF gstripe status ===========

> When I build without GEOM_PART_* and use GEOM_BSD and GEOM_MBR, it works
> fine.
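
Just to make sure I understand the setup, I assume the two kernels differ
roughly like this (guessing at a custom kernel config here, adjust to
whatever you actually build):

  # the combination that works for you (the legacy 7.x-style classes)
  options    GEOM_BSD
  options    GEOM_MBR

  # the gpart-based replacements under which /dev/stripe/raid0[a-z] vanish
  options    GEOM_PART_BSD
  options    GEOM_PART_MBR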

An interesting problem. I presume that in either case (gpart or 
GEOM_BSD/MBR) the output of "gstripe status" is the same, and only the 
interpretation of the partition tables is problematic?
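
Running something like the following under the GEOM_PART_* kernel would
show whether the stripe itself assembles correctly and what, if anything,
the gpart classes detect on it (the geom name "stripe/raid0" is my guess
based on your listing above):

  gstripe status
  ls -l /dev/stripe
  gpart show stripe/raid0
  gpart list stripe/raid0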

What is the expected ("good") structure of the partitions/file systems? 
Do you have a single MBR partition and inside it multiple BSD 
partitions? What are their partition types?
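
If you still have the working GEOM_BSD/GEOM_MBR kernel around, the output
of something like this would show the on-disk layout that the new classes
fail to interpret (device names guessed from your listing, adjust as
needed):

  fdisk /dev/stripe/raid0
  bsdlabel /dev/stripe/raid0
  bsdlabel /dev/stripe/raid0s1   # if the label lives inside an MBR slice instead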




