Date:      Mon, 8 Nov 2004 22:24:18 +1100 (EST)
From:      Andy Farkas <andy@bradfieldprichard.com.au>
To:        Lukas Ertl <le@FreeBSD.org>
Cc:        freebsd-current@FreeBSD.org
Subject:   Re: gvinum remains broken in 5.3-RELEASE?
Message-ID:  <20041108220605.B59160@bpgate.speednet.com.au>
In-Reply-To: <20041108120226.L5398@pcle2.cc.univie.ac.at>
References:  <20041107003413.GQ24507@wantadilla.lemis.com> <200411071124.35056.msch@snafu.de> <20041108084037.O570@korben.prv.univie.ac.at> <20041108120226.L5398@pcle2.cc.univie.ac.at>

> Hmmm, probably a NULL pointer somewhere.  I suggest it will be easier for 
> now if you wipe out the config on the old disks with a 'dd if=/dev/zero 
> of=/dev/disk'.

Will I lose my data?
I need more confidence.
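(For what it's worth, a full `dd if=/dev/zero of=/dev/disk` destroys everything on the drive. If, and this is only an inference from the `driveoffset 265s` values in the printconfig output below, the vinum on-disk config occupies the first 265 sectors of each drive, a narrower wipe would leave the data area alone. A sketch on a throwaway disk image, not a tested procedure for a real disk:)

```shell
# Stand-in "disk": 1000 sectors of random data in a scratch file.
dd if=/dev/urandom of=/tmp/fakedisk.img bs=512 count=1000 2>/dev/null

# Zero only the first 265 sectors -- the region the driveoffset values
# suggest holds vinum metadata.  conv=notrunc keeps the rest of the
# file (the presumed data area) untouched.
dd if=/dev/zero of=/tmp/fakedisk.img bs=512 count=265 conv=notrunc 2>/dev/null
```

Verify the 265-sector assumption against the vinum sources or documentation before doing anything like this to a real drive.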

The situation at the moment is that my 'classic' vinum volume, holden,
has lost a physical disk from its RAID-5 plex (holden.p0). I am trying
to rebuild it with a new disk. The data is still intact in degraded mode.

When I try to use 'classic' vinum to add the new disk, I get this:

Nov  8 12:00:01 <kern.crit> hummer kernel: vinum: incompatible sector 
sizes.  holden.p0.s1 has 0 bytes, holden.p0 has 512 bytes.  Ignored.

I cannot restore the full raid5 volume.


When I try to use gvinum, I get this:

hummer# gvinum list
9 drives:
D hold0                 State: up       /dev/da0s1      A: 32/4095 MB (0%)
D hold4                 State: up       /dev/da2s1      A: 2110/6173 MB (34%)
D hold5                 State: up       /dev/da3s1      A: 2110/6173 MB (34%)
D hold2                 State: up       /dev/da4s1      A: 43/4106 MB (1%)
D hold3                 State: up       /dev/da5s1      A: 43/4106 MB (1%)
D hold8                 State: up       /dev/da6s1      A: 0/4063 MB (0%)
D hold9                 State: up       /dev/da7s1      A: 0/4063 MB (0%)
D other                 State: up       /dev/da4        A: 43/4106 MB (1%)
D citus                 State: up       /dev/da5        A: 43/4106 MB (1%)

3 volumes:
V holden                State: up       Plexes:       1 Size:         23 GB
V stripy                State: down     Plexes:       0 Size:          0  B
V hewey                 State: up       Plexes:       1 Size:       4063 MB

3 plexes:
P holden.p0          R5 State: degraded Subdisks:     7 Size:         23 GB
P stripy.p0           S State: down     Subdisks:     0 Size:          0  B
P hewey.p0           R5 State: degraded Subdisks:     2 Size:       4063 MB

16 subdisks:
S holden.p0.s7          State: up       D: hold9        Size:       4063 MB
S holden.p0.s6          State: up       D: hold8        Size:       4063 MB
S holden.p0.s5          State: up       D: hold5        Size:       4063 MB
S holden.p0.s4          State: up       D: hold4        Size:       4063 MB
S holden.p0.s3          State: up       D: hold3        Size:       4063 MB
S holden.p0.s2          State: up       D: hold2        Size:       4063 MB
S holden.p0.s1          State: stale    D: hold1        Size:       4063 MB
S holden.p0.s0          State: up       D: hold0        Size:       4063 MB
S stripy.p0.s2          State: down     D: sea1         Size:       8547 MB
S stripy.p0.s1          State: down     D: sea0         Size:       8547 MB
S stripy.p0.s0          State: down     D: filler       Size:       8547 MB
S hewey.p0.s4           State: up       D: hp1          Size:       4063 MB
S hewey.p0.s3           State: up       D: hp0          Size:       4063 MB
S hewey.p0.s2           State: up       D: citus        Size:       4063 MB
S hewey.p0.s1           State: up       D: other        Size:       4063 MB
S hewey.p0.s0           State: stale    D: big          Size:       4063 MB
hummer#


In reality, I should have:

8 Drives:	hold[0-7]
1 Volume:	holden
1 Plex:		holden.p0
8 Subdisks:	holden.p0.s[0-7]
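(Spelled out in the same syntax as the printconfig output below, that intended layout would read roughly as follows. The lengths, offsets, and stripe size are copied from that output; the device paths marked as placeholders are guesses of mine, since hold1, hold6 and hold7 appear in none of the listings here.)

```
# Hypothetical clean config for the intended 8-disk layout.
drive hold0 device /dev/da0s1
drive hold1 device /dev/daXs1    # placeholder -- real device unknown
drive hold2 device /dev/da4s1
drive hold3 device /dev/da5s1
drive hold4 device /dev/da2s1
drive hold5 device /dev/da3s1
drive hold6 device /dev/daXs1    # placeholder -- real device unknown
drive hold7 device /dev/daXs1    # placeholder -- real device unknown
volume holden
plex name holden.p0 org raid5 528s vol holden
sd name holden.p0.s0 drive hold0 len 8321280s driveoffset 265s plex holden.p0 plexoffset 0s
sd name holden.p0.s1 drive hold1 len 8321280s driveoffset 265s plex holden.p0 plexoffset 528s
sd name holden.p0.s2 drive hold2 len 8321280s driveoffset 265s plex holden.p0 plexoffset 1056s
sd name holden.p0.s3 drive hold3 len 8321280s driveoffset 265s plex holden.p0 plexoffset 1584s
sd name holden.p0.s4 drive hold4 len 8321280s driveoffset 265s plex holden.p0 plexoffset 2112s
sd name holden.p0.s5 drive hold5 len 8321280s driveoffset 265s plex holden.p0 plexoffset 2640s
sd name holden.p0.s6 drive hold6 len 8321280s driveoffset 265s plex holden.p0 plexoffset 3168s
sd name holden.p0.s7 drive hold7 len 8321280s driveoffset 265s plex holden.p0 plexoffset 3696s
```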


And if I try 'rm -r <object>', I get a kernel panic.


>
> Can you please send me the output of gvinum printconfig before wiping the 
> disks?
>
> thanks,
> le
>

hummer# gvinum printconfig
# Vinum configuration of hummer.af.speednet.com.au, saved at Mon Nov  8 21:05:55 2004
drive hold0 device /dev/da0s1
drive hold4 device /dev/da2s1
drive hold5 device /dev/da3s1
drive hold2 device /dev/da4s1
drive hold3 device /dev/da5s1
drive hold8 device /dev/da6s1
drive hold9 device /dev/da7s1
drive other device /dev/da4
drive citus device /dev/da5
volume holden
volume stripy
volume hewey
plex name holden.p0 org raid5 528s vol holden
plex name stripy.p0 org striped 528s
plex name hewey.p0 org raid5 528s vol hewey
sd name holden.p0.s7 drive hold9 len 8321280s driveoffset 265s plex holden.p0 plexoffset 3696s
sd name holden.p0.s6 drive hold8 len 8321280s driveoffset 265s plex holden.p0 plexoffset 3168s
sd name holden.p0.s5 drive hold5 len 8321280s driveoffset 265s plex holden.p0 plexoffset 2640s
sd name holden.p0.s4 drive hold4 len 8321280s driveoffset 265s plex holden.p0 plexoffset 2112s
sd name holden.p0.s3 drive hold3 len 8321280s driveoffset 265s plex holden.p0 plexoffset 1584s
sd name holden.p0.s2 drive hold2 len 8321280s driveoffset 265s plex holden.p0 plexoffset 1056s
sd name holden.p0.s1 drive hold1 len 8321280s driveoffset 265s
sd name holden.p0.s0 drive hold0 len 8321280s driveoffset 265s plex holden.p0 plexoffset 0s
sd name stripy.p0.s2 drive sea1 len 17505312s driveoffset 265s
sd name stripy.p0.s1 drive sea0 len 17505312s driveoffset 265s
sd name stripy.p0.s0 drive filler len 17505313s driveoffset 265s
sd name hewey.p0.s4 drive hp1 len 8321280s driveoffset 265s
sd name hewey.p0.s3 drive hp0 len 8321280s driveoffset 265s
sd name hewey.p0.s2 drive citus len 8321280s driveoffset 265s plex hewey.p0 plexoffset 1056s
sd name hewey.p0.s1 drive other len 8321280s driveoffset 265s plex hewey.p0 plexoffset 528s
sd name hewey.p0.s0 drive big len 8321280s driveoffset 265s
hummer#



- andyf


