Date:      Sat, 4 Dec 2004 08:10:47 -0800 (PST)
From:      orville weyrich <weyrich_comp@yahoo.com>
To:        freebsd-questions@freebsd.org
Cc:        weyrich_comp@yahoo.com
Subject:   Problems with VINUM recovery
Message-ID:  <20041204161047.49887.qmail@web50704.mail.yahoo.com>

I was trying to test my ability to recover from a disk
crash and managed to toast my system disk.  Therefore,
I do not have any relevant vinum history file or
messages file.  

What happened is that I purposely blew away the drive
named ahc0t15, and was trying to recover it.  

In error, I ran the create command A SECOND TIME with
the exact same configuration file as the first time,
and it created two additional plexes, which of course
could not fit onto the physical disks, except for
raid.p2.s0, raid.p2.s9, raid.p3.s4 and raid.p3.s5.

I decided that it wasn't working properly, and was in
the process of zapping my entire vinum volume so I
could recreate it and try again (I had decided that my
plex pattern was not the best).  I had already zapped
the drive ahc0t02 when I accidentally blew away the
system disk, crashing my system.

The good news is that most of the important data I
care about from the system disk had been copied to the
vinum volume, originally just to put some data on it --
but now I really WANT that data, because it is the most
recent backup.

As it stands now, I want to:

(1) delete raid.p2 and raid.p3

(2) rebuild drive ahc0t02 to receive revived
raid.p0.s0 and raid.p1.s5

(3) rebuild drive ahc0t15 to receive revived
raid.p0.s9 and raid.p1.s4

(4) revive raid.p0.s0 from the valid raid.p1.s0 

(5) revive raid.p0.s9 from the valid raid.p1.s9

(6) revive raid.p1.s4 from the valid raid.p0.s4

(7) revive raid.p1.s5 from the valid raid.p0.s5

I think (hope!) all of this is possible; a rough
sketch of the commands I have in mind follows below.
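
Here is roughly what I am thinking of running, pieced
together from vinum(8) and the replacing-drive page.
The file name newdrives.conf is just my placeholder,
and I have not verified any of this, so please tell me
if it is wrong:

  # (1) remove the two plexes I created by mistake, plus their subdisks
  vinum rm -f -r raid.p2
  vinum rm -f -r raid.p3

  # (2) and (3) re-introduce the two replacement drives to vinum
  #     (newdrives.conf would hold only the two drive lines -- see further down)
  vinum create newdrives.conf

  # (4) to (7) revive the dead subdisks from their partners in the other plex
  vinum start raid.p0.s0
  vinum start raid.p0.s9
  vinum start raid.p1.s4
  vinum start raid.p1.s5

Is that the right sequence, or will the rm commands
fail with the same "Device busy" problem I describe
below?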

My vinum volume was created on a vanilla FreeBSD 4.3
system.  The system has now been reloaded with a
FreeBSD 4.10 system in order to produce the vinum list
output attached (let me know if you have trouble
reading the file as attached).

Trying to rm raid.p2.s8 gives the message:

   Can't remove raid.p2.s8: Device busy (16)
   *** Warning configuration updates are disabled. ***

I am afraid to reenable configuration updates until I
am sure I know what I am doing.
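
From my reading of vinum(8), re-enabling the updates
and writing the cleaned-up config back to the drives
would be something like the following, but this is
only my interpretation of the man page, so please
correct me before I run it:

  vinum setdaemon 0    # clear the daemon flag that blocks config saves (I think)
  vinum saveconfig     # then write the configuration back to the drives

I have not run either of these yet.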

Since I messed up in my previous attempts, and I am
now between a rock and a hard place, I need some
guidance regarding how to recover -- the failed drill
has suddenly become real :-(

I have looked at the document
http://www.vinumvm.org/vinum/replacing-drive.html and
am not sure how to apply it to the above scenario.
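
My reading of that page, applied here, is that the
config file for steps (2) and (3) would contain just
the two drive definitions, something like this (the
device names are my guess from the boot messages,
where da0 is SCSI target 2 and da9 is target 15, and
the replacement disks would first need a vinum
partition in their disklabels):

  drive ahc0t02 device /dev/da0s1e
  drive ahc0t15 device /dev/da9s1e

Is that correct, or do I need to use different drive
names so the old on-disk configuration does not get
confused?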

Please please help ... please

orville weyrich

VINUM LISTING
=======================================================

8 drives:
D ahc0t03               State: up          Device /dev/da1s1e   Avail: 2152/4303 MB (50%)
D ahc0t04               State: up          Device /dev/da2s1e   Avail: 2152/4303 MB (50%)
D ahc0t09               State: up          Device /dev/da3s1e   Avail: 2152/4303 MB (50%)
D ahc0t10               State: up          Device /dev/da4s1e   Avail: 2152/4303 MB (50%)
D ahc0t11               State: up          Device /dev/da5s1e   Avail: 2152/4303 MB (50%)
D ahc0t12               State: up          Device /dev/da6s1e   Avail: 2152/4303 MB (50%)
D ahc0t13               State: up          Device /dev/da7s1e   Avail: 2152/4303 MB (50%)
D ahc0t14               State: up          Device /dev/da8s1e   Avail: 2152/4303 MB (50%)
D ahc0t02               State: referenced  Device               Avail: 0/0 MB
D *invalid*             State: referenced  Device               Avail: 0/0 MB
D ahc0t15               State: referenced  Device               Avail: 0/0 MB

1 volumes:
V raid                  State: up          Plexes:       4  Size:         21 GB

4 plexes:
P raid.p0             S State: corrupt     Subdisks:    10  Size:         21 GB
P raid.p1             S State: corrupt     Subdisks:    10  Size:         21 GB
P raid.p2             S State: faulty      Subdisks:    10  Size:         21 GB
P raid.p3             S State: corrupt     Subdisks:    10  Size:         21 GB

40 subdisks:
S raid.p0.s0            State: crashed     PO:        0  B  Size:       2151 MB
S raid.p0.s1            State: up          PO:      512 kB  Size:       2151 MB
S raid.p0.s2            State: up          PO:     1024 kB  Size:       2151 MB
S raid.p0.s3            State: up          PO:     1536 kB  Size:       2151 MB
S raid.p0.s4            State: up          PO:     2048 kB  Size:       2151 MB
S raid.p0.s5            State: up          PO:     2560 kB  Size:       2151 MB
S raid.p0.s6            State: up          PO:     3072 kB  Size:       2151 MB
S raid.p0.s7            State: up          PO:     3584 kB  Size:       2151 MB
S raid.p0.s8            State: up          PO:     4096 kB  Size:       2151 MB
S raid.p0.s9            State: crashed     PO:     4608 kB  Size:       2151 MB
S raid.p1.s0            State: up          PO:        0  B  Size:       2151 MB
S raid.p1.s1            State: up          PO:      512 kB  Size:       2151 MB
S raid.p1.s2            State: up          PO:     1024 kB  Size:       2151 MB
S raid.p1.s3            State: up          PO:     1536 kB  Size:       2151 MB
S raid.p1.s4            State: obsolete    PO:     2048 kB  Size:       2151 MB
S raid.p1.s5            State: crashed     PO:     2560 kB  Size:       2151 MB
S raid.p1.s6            State: up          PO:     3072 kB  Size:       2151 MB
S raid.p1.s7            State: up          PO:     3584 kB  Size:       2151 MB
S raid.p1.s8            State: up          PO:     4096 kB  Size:       2151 MB
S raid.p1.s9            State: up          PO:     4608 kB  Size:       2151 MB
S raid.p2.s0            State: stale       PO:        0  B  Size:       2150 MB
S raid.p2.s9            State: stale       PO:     4608 kB  Size:       2150 MB
S raid.p3.s0            State: up          PO:        0  B  Size:       2151 MB
S raid.p3.s1            State: up          PO:      512 kB  Size:       2151 MB
S raid.p3.s2            State: up          PO:     1024 kB  Size:       2151 MB
S raid.p3.s3            State: up          PO:     1536 kB  Size:       2151 MB
S raid.p3.s4            State: stale       PO:     2048 kB  Size:       2151 MB
S raid.p3.s5            State: stale       PO:     2560 kB  Size:       2151 MB
S raid.p3.s6            State: down        PO:     3072 kB  Size:       2151 MB
S raid.p3.s7            State: down        PO:     3584 kB  Size:       2151 MB
S raid.p3.s8            State: down        PO:     4096 kB  Size:       2151 MB
S raid.p3.s9            State: down        PO:     4608 kB  Size:       2151 MB
S raid.p2.s1            State: down        PO:      512 kB  Size:       2150 MB
S raid.p2.s2            State: down        PO:     1024 kB  Size:       2150 MB
S raid.p2.s3            State: down        PO:     1536 kB  Size:       2150 MB
S raid.p2.s4            State: down        PO:     2048 kB  Size:       2150 MB
S raid.p2.s5            State: down        PO:     2560 kB  Size:       2150 MB
S raid.p2.s6            State: down        PO:     3072 kB  Size:       2150 MB
S raid.p2.s7            State: down        PO:     3584 kB  Size:       2150 MB
S raid.p2.s8            State: down        PO:     4096 kB  Size:       2150 MB

MESSAGES EXTRACT
=======================================================

Nov 29 23:03:44 bashful /kernel: FreeBSD 4.10-RELEASE #0: Tue May 25 22:47:12 GMT 2004
Nov 29 23:03:44 bashful /kernel: root@perseus.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC
Nov 29 23:03:44 bashful /kernel: CPU: Pentium III/Pentium III Xeon/Celeron (451.02-MHz 686-class CPU)
Nov 29 23:03:44 bashful /kernel: Features=0x383f9ff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE>
Nov 29 23:03:44 bashful /kernel: real memory  = 134217728 (131072K bytes)
Nov 29 23:03:44 bashful /kernel: avail memory = 125165568 (122232K bytes)
Nov 29 23:03:45 bashful /kernel: ahc0: <Adaptec 2940 Ultra SCSI adapter> port 0xde00-0xdeff mem 0xdffef000-0xdffeffff irq 10 at device 16.0 on pci0
Nov 29 23:03:45 bashful /kernel: aic7880: Ultra Wide Channel A, SCSI Id=7, 16/253 SCBs
Nov 29 23:03:47 bashful /kernel: da3 at ahc0 bus 0 target 9 lun 0
Nov 29 23:03:47 bashful /kernel: da3: <IBM DFHSS4W 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:47 bashful /kernel: da3: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:47 bashful /kernel: da3: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 29 23:03:47 bashful /kernel: da4 at ahc0 bus 0 target 10 lun 0
Nov 29 23:03:47 bashful /kernel: da4: <IBM DFHSS4W 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:47 bashful /kernel: da4: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:47 bashful /kernel: da4: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 29 23:03:48 bashful /kernel: da6 at ahc0 bus 0 target 12 lun 0
Nov 29 23:03:48 bashful /kernel: da6: <IBM DFHSS4W 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:48 bashful /kernel: da6: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:48 bashful /kernel: da6: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 29 23:03:48 bashful /kernel: da0 at ahc0 bus 0 target 2 lun 0
Nov 29 23:03:48 bashful /kernel: da0: <IBM DFHSS4W 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:48 bashful /kernel: da0: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:48 bashful /kernel: da0: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 29 23:03:48 bashful /kernel: da9 at ahc0 bus 0 target 15 lun 0
Nov 29 23:03:48 bashful /kernel: da9: <IBM DFHSS4W 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:48 bashful /kernel: da9: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:48 bashful /kernel: da9: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 29 23:03:48 bashful /kernel: da8 at ahc0 bus 0 target 14 lun 0
Nov 29 23:03:48 bashful /kernel: da8: <IBM DFHSS4W      !q 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:48 bashful /kernel: da8: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:48 bashful /kernel: da8: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 29 23:03:48 bashful /kernel: da1 at ahc0 bus 0 target 3 lun 0
Nov 29 23:03:48 bashful /kernel: da1: <IBM DFHSS4W 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:48 bashful /kernel: da1: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:48 bashful /kernel: da1: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 29 23:03:48 bashful /kernel: da2 at ahc0 bus 0 target 4 lun 0
Nov 29 23:03:49 bashful /kernel: da2: <IBM DFHSS4W 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:49 bashful /kernel: da2: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:49 bashful /kernel: da2: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 29 23:03:49 bashful /kernel: da7 at ahc0 bus 0 target 13 lun 0
Nov 29 23:03:49 bashful /kernel: da7: <IBM DFHSS4W 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:49 bashful /kernel: da7: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:49 bashful /kernel: da7: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 29 23:03:49 bashful /kernel: da5 at ahc0 bus 0 target 11 lun 0
Nov 29 23:03:49 bashful /kernel: da5: <IBM DFHSS4W      !q 4141> Fixed Direct Access SCSI-2 device
Nov 29 23:03:49 bashful /kernel: da5: 10.000MB/s transfers (5.000MHz, offset 8, 16bit), Tagged Queueing Enabled
Nov 29 23:03:49 bashful /kernel: da5: 4303MB (8813870 512 byte sectors: 64H 32S/T 4303C)
Nov 30 00:15:15 bashful /kernel: vinum: loaded
Nov 30 00:16:15 bashful /kernel: vinum: reading configuration from /dev/da8s1e
Nov 30 00:16:15 bashful /kernel: vinum: raid.p0.s0 is crashed
Nov 30 00:16:15 bashful /kernel: vinum: raid.p0 is faulty
Nov 30 00:16:16 bashful /kernel: vinum: raid.p0.s9 is crashed
Nov 30 00:16:16 bashful /kernel: vinum: raid.p0 is corrupt
Nov 30 00:16:16 bashful /kernel: vinum: raid.p1.s5 is crashed
Nov 30 00:16:16 bashful /kernel: vinum: No space for  on ahc0t03
Nov 30 00:16:16 bashful /kernel: Disabling configuration updates
Nov 30 00:16:16 bashful /kernel: vinum: No space for  on ahc0t04
Nov 30 00:16:16 bashful /kernel: vinum: No space for  on ahc0t09
Nov 30 00:16:16 bashful /kernel: vinum: No space for  on ahc0t10
Nov 30 00:16:16 bashful /kernel: vinum: No space for  on ahc0t11
Nov 30 00:16:16 bashful /kernel: vinum: No space for  on ahc0t12
Nov 30 00:16:16 bashful /kernel: vinum: No space for  on ahc0t13
Nov 30 00:16:16 bashful /kernel: vinum: No space for  on ahc0t14
Nov 30 00:16:16 bashful /kernel: vinum: raid.p3.s0 is down by force
Nov 30 00:16:17 bashful /kernel: vinum: raid.p3.s0 is up
Nov 30 00:16:17 bashful /kernel: vinum: raid.p3 is up
Nov 30 00:16:17 bashful /kernel: vinum: No space for raid.p3.s0 on drive ahc0t11 at offset -1
Nov 30 00:16:17 bashful /kernel: vinum: raid.p3.s1 is down by force
Nov 30 00:16:17 bashful /kernel: vinum: raid.p3.s0 is up
Nov 30 00:16:17 bashful /kernel: vinum: raid.p3.s1 is up
Nov 30 00:16:17 bashful /kernel: vinum: No space for raid.p3.s1 on drive ahc0t12 at offset -1
Nov 30 00:16:17 bashful /kernel: vinum: raid.p3.s2 is down by force
Nov 30 00:16:17 bashful /kernel: vinum: raid.p3.s0 is up
Nov 30 00:16:17 bashful /kernel: vinum: raid.p3.s1 is up
Nov 30 00:16:18 bashful /kernel: vinum: raid.p3.s2 is up
Nov 30 00:16:18 bashful /kernel: vinum: No space for raid.p3.s2 on drive ahc0t13 at offset -1
Nov 30 00:16:18 bashful /kernel: vinum: raid.p3.s3 is down by force
Nov 30 00:16:18 bashful /kernel: vinum: raid.p3.s0 is up
Nov 30 00:16:18 bashful /kernel: vinum: raid.p3.s1 is up
Nov 30 00:16:18 bashful /kernel: vinum: raid.p3.s2 is up
Nov 30 00:16:18 bashful /kernel: vinum: raid.p3.s3 is up
Nov 30 00:16:18 bashful /kernel: vinum: No space for raid.p3.s3 on drive ahc0t14 at offset -1
Nov 30 00:16:18 bashful /kernel: vinum: raid.p3 is corrupt
Nov 30 00:16:18 bashful /kernel: vinum: raid.p3.s6 is down by force
Nov 30 00:16:19 bashful /kernel: vinum: No space for raid.p3.s6 on drive ahc0t03 at offset -1
Nov 30 00:16:19 bashful /kernel: vinum: raid.p3.s7 is down by force
Nov 30 00:16:19 bashful /kernel: vinum: No space for raid.p3.s7 on drive ahc0t04 at offset -1
Nov 30 00:16:19 bashful /kernel: vinum: raid.p3.s8 is down by force
Nov 30 00:16:19 bashful /kernel: vinum: No space for raid.p3.s8 on drive ahc0t09 at offset -1
Nov 30 00:16:19 bashful /kernel: vinum: raid.p3.s9 is down by force
Nov 30 00:16:19 bashful /kernel: vinum: No space for raid.p3.s9 on drive ahc0t10 at offset -1
Nov 30 00:16:19 bashful /kernel: vinum: updating configuration from /dev/da7s1e
Nov 30 00:16:19 bashful /kernel: vinum: raid.p2.s1 is down by force
Nov 30 00:16:19 bashful /kernel: vinum: No space for raid.p2.s1 on drive ahc0t03 at offset -1
Nov 30 00:16:20 bashful /kernel: vinum: raid.p2.s2 is down by force
Nov 30 00:16:20 bashful /kernel: vinum: No space for raid.p2.s2 on drive ahc0t04 at offset -1
Nov 30 00:16:20 bashful /kernel: vinum: raid.p2.s3 is down by force
Nov 30 00:16:20 bashful /kernel: vinum: No space for raid.p2.s3 on drive ahc0t09 at offset -1
Nov 30 00:16:20 bashful /kernel: vinum: raid.p2.s4 is down by force
Nov 30 00:16:20 bashful /kernel: vinum: No space for raid.p2.s4 on drive ahc0t10 at offset -1
Nov 30 00:16:20 bashful /kernel: vinum: raid.p2.s5 is down by force
Nov 30 00:16:21 bashful /kernel: vinum: No space for raid.p2.s5 on drive ahc0t11 at offset -1
Nov 30 00:16:21 bashful /kernel: vinum: raid.p2.s6 is down by force
Nov 30 00:16:21 bashful /kernel: vinum: No space for raid.p2.s6 on drive ahc0t12 at offset -1
Nov 30 00:16:21 bashful /kernel: vinum: raid.p2.s7 is down by force
Nov 30 00:16:21 bashful /kernel: vinum: No space for raid.p2.s7 on drive ahc0t13 at offset -1
Nov 30 00:16:21 bashful /kernel: vinum: raid.p2.s8 is down by force
Nov 30 00:16:21 bashful /kernel: vinum: No space for raid.p2.s8 on drive ahc0t14 at offset -1
Nov 30 00:16:21 bashful /kernel: vinum: updating configuration from /dev/da6s1e
Nov 30 00:16:21 bashful /kernel: vinum: updating configuration from /dev/da5s1e
Nov 30 00:16:22 bashful /kernel: vinum: updating configuration from /dev/da4s1e
Nov 30 00:16:22 bashful /kernel: vinum: updating configuration from /dev/da3s1e
Nov 30 00:16:22 bashful /kernel: vinum: updating configuration from /dev/da2s1e
Nov 30 00:16:22 bashful /kernel: vinum: updating configuration from /dev/da1s1e
Nov 30 00:16:22 bashful /kernel: vinum: removing 6144 blocks of partial stripe at the end of raid.p2
Nov 30 00:16:22 bashful /kernel: Correcting length of raid.p2: was 79288320, is 44046340
Nov 30 00:16:22 bashful /kernel: vinum: removing 4100 blocks of partial stripe at the end of raid.p2




		