Date: Thu, 18 Oct 2007 20:00:43 -0700 (PDT)
From: FX Charpentier <charpentierfx@yahoo.com>
To: freebsd-questions@freebsd.org
Subject: Vinum Raid 5: Bad HD Crashes Server. Can't rebuild array.
Message-ID: <360972.24315.qm@web36211.mail.mud.yahoo.com>
Hi there,

I set up this FreeBSD 4.9 box a while back (I know, I know, this isn't the latest version; but look, it's been running perfectly since then), with the OS on a SCSI drive and a vinum volume on 3 IDE 200GB drives hooked to a Promise IDE controller.

A) The Crash
= = = = = = =

The vinum volume is set up in a RAID 5 configuration. Here is how it's configured:

  drive d1 device /dev/ad4s1h
  drive d2 device /dev/ad5s1h
  drive d3 device /dev/ad6s1h
  volume datastore
    plex org raid5 256k
      subdisk length 185g drive d1
      subdisk length 185g drive d2
      subdisk length 185g drive d3

Each drive in the array had a single partition and was labeled with a type of "vinum" and an "h" partition.

Last Saturday night, drive d2 (ad5) went bad. To my surprise the server stopped, crashed and automatically rebooted. I got a "kernel panic" at the console, and the server would stop during the boot process when trying to start / mount the vinum volume.

=> Q1: Isn't a RAID 5 configuration supposed to let me keep running on a degraded array when one of the drives is missing?
=> Q2: Did I do anything wrong with the vinum config above?

B) The Recovery (well, sort of)
= = = = = = = = = = = = = = =

So, the next day I got a brand new 250GB hard drive and replaced d2 (ad5). Then I used a fixit floppy to comment vinum out of both rc.conf and fstab. This way I was able to start the server.

I prepared the new drive with fdisk first, then used 'disklabel' to change the type to "vinum" and the partition to "h". After that I created a special vinum configuration file called 'recoverdata' to recover the volume, and put "drive d2 device /dev/ad5s1h" in it. Finally I ran: vinum create -v recoverdata.
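In shell terms, the preparation of the replacement drive looked roughly like this (a sketch from memory; the exact fdisk/disklabel flags may differ on your system, and ad5 is my replacement disk):

```
# Prepare the replacement drive (flags approximate; ad5 is the new disk)
fdisk -BI ad5            # write a fresh slice table covering the whole disk
disklabel -e ad5s1       # edit the label: add an "h" partition of type "vinum"

# recoverdata -- a one-line vinum config file naming only the replacement:
#   drive d2 device /dev/ad5s1h

vinum create -v recoverdata   # re-introduce d2 to vinum
```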
This worked, and I finally entered vinum in interactive mode.

First, I started vinum with the 'start' command. That worked. Next, I ran 'ld -v' to bring up information about the vinum drives. Drive d1 came up with the right information. d2 came up with some information. d3 had all the fields, but no information; it was just like a drive with only blank information.

I checked that d2, the formerly failed drive, was pointing at ad5, then ran 'lv -r' to make sure that datastore.p0 said 'degraded'. It did. Finally, to rebuild the array, I ran: start datastore.p0.

I didn't notice right away, but at that point I had "vinum[xxx]: reviving datastore.p0.s0". I started to get worried, since the subdisk to rebuild is datastore.p0.s1. Then the revive failed at 69%.

I tried "start datastore.p0.s1" to rebuild the array, but that failed at 69% too.

=> Q3: What can I do to revive the array? I don't know what to do at this point.
=> Q4: Did I do anything wrong in the recovery process? I just want to make sure I learn from my mistakes.

Many thanks in advance for your help.
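P.S. On Q1, here is my understanding of why a degraded array should have kept working: RAID 5 keeps one parity block per stripe, the XOR of the data blocks, so any single missing block can be recomputed from the surviving ones. A toy one-byte illustration (plain sh arithmetic, nothing vinum-specific):

```shell
# Two "data" bytes and their RAID-5 style parity (bitwise XOR).
d1=$(( 0x5A ))
d2=$(( 0x3C ))
parity=$(( d1 ^ d2 ))

# Pretend d2's drive died: recover its byte from the surviving data + parity.
recovered=$(( parity ^ d1 ))
echo "$recovered"   # prints 60, i.e. 0x3C, the lost d2 byte
```

So losing one of the three drives should only cost redundancy, not availability.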