Date:      Sat, 3 Jun 2000 16:56:40 +0930
From:      Greg Lehey <grog@lemis.com>
To:        Greg Skouby <gskouby@ns0.sitesnow.com>
Cc:        freebsd-questions@FreeBSD.ORG
Subject:   Re: vinum help. corrupt raid 5 volume
Message-ID:  <20000603165640.M30249@wantadilla.lemis.com>
In-Reply-To: <Pine.BSF.4.10.10006021407210.41106-100000@ns0.sitesnow.com>
References:  <Pine.BSF.4.10.10006021423590.42498-100000@ns0.sitesnow.com> <Pine.BSF.4.10.10006021407210.41106-100000@ns0.sitesnow.com>

On Friday,  2 June 2000 at 14:11:26 -0400, Greg Skouby wrote:
> Hello,
>
> We have been using RAID-5 on a 3.3-RELEASE system quite successfully
> until this morning, when we got these messages in /var/log/messages:
>
> Jun  2 10:10:47 mail2 /kernel: (da3:ahc0:0:4:0): Invalidating pack
> Jun  2 10:10:47 mail2 /kernel: raid5.p0.s3: fatal read I/O error
> Jun  2 10:10:47 mail2 /kernel: vinum: raid5.p0.s3 is crashed by force
> Jun  2 10:10:47 mail2 /kernel: vinum: raid5.p0 is degraded
> Jun  2 10:10:47 mail2 /kernel: d: fatal drive I/O error
> Jun  2 10:10:47 mail2 /kernel: vinum: drive d is down
> Jun  2 10:10:47 mail2 /kernel: raid5.p0.s3: fatal write I/O error
> Jun  2 10:10:47 mail2 /kernel: vinum: raid5.p0.s3 is stale by force
> Jun  2 10:10:47 mail2 /kernel: d: fatal drive I/O error
> Jun  2 10:10:47 mail2 /kernel: biodone: buffer already done

On Friday,  2 June 2000 at 14:27:42 -0400, Greg Skouby wrote:
> Hello again,
>
> I just sent a message regarding raid5 and vinum a couple of minutes ago. I
> managed to get the volume to this state:
> Configuration summary
>
> Drives:         4 (8 configured)
> Volumes:        1 (4 configured)
> Plexes:         1 (8 configured)
> Subdisks:       4 (16 configured)
>
> D a                     State: up       Device /dev/da0h        Avail:0/22129 MB (0%)
> D b                     State: up       Device /dev/da1h        Avail:0/22129 MB (0%)
> D c                     State: up       Device /dev/da2h        Avail:0/22129 MB (0%)
> D d                     State: up       Device /dev/da3h        Avail:0/22129 MB (0%)
>
> V raid5                 State: up       Plexes:       1 Size:         64GB
>
> P raid5.p0           R5 State: degraded Subdisks:     4 Size:         64GB
>
> S raid5.p0.s0           State: up       PO:        0  B Size:         21GB
> S raid5.p0.s1           State: up       PO:      512 kB Size:         21GB
> S raid5.p0.s2           State: up       PO:     1024 kB Size:         21GB
> S raid5.p0.s3           State: reviving PO:     1536 kB Size:         21GB
>
> How long does the reviving process take?

That depends on the size and speed of the drives.  I'd expect this to
take an hour or two.  You should see heavy disk activity.
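
If you want to keep an eye on it, something along these lines should
work (a rough sketch; the exact output of "vinum list" varies between
versions, so go by the state column rather than any percentage):

  # vinum list                    # raid5.p0.s3 should show "reviving", then "up"
  # iostat -w 5 da0 da1 da2 da3   # sustained activity on all four drives while it runs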

> I saw that Mr. Lehey noted that there were some problems with raid5
> and the start raid5.p0.s3 command.

I must say you're brave running RAID-5 on 3.3-RELEASE.

> Is there anything else I can do? Thanks for your time.

I'd suggest you leave it the way it is at the moment.  There are so
many bugs in the revive code in 3.3 that it's not worth trying to
finish the revive there.  I'm about to commit a whole lot of fixes to
3-STABLE; once I've done that, you can upgrade, and reviving the plex
should then work.
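
Once you're on the fixed 3-STABLE, restarting the revive should be no
more than something like this (assuming the subdisk is still marked
stale; adjust the object name if yours differs):

  # vinum start raid5.p0.s3       # kick off the revive of the stale subdisk
  # vinum list                    # the subdisk should go "reviving" and end up "up"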

Greg
--
When replying to this message, please copy the original recipients.
For more information, see http://www.lemis.com/questions.html
Finger grog@lemis.com for PGP public key
See complete headers for address and phone numbers

