Date:      Fri, 14 Mar 2003 10:05:28 +0200
From:      Vallo Kallaste <kalts@estpak.ee>
To:        "Greg 'groggy' Lehey" <grog@FreeBSD.org>
Cc:        Darryl Okahata <darrylo@soco.agilent.com>, current@FreeBSD.org
Subject:   Re: Vinum R5 [was: Re: background fsck deadlocks with ufs2 and big disk]
Message-ID:  <20030314080528.GA1174@kevad.internal>
In-Reply-To: <20030314024602.GL77236@wantadilla.lemis.com>
References:  <20030220200317.GA5136@kevad.internal> <200302202228.OAA03775@mina.soco.agilent.com> <20030221080046.GA1103@kevad.internal> <20030227012959.GA89235@wantadilla.lemis.com> <20030227095302.GA1183@kevad.internal> <20030301184310.GA631@kevad.internal> <20030314024602.GL77236@wantadilla.lemis.com>

On Fri, Mar 14, 2003 at 01:16:02PM +1030, Greg 'groggy' Lehey
<grog@FreeBSD.org> wrote:

> > So I did. I borrowed two SCSI disks and a 50-pin cable. Things
> > haven't improved a bit, I'm very sorry to say.
> 
> Sorry for the slow reply to this.  I thought it would make sense to
> try things out here, and so I kept trying to find time, but I have
> to admit I just won't have any for a while.  I haven't forgotten,
> and I hope that in a few weeks' time I can spend some time chasing
> down a whole lot of Vinum issues.  This is definitely the worst I
> have seen, and I'm really puzzled why it always happens to you.
> 
> > # simulate disk crash by forcing one arbitrary subdisk down
> > # seems that vinum doesn't return values for command completion status
> > # checking?
> > echo "Stopping subdisk.. degraded mode"
> > vinum stop -f r5.p0.s3	# assume it was successful
> 
> I wonder if there's something relating to stop -f that doesn't happen
> during a normal failure.  But this was exactly the way I tested it in
> the first place.

Thank you, Greg, I really appreciate your ongoing effort to make
Vinum a stable, trusted volume manager.
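
About the missing completion status: for now I verify the subdisk
state from the list output instead of the exit code. A rough,
untested sketch, assuming "vinum ls" with an object name prints a
line containing the subdisk's state:

# fail the subdisk, then confirm it really went down instead of
# trusting the exit status of the vinum command
vinum stop -f r5.p0.s3
if vinum ls r5.p0.s3 | grep -q down; then
    echo "r5.p0.s3 is down, continuing in degraded mode"
else
    echo "r5.p0.s3 did not go down, aborting" >&2
    exit 1
fi
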
I have to add some facts to the mix. RAIDframe on the same hardware
does not have any problems. The later tests I conducted were done
under -stable, because I couldn't get RAIDframe to work under
-current: the system panicked every time at the end of parity
initialisation (raidctl -iv raid?). So I used the RAIDframe patch
for -stable at
http://people.freebsd.org/~scottl/rf/2001-08-28-RAIDframe-stable.diff.gz
I had to do some patching by hand, but otherwise it works well.
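
For completeness, the sequence I use to bring the RAIDframe set up
is roughly this (the config file path is only an example from my
setup):

# configure the set, stamp the component labels (the serial number
# is arbitrary), then initialise parity -- the last step is where
# -current panics for me
raidctl -C /etc/raid0.conf raid0
raidctl -I 20030314 raid0
raidctl -iv raid0
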
Will it suffice to switch off the power to one disk to simulate a
"more real-world" disk failure? Are there any hidden pitfalls in
failing and restoring operation of non-hotswap disks?
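
On the vinum side I would expect the recovery to look something
like the following once power is back; just a sketch, assuming
"vinum start" on the stale subdisk kicks off a revive:

# after powering the disk back on (and rebooting, since these
# disks are not hot-swappable), revive the failed subdisk
vinum start r5.p0.s3
# check the revive progress
vinum ls r5.p0.s3
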
-- 

Vallo Kallaste
