Date:      Sun, 16 Mar 2003 09:20:10 +1030
From:      Greg 'groggy' Lehey <grog@FreeBSD.org>
To:        Vallo Kallaste <kalts@estpak.ee>
Cc:        Darryl Okahata <darrylo@soco.agilent.com>, current@FreeBSD.org
Subject:   Re: Vinum R5
Message-ID:  <20030315225010.GJ92629@wantadilla.lemis.com>
In-Reply-To: <20030315083454.GA935@kevad.internal>
References:  <20030220200317.GA5136@kevad.internal> <200302202228.OAA03775@mina.soco.agilent.com> <20030221080046.GA1103@kevad.internal> <20030227012959.GA89235@wantadilla.lemis.com> <20030227095302.GA1183@kevad.internal> <20030301184310.GA631@kevad.internal> <20030314024602.GL77236@wantadilla.lemis.com> <20030314080528.GA1174@kevad.internal> <20030315013223.GC90698@wantadilla.lemis.com> <20030315083454.GA935@kevad.internal>



On Saturday, 15 March 2003 at 10:34:54 +0200, Vallo Kallaste wrote:
> On Sat, Mar 15, 2003 at 12:02:23PM +1030, Greg 'groggy' Lehey
> <grog@FreeBSD.org> wrote:
>
>>> -current, the system panicked every time at the end of parity
>>> initialisation (raidctl -iv raid?).  So I used the raidframe
>>> patch for -stable at
>>> http://people.freebsd.org/~scottl/rf/2001-08-28-RAIDframe-stable.diff.gz
>>> I had to do some patching by hand, but otherwise it works well.
>>
>> I don't think that problems with RAIDFrame are related to these
>> problems with Vinum.  I seem to remember a commit to the head branch
>> recently (in the last 12 months) relating to the problem you've seen.
>> I forget exactly where it went (it wasn't from me), and in cursory
>> searching I couldn't find it.  It's possible that it hasn't been
>> MFC'd, which would explain your problem.  If you have a 5.0 machine,
>> it would be interesting to see if you can reproduce it there.
>
> Yes, yes, the whole raidframe story was only meant as background on
> the conditions under which I did the raidframe vs. Vinum testing.
> It has nothing to do with Vinum, except that raidframe works and
> Vinum does not.
>
>>> Will it suffice to switch off power to one disk to simulate a
>>> "more" real-world disk failure?  Are there any hidden pitfalls in
>>> failing and restoring non-hotswap disks?
>>
>> I don't think so.  It was more thinking aloud than anything else.  As
>> I said above, this is the way I tested things in the first place.
>
> Ok, I'll try to simulate the disk failure by switching off the
> power, then.

I think you misunderstand.  I simulated the disk failures by doing a
"stop -f".  I can't see how the way the disks go down could influence
the integrity of the revive.  I can see that powering them down might
not do the disks any good.
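For the record, a failure-and-revive cycle of the kind I mean would
look something like the session below.  This is only a sketch: the
drive name "d1" is a placeholder, and the exact object you start
(drive, plex or subdisk) depends on your configuration, so check
vinum(8) on your system before relying on it.

```sh
# Force the drive down without touching the hardware; "-f" marks
# it failed even though it is still attached and working.
vinum stop -f d1

# Confirm the volume is now running degraded.
vinum list

# Bring the object back and let Vinum revive the stale data
# from the surviving disks and parity.
vinum start d1
```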

Greg
--
See complete headers for address and phone numbers





