Date:      Sat, 3 Nov 2007 02:43:06 +0100
From:      Ulf Lilleengen <lulf@stud.ntnu.no>
To:        Peter Giessel <pgiessel@mac.com>
Cc:        freebsd-geom@freebsd.org
Subject:   Re: gvinum and raid5
Message-ID:  <20071103014306.GA22755@stud.ntnu.no>
In-Reply-To: <0001DFFC-0115-1000-9A80-3F81219C1B16-Webmail-10013@mac.com>
References:  <8d4842b50710310814w3880f7d3ldf8abe3a236cbcc8@mail.gmail.com> <20071031215756.GB1670@stud.ntnu.no> <472AA59F.3020103@rootnode.com> <0001DFFC-0115-1000-9A80-3F81219C1B16-Webmail-10013@mac.com>

On Fri, Nov 02, 2007 at 12:38:36 -0700, Peter Giessel wrote:
> On Friday, November 02, 2007, at 01:04AM, "Joe Koberg" <joe@rootnode.com> wrote:
> >Ulf Lilleengen wrote:
> >> On Wed, Oct 31, 2007 at 12:14:18 -0300, Marco Haddad wrote:
> >>   
> >>> I found in recent research that a lot of people say gvinum should not be
> >>> trusted when it comes to raid5. I began to get worried. Am I alone using
> >>>
> >>>     
> >> I'm working on it, and there are definitely people still using it. (I've
> >> received a number of private mails as well as those seen on this list). IMO,
> >> gvinum can be trusted when it comes to raid5. I've not experienced any
> >> corruption bugs or anything like that with it.
> >>   
> >
> >The source of the mistrust may be the fact that few software-only RAID-5 
> >systems can guarantee write consistency across a multi-drive 
> >read-update-write cycle in the case of, e.g., power failure.
> 
> That may be the true source, but my source of mistrust comes from a few
> drive failures and gvinum's inability to rebuild the replaced drive.
> 
> It worked fine under vinum in tests; I tried the same thing in gvinum (granted,
> this was under FreeBSD 5), and the array failed to rebuild.
> 
> I can't be 100% sure whether it was a flaky ATA controller or gvinum's
> fault, and I no longer have access to the box to play with, but when I was
> playing with gvinum, replacing a failed drive usually resulted in panics.

Well, all I can say is that I've tested this many times with gvinum on
CURRENT/7.x/6.x as well as in my SoC work, and I've updated the manpage with
examples of how to do this.
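
Roughly, the examples look something like this (the volume, plex, subdisk
and device names below are only placeholders; see gvinum(8) for the exact
steps):

  # gvinum list
  (the subdisk that lived on the failed disk shows up as down/stale)
  # cat newdrive.conf
  drive d3 device /dev/da3
  # gvinum create newdrive.conf
  # gvinum start raid5vol.p0.s3
  (rebuilds the stale subdisk from the data and parity on the other drives)
  # gvinum checkparity raid5vol.p0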

As for the software RAID-5 problems: they are hard to "fix" since gvinum
doesn't really know anything about its consumers. However, it could be
interesting to try out different optimizations, such as not reading parity
when a request is large enough to cover a full stripe, or some sort of write
cache that holds data until a large enough request can be issued.
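
To make the full-stripe idea concrete, here is a rough sketch of the two
parity paths (this is not gvinum code; the stripe geometry and names are
made up): a write covering the whole stripe can compute parity from the new
data alone, while a partial write first has to read the old data and the
old parity.

/*
 * Hypothetical 3-data-disk stripe with one parity disk and a fixed
 * stripe unit size; plain XOR parity as used by RAID-5.
 */
#include <stddef.h>
#include <string.h>

#define NDATA   3               /* data disks per stripe (assumed) */
#define SUSIZE  65536           /* stripe unit size in bytes (assumed) */

/* Full-stripe write: parity is the XOR of the new data only, no reads. */
static void
parity_full_stripe(const unsigned char *newdata[NDATA], unsigned char *parity)
{
        size_t d, i;

        memset(parity, 0, SUSIZE);
        for (d = 0; d < NDATA; d++)
                for (i = 0; i < SUSIZE; i++)
                        parity[i] ^= newdata[d][i];
}

/*
 * Partial write (read-modify-write): the old data and old parity must be
 * read from disk first, then new parity = old parity ^ old data ^ new data.
 */
static void
parity_rmw(const unsigned char *olddata, const unsigned char *newdata,
    const unsigned char *oldparity, unsigned char *newparity)
{
        size_t i;

        for (i = 0; i < SUSIZE; i++)
                newparity[i] = oldparity[i] ^ olddata[i] ^ newdata[i];
}

A write cache would basically just be a way of collecting small writes
until the first case applies.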
-- 
Ulf Lilleengen