Date:      Wed, 15 Dec 2004 19:16:59 -0500
From:      asym <asym@rfnj.org>
To:        Gianluca <gianluca@gmail.com>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: drive failure during rebuild causes page fault
Message-ID:  <6.1.2.0.2.20041215190056.02f9bfb8@mail.rfnj.org>
In-Reply-To: <41C0CF5F.3080009@kzsu.org>
References:  <20041213052628.GB78120@meer.net> <20041213054159.GC78120@meer.net> <20041213060549.GE78120@meer.net> <20041213192119.GB4781@meer.net> <41BE8F2D.8000407@DeepCore.dk> <a9ef27270412151516fcc7720@mail.gmail.com> <41C0CF5F.3080009@kzsu.org>

At 18:57 12/15/2004, Gianluca wrote:
>actually all the data I plan to keep on that server is gonna be backed up, 
>either to cdr/dvdr or in the original audio cds that I still have. what I 
>meant by integrity is trying to avoid having to go back to the backups to 
>restore 120G (or more in this case) that were on a dead drive. I've done 
>that before, and even if it's not mission-critical data, it remains a huge 
>PITA :)

That's true.  Restoring is always a pain in the ass, no matter the media 
you use.


>thanks for the detailed explanation of how RAID5 works, somehow I didn't 
>really catch the distinction between the normal and degraded operations on 
>the array.
>
>what would be your recommendations for this particular (and very limited) 
>application?

Honestly I'd probably go for a RAID 1+0 setup.  It gives up half the total 
space to mirroring, but it has none of the performance penalties of 
RAID-5, and up to half the drives in the array can fail (one from each 
mirror pair) with nothing but speed being degraded.  You can loosely 
think of the mirror copies as a second dedicated array for 'backups' if 
you want, with the normal caveats -- namely that "destroyed" data cannot 
be recovered, such as things purposely deleted.
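
To put rough numbers on it (a quick sketch; I'm assuming eight 120 GB 
drives just to match the sizes you mentioned -- the arithmetic is the 
same for any count):

# Usable space for n drives of s GB each -- illustrative numbers only.
n, s = 8, 120          # assumption: eight 120 GB IDE drives

raid10 = (n // 2) * s  # half the drives hold mirror copies
raid5  = (n - 1) * s   # one drive's worth of space goes to parity

print(f"RAID 1+0: {raid10} GB usable")  # 480 GB
print(f"RAID-5:   {raid5} GB usable")   # 840 GB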

RAID-5 sacrifices write speed and redundancy for the sake of space: every 
small write turns into four disk operations (read old data, read old 
parity, then write both back updated).  Since you're using IDE and the 
drives are pretty cheap, I don't see the need for such a sacrifice.

Just make sure the controller can do "real" 1+0.  Several vendors are 
confused about the differences between 1+0, 0+1, and 10 -- they 
mistakenly sell their RAID 0+1 support as "RAID-10".

The difference is pretty important though.  If you have, say, 8 drives, 
in RAID 1+0 (aka RAID 10) you would first create 4 RAID-1 mirrors of 2 
disks each, and then stripe across those 4 virtual disks with RAID-0.  
This is optimal: up to 4 drives can fail, provided they all come from 
different RAID-1 pairs.

In 0+1, you first create two 4-disk RAID-0 arrays and then mirror one 
against the other to make one large RAID-1 volume.  This setup, which 
has *no* benefits over 1+0, loses the entire 4-disk RAID-0 stripe set a 
disk belongs to the moment that one disk fails, leaving you with no 
redundancy -- the whole array runs degraded off the remaining 4-disk 
RAID-0 set, and if any drive in *that* set fails, you're smoked.
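
If you want to convince yourself, here's a quick sketch (a toy model in 
Python, nothing to do with any real controller) that counts which 4-disk 
failure combinations each 8-drive layout survives:

from itertools import combinations

DISKS = range(8)
PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]    # 1+0: four 2-disk mirrors
HALVES = [(0, 1, 2, 3), (4, 5, 6, 7)]       # 0+1: two 4-disk stripes

def survives_10(failed):
    # 1+0 lives as long as no mirror pair loses *both* members.
    return all(not set(pair) <= failed for pair in PAIRS)

def survives_01(failed):
    # 0+1 lives only while at least one whole stripe set is untouched.
    return any(not (set(half) & failed) for half in HALVES)

combos = [set(c) for c in combinations(DISKS, 4)]
print(sum(map(survives_10, combos)), "of", len(combos), "for 1+0")  # 16
print(sum(map(survives_01, combos)), "of", len(combos), "for 0+1")  # 2

Of the 70 ways to lose 4 disks out of 8, 1+0 survives 16 (one disk from 
each pair) while 0+1 survives only 2 (the cases where exactly one whole 
stripe set dies).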

If you want redundancy so you can avoid restoring from backup, and you 
can afford more disks, go 1+0.  If you can't afford more disks, then one 
of the striped+parity solutions (RAID-3, -4, -5) is all you can do... but 
be ready to see write performance anywhere from "ok" on a $1500 
controller, to "annoying" on a sub-$500 controller, to "painfully slow" 
on anything down at the cheap end, including most IDE controllers.  Look 
up the controller, find out which I/O chip it uses (most are Intel-based, 
either StrongARM or i960) and see whether the chip supports hardware XOR.  
If it doesn't, you'll really wish it did.
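
For the curious, the XOR in question is plain byte-wise exclusive-or 
across the stripe.  A toy sketch (my own illustration, three data blocks 
plus one parity block) of the read-modify-write a RAID-5 controller does 
on every small write:

def xor(a, b):
    # Byte-wise XOR of two equal-length blocks.
    return bytes(x ^ y for x, y in zip(a, b))

# A toy 4-column stripe: three data blocks and their parity.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor(xor(d0, d1), d2)

# Small write to d1: read old data and old parity, write new data and
# new parity -- four I/Os where a mirror needs two.  That's the RAID-5
# write penalty; hardware XOR only removes the CPU cost of the math.
new_d1 = b"XXXX"
parity = xor(xor(parity, d1), new_d1)
d1 = new_d1

# Rebuilding a lost block: XOR the survivors with the parity.
assert xor(xor(d0, d1), parity) == b"CCCC"

Without a hardware XOR engine, that exclusive-or runs on the card's (or 
host's) CPU for every stripe written, which is where cheap controllers 
fall over.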



