Date:      Mon, 25 May 2009 14:57:48 +0100
From:      Howard Jones <howard.jones@network-i.net>
To:        freebsd-questions@freebsd.org
Subject:   FreeBSD & Software RAID
Message-ID:  <4A1AA3DC.5020300@network-i.net>

Hi,

Can anyone with experience of software RAID point me in the right
direction please? I've used gmirror before with no trouble, but nothing
fancier.

I have a set of brand new 1TB drives, a Sil3124 SATA card and a FreeBSD
7.1-p4 system.

I created a RAID 5 set with gvinum:
drive d0 device /dev/ad4s1a
drive d1 device /dev/ad6s1a
drive d2 device /dev/ad8s1a
drive d3 device /dev/ad10s1a
volume jumbo
        plex org raid5 256k
        sd drive d0
        sd drive d1
        sd drive d2
        sd drive d3

and it shows as up and happy. If I reboot, all the subdisks show as
stale, so the plex is down. It now appears to be rebuilding, although it
wasn't doing that before the reboot: the new plex would newfs, mount and
accept data quite happily.
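
For reference, the sequence I used was roughly this (the config file
name and mount point are just placeholders):

    gvinum create jumbo.conf        # jumbo.conf is the config above
    gvinum list                     # everything shows State: up
    newfs /dev/gvinum/jumbo
    mount /dev/gvinum/jumbo /jumbo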

Is there any way to avoid having to wait while gvinum apparently
calculates the parity on all those zeroes?

Am I missing some step to 'liven up' the plex before the first reboot?
(loader.conf has geom_vinum_load="YES" to load gvinum at boot.) I tried
again with 'gvinum start jumbo' before rebooting, and that made no difference.
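
The only other candidate I can see in the manpage is setstate, i.e.
something like:

    gvinum setstate -f up jumbo.p0.s0    # and the same for s1, s2, s3

but forcing subdisks up by hand on a parity plex feels like asking for
trouble, so I'm hoping there's a proper step I've missed.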

Also, is the configuration file format actually documented anywhere? I
got that example from someone's blog, but the gvinum manpage doesn't
mention the format at all! It *does* have a few pages dedicated to
things that don't work, which was handy... :-) The handbook is still
talking about ccd and vinum, and mostly covers the complications of
booting from such a device.
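
In case it helps anyone else searching the archives, here's my best
guess at what the statements mean (pieced together from that blog post
and the old vinum docs, so treat with caution):

    drive d0 device /dev/ad4s1a    # give a physical disk a name to refer to later
    volume jumbo                   # the logical volume that appears in /dev/gvinum
            plex org raid5 256k    # one plex, RAID-5 organisation, 256k stripe size
            sd drive d0            # one subdisk per drive; length defaults to all available space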

On the subject of documentation, I'm also assuming that this:
    S jumbo.p0.s2           State: I 1%     D: d2           Size: 931 GB
means it's 1% through initialising, because neither the states nor the
output of 'list' is described in the manual either.
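
The best I've found for watching it is just re-running the listing and
eyeballing that percentage:

    gvinum ls -v    # 'ls' lists only the subdisks; -v is more verbose

If the state letters are written down somewhere, I'd love a pointer.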

I was half-considering switching to ZFS, but the most positive thing I
could find written about it (as implemented on FreeBSD) is that it
"doesn't crash that much", so perhaps not. That was a while ago, though.

Does anyone use software RAID5 (or RAIDZ) for data they care about?

Cheers,

Howie


