From: Howard Jones <howard.jones@network-i.net>
Date: Mon, 25 May 2009 14:57:48 +0100
To: freebsd-questions@freebsd.org
Subject: FreeBSD & Software RAID
Message-ID: <4A1AA3DC.5020300@network-i.net>

Hi,

Can anyone with experience of software RAID point me in the right direction, please? I've used gmirror before with no trouble, but nothing fancier.

I have a set of brand-new 1TB drives, a Sil3124 SATA card and a FreeBSD 7.1-p4 system. I created a RAID 5 set with gvinum:

    drive d0 device /dev/ad4s1a
    drive d1 device /dev/ad6s1a
    drive d2 device /dev/ad8s1a
    drive d3 device /dev/ad10s1a
    volume jumbo
      plex org raid5 256k
        sd drive d0
        sd drive d1
        sd drive d2
        sd drive d3

and it shows as up and happy. If I reboot, all the subdisks show as stale, and so the plex is down.
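For reference, a minimal sketch of how a config like the one above is normally fed to gvinum, plus one possible way to force stale subdisks back up after a reboot. The setstate invocation is my reading of gvinum(8), not a tested recipe, and the config file path is just an illustration:

```shell
# Load the module at boot: /boot/loader.conf needs the line
#   geom_vinum_load="YES"

# Create the volume from the config file shown above
# (the path /etc/gvinum.conf is an illustration only)
gvinum create /etc/gvinum.conf

# After a reboot that leaves the subdisks stale, it may be possible to
# force them up rather than wait for a parity rebuild -- an assumption
# based on gvinum(8)'s setstate, and only safe if nothing was written
# to the plex while it was down:
gvinum setstate -f up jumbo.p0.s0
gvinum setstate -f up jumbo.p0.s1
gvinum setstate -f up jumbo.p0.s2
gvinum setstate -f up jumbo.p0.s3

# Verify the subdisk and plex states
gvinum list
```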
It then seems to be doing a rebuild, although it wasn't doing one before; the plex would newfs, mount and accept data just fine before the reboot. Is there any way to avoid having to wait while gvinum apparently calculates the parity on all those zeroes? Am I missing some step to 'liven up' the plex before the first reboot? (loader.conf has the correct line to load gvinum at boot.) I tried again with 'gvinum start jumbo' before rebooting, and that made no difference.

Also, is the configuration file format actually documented anywhere? I got that example from someone's blog, but the gvinum manpage doesn't mention the format at all! It *does* have a few pages dedicated to things that don't work, which was handy... :-) The handbook is still talking about ccd and vinum, and mostly covers the complications of booting off such a device.

On the subject of documentation, I'm also assuming that this:

    S jumbo.p0.s2    State: I 1%    D: d2    Size: 931 GB

means it's 1% through initialising, because neither the states nor the output of 'list' are described in the manual either.

I was half-considering switching to ZFS, but the most positive thing I could find written about it (as implemented on FreeBSD) is that it "doesn't crash that much", so perhaps not. That was from a while ago, though. Does anyone use software RAID5 (or RAIDZ) for data they care about?

Cheers,
Howie
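P.S. For anyone weighing the same RAID5-vs-RAIDZ choice, the rough ZFS equivalent of the set above would be a raidz pool across the same four disks. This is only a sketch under that assumption; the pool name "jumbo" is carried over from the gvinum config:

```shell
# Hedged sketch: single raidz vdev over the same four drives
zpool create jumbo raidz ad4 ad6 ad8 ad10

# Show the raidz vdev, disk states, and any resilver/scrub in progress
zpool status jumbo
```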