Date:      Sat, 30 May 2009 16:27:44 -0400
From:      Mike Meyer <mwm-keyword-freebsdhackers2.e313df@mired.org>
To:        xorquewasp@googlemail.com
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: Request for opinions - gvinum or ccd?
Message-ID:  <20090530162744.5d77e9d1@bhuda.mired.org>
In-Reply-To: <20090530191840.GA68514@logik.internal.network>
References:  <20090530175239.GA25604@logik.internal.network> <20090530144354.2255f722@bhuda.mired.org> <20090530191840.GA68514@logik.internal.network>

On Sat, 30 May 2009 20:18:40 +0100
xorquewasp@googlemail.com wrote:
> > If you're running a 7.X 64-bit system with a couple of GIG of ram,
> > expect it to be in service for years without having to reformat the
> > disks, and can afford another drive, I'd recommend going to raidz on a
> > three-drive system. That will give you close to the size/performance
> > of your RAID0 system, but let you lose a disk without losing data. The
> > best you can do with zfs on two disks is a mirror, which means write
> > throughput will suffer.
> 
> Certainly a lot to think about.
> 
> The system has 12gb currently, with room to upgrade.  I currently have
> two 500gb drives and one 1tb drive. I wanted the setup to be essentially
> two drives striped, backed up onto one larger one nightly. I wanted the
> large backup drive to be as "isolated" as possible, eg, in the event of
> some catastrophic hardware failure, I can remove it and place it in
> another machine without a lot of stressful configuration to recover the
> data (not possible with a RAID configuration involving all three drives,
> as far as I'm aware).

The last bit is wrong. Moving a zfs pool between two systems is pretty
straightforward. The configuration information is stored on the drives
themselves; you just do "zpool import <pool>" after plugging them in,
and if the mount point exists, it'll mount it. If the system crashed
with the zfs pool active, you may have to add -f to force the
import. Geom works much the same way, except you can configure it not
to write the config data to disk, which forces you to reconfigure
manually (the behavior you were expecting). I'm not sure geom is as
smart if the drives change names, though.
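
A sketch of that import sequence, assuming a pool named "tank" (the
pool name is hypothetical; substitute whatever you called yours):

```shell
# List pools that zfs can find on the newly attached drives
zpool import

# Import the pool; zfs reads its configuration from the disk labels
zpool import tank

# If the pool was last active on a system that crashed, force it
zpool import -f tank
```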

RAID support and volume management have come a long way from the days
of ccd and vinum. zfs in particular is a major advance. If you aren't
aware of its advantages, take the time to read the zfs & zpool man
pages, at the very least, before committing to geom (not that geom
isn't pretty slick in and of itself, but zfs solves a more pressing
problem).

Hmm. Come to think of it, you ought to be able to use gstripe to
stripe your disks, then put a zpool on that, which should get you the
advantages of zfs on a striped disk. But that does seem odd to me.
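
Untested, but roughly this, assuming two disks ad0 and ad1 (the device
names are hypothetical) and the geom_stripe module:

```shell
# Load the stripe class and build one striped device from the two disks
kldload geom_stripe
gstripe label -v st0 /dev/ad0 /dev/ad1

# Put a zfs pool on top of the resulting striped provider
zpool create tank /dev/stripe/st0
```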


       <mike
-- 
Mike Meyer <mwm@mired.org>		http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.

O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


