Date:      Sun, 31 May 2009 13:13:24 +0100
From:      krad <kraduk@googlemail.com>
To:        "'Mike Meyer'" <mwm-keyword-freebsdhackers2.e313df@mired.org>, <xorquewasp@googlemail.com>
Cc:        freebsd-hackers@freebsd.org
Subject:   RE: Request for opinions - gvinum or ccd?
Message-ID:  <A5BB2D2B836A4438B1B7BD8420FCC6A3@uk.tiscali.intl>
In-Reply-To: <20090530162744.5d77e9d1@bhuda.mired.org>
References:  <20090530175239.GA25604@logik.internal.network> <20090530144354.2255f722@bhuda.mired.org> <20090530191840.GA68514@logik.internal.network> <20090530162744.5d77e9d1@bhuda.mired.org>

Please don't whack gstripe and ZFS together. It should work, but it's ugly
and you might run into issues, and getting out of them will be harder than
with a pure ZFS solution.

ZFS supports striping across vdevs by default.

E.g.

zpool create data da1
zpool add data da2

would create a striped dataset across da1 and da2.

zpool create data mirror da1 da2
zpool add data mirror da3 da4

This would create a RAID 10 across all four drives.

zpool create data raidz2 da1 da2 da3 da5
zpool add data raidz2 da6 da7 da8 da9

This would create a RAID 60 equivalent (two raidz2 vdevs striped together).

If you use the attach keyword instead of add, the new disk is mirrored onto
an existing vdev rather than striped into the pool.
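
E.g. (a minimal sketch, following the device names above):

# turn the single-disk vdev da1 into a two-way mirror by attaching da2
zpool attach data da1 da2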

Just for fun, here is one of the configs off one of our Sun X4500s at work.
It's OpenSolaris, not FreeBSD, but it is ZFS: one whopping big array of
~28 TB.

zpool create -O compression=lzjb -O atime=off data raidz2 c3t0d0 c4t0d0
c8t0d0 c10t0d0 c11t0d0 c3t1d0 c4t1d0 c8t1d0 c9t1d0 c10t1d0 c11t1d0 raidz2
c3t2d0 c4t2d0 c8t2d0 c9t2d0 c11t2d0 c3t3d0 c4t3d0 c8t3d0 c9t3d0 c10t3d0
c11t3d0 raidz2 c3t4d0 c4t4d0 c8t4d0 c10t4d0 c11t4d0 c3t5d0 c4t5d0 c8t5d0
c9t5d0 c10t5d0 c11t5d0 raidz2 c3t6d0 c4t6d0 c8t6d0 c9t6d0 c10t6d0 c11t6d0
c3t7d0 c4t7d0 c9t7d0 c10t7d0 c11t7d0 spare c10t2d0 c8t7d0

$ zpool status
  pool: archive-2
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: scrub completed after 11h9m with 0 errors on Sun May 31 01:09:22 2009
config:

        NAME         STATE     READ WRITE CKSUM
        archive-2    ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c3t0d0   ONLINE       0     0     0
            c4t0d0   ONLINE       0     0     0
            c8t0d0   ONLINE       0     0     0
            c10t0d0  ONLINE       0     0     0
            c11t0d0  ONLINE       0     0     0
            c3t1d0   ONLINE       0     0     0
            c4t1d0   ONLINE       0     0     0
            c8t1d0   ONLINE       0     0     0
            c9t1d0   ONLINE       0     0     0
            c10t1d0  ONLINE       0     0     0
            c11t1d0  ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c3t2d0   ONLINE       0     0     0
            c4t2d0   ONLINE       0     0     0
            c8t2d0   ONLINE       0     0     0
            c9t2d0   ONLINE       0     0     0
            c11t2d0  ONLINE       0     0     0
            c3t3d0   ONLINE       0     0     0
            c4t3d0   ONLINE       0     0     0
            c8t3d0   ONLINE       0     0     0
            c9t3d0   ONLINE       0     0     0
            c10t3d0  ONLINE       0     0     0
            c11t3d0  ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c3t4d0   ONLINE       0     0     0
            c4t4d0   ONLINE       0     0     0
            c8t4d0   ONLINE       0     0     0
            c10t4d0  ONLINE       0     0     0
            c11t4d0  ONLINE       0     0     0
            c3t5d0   ONLINE       0     0     0
            c4t5d0   ONLINE       0     0     0
            c8t5d0   ONLINE       0     0     0
            c9t5d0   ONLINE       0     0     0
            c10t5d0  ONLINE       0     0     0
            c11t5d0  ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c3t6d0   ONLINE       0     0     0
            c4t6d0   ONLINE       0     0     0
            c8t6d0   ONLINE       0     0     0
            c9t6d0   ONLINE       0     0     0
            c10t6d0  ONLINE       0     0     0
            c11t6d0  ONLINE       0     0     0
            c3t7d0   ONLINE       0     0     0
            c4t7d0   ONLINE       0     0     0
            c9t7d0   ONLINE       0     0     0
            c10t7d0  ONLINE       0     0     0
            c11t7d0  ONLINE       0     0     0
        spares
          c10t2d0    AVAIL   
          c8t7d0     AVAIL   

errors: No known data errors

ZFS also checksums every data block written to the drive, so data integrity
is guaranteed. If you are paranoid you can also set it to keep multiple
copies of each file. This eats up loads of disk space, so it's best used
sparingly on the most important stuff. You can only set it per filesystem,
but that isn't a big deal with ZFS:

zfs create data/important_stuff
zfs set copies=3 data/important_stuff

You can also enable compression; the big example above does this
(compression=lzjb).
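
E.g., to turn it on for an existing dataset (names follow the examples above):

# enable lzjb compression for the pool's root dataset and its children
zfs set compression=lzjb data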

In the near future, encryption and dedup are also getting integrated into
ZFS. That will probably happen in the next few months on OpenSolaris, but if
you want those features in FreeBSD I guess it will take at least six months
after that.

With regard to your backup, I suggest you definitely look at doing regular
filesystem snapshots. To be really safe, I'd install the 1 TB drive
(probably worth getting another as well, as they are cheap) in another
machine, and keep it in another room, or another building if possible.
Replicate your data using incremental zfs sends, as this is the most
efficient way; you can easily push it through ssh for security as well.
Rsync will work fine, but you will lose all your ZFS filesystem settings
with it, as it works at the file level rather than the filesystem level.
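
A minimal sketch of that replication (snapshot names and the backup
host/pool are hypothetical):

# initial full copy of a snapshot to the backup machine
zfs snapshot data@2009-05-31
zfs send data@2009-05-31 | ssh backuphost zfs recv backup/data

# later, send only the blocks changed since the previous snapshot
zfs snapshot data@2009-06-01
zfs send -i data@2009-05-31 data@2009-06-01 | ssh backuphost zfs recv backup/data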

Hope this helps. I'm really looking forward to ZFS maturing on BSD and
having pure ZFS systems 8)

-----Original Message-----
From: owner-freebsd-hackers@freebsd.org
[mailto:owner-freebsd-hackers@freebsd.org] On Behalf Of Mike Meyer
Sent: 30 May 2009 21:28
To: xorquewasp@googlemail.com
Cc: freebsd-hackers@freebsd.org
Subject: Re: Request for opinions - gvinum or ccd?

On Sat, 30 May 2009 20:18:40 +0100
xorquewasp@googlemail.com wrote:
> > If you're running a 7.X 64-bit system with a couple of gigs of RAM,
> > expect it to be in service for years without having to reformat the
> > disks, and can afford another drive, I'd recommend going to raidz on a
> > three-drive system. That will give you close to the size/performance
> > of your RAID0 system, but let you lose a disk without losing data. The
> > best you can do with zfs on two disks is a mirror, which means write
> > throughput will suffer.
> 
> Certainly a lot to think about.
> 
> The system has 12 GB currently, with room to upgrade. I currently have
> two 500 GB drives and one 1 TB drive. I wanted the setup to be essentially
> two drives striped, backed up onto one larger one nightly. I wanted the
> large backup drive to be as "isolated" as possible, e.g., in the event of
> some catastrophic hardware failure, I can remove it and place it in
> another machine without a lot of stressful configuration to recover the
> data (not possible with a RAID configuration involving all three drives,
> as far as I'm aware).

The last bit is wrong. Moving a zfs pool between two systems is pretty
straightforward. The configuration information is on the drives; you
just do "zpool import <pool>" after plugging them in, and if the mount
point exists, it'll mount it. If the system crashed with the zfs pool
active, you might have to do -f to force an import. Geom is pretty
much the same way, except you can configure it to not write the config
data to disk, thus forcing you to do it manually (what you
expect). I'm not sure geom is as smart if the drives change names,
though.
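
For example (pool name hypothetical):

# on the old machine, for a clean handoff (optional)
zpool export tank
# on the new machine, after plugging in the drives
zpool import tank
# or, if the pool was still active when the old system went down
zpool import -f tank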

RAID support and volume management have come a long way from the days
of ccd and vinum. zfs in particular is a major advance. If you aren't
aware of its advantages, take the time to read the zfs & zpool man
pages, at the very least, before committing to geom (not that geom
isn't pretty slick in and of itself, but zfs solves a more pressing
problem).

Hmm. Come to think of it, you ought to be able to use gstripe to stripe
your disks, then put a zpool on that, which should get you the
advantages of zfs with a striped disk. But that does seem odd to me.
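
Something like the following (a sketch with hypothetical device names):

# stripe da1 and da2 into a single provider, /dev/stripe/st0
gstripe label -v st0 /dev/da1 /dev/da2
# then build the pool on top of the stripe
zpool create data /dev/stripe/st0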


       <mike
-- 
Mike Meyer <mwm@mired.org>		http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.

O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
_______________________________________________
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "freebsd-hackers-unsubscribe@freebsd.org"



