Date:      Sat, 03 Jan 2009 13:19:49 +0100
From:      Frederique Rijsdijk <frederique@isafeelin.org>
To:        freebsd-questions@freebsd.org
Subject:   Re: Using HDD's for ZFS: 'desktop' vs 'raid / enterprise' -edition drives?
Message-ID:  <495F57E5.7040905@isafeelin.org>
In-Reply-To: <495F2919.6040103@isafeelin.org>
References:  <495E17AD.30707@isafeelin.org>	<20090102160727.A38841@wojtek.tensor.gdynia.pl>	<F881B4D0-69A3-4E33-BA55-EC5947064467@gmail.com> <495F2919.6040103@isafeelin.org>

After some reading, I've stepped back from my original idea. The main 
reason is that I'd like to be able to grow the filesystem as the need 
develops over time.

One could create a raidz zpool with a couple of disks, but when adding a 
single disk later on, it does not become part of the raidz (I tested this).

It seems vdevs cannot be nested (i.e. create raidz sets and join them 
into a whole), so I came up with the following:

Start out with 4*1TB, and use geom_raid5 to create an independent 
redundant pool of storage:

'graid5 label -v graid5a da0 da1 da2 da3'  (this is all tested in 
vmware, one of these 'da' drives is 8GB)

Then I 'zpool create bigvol /dev/raid5/graid5a', and I have a /bigvol of 
24G - sounds about right to me for a raid5 volume.
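As a quick sanity check (just a sketch, using the 8GB size of my vmware 
test drives), the usable size of a raid5 array is (disks - 1) * disk 
size, since one disk's worth of space goes to parity:

```shell
# RAID5 usable capacity: one disk's worth of space is lost to parity.
disks=4
size_gb=8                            # each test 'da' drive in vmware is 8GB
usable=$(( (disks - 1) * size_gb ))
echo "${usable}G"                    # 24G, matching the size of /bigvol
```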

Now let's say that later on I need more storage; I buy another four of 
these drives, and

'graid5 label -v graid5b da4 da5 da6 da7'
and
'zpool add bigvol /dev/raid5/graid5b'

Now my bigvol is 48G. Very cool! I now have redundant storage that can 
grow, and it's pretty easy too.
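The pool size works out as expected, since ZFS stripes across its vdevs 
and simply sums their capacities (again a sketch based on my 8GB test 
drives):

```shell
# Each 4-disk graid5 array contributes (disks - 1) * disk size;
# the zpool's capacity is the sum of its vdevs.
per_array=$(( (4 - 1) * 8 ))        # 24G per 4-disk graid5 array
pool=$(( per_array * 2 ))           # graid5a + graid5b
echo "${pool}G"                     # 48G total for bigvol
```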

Is this OK (aside from the fact that graid5 is not production-ready yet, 
nor is ZFS ;), or are there easier (or better) ways to do this?

- So I want redundancy (I don't want one failing drive to cause me to 
lose all my data)
- I want to be able to grow the filesystem if I need to, by adding a 
(set of) drive(s) later on.



-- FR


