Date:      Sat, 25 Jan 2014 19:12:02 +0000
From:      Kaya Saman <kayasaman@gmail.com>
To:        freebsd-questions <freebsd-questions@freebsd.org>
Subject:   ZFS confusion
Message-ID:  <52E40C82.7050302@gmail.com>

Hi,

I'm really confused about something, so I hope someone can help me clear 
the fog...

Basically, I'm about to set up a ZFS RAIDZ3 pool, and after discovering 
this site:

https://calomel.org/zfs_raid_speed_capacity.html

as a reference for disk quantity, I got totally confused.


In addition I have also checked out these sites:

https://blogs.oracle.com/ahl/entry/triple_parity_raid_z

http://www.zfsbuild.com/2010/06/03/howto-create-raidz2-pool/

http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/

http://www.linux.org/threads/zettabyte-file-system-zfs.4619/


Implementing a test ZFS pool on my old FreeBSD 8.3 box using dd-derived 
vdevs, coupled with reading the man page for zpool, I found that raidz3 
needs a minimum of 4 disks to work.
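For reference, the test setup was roughly this (the paths and sizes are just what I used for throwaway file-backed vdevs; ZFS wants at least 64 MB per device):

```shell
# Create four 128 MB files to act as fake disks.
for i in 1 2 3 4; do
    dd if=/dev/zero of=/tmp/disk$i bs=1m count=128
done

# raidz3 refuses to build with fewer than 4 devices:
zpool create testpool raidz3 /tmp/disk1 /tmp/disk2 /tmp/disk3 /tmp/disk4
zpool status testpool
```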

However, according to the above mentioned site for triple parity one 
should use 5 disks in 2+3 format.

My confusion is this: does the 2+3 mean 2 disks in the pool with 3 hot 
spares or does it mean 5 disks in the pool? As in:

zpool create <pool_name> raidz3 disk1 disk2 disk3 disk4 disk5
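For what it's worth, here's the arithmetic as I read it, assuming 2+3 means 2 data disks plus 3 parity disks in a single 5-disk vdev (the drive size is just an example):

```shell
disks=5          # total disks in the raidz3 vdev
parity=3         # raidz3 always consumes 3 disks' worth of parity
size_tb=2        # example drive size

data_disks=$(( disks - parity ))
usable_tb=$(( data_disks * size_tb ))
echo "${data_disks} data disks, ~${usable_tb} TB usable"
# prints: 2 data disks, ~4 TB usable
```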


In addition to my testing, I was looking at ease of expansion, i.e. 
growing the pool. So is doing something like this:

zpool create <pool_name> raidz3 disk1 disk2 disk3 disk4

Then when I needed to expand just do:

zpool add <pool_name> raidz3 disk5 disk6 disk7 disk8

which gives:

   pool: testpool
  state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
     still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
     pool will no longer be accessible on software that does not support
     feature flags.
   scan: none requested
config:

     NAME            STATE     READ WRITE CKSUM
     testpool        ONLINE       0     0     0
       raidz3-0      ONLINE       0     0     0
         /tmp/disk1  ONLINE       0     0     0
         /tmp/disk2  ONLINE       0     0     0
         /tmp/disk3  ONLINE       0     0     0
         /tmp/disk4  ONLINE       0     0     0
       raidz3-1      ONLINE       0     0     0
         /tmp/disk5  ONLINE       0     0     0
         /tmp/disk6  ONLINE       0     0     0
         /tmp/disk7  ONLINE       0     0     0
         /tmp/disk8  ONLINE       0     0     0

----------

The same as this:

----------

   pool: testpool
  state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
     still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
     pool will no longer be accessible on software that does not support
     feature flags.
   scan: none requested
config:

     NAME            STATE     READ WRITE CKSUM
     testpool        ONLINE       0     0     0
       raidz3-0      ONLINE       0     0     0
         /tmp/disk1  ONLINE       0     0     0
         /tmp/disk2  ONLINE       0     0     0
         /tmp/disk3  ONLINE       0     0     0
         /tmp/disk4  ONLINE       0     0     0
         /tmp/disk5  ONLINE       0     0     0
         /tmp/disk6  ONLINE       0     0     0
         /tmp/disk7  ONLINE       0     0     0
         /tmp/disk8  ONLINE       0     0     0


? Of course, using the 1st method there is extra metadata involved, but 
not too much, especially with TB drives.

Having created a zfs filesystem on top of both setups: in the 1st 
scenario the fs will grow to utilize disks 5 through 8 added later, 
while of course with the second setup the filesystem is already created 
over all 8 disks.


In a real situation, however, the above would certainly be 5 disks at a 
time to keep the triple parity, with ZIL and L2ARC on SSDs and hot-swap 
spares.


The reason I am asking the above is that I've got a new enclosure with 
up to 26-disk capacity and need to create a stable environment and make 
best use of the space. So in other words, maximum redundancy with the 
maximum capacity allowed per method, which would be raidz1..3; of course 
raidz3 offers the best redundancy yet still has much more capacity than 
a raid1+0 setup.
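To make the redundancy-versus-capacity trade-off concrete, here's a rough back-of-the-envelope for the enclosure (all numbers are examples: 24 of the 26 bays in the pool, three 8-disk vdevs, 2 TB drives; this ignores ZFS overhead and spares):

```shell
bays=24      # example: 24 of the 26 bays used for pool vdevs
vdevs=3      # e.g. three 8-disk vdevs
size_tb=2    # example drive size

# Each vdev loses 'parity' disks' worth of space.
for parity in 1 2 3; do
    usable=$(( (bays - vdevs * parity) * size_tb ))
    echo "raidz${parity}: ${usable} TB usable"
done
# prints:
# raidz1: 42 TB usable
# raidz2: 36 TB usable
# raidz3: 30 TB usable
```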

My intention was to grab 5 disks to start with, then expand as 
necessary, plus 2 SSDs for ZIL + L2ARC (using raid0 striping and raid1 
mirroring respectively), then 3x hot-swap spares, and lz4 compression on 
the filesystem. With FreeBSD 10.0 as the base OS... my current 8.3 must 
be EOL by now, though it's on a different box so no matter :-)
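Concretely, the layout I have in mind would look something like this (all device names are placeholders, and the SSD partitioning scheme is just one possible way to share 2 SSDs between log and cache):

```shell
# Initial 5-disk raidz3 vdev:
zpool create tank raidz3 da0 da1 da2 da3 da4

# Mirrored log (ZIL) and striped cache (L2ARC) on SSD partitions:
zpool add tank log mirror ada0p1 ada1p1
zpool add tank cache ada0p2 ada1p2

# Hot spares:
zpool add tank spare da5 da6 da7

# lz4 compression on the filesystem:
zfs set compression=lz4 tank

# Later, grow by another 5-disk raidz3 vdev:
zpool add tank raidz3 da8 da9 da10 da11 da12
```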


Hopefully someone can help me understand the above.


Many thanks.


Regards,


Kaya


