Date:      Mon, 21 Jul 2008 21:18:31 -0700
From:      Steven Schlansker <stevenschlansker@berkeley.edu>
To:        freebsd-questions@freebsd.org
Subject:   Using ccd with zfs
Message-ID:  <DD98FA1F-04E2-40C3-BF97-7F80ACBB1006@berkeley.edu>

Hello -questions,
I have a FreeBSD ZFS storage system working wonderfully on 7.0.
It's set up as three 3-disk RAIDZs: triplets of 500GB, 400GB, and
300GB drives.

I recently purchased three 750GB drives and would like to convert to  
using a RAIDZ2.  As ZFS has no restriping capabilities yet, I will  
have to nuke the zpool from orbit and make a new one.  I would like to  
verify my methodology against your experience to see if what I wish to  
do is reasonable:

I plan to first take two of the 750GB drives and make an unreplicated
1.5TB zpool as temporary storage.  Since ZFS doesn't seem to be able
to create zpools in degraded mode (i.e. with members missing), I plan
to use iSCSI to export two additional drives (backed by /dev/zero) to
fake having the extra members, relying on ZFS's RAIDZ2 protection to
keep everything running despite the fact that two of the drives are
horribly broken ;)
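As a sketch of that step (device and file names are my own examples,
and I'm substituting sparse md(4)-backed files for the iSCSI targets,
which accomplishes the same "missing member" trick locally):

```shell
# Sparse 750GB backing files that occupy almost no real disk space:
truncate -s 750g /tmp/fake0 /tmp/fake1
mdconfig -a -t vnode -f /tmp/fake0   # attaches as e.g. md0
mdconfig -a -t vnode -f /tmp/fake1   # attaches as e.g. md1

# Create the raidz2 with the real members plus the two fakes
# (pool and device names here are hypothetical):
zpool create tank raidz2 ad4 ad6 ad8 ccd0 ccd1 md0 md1

# Immediately offline the fakes so nothing real is written to them;
# the pool runs DEGRADED but intact, since raidz2 tolerates two losses:
zpool offline tank md0
zpool offline tank md1
```

Later, each fake can be swapped for a real disk with `zpool replace
tank md0 <newdisk>`, which triggers a resilver onto the real device.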

To make the 500, 400, and 300GB drives useful, I would like to stitch
them together using ccd, as 500+300 = 800GB and 400+400 = 800GB.
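The concatenation step would look something like this (device names
are examples; an interleave of 0 gives plain concatenation, while a
nonzero interleave would stripe instead):

```shell
# Concatenate pairs of disks into ~800GB ccd devices:
ccdconfig ccd0 0 none /dev/ad10 /dev/ad12   # 500GB + 300GB
ccdconfig ccd1 0 none /dev/ad14 /dev/ad16   # 400GB + 400GB

# To persist across reboots, the same lines go in /etc/ccd.conf,
# which rc.d/ccd feeds to "ccdconfig -C" at boot:
# ccd0  0  none  /dev/ad10 /dev/ad12
# ccd1  0  none  /dev/ad14 /dev/ad16
```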

That way, in the end I would have
750 x 3
500+300 x 3
400+400 x 1
400+200+200 x 1
as the members in my RAIDZ2 group.  I understand that this is slightly  
less reliable than having "real" drives for all the members, but I am  
not interested in purchasing 5 more 750GB drives.  I'll replace the  
drives as they fail.
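Put together, the final pool creation would look roughly like this
(a sketch only; every device name is hypothetical):

```shell
# Eight-member raidz2: three bare 750s plus five ccd concatenations.
zpool create tank raidz2 \
    ad4 ad6 ad8 \
    ccd0 ccd1 ccd2 \
    ccd3 ccd4
```

With raidz2 over eight 750-800GB members, any two can fail (including
both halves of one ccd counting as a single member failure) without
losing the pool.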

I am wondering if there are any logistical problems.  The three parts  
I am worried about are:

1) Are there any problems with using an iSCSI /dev/zero drive to fake  
drives for creation of a new zpool, with the intent to replace them  
later with proper drives?

2) Are there any problems with using ccd underneath a zpool?  Should
I stripe or concatenate?  Will the startup scripts (by design or,
less likely, by being clever) start ccd before ZFS?  The zpool should
come up without my intervening, correct?
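One way to check the ordering question myself, I suppose, is rcorder,
which prints the rc.d scripts in the order they will actually run;
ccd should appear before zfs in its output:

```shell
# Show where ccd and zfs fall in the boot order:
rcorder /etc/rc.d/* | grep -nE '/(ccd|zfs)$'

# And /etc/rc.conf needs ZFS enabled for the pool to come up at boot:
# zfs_enable="YES"
```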

3) I hear a lot about how you should give ZFS whole disks so it can
enable write caching for improved performance.  Do I need to do
anything special to let the system know that it's OK to enable the
write cache?  And will that persist across reboots?
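My understanding (an assumption worth confirming) is that on FreeBSD
the ATA write cache is governed by the hw.ata.wc loader tunable,
which defaults to on, rather than by ZFS itself as on Solaris:

```shell
# Check the current ATA write-cache setting (1 = enabled):
sysctl hw.ata.wc

# An explicit setting persists across reboots via /boot/loader.conf:
# hw.ata.wc="1"
```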

Any other potential pitfalls?  Also, I'd like to confirm that there's
no way to do this in pure ZFS - I read the documentation, but it
doesn't seem to support nesting vdevs (which would let me do this
without ccd).

Thanks for any information that you might be able to provide,
Steven Schlansker


