Date:      Tue, 20 Jan 2009 12:32:31 +0100
From:      Matias Surdi <matiassurdi@gmail.com>
To:        Frederique Rijsdijk <frederique@isafeelin.org>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Large raid arrays
Message-ID:  <4975B64F.20207@gmail.com>
In-Reply-To: <4975B4E5.7000609@isafeelin.org>
References:  <gl4auv$pdd$1@ger.gmane.org> <gl4bce$pdd$2@ger.gmane.org> <4975B4E5.7000609@isafeelin.org>

Frederique Rijsdijk wrote:
> Matias Surdi wrote:
>> Matias Surdi wrote:
>>> Hi,
>>>
>>> I have a host with two large (2 TB and 4 TB) hardware RAID5 arrays.
>>>
>>> For the backup system we are using, I need to join them into one
>>> logical device.
>>>
>>> What would you recommend? ccd or vinum?
>>>
>>>
>> Some comments that may help in the decision:
>>
>> - Reliability and resistance to power failures are the most important factors.
>>
>> - It doesn't need high performance or high throughput.
>>
> 
> Either gconcat or ZFS, depending on which version of FreeBSD you're running.
> 
> gconcat label -v data /dev/raid1 /dev/raid2
> newfs /dev/concat/data
> mkdir /mnt/data && mount /dev/concat/data /mnt/data
> df -h /mnt/data
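> 
> (If you go the gconcat route, you'll probably also want the concat device
> to come back after a reboot; roughly, and untested:)
> 
> echo 'geom_concat_load="YES"' >> /boot/loader.conf
> echo '/dev/concat/data /mnt/data ufs rw 2 2' >> /etc/fstab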
> 
> or
> 
> zpool create data /dev/raid1 /dev/raid2
> df -h /data
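> 
> (Note that a pool created that way stripes the data across the two arrays
> with no redundancy between them, so losing either array loses the pool.
> You'd probably also want ZFS started at boot, something like:)
> 
> echo 'zfs_enable="YES"' >> /etc/rc.conf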
> 
> 
> 
> -- Frederique
> 


ZFS was a disaster.

That is what we were using until today, when the power went off and the
ZFS pool ended up corrupted and irrecoverable.

On three other occasions when we had power failures, the zpool ended up with some errors.

But every time, the UFS partitions remained intact.

I won't be using ZFS again for a long time.
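
For anyone running into the same situation, the usual first steps after an
unclean shutdown are a status check and a scrub, roughly (using the example
pool name from above):

zpool status -v data
zpool scrub data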




