Date:      Wed, 18 May 2016 10:02:00 +0200
From:      InterNetX - Juergen Gotteswinter <jg@internetx.com>
To:        Joe Love <joe@getsomewhere.net>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <361f80cb-c7e2-18f6-ad62-f6f91aa7c293@internetx.com>
In-Reply-To: <8E674522-17F0-46AC-B494-F0053D87D2B0@pingpong.net>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org> <5DA13472-F575-4D3D-80B7-1BE371237CE5@getsomewhere.net> <8E674522-17F0-46AC-B494-F0053D87D2B0@pingpong.net>



On 5/18/2016 at 9:53 AM, Palle Girgensohn wrote:
> 
> 
>> On 17 May 2016 at 18:13, Joe Love <joe@getsomewhere.net> wrote:
>>
>>
>>> On May 16, 2016, at 5:08 AM, Palle Girgensohn <girgen@FreeBSD.org> wrote:
>>>
>>> Hi,
>>>
>>> We need to set up a ZFS pool with redundancy. The main goal is high availability: uptime.
>>>
>>> I can see a few paths to follow.
>>>
>>> 1. HAST + ZFS
>>>
>>> 2. Some sort of shared storage, two machines sharing a JBOD box.
>>>
>>> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>>>
>>> 4. using something else than ZFS, even a different OS if required.
>>>
>>> My main concern with HAST+ZFS is performance. Google offers some insights here, but I find mainly unsolved problems. Please share any success stories or other experiences.
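
(For reference, option 1 looks roughly like the sketch below, following the
usual FreeBSD handbook pattern. The hostnames, addresses, resource name and
the da0 device are illustrative only; the pool is created on the /dev/hast/
device, not on the raw disk:

    # /etc/hast.conf (same file on both nodes; names/addresses are examples)
    resource shared0 {
            on nodeA {
                    local /dev/da0
                    remote 172.16.0.2
            }
            on nodeB {
                    local /dev/da0
                    remote 172.16.0.1
            }
    }

    # on each node: initialize the HAST metadata and start hastd
    hastctl create shared0
    service hastd onestart

    # on the active node only: become primary, then build the pool
    hastctl role primary shared0
    zpool create tank /dev/hast/shared0

Since HAST mirrors every write synchronously over the network, write latency
is bounded by the link between the heads, which is where the performance
complaints usually come from.)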
>>>
>>> Shared storage still has a single point of failure: the JBOD box. Apart from that, is there even any support for the kind of PCI storage cards that allow dual-head access to a storage box? I cannot find any.
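
(For what it's worth, with a dual-head JBOD the failover itself is just a
pool export/import, assuming both heads see the same disks; the pool name
tank is illustrative:

    # on the old head, if it is still responsive:
    zpool export tank

    # on the surviving head:
    zpool import tank
    # if the old head died without exporting, force the import:
    zpool import -f tank

The dangerous part is fencing: if both heads ever import the pool at the
same time it will be corrupted, which is the main thing the commercial HA
products automate.)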
>>>
>>> We are running with ZFS replication today, but it is just too slow for the amount of data.
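
(If full sends are the bottleneck, incremental replication ships only the
delta between two snapshots; a rough sketch, with host, pool and snapshot
names purely illustrative:

    # one-time full seed of the standby:
    zfs snapshot -r tank@base
    zfs send -R tank@base | ssh standby zfs receive -duF tank

    # afterwards, ship only the changes since the previous snapshot:
    zfs snapshot -r tank@incr1
    zfs send -R -i tank@base tank@incr1 | ssh standby zfs receive -duF tank

Run from cron or a loop, the window of data loss is then roughly the
snapshot interval.)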
>>>
>>> We prefer to keep ZFS, as we already have a rather big (~30 TB) pool, and our tools, scripts, and backups all use ZFS; but if there is no solution using ZFS, we're open to alternatives. Nexenta springs to mind, but I believe it uses shared storage for redundancy, so it does have a single point of failure?
>>>
>>> Any other suggestions? Please share your experience. :)
>>>
>>> Palle
>>
>> I don’t know if this falls into the realm of what you want, but BSDMag just released an issue with an article entitled “Adding ZFS to the FreeBSD dual-controller storage concept.”
>> https://bsdmag.org/download/reusing_openbsd/
>>
>> My understanding is that the only single point of failure in this model is the backplane the drives connect to.  Depending on your controller cards, this could be alleviated by simply using multiple drive shelves and putting only one drive per shelf in each vdev (then striping or whatnot over your vdevs), as sketched below.
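
(Concretely, the one-drive-per-shelf layout would look something like this
with two shelves, where da0-da3 sit in shelf A and da4-da7 in shelf B;
device names are illustrative:

    # each mirror pairs one disk from each shelf, so a whole shelf or
    # backplane can fail and the pool only degrades:
    zpool create tank \
        mirror da0 da4 \
        mirror da1 da5 \
        mirror da2 da6 \
        mirror da3 da7
)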
>>
>> It might not be what you're after, as it's basically two systems with their own controllers and a shared set of drives.  Moving from the article's virtual-machine setup to real physical systems will probably require some additional variations.
>> I think the TrueNAS system (with HA) is set up similarly, only without the drives being split primarily between separate controllers, but someone with more in-depth knowledge would need to confirm or deny this.
>>
>> -Jo
> 
> Hi,
> 
> Do you know any specific controllers that work with dual head?
> 
> Thanks,
> Palle

Go for an LSI SAS2008-based HBA.
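
On FreeBSD those are handled by the mps(4) driver; a quick sanity check
that the HBA is detected and the disks are visible (output will of course
differ per box):

    # the controller should appear as mps0 with its firmware revision:
    dmesg | grep -i mps

    # list the disks seen through the HBA:
    camcontrol devlist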

> 
> 
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
> 


