Date:      Tue, 17 May 2016 11:13:18 -0500
From:      Joe Love <joe@getsomewhere.net>
To:        Palle Girgensohn <girgen@FreeBSD.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <5DA13472-F575-4D3D-80B7-1BE371237CE5@getsomewhere.net>
In-Reply-To: <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org>


> On May 16, 2016, at 5:08 AM, Palle Girgensohn <girgen@FreeBSD.org> wrote:
>
> Hi,
>
> We need to set up a ZFS pool with redundancy. The main goal is high
> availability - uptime.
>
> I can see a few paths to follow.
>
> 1. HAST + ZFS
>
> 2. Some sort of shared storage, two machines sharing a JBOD box.
>
> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>
> 4. Using something other than ZFS, even a different OS if required.
>
> My main concern with HAST+ZFS is performance. Google offers some
> insights here, but I find mainly unsolved problems. Please share any
> success stories or other experiences.
>
> Shared storage still has a single point of failure, the JBOD box.
> Apart from that, is there even any support for the kind of storage
> PCI cards that support dual heads for a storage box? I cannot find
> any.
>
> We are running with ZFS replication today, but it is just too slow
> for the amount of data.
>
> We would prefer to keep ZFS, as we already have a rather big (~30 TB)
> pool, and our tools, scripts, and backups all use ZFS; but if there
> is no solution using ZFS, we're open to alternatives. Nexenta springs
> to mind, but I believe it uses shared storage for redundancy, so it
> does have single points of failure?
>
> Any other suggestions? Please share your experience. :)
>
> Palle
>
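
For reference, the snapshot-based pipeline in option 3 usually looks
something like this (the pool, dataset, snapshot, and host names below
are placeholders, not anything from Palle's actual setup):

  # Initial full copy: snapshot the dataset and stream it to the
  # standby host ("backup" is a hypothetical hostname).
  zfs snapshot tank/data@snap1
  zfs send tank/data@snap1 | ssh backup zfs receive tank/data

  # Subsequent runs send only the delta between two snapshots, which
  # is usually far cheaper than a full send; -F rolls the receiving
  # side back to the last common snapshot first.
  zfs snapshot tank/data@snap2
  zfs send -i tank/data@snap1 tank/data@snap2 | \
      ssh backup zfs receive -F tank/data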

I don’t know if this falls into the realm of what you want, but BSDMag
just released an issue with an article entitled “Adding ZFS to the
FreeBSD dual-controller storage concept.”
https://bsdmag.org/download/reusing_openbsd/

My understanding is that in this setup the only single point of failure
is the backplane that the drives connect to.  Depending on your
controller cards, this could be alleviated by simply using multiple
drive shelves and putting at most one drive per shelf into any given
vdev (then stripe or whatnot over your vdevs); see the sketch below.
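
A minimal sketch of that idea, assuming two shelves and hypothetical
GPT labels that encode which shelf each disk sits in (the pool name
and labels are made up):

  # shelfN-dM = disk M in shelf N (hypothetical labels).
  # Each mirror vdev pairs one disk from shelf 1 with one from
  # shelf 2, so losing an entire shelf or backplane degrades the
  # mirrors without taking any vdev offline; ZFS then stripes
  # across the mirror vdevs.
  zpool create tank \
      mirror /dev/gpt/shelf1-d0 /dev/gpt/shelf2-d0 \
      mirror /dev/gpt/shelf1-d1 /dev/gpt/shelf2-d1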

It might not be what you’re after, as it’s basically two systems with
their own controllers sharing a set of drives.  Moving from the
virtual world to real physical systems will probably require some
additional variations.
I think the TrueNAS system (with HA) is set up similarly to this, only
without the drives being split between separate controllers, but
someone with more in-depth knowledge would need to confirm or deny
this.

-Joe