Date:      Wed, 18 May 2016 09:53:26 +0200
From:      Palle Girgensohn <girgen@pingpong.net>
To:        Joe Love <joe@getsomewhere.net>
Cc:        Palle Girgensohn <girgen@FreeBSD.org>, freebsd-fs@freebsd.org
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <8E674522-17F0-46AC-B494-F0053D87D2B0@pingpong.net>
In-Reply-To: <5DA13472-F575-4D3D-80B7-1BE371237CE5@getsomewhere.net>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org> <5DA13472-F575-4D3D-80B7-1BE371237CE5@getsomewhere.net>



> On 17 May 2016, at 18:13, Joe Love <joe@getsomewhere.net> wrote:
>
>
>> On May 16, 2016, at 5:08 AM, Palle Girgensohn <girgen@FreeBSD.org> wrote:
>>
>> Hi,
>>
>> We need to set up a ZFS pool with redundancy. The main goal is high
>> availability - uptime.
>>
>> I can see a few paths to follow.
>>
>> 1. HAST + ZFS
>>
>> 2. Some sort of shared storage, two machines sharing a JBOD box.
>>
>> 3. ZFS replication (zfs snapshot + zfs send | ssh | zfs receive)
>>
>> 4. Using something other than ZFS, even a different OS if required.
>>
>> My main concern with HAST+ZFS is performance. Google offers some
>> insights here, but I find mainly unsolved problems. Please share any
>> success stories or other experiences.
>>
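(For context, a minimal HAST+ZFS setup would look roughly like the
sketch below; the resource name, addresses and device paths are made
up, so treat it as an illustration rather than a recipe.)

    # /etc/hast.conf, identical on both nodes (hypothetical names)
    resource disk0 {
            on nodeA {
                    local /dev/da0
                    remote 192.168.0.2
            }
            on nodeB {
                    local /dev/da0
                    remote 192.168.0.1
            }
    }

    # on both nodes:
    hastctl create disk0
    service hastd onestart
    # on the node that should be primary:
    hastctl role primary disk0
    zpool create tank /dev/hast/disk0

The pool then lives on /dev/hast/disk0, and a failover means demoting
one node and promoting the other before importing the pool there.
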
>> Shared storage still has a single point of failure, the JBOD box.
>> Apart from that, is there even any support for the kind of storage
>> PCI cards that support dual head for a storage box? I cannot find any.
>>
>> We are running with ZFS replication today, but it is just too slow
>> for the amount of data.
>>
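(For reference, the replication mentioned above is essentially the
classic snapshot/send/receive loop, along these lines; the pool,
dataset and host names below are placeholders:)

    # take a new snapshot and ship it incrementally to the standby
    zfs snapshot tank/data@2016-05-18
    zfs send -i tank/data@2016-05-17 tank/data@2016-05-18 | \
        ssh standby zfs receive -F tank/data

The -i flag sends only the delta between the two snapshots; without it
every run would be a full send.
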
>> We prefer to keep ZFS, as we already have a rather big (~30 TB) pool,
>> and our tools, scripts and backups all use ZFS. But if there is no
>> solution using ZFS, we're open to alternatives. Nexenta springs to
>> mind, but I believe it uses shared storage for redundancy, so it does
>> have single points of failure?
>>
>> Any other suggestions? Please share your experience. :)
>>
>> Palle
>
> I don't know if this falls into the realm of what you want, but BSDMag
> just released an issue with an article entitled "Adding ZFS to the
> FreeBSD dual-controller storage concept."
> https://bsdmag.org/download/reusing_openbsd/
>
> My understanding in this setup is that the only single point of
> failure for this model is the backplanes that the drives would connect
> to. Depending on your controller cards, this could be alleviated by
> simply using multiple drive shelves, and only using one drive per
> shelf as part of a vdev (then stripe or whatnot over your vdevs).
>
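(If I read the one-drive-per-shelf idea correctly, it amounts to
something like the following, with mirror vdevs whose two sides sit in
different shelves; the device names are made up:)

    # da0-da2 in shelf 1, da8-da10 in shelf 2 (hypothetical layout);
    # losing an entire shelf degrades every mirror but loses none
    zpool create tank \
        mirror da0 da8 \
        mirror da1 da9 \
        mirror da2 da10
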
> It might not be what you're after, as it's basically two systems with
> their own controllers, with a shared set of drives. Some expansion
> from the virtual world to real physical systems will probably need
> additional variations.
> I think the TrueNAS system (with HA) is set up similarly to this, only
> without the split between the drives being primarily handled by
> separate controllers, but someone with more in-depth knowledge would
> need to confirm or deny this.
>
> -Jo

Hi,

Do you know any specific controllers that work with dual head?

Thanks,
Palle




