Date:      Tue, 17 May 2016 12:30:13 +0200
From:      Ben RUBSON <ben.rubson@gmail.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <AB71607F-7048-404E-AFE3-D448823BB768@gmail.com>
In-Reply-To: <alpine.GSO.2.20.1605162034170.7756@freddy.simplesystems.org>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org> <alpine.GSO.2.20.1605162034170.7756@freddy.simplesystems.org>

> On 17 May 2016 at 03:43, Bob Friesenhahn <bfriesen@simple.dallas.tx.us> wrote:
>
> On Mon, 16 May 2016, Palle Girgensohn wrote:
>>
>> Shared storage still has a single point of failure, the JBOD box. Apart from that, is there even any support for the kind of storage PCI cards that support dual head for a storage box? I cannot find any.
>
> Use two (or three) JBOD boxes and do simple zfs mirroring across them so you can unplug a JBOD and the pool still works. Or use a bunch of JBOD boxes and use zfs raidz2 (or raidz3) across them with careful LUN selection so there is total storage redundancy and you can unplug a JBOD and the pool still works.
>
> Fiber channel (or FCoE) or iSCSI allows putting the hardware at some distance.
>
> Without completely isolated systems there is always the risk of total failure.  Even with zfs send there is the risk of total failure if the sent data results in corruption on the receiving side.
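For what it's worth, the mirrored-across-JBODs layout Bob describes might look something like this (a sketch only; the device names and the split of da0-da3 onto one JBOD and da4-da7 onto the other are hypothetical):

```shell
# Each mirror vdev pairs one disk from JBOD A (da0-da3) with one
# from JBOD B (da4-da7), so an entire shelf can be unplugged and
# every vdev still has a surviving side.
zpool create tank \
  mirror da0 da4 \
  mirror da1 da5 \
  mirror da2 da6 \
  mirror da3 da7

# Confirm the vdev layout and redundancy:
zpool status tank
```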

In this case, roll back to one of the previous snapshots on the receiving side?
Did you mean the sent data can totally break the receiving pool, making it unusable / unable to import? Have we already seen this happen?
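(For the milder case, where only the received dataset is bad rather than the whole pool, recovery on the receiving side would be along these lines; pool and snapshot names here are hypothetical:)

```shell
# List the snapshots kept on the receiving dataset:
zfs list -t snapshot -r backup/data

# Roll the dataset back to the last known-good snapshot,
# discarding any later snapshots (-r):
zfs rollback -r backup/data@2016-05-16
```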

Thank you,

Ben


