Date:      Mon, 16 May 2016 20:43:49 -0500 (CDT)
From:      Bob Friesenhahn <bfriesen@simple.dallas.tx.us>
To:        Palle Girgensohn <girgen@freebsd.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Best practice for high availability ZFS pool
Message-ID:  <alpine.GSO.2.20.1605162034170.7756@freddy.simplesystems.org>
In-Reply-To: <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org>
References:  <5E69742D-D2E0-437F-B4A9-A71508C370F9@FreeBSD.org>

On Mon, 16 May 2016, Palle Girgensohn wrote:
>
> Shared storage still has a single point of failure, the JBOD box. 
> Apart from that, is there even any support for the kind of storage 
> PCI cards that support dual head for a storage box? I cannot find 
> any.

Use two (or three) JBOD boxes and do simple zfs mirroring across them 
so you can unplug a JBOD and the pool still works. Or use a bunch of 
JBOD boxes and use zfs raidz2 (or raidz3) across them with careful LUN 
selection so there is total storage redundancy and you can unplug a 
JBOD and the pool still works.
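
As a rough sketch of the first layout: with two JBODs whose disks show up as da0-da3 (JBOD A) and da4-da7 (JBOD B), each mirror vdev can pair one disk from each enclosure. Device and pool names here are illustrative, not from the original mail.

```shell
# Mirror every vdev across the two enclosures so that an entire
# JBOD can be unplugged and each mirror still has one live side.
zpool create tank \
  mirror da0 da4 \
  mirror da1 da5 \
  mirror da2 da6 \
  mirror da3 da7

# Confirm which disk sits in which vdev before trusting the layout.
zpool status tank
```

The same idea extends to raidz2/raidz3: pick the member disks of each raidz vdev so that no single enclosure holds more disks than the vdev's parity level.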

Fibre Channel (or FCoE) or iSCSI allows putting the hardware at some 
distance.
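
On FreeBSD, one hypothetical way to put a JBOD at a distance over iSCSI is ctld(8) on the storage box and iscsictl(8) on the head node; the IQN, portal address, and device path below are made-up examples, not anything from this thread.

```shell
# --- Storage box: /etc/ctl.conf fragment exporting one disk as a LUN ---
# target iqn.2016-05.org.example:jbod0 {
#     portal-group default
#     lun 0 {
#         path /dev/da0
#     }
# }

# --- Head node: attach the remote LUN; it appears as a local da device ---
iscsictl -A -p 10.0.0.1 -t iqn.2016-05.org.example:jbod0

# The new da device can then be used as a zpool vdev like any local disk.
```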

Without completely isolated systems there is always the risk of total 
failure.  Even with zfs send there is the risk of total failure if the 
sent data results in corruption on the receiving side.
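
For context, a typical send/receive replication cycle looks something like the sketch below (pool, dataset, snapshot, and host names are illustrative). The point above is that the receiving pool faithfully stores whatever the stream contains, so logical corruption on the sender propagates; an independent check such as a scrub only catches on-disk damage, not replicated bad data.

```shell
# Snapshot the live dataset, then send the increment since the
# previous snapshot to an independent pool on another host.
zfs snapshot tank/data@2016-05-16
zfs send -i tank/data@2016-05-15 tank/data@2016-05-16 | \
  ssh backuphost zfs receive backup/data

# On the receiver, a scrub verifies on-disk checksums -- but it
# cannot detect garbage that arrived intact inside the stream.
ssh backuphost zpool scrub backup
```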

Decide whether you really want to optimize for maximum availability or 
to minimize the duration of the outage if something goes wrong. 
There is a difference.

Bob
-- 
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
