Date:      Mon, 4 Jul 2016 21:31:32 +0200
From:      Julien Cigar <julien@perdition.city>
To:        Jordan Hubbard <jkh@ixsystems.com>
Cc:        Ben RUBSON <ben.rubson@gmail.com>, freebsd-fs@freebsd.org
Subject:   Re: HAST + ZFS + NFS + CARP
Message-ID:  <20160704193131.GJ41276@mordor.lan>
In-Reply-To: <AE372BF0-02BE-4BF3-9073-A05DB4E7FE34@ixsystems.com>
References:  <678321AB-A9F7-4890-A8C7-E20DFDC69137@gmail.com> <20160630185701.GD5695@mordor.lan> <6035AB85-8E62-4F0A-9FA8-125B31A7A387@gmail.com> <20160703192945.GE41276@mordor.lan> <20160703214723.GF41276@mordor.lan> <65906F84-CFFC-40E9-8236-56AFB6BE2DE1@ixsystems.com> <B48FB28E-30FA-477F-810E-DF4F575F5063@gmail.com> <61283600-A41A-4A8A-92F9-7FAFF54DD175@ixsystems.com> <20160704183643.GI41276@mordor.lan> <AE372BF0-02BE-4BF3-9073-A05DB4E7FE34@ixsystems.com>



On Mon, Jul 04, 2016 at 11:56:57AM -0700, Jordan Hubbard wrote:
>
> > On Jul 4, 2016, at 11:36 AM, Julien Cigar <julien@perdition.city> wrote:
> >
> > I think the discussion has evolved a bit since I started this thread; the
> > original purpose was to build low-cost redundant storage for a small
> > infrastructure, no more, no less.
> >
> > The context is the following: I work in a small company, partially
> > financed by public funds. We started small and evolved to the point
> > that some redundancy is now required for $services.
> > Unfortunately I'm the only one taking care of the infrastructure (and
> > only 50% of my time at that), and we don't have that much money :(
>
> Sure, I get that part also, but let’s put the entire conversation into
> context:
>
> 1. You’re looking for a solution to provide some redundant storage in a
> very specific scenario.
>
> 2. We’re talking on a public mailing list with a bunch of folks, so the
> conversation is also naturally going to go from the specific to the
> general - e.g. “Is there anything of broader applicability to be
> learned / used here?”  I’m speaking more to the larger audience who is
> probably wondering if there’s a more general solution here using the
> same “moving parts”.

Of course! It has been an interesting discussion; I've learned some things,
and it's always enjoyable to get a different point of view.

>
> To get specific again, I am not sure I would do what you are
> contemplating given your circumstances, since it’s not the cheapest /
> simplest solution.  The cheapest / simplest solution would be to create
> 2 small ZFS servers and simply do zfs snapshot replication between them
> at periodic intervals, so you have a backup copy of the data for
> maximum safety as well as a physically separate server in case one goes
> down hard.  Disk storage is the cheap part now, particularly if you
> have data redundancy and can therefore use inexpensive disks, and ZFS
> replication is certainly “good enough” for disaster recovery.  As
> others have said, adding additional layers will only increase the
> overall fragility of the solution, and “fragile” is kind of the last
> thing you need when you’re frantically trying to deal with a server
> that has gone down for what could be any number of reasons.
>
> I, for example, use a pair of FreeNAS Minis at home to store all my
> media, and they work fine at minimal cost.  I use one as the primary
> server that talks to all of the VMware / Plex / iTunes server
> applications (and serves as a backup device for all my iDevices), and
> it replicates the entire pool to another secondary server that can be
> pushed into service as the primary if the first one loses a power
> supply / catches fire / loses more than 1 drive at a time / etc.  Since
> I have a backup, I can also just use RAIDZ1 for the 4x4TB drive
> configuration on the primary and get a good storage / redundancy ratio
> (I can lose a single drive without data loss but am also not wasting a
> lot of storage on parity).
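
For what it's worth, a 4x4TB RAIDZ1 like that gives roughly 3 x 4TB =
12TB of usable space, with one disk's worth of capacity going to parity
(25%). If I understand correctly, such a pool would be created with
something like this (the pool and disk names below are only an example):

    zpool create tank raidz1 ada0 ada1 ada2 ada3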

You're right, I'll definitely reconsider the zfs send / zfs receive
approach.
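
Something along these lines is what I have in mind, run periodically
from cron (a rough sketch; the pool name "tank" and the standby host
"backup.example.com" are made up):

    # initial full copy: snapshot the pool and seed the standby
    zfs snapshot -r tank@repl-0
    zfs send -R tank@repl-0 | ssh backup.example.com zfs receive -F tank

    # on subsequent runs, send only the changes since the last snapshot
    zfs snapshot -r tank@repl-1
    zfs send -R -i tank@repl-0 tank@repl-1 | \
        ssh backup.example.com zfs receive -F tank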

>
> Just my two cents.  There are a lot of different ways to do this, and
> like all things involving computers (especially PCs), the simplest way
> is usually the best.
>

Thanks!

Julien

> - Jordan
>

--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.



