Date:      Thu, 30 Jun 2016 17:30:26 +0200
From:      Julien Cigar <julien@perdition.city>
To:        InterNetX - Juergen Gotteswinter <jg@internetx.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: HAST + ZFS + NFS + CARP
Message-ID:  <20160630153026.GA5695@mordor.lan>
In-Reply-To: <71b8da1e-acb2-9d4e-5d11-20695aa5274a@internetx.com>
References:  <20160630144546.GB99997@mordor.lan> <71b8da1e-acb2-9d4e-5d11-20695aa5274a@internetx.com>

On Thu, Jun 30, 2016 at 05:14:08PM +0200, InterNetX - Juergen Gotteswinter wrote:
>
>
> On 30.06.2016 at 16:45, Julien Cigar wrote:
> > Hello,
> >
> > I'm still in the process of setting up redundant low-cost storage for
> > our (small, ~30 people) team here.
> >
> > I've read quite a lot of articles/documentation/etc. and I plan to use HAST
> > with ZFS for the storage, CARP for the failover and the "good old NFS"
> > to mount the shares on the clients.
> >
> > The hardware is 2x HP ProLiant DL20 boxes, each with 2 dedicated disks for
> > the shared storage.
> >
> > Assuming the following configuration:
> > - MASTER is the active node and BACKUP is the standby node.
> > - two disks in each machine: ada0 and ada1.
> > - two interfaces in each machine: em0 and em1
> > - em0 is the primary interface (with CARP setup)
> > - em1 is dedicated to the HAST traffic (crossover cable)
> > - FreeBSD is properly installed on each machine.
> > - a HAST resource "disk0" for ada0p2.
> > - a HAST resource "disk1" for ada1p2.
> > - a zpool create zhast mirror /dev/hast/disk0 /dev/hast/disk1 is created
> >   on MASTER
> >
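(For reference, the /etc/hast.conf I have in mind for the above is roughly the
following -- the "on" names must match the nodes' real hostnames, and the
172.16.0.x addresses on the em1 crossover link are just placeholders:)

    resource disk0 {
            on master {
                    local /dev/ada0p2
                    remote 172.16.0.2      # BACKUP over the em1 crossover
            }
            on backup {
                    local /dev/ada0p2
                    remote 172.16.0.1      # MASTER over the em1 crossover
            }
    }

    resource disk1 {
            on master {
                    local /dev/ada1p2
                    remote 172.16.0.2
            }
            on backup {
                    local /dev/ada1p2
                    remote 172.16.0.1
            }
    }

(then "hastctl create disk0 ; hastctl create disk1" on both nodes, start hastd,
"hastctl role primary disk0 ; hastctl role primary disk1" on MASTER, and the
zpool create from above.)
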
> > A couple of questions I am still wondering about:
> > - If a disk dies on the MASTER I guess that zpool will not see it and
> >   will transparently use the one on BACKUP through the HAST resource..
>
> that's right, as long as writes on $anything have been successful hast is
> happy and won't start whining
>
> >   is it a problem?
>
> imho yes, at least from a management point of view
>
> > could this lead to some corruption?
>
> probably, I've never heard of anyone who has used that in production for a
> long time
>
> >   At this stage the
> >   common sense would be to replace the disk quickly, but imagine the
> >   worst-case scenario where ada1 on MASTER dies: zpool will not see it
> >   and will transparently use the one from the BACKUP node (through the
> >   "disk1" HAST resource); later ada0 on MASTER dies, zpool will not
> >   see it and will transparently use the one from the BACKUP node
> >   (through the "disk0" HAST resource). At this point on MASTER the two
> >   disks are broken but the pool is still considered healthy... What if
> >   after that we unplug the em0 network cable on BACKUP? Storage is
> >   down..
> > - Under heavy I/O the MASTER box suddenly dies (for some reason);
> >   thanks to CARP the BACKUP node will switch from standby -> active and
> >   execute the failover script, which does some "hastctl role primary" for
> >   the resources and a zpool import. I wonder if there are any
> >   situations where the pool couldn't be imported (= data corruption)?
> >   For example, what if the pool hasn't been exported on the MASTER before
> >   it dies?
> > - Is it a problem if the NFS daemons are started at boot on the standby
> >   node, or should they only be started in the failover script? What
> >   about stale files and active connections on the clients?
>
> sometimes stale mounts recover, sometimes not, sometimes clients even need
> reboots
>
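(For the record, the failover script I have in mind is roughly this -- an
untested sketch, without the devd/CARP glue that would actually call it:)

    #!/bin/sh
    # runs on the node that has just become CARP MASTER

    # take over the HAST resources
    hastctl role primary disk0
    hastctl role primary disk1

    # give /dev/hast/* a moment to appear
    sleep 2

    # -f because the pool was most likely never exported by the dead node
    zpool import -f zhast

    # only now bring up the NFS side
    service rpcbind onestart
    service mountd onestart
    service nfsd onestart
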
> > - A catastrophic power failure occurs and MASTER and BACKUP are suddenly
> >   powered down. Later the power returns; is it possible that some
> >   problem occurs (split-brain scenario?) regarding the order in which the
>
> sure, you need an exact procedure to recover
>
> >   two machines boot up?
>
> best practice should be to keep everything down after boot
>
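(If I understand you correctly, that would mean roughly this in /etc/rc.conf
on both nodes, and bringing hastd, the pool and the NFS daemons up by hand
only once it is clear which node should be primary:)

    hastd_enable="NO"          # started manually with "service hastd onestart"
    rpcbind_enable="NO"
    mountd_enable="NO"
    nfs_server_enable="NO"
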
> > - Other things I have not thought of?
> >
>
> > Thanks!
> > Julien
> >
>
> imho:
>
> leave hast where it is, go for zfs replication. it will save your butt
> sooner or later if you avoid this fragile combination

Do you mean a $> zfs snapshot followed by a $> zfs send ... | ssh zfs
receive ... ?
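
Something along these lines, run periodically from the active node (dataset
names and the peer hostname are just placeholders):

    # initial full copy
    zfs snapshot -r zhast@repl-0
    zfs send -R zhast@repl-0 | ssh backup zfs receive -F zhast

    # later, incremental updates against the previous snapshot
    zfs snapshot -r zhast@repl-1
    zfs send -R -i zhast@repl-0 zhast@repl-1 | ssh backup zfs receive -F zhast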

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
