Date:      Thu, 19 May 2011 20:14:36 +0200
From:      Pawel Jakub Dawidek <pjd@FreeBSD.org>
To:        Per von Zweigbergk <pvz@itassistans.se>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: HAST + ZFS self healing? Hot spares?
Message-ID:  <20110519181436.GB2100@garage.freebsd.pl>
In-Reply-To: <85EC77D3-116E-43B0-BFF1-AE1BD71B5CE9@itassistans.se>
References:  <85EC77D3-116E-43B0-BFF1-AE1BD71B5CE9@itassistans.se>



On Wed, May 18, 2011 at 08:13:13AM +0200, Per von Zweigbergk wrote:
> I've been investigating HAST as a possibility for adding synchronous
> replication and failover to a set of two NFS servers backed by ZFS.
> The servers themselves contain quite a few disks -- 20 of them
> (7200 RPM SAS disks), to be exact. (If I didn't lose count again...)
> Plus two quick but small SSDs for ZIL and two not-as-quick but larger
> SSDs for L2ARC.
[...]

The configuration you should try first is to connect each pair of disks
using HAST and create a ZFS pool on top of those HAST devices.

Let's assume you have 4 data disks (da0-da3), 2 SSD disks for ZIL
(da4-da5) and 2 SSD disks for L2ARC (da6-da7).

Then you create the following HAST devices:

/dev/hast/data0 = MachineA(da0) + MachineB(da0)
/dev/hast/data1 = MachineA(da1) + MachineB(da1)
/dev/hast/data2 = MachineA(da2) + MachineB(da2)
/dev/hast/data3 = MachineA(da3) + MachineB(da3)

/dev/hast/slog0 = MachineA(da4) + MachineB(da4)
/dev/hast/slog1 = MachineA(da5) + MachineB(da5)

/dev/hast/cache0 = MachineA(da6) + MachineB(da6)
/dev/hast/cache1 = MachineA(da7) + MachineB(da7)
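For reference, those resources are described in /etc/hast.conf on both
machines. A minimal sketch of one such resource (the node names
"machinea"/"machineb" and the use of hostnames as remote addresses are
my assumptions; adjust to your hosts):

```
# /etc/hast.conf -- sketch, identical on both nodes.
# The "on <name>" sections must match each machine's hostname.
resource data0 {
        local /dev/da0          # backing disk for this resource
        on machinea {
                remote machineb # peer to replicate to/from
        }
        on machineb {
                remote machinea
        }
}
# ...repeat for data1-data3, slog0-slog1, cache0-cache1, changing
# "local" to the matching daN device in each resource.
```

After starting hastd on both nodes, `hastctl role primary all` on the
active node makes the /dev/hast/* devices appear there.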

And then you create a ZFS pool of your choice. This is where you specify
redundancy, so if there is any, you will have ZFS self-healing:

zpool create tank raidz1 hast/data{0,1,2,3} log mirror hast/slog{0,1} cache hast/cache{0,1}

> 1. Hardware failure management. In case of a hardware failure, I'm not
> exactly sure what will happen, but I suspect the single-disk RAID-0
> array containing the failed disk will simply fail. I assume it will
> still exist, but refuse to be read or written. In this situation I
> understand HAST will handle this by routing all I/O to the secondary
> server, in case the disk on the primary side dies, or simply by
> cutting off replication if the disk on the secondary server fails.

HAST sends all write requests to both nodes (if the secondary is
present) and read requests only to the primary node. In some cases reads
can be sent to the secondary node, for example when synchronization is
in progress and the secondary has more recent data, or when reading from
the local disk failed (either a single EIO or the entire disk going
bad).

In other words, HAST itself can handle the failure of one disk in a
mirrored pair.

If an entire hast/<resource> dies for some reason (e.g. the secondary is
down and the local disk dies), then ZFS redundancy kicks in.
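To illustrate the recovery path after swapping in a new disk, a sketch
of the commands involved (resource name data0 and pool name tank taken
from the example above; run these on the node whose disk was replaced):

```
# Lay down fresh HAST metadata on the replacement disk.
hastctl create data0
# Restore the node's role so synchronization from the peer starts
# (use "primary" if this node is the active one).
hastctl role secondary data0
# Watch progress -- the dirty counter should drop to zero.
hastctl status data0
# If ZFS faulted the vdev while the resource was gone, bring it back:
zpool online tank hast/data0
zpool status tank
```

Note this is a sketch of the general sequence, not something I've run on
your exact setup.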

-- 
Pawel Jakub Dawidek                       http://www.wheelsystems.com
FreeBSD committer                         http://www.FreeBSD.org
Am I Evil? Yes, I Am!                     http://yomoli.com



