Date:      Wed, 1 Oct 2014 17:00:44 +0300
From:      George Kontostanos <gkontos.mail@gmail.com>
To:        jg@internetx.com
Cc:        freebsd-fs@freebsd.org
Subject:   Re: HAST with broken HDD
Message-ID:  <CA+dUSyr9OK9SvN3wX-O4DeriLBP-EEuAA8TTSYwdGfcR1asdtQ@mail.gmail.com>
In-Reply-To: <542C0710.3020402@internetx.com>
References:  <542BC135.1070906@Skynet.be> <542BDDB3.8080805@internetx.com> <CA+dUSypO8xTR3sh_KSL9c9FLxbGH+bTR9-gPdcCVd+t0UgUF-g@mail.gmail.com> <542BF853.3040604@internetx.com> <CA+dUSyp4vMB_qUeqHgXNz2FiQbWzh8MjOEFYw+URcN4gUq69nw@mail.gmail.com> <542C019E.2080702@internetx.com> <CA+dUSyoEcPdJ1hdR3k1vNROFG7p1kN0HB5S2a_0gYhiV75OLAw@mail.gmail.com> <542C0710.3020402@internetx.com>

On Wed, Oct 1, 2014 at 4:52 PM, InterNetX - Juergen Gotteswinter <jg@internetx.com> wrote:

> On 01.10.2014 15:49, George Kontostanos wrote:
> > On Wed, Oct 1, 2014 at 4:29 PM, InterNetX - Juergen Gotteswinter
> > <jg@internetx.com> wrote:
> >
> >     On 01.10.2014 15:06, George Kontostanos wrote:
> >     >
> >     > On Wed, Oct 1, 2014 at 3:49 PM, InterNetX - Juergen Gotteswinter
> >     > <jg@internetx.com> wrote:
> >     >
> >     >     On 01.10.2014 14:28, George Kontostanos wrote:
> >     >     >
> >     >     > On Wed, Oct 1, 2014 at 1:55 PM, InterNetX - Juergen
> >     >     > Gotteswinter <jg@internetx.com> wrote:
> >     >     >
> >     >     >     On 01.10.2014 10:54, JF-Bogaerts wrote:
> >     >     >     >    Hello,
> >     >     >     >    I'm preparing a HA NAS solution using HAST.
> >     >     >     >    I'm wondering what will happen if one of the disks
> >     >     >     >    of the primary node fails or becomes erratic.
> >     >     >     >
> >     >     >     >    Thx,
> >     >     >     >    Jean-François Bogaerts
> >     >     >
> >     >     >     nothing. if you are using zfs on top of hast, zfs won't
> >     >     >     even take notice of the disk failure.
> >     >     >
> >     >     >     as long as the write operation was successful on one of
> >     >     >     the 2 nodes, hast doesn't notify the layers on top about
> >     >     >     io errors.
> >     >     >
> >     >     >     interesting concept, took me some time to deal with this.
> >     >     >
> >     >     >
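
Since ZFS on top stays quiet, the failure is only visible to hastd itself,
so it has to be checked directly on each node. A minimal sketch of such a
check, assuming a HAST resource named "disk0" (the name is hypothetical,
and the exact output varies between FreeBSD versions):

    # Ask hastd directly about the resource; ZFS above it will keep
    # reporting a healthy pool even while the local disk is gone.
    hastctl status disk0

A growing dirty count, or a status that never returns to "complete", is the
early warning that ZFS will not give you.
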
> >     >     > Are you saying that the pool will appear to be optimal even
> >     >     > with a bad drive?
> >     >     >
> >     >
> >     >     https://forums.freebsd.org/viewtopic.php?&t=24786
> >     >
> >     >
> >     >
> >     > It appears that this is actually the case. And it is very
> >     > disturbing, meaning that a drive failure goes unnoticed. In my case
> >     > I completely removed the second disk on the primary node and a
> >     > zpool status showed absolutely no problem. Scrubbing the pool began
> >     > resilvering, which indicates that there is actually something wrong!
> >
> >
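
The test is easy to repeat. Roughly, on the primary node (the pool name is
illustrative):

    # With one underlying disk physically removed:
    zpool status tank    # pool still reports ONLINE, zero errors
    zpool scrub tank     # the scrub kicks off a resilver instead
    zpool status tank    # the resilver is the first visible hint of trouble
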
> >     right. let's go further and think about how zfs works regarding
> >     direct hardware / disk access. there's a layer in between which
> >     always says "hey, everything is fine". no more need for pool
> >     scrubbing, since hastd won't tell you if anything is wrong :D
> >
> >
> > Correct, ZFS needs direct access and any layer in between might end up
> > a disaster!!!
> >
> > Which means that practically HAST should only be used in UFS
> > environments backed by a hardware controller. In that case, HAST will
> > again not notice anything (unless you lose the controller), but at
> > least you will know that you need to replace a disk by monitoring the
> > controller status.
> >
>
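
For the controller-monitoring part, the exact command depends on the
hardware. A sketch assuming an LSI controller handled by mfi(4); substitute
whatever tool your controller ships with:

    # Ask the RAID controller, not the OS, about physical drive health;
    # this is the layer that actually notices the failed disk.
    mfiutil show drives
    mfiutil show volumes
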
> imho this should be included at least as a notice/warning in the hastd
> manpage; afaik there's no real warning about such problems with the
> hastd/zfs combo, but lots of howtos out there describe exactly such
> setups.
>
Yes, it should. I actually wrote a guide like that when HAST was at its
early stages, but I had never tested it for flaws. This thread started
ringing some bells!



> sad, since the comparable piece on linux - drbd - handles io errors
> fine. the upper layers get notified as they should be, imho
>
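
For comparison, DRBD makes the error-handling policy explicit in its
configuration. A minimal excerpt, assuming DRBD 8.x syntax and a
hypothetical resource name:

    resource r0 {
        disk {
            # On a local disk error, detach from the bad disk and keep
            # serving data from the peer; the error is logged and visible
            # to the layers above instead of being silently absorbed.
            on-io-error detach;
        }
    }
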
My next lab environment will be to try a similar DRBD setup, although some
tests we performed last year with ZFS on Linux were not that promising.


-- 
George Kontostanos
---
http://www.aisecure.net


