Date:      Tue, 30 Apr 2019 08:12:22 -0600
From:      Alan Somers <asomers@freebsd.org>
To:        Michelle Sullivan <michelle@sorbs.net>
Cc:        Karl Denninger <karl@denninger.net>, FreeBSD <freebsd-stable@freebsd.org>
Subject:   Re: ZFS...
Message-ID:  <CAOtMX2iB7xJszO8nT_KU+rFuSkTyiraMHddz1fVooe23bEZguA@mail.gmail.com>
In-Reply-To: <34539589-162B-4891-A68F-88F879B59650@sorbs.net>
References:  <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <CAOtMX2gf3AZr1-QOX_6yYQoqE-H+8MjOWc=eK1tcwt5M3dCzdw@mail.gmail.com> <56833732-2945-4BD3-95A6-7AF55AB87674@sorbs.net> <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it> <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net> <CAGMYy3tYqvrKgk2c==WTwrH03uTN1xQifPRNxXccMsRE1spaRA@mail.gmail.com> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <d0118f7e-7cfc-8bf1-308c-823bce088039@denninger.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <CAOtMX2gOwwZuGft2vPpR-LmTpMVRy6hM_dYy9cNiw+g1kDYpXg@mail.gmail.com> <34539589-162B-4891-A68F-88F879B59650@sorbs.net>

On Tue, Apr 30, 2019 at 8:05 AM Michelle Sullivan <michelle@sorbs.net> wrote:
>
>
>
> Michelle Sullivan
> http://www.mhix.org/
> Sent from my iPad
>
> > On 01 May 2019, at 00:01, Alan Somers <asomers@freebsd.org> wrote:
> >
> >> On Tue, Apr 30, 2019 at 7:30 AM Michelle Sullivan <michelle@sorbs.net> wrote:
> >>
> >> Karl Denninger wrote:
> >>> On 4/30/2019 05:14, Michelle Sullivan wrote:
> >>>>>> On 30 Apr 2019, at 19:50, Xin LI <delphij@gmail.com> wrote:
> >>>>>> On Tue, Apr 30, 2019 at 5:08 PM Michelle Sullivan <michelle@sorbs.net> wrote:
> >>>>>> but in my recent experience 2 issues colliding at the same time
> >>>>>> results in disaster
> >>>>> Do we know exactly what kind of corruption happened to your pool?
> >>>>> If you see it twice in a row, it might suggest a software bug that
> >>>>> should be investigated.
> >>>>>
> >>>>> All I know is it's a checksum error on a metaslab (122) and from
> >>>>> what I can gather it's the spacemap that is corrupt... but I am no
> >>>>> expert.  I don't believe it's a software fault as such, because this
> >>>>> was caused by a hard outage (damaged UPSes) whilst resilvering a
> >>>>> single (but completely failed) drive.  ...and after the first outage
> >>>>> a second occurred (same as the first but more damaging to the power
> >>>>> hardware)... the host itself was not damaged, nor were the drives or
> >>>>> controller.
> >>> .....
> >>>>> Note that ZFS stores multiple copies of its essential metadata, and
> >>>>> in my experience with my old, consumer grade crappy hardware (non-ECC
> >>>>> RAM, with several faulty, single hard drive pool: bad enough to crash
> >>>>> almost monthly and damage my data from time to time),
> >>>> This was a top-end consumer grade motherboard with non-ECC RAM that
> >>>> had been running for 8+ years without fault (except for hard drive
> >>>> platter failures).  Uptime would have been years if it wasn't for
> >>>> patching.
> >>> Yuck.
> >>>
> >>> I'm sorry, but that may well be what nailed you.
> >>>
> >>> ECC is not just about the random cosmic ray.  It also saves your
> >>> bacon when there are power glitches.
> >>
> >> No. Sorry, no.  If the data only made it halfway to disk, ECC isn't
> >> going to save you at all... it's all about power on the drives to
> >> complete the write.
> >
> > ECC RAM isn't about saving the last few seconds' worth of data from
> > before a power crash.  It's about not corrupting the data that gets
> > written long before a crash.  If you have non-ECC RAM, then a cosmic
> > ray/alpha ray/row hammer attack/bad luck can corrupt data after it's
> > been checksummed but before it gets DMAed to disk.  Then the disk will
> > contain corrupt data and you won't know it until you try to read it
> > back.
>
> I know this... unless I misread Karl's message he implied the ECC would
> have saved the corruption in the crash... which is patently false... I
> think you'll agree.

I don't think that's what Karl meant.  I think he meant that the
non-ECC RAM could've caused latent corruption that was only detected
when the crash forced a reboot and resilver.
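
To make that failure mode concrete, here is a rough sketch in plain
Python (not ZFS code; the 4 KiB block size, the sha256 stand-in for
ZFS's fletcher4/sha256 checksums, and the single-bit flip are all
illustrative assumptions) of why a flip in non-ECC RAM after
checksumming only shows up when the block is read back:

    import hashlib, os

    def checksum(buf: bytes) -> bytes:
        # Stand-in for the per-block checksum ZFS stores in the parent
        # block pointer (fletcher4 or sha256 in the real thing).
        return hashlib.sha256(buf).digest()

    data = os.urandom(4096)        # a block about to be written
    stored_sum = checksum(data)    # checksummed while still in RAM

    # A single bit flips in RAM (cosmic ray, row hammer, plain bad luck)
    # after the checksum was computed but before the DMA to disk, so the
    # corrupted bytes are what actually land on the platter.
    corrupted = bytearray(data)
    corrupted[123] ^= 0x01
    on_disk = bytes(corrupted)

    # Nothing complains at write time; only a later read, scrub, or
    # resilver recomputes the checksum and notices the mismatch.
    assert checksum(on_disk) != stored_sum
    print("latent corruption: detected only when the block is read back")

A scrub (and, for the affected vdev, a resilver) reads blocks back and
repeats essentially that comparison, which is why corruption that bad
RAM wrote out months earlier tends to surface all at once while
recovering from a crash.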

>
> Michelle
>
>
> >
> > -Alan
> >
> >>>
> >>> Unfortunately however there is also cache memory on most modern hard
> >>> drives, most of the time (unless you explicitly shut it off) it's on for
> >>> write caching, and it'll nail you too.  Oh, and it's never, in my
> >>> experience, ECC.
> >
> > Fortunately, ZFS never sends non-checksummed data to the hard drive.
> > So an error in the hard drive's cache RAM will usually get detected by
> > the ZFS checksum.
> >
> >>
> >> No comment on that - you're right in the first part; I can't comment
> >> on whether there are drives with ECC.
> >>
> >>>
> >>> In addition, however - and this is something I learned a LONG time
> >>> ago (think Z-80 processors!) - as in so many very important things,
> >>> "two is one and one is none."
> >>>
> >>> In other words without a backup you WILL lose data eventually, and it
> >>> WILL be important.
> >>>
> >>> Raidz2 is very nice, but as the name implies you have two
> >>> redundancies.  If you take three errors, or if, God forbid, you
> >>> *write* a block that has a bad checksum in it because it got scrambled
> >>> while in RAM, you're dead if that happens in the wrong place.
> >>
> >> Or in my case you write partial data, therefore invalidating the
> >> checksum...
> >>>
> >>>> Yeah... unlike UFS, which has to get really, really hosed before you
> >>>> have to restore from backup with nothing recoverable, it seems ZFS can
> >>>> get hosed when issues occur in just the wrong bit... but mostly it is
> >>>> recoverable (and my experience has been some nasty shit that always
> >>>> ended up being recoverable).
> >>>>
> >>>> Michelle
> >>> Oh that is definitely NOT true.... again, from hard experience,
> >>> including (but not limited to) on FreeBSD.
> >>>
> >>> My experience is that ZFS is materially more resilient, but there is
> >>> no such thing as "can never be corrupted by any set of events."
> >>
> >> The latter part is true - and my blog and my current situation are not
> >> limited to or aimed at FreeBSD specifically; FreeBSD is my experience.
> >> The former part... it has been very resilient, but I think (based on
> >> this certain set of events) it is easily corruptible and I have just
> >> been lucky.  You just have to hit a certain write to activate the
> >> issue, and whilst that write and issue might be very, very difficult
> >> (read: hit and miss) to hit in normal everyday scenarios, it can and
> >> will eventually happen.
> >>
> >>>   Backup
> >>> strategies for moderately large (e.g. many Terabytes) to very large
> >>> (e.g. Petabytes and beyond) get quite complex but they're also very
> >>> necessary.
> >>>
> >> and therein lies the problem.  If you don't have a backup solution
> >> costing many tens of thousands of dollars, you're either:
> >>
> >> 1/ down for a looooong time.
> >> 2/ losing all data and starting again...
> >>
> >> ..and that's the problem... with UFS you can recover most (in most
> >> situations) and, providing the *data* is there uncorrupted by the
> >> fault, you can get it all off with various tools even if it is a
> >> complete mess....  here I am with the data that is apparently ok, but
> >> the metadata is corrupt (and note: as I had stopped writing to the
> >> drive when it started resilvering, the data - all of it - should be
> >> intact... even if a mess.)
> >>
> >> Michelle
> >>
> >> --
> >> Michelle Sullivan
> >> http://www.mhix.org/
> >>
> >> _______________________________________________
> >> freebsd-stable@freebsd.org mailing list
> >> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> >> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"


