Date:      Tue, 30 Apr 2019 18:09:06 +1000
From:      Michelle Sullivan <michelle@sorbs.net>
To:        Andrea Venturoli <ml@netfence.it>
Cc:        freebsd-stable <freebsd-stable@freebsd.org>
Subject:   Re: ZFS...
Message-ID:  <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net>
In-Reply-To: <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it>
References:  <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <CAOtMX2gf3AZr1-QOX_6yYQoqE-H%2B8MjOWc=eK1tcwt5M3dCzdw@mail.gmail.com> <56833732-2945-4BD3-95A6-7AF55AB87674@sorbs.net> <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it>



Michelle Sullivan
http://www.mhix.org/
Sent from my iPad

> On 30 Apr 2019, at 17:10, Andrea Venturoli <ml@netfence.it> wrote:
> 
>> On 4/30/19 2:41 AM, Michelle Sullivan wrote:
>> 
>> The system was originally built on 9.0, and got upgraded throughout the years... zfsd was not available back then.  So I get your point, but maybe you didn't realize this blog was a history of 8+ years?
> 
> That's one of the first things I thought about while reading the original post: what can be inferred from it is that ZFS might not have been that good in the past.
> It *could* still suffer from the same problems or it *could* have improved and be more resilient.
> Answering that would be interesting...
> 

Without a doubt it has come a long way, but in my opinion, until there is a tool to walk the data (to transfer it out), or something that can either repair or invalidate metadata (such as a corrupt spacemap), there is still a fatal flaw that makes it questionable to use... and that is for one reason alone (regardless of my current problems).

Consider...

If one triggers such a fault on a production server, how can one justify transferring multiple terabytes (or even petabytes, now) of data from backup to repair an unmountable/faulted array?  Every backup solution I currently know of would take days, if not weeks, to restore a store of the size ZFS is touted as supporting.
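(Back of the envelope, assuming a sustained 10Gbit/s restore path, which is already generous for most backup systems: 10Gbit/s is roughly 1.25GB/s, so

   100 TB / 1.25 GB/s  =  ~80,000 s   =  just under a day
     1 PB / 1.25 GB/s  =  ~800,000 s  =  over 9 days

and that is a best case, with the pipe, the backup media, and the pool all sustaining line rate the whole time.)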

Now, yes, most production environments have multiple backing stores, so there will be a server or ten to switch to whilst the store is being recovered, but it still wouldn't be a pleasant experience... not to mention that if one store is corrupted, there is a chance the other store(s) would be affected the same way if they are in the same DC (e.g. a DC fire, which I have seen)... and if you have multi-DC stores to protect against that, the size of the pipes between DCs clearly comes into play.
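(The standard mitigation is incremental replication, so only the deltas cross the pipe. A rough sketch, assuming recursive snapshots already exist on a source pool "tank" and a receiving host "dr-host", both hypothetical names:

   zfs snapshot -r tank@hourly2
   zfs send -R -i tank@hourly1 tank@hourly2 | ssh dr-host zfs receive -dF tank

The initial seed is still a full copy over the wire though, so pipe size matters at least once, and again every time a store has to be rebuilt from its sibling.)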

Thoughts?

Michelle




