Date:      Tue, 30 Apr 2019 10:28:36 -0400
From:      Paul Mather <paul@gromit.dlib.vt.edu>
To:        Michelle Sullivan <michelle@sorbs.net>
Cc:        rainer@ultra-secure.de, owner-freebsd-stable@freebsd.org, freebsd-stable <freebsd-stable@freebsd.org>, Andrea Venturoli <ml@netfence.it>
Subject:   Re: ZFS...
Message-ID:  <D73696A4-7D14-4782-85E9-001FCDD36EC9@gromit.dlib.vt.edu>
In-Reply-To: <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net>
References:  <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <CAOtMX2gf3AZr1-QOX_6yYQoqE-H+8MjOWc=eK1tcwt5M3dCzdw@mail.gmail.com> <56833732-2945-4BD3-95A6-7AF55AB87674@sorbs.net> <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it> <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net>

On Apr 30, 2019, at 5:05 AM, Michelle Sullivan <michelle@sorbs.net> wrote:

>
>
> Michelle Sullivan
> http://www.mhix.org/
> Sent from my iPad
>
>> On 30 Apr 2019, at 18:44, rainer@ultra-secure.de wrote:
>>
>> Am 2019-04-30 10:09, schrieb Michelle Sullivan:
>>
>>> Now, yes, most production environments have multiple backing stores, so
>>> they will have a server or ten to switch to whilst the store is being
>>> recovered, but it still wouldn’t be a pleasant experience... not to
>>> mention the possibility that if one store is corrupted there is a
>>> chance the other store(s) would be affected in the same way if they
>>> are in the same DC (e.g. a DC fire - which I have seen)... and if you
>>> have multi-DC stores to protect against that, the size of the pipes
>>> between DCs clearly comes into play.
>>
>>
>> I have one customer with about 13T of ZFS - and because it would take a
>> while to restore from actual backups, it zfs-sends delta snapshots every
>> hour to a standby system.
>>
>> It was handy when we had to rebuild the system with different HBAs.
>
> I wonder what would happen if you scaled that up by just 10x (storage) and
> had the master blow up such that it needs to be restored from backup... how
> long would one be praying to higher powers that there is no problem with
> the backup? (As in no outage or error causing a complete outage.)...
> Don’t get me wrong... we all get to that position at some time, but in my
> recent experience 2 issues colliding at the same time results in
> disaster.  13T is really not something I have issues with, as I can
> usually cobble something together with 16T (at least until 6T drives
> became a viable (cost and availability at short notice) option)... even
> 10T is becoming easier to get hold of now... but I have a measly 96T
> here and it takes weeks, even with gigabit bonded interfaces, when I need
> to restore.
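
As an aside, the hourly delta-snapshot scheme rainer describes boils down
to something like the sketch below. This is a minimal illustration only -
the dataset "tank/data", the target "backup/data" and the host "standby"
are made-up names, and a real setup would keep more snapshot history (or
use an existing tool such as syncoid or zrepl) rather than this bare loop:

    #!/bin/sh
    # Hourly ZFS delta replication to a warm standby (sketch; all names
    # are hypothetical).  Run from cron, e.g.: 0 * * * * /root/zfs-sync.sh
    DS=tank/data
    DEST=backup/data
    HOST=standby

    # Most recent existing snapshot of $DS (empty on the first run).
    PREV=$(zfs list -H -t snapshot -o name -s creation | grep "^$DS@" | tail -1)

    # Take a new snapshot, then send a full or an incremental stream.
    NOW="$DS@$(date +%Y-%m-%d_%H%M)"
    zfs snapshot "$NOW"
    if [ -n "$PREV" ]; then
        zfs send -i "$PREV" "$NOW" | ssh "$HOST" zfs recv -F "$DEST"
    else
        zfs send "$NOW" | ssh "$HOST" zfs recv -F "$DEST"
    fi

Because each incremental stream carries only the blocks changed since the
previous snapshot, the hourly transfers stay small even when the pool
itself is large.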


Such is the curse of large-scale storage when disaster befalls it.
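
For Michelle's 96T case, a rough back-of-envelope shows why a restore
takes weeks (assuming "96T" means 96 TB and the bond is two 1 GbE links -
the link count is my guess):

    96 TB x 8 bits/byte   = 768 Tbit to move
    768 Tbit / 2 Gbit/s   = 384,000 s, about 4.4 days at theoretical line rate
    at ~50% effective throughput (protocol overhead, disk contention)
                          = roughly 9 days, before any retries or verification

Add one error partway through a transfer on that scale and "weeks" is
easily the result.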

I guess you need to invent a home-brew version of Amazon Snowball or Amazon
Snowmobile. ;-)

Cheers,

Paul.



