Date:      Thu, 30 Jun 2011 22:05:04 -0700
From:      Todd Wasson <tsw5@duke.edu>
To:        "C. P. Ghost" <cpghost@cordula.ws>, Pete French <petefrench@ingresso.co.uk>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: zfs on geli vs. geli on zfs (via zvol)
Message-ID:  <EFC7DAF3-D3C5-49CC-92B0-572D28E7A37C@duke.edu>
In-Reply-To: <BANLkTi=Ck_yTxS70GX0-45-DOrVHLYq7gw@mail.gmail.com>
References:  <alpine.BSF.2.00.1106281131250.23640@skylab.org> <BANLkTi=Ck_yTxS70GX0-45-DOrVHLYq7gw@mail.gmail.com>

Thanks to both C. P. and Pete for your responses.  Comments inline:

> Case 1.) is probably harmless, because geli would return a
> corrupted sector's content to zfs... which zfs will likely detect
> because it wouldn't checksum correctly. So zfs will correct it
> out of redundant storage, and write it back through a new
> encryption. BE CAREFUL: don't enable hmac integrity checks
> in geli, as that would prevent geli from returning corrupted
> data and would result in hangs!

Perhaps the hmac integrity checks are related to the lack of error
reporting back to zfs that Pete referred to?  Maybe we need someone
with more technical experience with the filesystem / disk access
infrastructure to weigh in, but it still doesn't seem clear to me
which option is best.
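
For concreteness, my understanding is that this means initializing the
geli providers without the -a flag, something like the following
(untested sketch; the device name ada1 and the parameters are just
placeholders):

    # No -a (HMAC authentication) flag: a corrupted sector is passed up
    # to zfs, which can detect it via checksums and repair it from
    # redundancy, instead of geli rejecting the read outright.
    geli init -l 256 -s 4096 /dev/ada1
    geli attach /dev/ada1        # prompts for the passphrase
    # zfs then builds its pool on the decrypted provider /dev/ada1.eli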

> Case 2.) is a bigger problem. If a sector containing vital
> geli metadata (perhaps portions of keys?) gets corrupted,
> and geli had no way to detect and/or correct this (e.g. by
> using redundant sectors on the same .eli volume!), the whole
> .eli, or maybe some stripes out of it, could become useless.
> ZFS couldn't repair this at all... at least not automatically.
> You'll have to MANUALLY reformat the failed .eli device, and
> resilver it from zfs redundant storage later.

This is precisely the kind of thing that made me think about putting
zfs directly on the disks instead of on geli...  This, and other
unknown issues that could crop up that geli has no way to guard
against.
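
That said, geli does let you save a copy of its metadata sector, which
might at least cover the metadata-corruption case you describe.
Something like this, if I read geli(8) correctly (the file path is a
placeholder):

    # Save the geli metadata sector (which holds the encrypted master
    # key) right after initializing the provider...
    geli backup /dev/ada1 /root/ada1.eli.meta
    # ...so that if it is ever corrupted, it can be restored without
    # reformatting the device and doing a full resilver:
    geli restore /root/ada1.eli.meta /dev/ada1
    geli attach /dev/ada1

Corruption of ordinary data sectors would of course still need zfs's
redundancy, as you say.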

> There may be other failure modes involved as well. I don't know.
> But in most practical day-to-day uses, with enough redundancy
> and regular backups, a zfs-over-geli setup should be good enough.

I understand the point here, but I'm specifically thinking about my
backup server.  As I understand it, part of the purpose of zfs is to
be reliable enough to run on a backup server itself, given some
redundancy as you say.  Perhaps asking for encryption as well is
asking too much (at least, unless zfs v30 with zfs-crypto ever gets
open-sourced and ported) but I'd really like to maintain zfs'
stability while also having an option for encryption.

> I wouldn't put {zfs,ufs}-over-geli-over-raw-zpool though, as this
> would involve considerable overhead, IMHO. In this case, I'd
> rather use a gmirror as a backend, as in a setup:
>  {zfs,ufs}-over-geli-over-{gmirror,graid3}
> or something similar. But I've never tried this myself.
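
For reference, I take it that setup would look roughly like this
(untested sketch; the device and mirror names are placeholders):

    # Mirror two disks with gmirror, encrypt the mirror with geli,
    # and put the filesystem on the encrypted provider:
    gmirror label -v gm0 /dev/ada1 /dev/ada2
    geli init -s 4096 /dev/mirror/gm0
    geli attach /dev/mirror/gm0
    newfs -U /dev/mirror/gm0.eli   # for ufs; zfs would sit here instead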

I understand about the overhead, but I'm interested in using zfs via
raidz rather than gmirror or graid3, because of the benefits
(detection of silent corruption, etc.) that raidz gives you.  I think
your suggestion is a pretty good one in terms of the
performance/reliability tradeoff, though.  In my specific case I'm
more likely to pay a performance cost than a reliability cost, but
only because my server spends most of its time idling, and throughput
isn't really an issue.  Thanks regardless, though.
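
Concretely, what I have in mind is a raidz on top of the encrypted
providers, something like this (again just a sketch with placeholder
device names, assuming each disk was geli-initialized and attached as
above):

    # geli handles only the encryption; zfs handles redundancy and the
    # detection/repair of silent corruption across the .eli providers:
    zpool create tank raidz /dev/ada1.eli /dev/ada2.eli /dev/ada3.eli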


Todd


