Date:      Wed, 7 Mar 2012 17:00:46 -0500
From:      "xenophon\\+freebsd" <xenophon+freebsd@irtnog.org>
To:        <freebsd-stable@freebsd.org>
Subject:   RE: FreeBSD root on a geli-encrypted ZFS pool
Message-ID:  <BABF8C57A778F04791343E5601659908236BDA@cinip100ntsbs.irtnog.net>
In-Reply-To: <20120307174850.746a6b0a@fabiankeil.de>
References:  <BABF8C57A778F04791343E5601659908236BD9@cinip100ntsbs.irtnog.net> <20120307174850.746a6b0a@fabiankeil.de>

> -----Original Message-----
> From: Fabian Keil [mailto:freebsd-listen@fabiankeil.de]
> Sent: Wednesday, March 07, 2012 11:49 AM

Thanks for your comments!

> It's not clear to me why you enable geli integrity verification.
>
> Given that it is single-sector-based it seems inferior to ZFS's
> integrity checks in every way and could actually prevent ZFS from
> properly detecting (and depending on the pool layout correcting)
> checksum errors itself.

My goal in encrypting/authenticating the storage media is to prevent
unauthorized external data access or tampering.  My understanding is
that ZFS's integrity checks are aimed at detecting data and metadata
corruption caused by certain hardware or software faults (e.g.,
operating system crashes, power outages) - that is to say, ZFS cannot
tell if an attacker boots from a live CD, imports the zpool, fiddles
with something, and reboots, whereas GEOM_ELI can if integrity
checking is enabled (even if someone tampers with the encrypted data
directly).  This does raise an interesting question that merits
further testing: What happens if a physical sector goes bad, whether
due to a system bus or controller I/O error, a physical problem with
the media itself, or someone actively tampering with the encrypted
storage?  GEOM_ELI would presumably return an I/O error to ZFS for
that sector; in the worst case that could take the whole vdev
offline, but with a redundant pool it might just require a zpool
scrub to fix.
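
If that reading is right, recovery should look no different from any
other read error on a redundant pool - a scrub ought to rewrite the
failed sectors from the surviving copies.  A minimal check, assuming
the pool is named "tank" as in my instructions and has some
redundancy:

  masip205bsdfile# zpool scrub tank
  masip205bsdfile# zpool status -v tank

If the pool has no redundancy, zpool status should at least name the
files affected by the unrecoverable sectors.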

> I'm also wondering if you actually benchmarked the difference
> between HMAC/MD5 and HMAC/SHA256. Unless the difference can
> be easily measured, I'd probably stick with the recommendation.

I based my choice of HMAC algorithm on the following forum post:

http://forums.freebsd.org/showthread.php?t=12955

I wouldn't recommend anyone use MD5 in real-world applications either,
so I'll update my instructions to use HMAC/SHA256 as recommended by
geli(8).  I chose MD5 solely because my test server is rather old (a
Dell PowerEdge 2400 with a single 1-GHz Pentium III processor), as is
the underlying storage (ATA/100 PATA, i.e. 100 MB/s at best).  My
threat model involves a thief foolishly making off with the server in
my office, only to realize too late that the hardware is actually
garbage.
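
Concretely, I expect the updated initialization step to look roughly
like this (the provider name and key file path below are placeholders
for whatever your layout uses):

  masip205bsdfile# geli init -b -a HMAC/SHA256 -s 4096 -K /root/ada0p3.key ada0p3
  masip205bsdfile# geli attach -k /root/ada0p3.key ada0p3

Here -b marks the provider for attachment at boot, -a selects the
HMAC algorithm, and -s 4096 is the sector size discussed below.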

> I would also be interested in benchmarks that show that geli(8)'s
> recommendation to increase geli's block size to 4096 bytes makes
> sense for ZFS. Is anyone aware of any?

As far as I know, ZFS on FreeBSD has no issues with 4k-sector drives;
see Ivan Voras' comments here:

http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html

Double-checking my zpool shows the correct value for ashift:

  masip205bsdfile# zdb -C tank | grep ashift
                  ashift: 12
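
That value shouldn't need any manual help here: because geli was
initialized with -s 4096, the .eli provider advertises 4096-byte
sectors, and zpool create derives ashift from the provider.
diskinfo(8) shows what the provider reports (the provider name below
is just illustrative) and should print 4096:

  masip205bsdfile# diskinfo -v ada0p3.eli | grep sectorsize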

Benchmarking different geli sector sizes would also be interesting and
worth incorporating into these instructions.  I'll add that to my to-do
list as well.
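
A rough first pass could run entirely on a throwaway memory disk so
no real data is at risk - something like the sketch below, repeated
per sector size of interest (the md unit, sizes, and counts are
arbitrary):

  masip205bsdfile# mdconfig -a -t swap -s 1g -u 9
  masip205bsdfile# geli onetime -a HMAC/SHA256 -s 512 md9
  masip205bsdfile# dd if=/dev/zero of=/dev/md9.eli bs=1m count=256
  masip205bsdfile# geli detach md9.eli
  masip205bsdfile# geli onetime -a HMAC/SHA256 -s 4096 md9
  masip205bsdfile# dd if=/dev/zero of=/dev/md9.eli bs=1m count=256
  masip205bsdfile# geli detach md9.eli
  masip205bsdfile# mdconfig -d -u 9

Of course, dd(1) against the raw .eli device only measures sequential
throughput; a real ZFS workload on top of the provider would be more
representative.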

Best wishes,
Matthew

-- 
I FIGHT FOR THE USERS



