Date: Fri, 9 Mar 2012 15:22:53 +0100
From: Fabian Keil <freebsd-listen@fabiankeil.de>
To: freebsd-stable@freebsd.org
Subject: Re: FreeBSD root on a geli-encrypted ZFS pool
Message-ID: <20120309152253.17a108c2@fabiankeil.de>
References: <20120307174850.746a6b0a@fabiankeil.de>

"xenophon\\+freebsd" wrote:

> > -----Original Message-----
> > From: Fabian Keil [mailto:freebsd-listen@fabiankeil.de]
> > Sent: Wednesday, March 07, 2012 11:49 AM
> >
> > It's not clear to me why you enable geli integrity verification.
> > Given that it is single-sector-based, it seems inferior to ZFS's
> > integrity checks in every way and could actually prevent ZFS from
> > properly detecting (and, depending on the pool layout, correcting)
> > checksum errors itself.
>
> My goal in encrypting/authenticating the storage media is to prevent
> unauthorized external data access or tampering. My assumption is that
> ZFS's integrity checks have more to do with maintaining metadata
> integrity in the event of certain hardware or software faults (e.g.,
> operating system crashes, power outages) - that is to say, ZFS cannot
> tell if an attacker boots from a live CD, imports the zpool, fiddles
> with something, and reboots, whereas GEOM_ELI can if integrity checking
> is enabled (even if someone tampers with the encrypted data).

If the ZFS pool is located on GEOM_ELI providers, the attacker shouldn't
be able to import it unless the passphrase and/or keyfile are already
known. If the attacker tampers with the encrypted data used by the pool,
ZFS should detect it - unless it's a replay attack, in which case
enabling GEOM_ELI's integrity checking wouldn't have helped you either.

If the attacker only replays a couple of blocks, ZFS's integrity checks
are likely to catch it for most blocks, while GEOM_ELI's integrity
checking will not catch it for any block.

In my opinion, protecting ZFS's default checksums (which cover
non-metadata as well) with GEOM_ELI is sufficient. I don't see what
advantage additionally enabling GEOM_ELI's integrity verification
offers.

> This does raise an interesting question that merits further testing:
> What happens if a physical sector goes bad, whether that's due to a
> system bus or controller I/O error, a physical problem with the media
> itself, or someone actively tampering with the encrypted storage?
> GEOM_ELI would probably return some error back to ZFS for that sector,
> which could cause the entire vdev to go offline, but might just require
> scrubbing the zpool to fix.

> > I'm also wondering if you actually benchmarked the difference
> > between HMAC/MD5 and HMAC/SHA256. Unless the difference can
> > be easily measured, I'd probably stick with the recommendation.
>
> I based my choice of HMAC algorithm on the following forum post:
>
> http://forums.freebsd.org/showthread.php?t=12955

I'm wondering if dd's block size is correct; 4096 seems rather small.
Anyway, it's a test without a file system, so the ZFS overhead isn't
measured. I wasn't entirely clear about it, but my assumption was that
the ZFS overhead might be big enough to make the difference between
HMAC/MD5 and HMAC/SHA256 a lot less significant.

> I wouldn't recommend anyone use MD5 in real-world applications, either,
> so I'll update my instructions to use HMAC/SHA256 as recommended by
> geli(8).

It's still not clear to me why you recommend using an HMAC for geli at
all.

> > I would also be interested in benchmarks that show that geli(8)'s
> > recommendation to increase geli's block size to 4096 bytes makes
> > sense for ZFS. Is anyone aware of any?
>
> As far as I know, ZFS on FreeBSD has no issues with 4k-sector drives;
> see Ivan Voras' comments here:
>
> http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html
>
> Double-checking my zpool shows the correct value for ashift:
>
> masip205bsdfile# zdb -C tank | grep ashift
>         ashift: 12

I'm currently using sector sizes between 512 and 8192, so I'm not
actually expecting technical problems; it's just not clear to me how
much the sector size matters and whether 4096 is actually the best value
when using ZFS.

> Benchmarking different geli sector sizes would also be interesting and
> worth incorporating into these instructions. I'll add that to my to-do
> list as well.

Great.
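For reference, here's a sketch of the setup I'm arguing for: geli
without the -a (HMAC) option, 4096-byte sectors, and ZFS's own checksums
doing the integrity work on top. The device name, key path, and pool
name are placeholders, not taken from your instructions:

```shell
# Sketch only - run as root; ada0p3 and the key path are placeholders.
# Generate a key and initialize geli with 4096-byte sectors and no -a,
# so integrity relies on ZFS checksums over the encrypted provider.
dd if=/dev/random of=/root/ada0p3.key bs=64 count=1
geli init -e AES-XTS -l 256 -s 4096 -K /root/ada0p3.key /dev/ada0p3
geli attach -k /root/ada0p3.key /dev/ada0p3
zpool create tank /dev/ada0p3.eli
# Confirm ZFS picked up the 4096-byte sectors (ashift 12 = 2^12 bytes):
zdb -C tank | grep ashift
```

With a 4096-byte geli sector size, zdb should report ashift: 12, as in
your output above.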
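And here's a rough sketch of the kind of benchmark loop I had in mind
for comparing sector sizes. TARGET is a placeholder: to measure geli
you'd point it at a .eli provider re-initialized with each sector size
under test; the plain-file default is just a stand-in so the loop itself
can be tried without root:

```shell
#!/bin/sh
# Rough dd throughput comparison over several block sizes, after the
# method in the forum post. TARGET is a placeholder - point it at a
# geli provider (e.g. /dev/ada0p3.eli) for real measurements.
TARGET=${TARGET:-/tmp/geli-bench.img}
COUNT=4096   # blocks per run; raise this for more stable numbers
for bs in 512 1024 2048 4096 8192; do
    printf 'bs=%s: ' "$bs"
    # dd prints its transfer summary on stderr; keep only the last line
    dd if=/dev/zero of="$TARGET" bs="$bs" count="$COUNT" 2>&1 | tail -n 1
done
rm -f /tmp/geli-bench.img
```

Note that, as discussed above, a raw dd test leaves out the ZFS
overhead, so it only bounds the difference rather than settling it.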
Fabian