From owner-freebsd-stable@FreeBSD.ORG Wed Mar 7 22:01:02 2012
Date: Wed, 7 Mar 2012 17:00:46 -0500
From: "xenophon\\+freebsd" <xenophon+freebsd@irtnog.org>
To: freebsd-stable@freebsd.org
Subject: RE: FreeBSD root on a geli-encrypted ZFS pool
In-Reply-To: <20120307174850.746a6b0a@fabiankeil.de>
References: <20120307174850.746a6b0a@fabiankeil.de>

> -----Original Message-----
> From: Fabian Keil [mailto:freebsd-listen@fabiankeil.de]
> Sent: Wednesday, March 07, 2012 11:49 AM

Thanks for your comments!

> It's not clear to me why you enable geli integrity verification.
>
> Given that it is single-sector-based it seems inferior to ZFS's
> integrity checks in every way and could actually prevent ZFS from
> properly detecting (and depending on the pool layout correcting)
> checksum errors itself.

My goal in encrypting and authenticating the storage media is to
prevent unauthorized external access to, or tampering with, the data.
My understanding is that ZFS's integrity checks are mainly about
catching accidental corruption of data and metadata caused by hardware
or software faults (e.g., operating system crashes, power outages).
Those checksums are not keyed, so ZFS cannot tell whether an attacker
has booted from a live CD, imported the zpool, fiddled with something,
and rebooted.  GEOM_ELI with integrity verification enabled can detect
that kind of tampering with the encrypted data, because its HMACs are
computed with a secret key.  (The init command I use to turn this on is
sketched near the end of this message.)

This does raise an interesting question that merits further testing:
what happens if a physical sector goes bad, whether due to a system bus
or controller I/O error, a physical problem with the media itself, or
someone actively tampering with the encrypted storage?  GEOM_ELI would
probably return an error to ZFS for that sector, which could take the
entire vdev offline but might only require scrubbing the zpool to fix.
(The commands I'd reach for in that case are also sketched near the end
of this message.)

> I'm also wondering if you actually benchmarked the difference
> between HMAC/MD5 and HMAC/SHA256.  Unless the difference can
> be easily measured, I'd probably stick with the recommendation.

I based my choice of HMAC algorithm on the following forum post:

http://forums.freebsd.org/showthread.php?t=12955

I wouldn't recommend MD5 for real-world applications either, so I'll
update my instructions to use HMAC/SHA256, as geli(8) recommends.  I
chose MD5 solely because my test server is rather old (a Dell PowerEdge
2400 with a single 1-GHz Pentium III processor), as is the underlying
storage (ATA/100 parallel ATA).  My threat model involves a thief
foolishly making off with the server in my office, only to realize too
late that the hardware is actually garbage.

> I would also be interested in benchmarks that show that geli(8)'s
> recommendation to increase geli's block size to 4096 bytes makes
> sense for ZFS.

Is anyone aware of any?  As far as I know, ZFS on FreeBSD has no issues
with 4k-sector drives; see Ivan Voras' comments here:

http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html

Double-checking my zpool shows the correct value for ashift, since the
.eli provider reports 4096-byte sectors:

masip205bsdfile# zdb -C tank | grep ashift
            ashift: 12

Benchmarking different geli sector sizes would also be interesting and
worth incorporating into these instructions.  I'll add that to my
to-do list as well.
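In case anyone wants to try it before I do, I'd probably start with
something like the following rather than touching the real disks.
This is only a sketch: the md(4) size, the amount of data written, and
the algorithm/sector-size combinations are arbitrary examples, not
measurements.

    # mdconfig -a -t swap -s 1g -u 9
    # geli onetime -a HMAC/SHA256 -s 4096 md9
    # dd if=/dev/zero of=/dev/md9.eli bs=1m count=512
    # geli detach md9.eli
    # geli onetime -a HMAC/MD5 -s 4096 md9
    # dd if=/dev/zero of=/dev/md9.eli bs=1m count=512
    # geli detach md9.eli
    # mdconfig -d -u 9

dd prints a throughput figure when it finishes, and because the md(4)
device is memory-backed the numbers should mostly reflect the CPU cost
of the encryption and HMAC, which is the part that worries me on a
1-GHz Pentium III.  Repeating the same pairs with -s 512 would cover
the sector-size question, although for that one I'd also want numbers
against the real disk.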
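For anyone following along at home, the initialization step my
instructions describe comes down to roughly the following.  The
partition and key file path are placeholders, and per the above I'll
be documenting HMAC/SHA256 rather than HMAC/MD5; my write-up wraps
this in the usual root-on-ZFS partitioning and boot configuration,
which I've omitted here.

    # geli init -b -a HMAC/SHA256 -s 4096 -K /boot/tank.key /dev/ada0p3
    # geli attach -k /boot/tank.key /dev/ada0p3
    # zpool create tank /dev/ada0p3.eli

The -a option is the integrity verification discussed above, -s 4096
sets the 4096-byte sector size that gives ZFS its ashift of 12, and -b
makes the system ask for the passphrase early in the boot process so
the pool can be mounted as root.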
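And if geli's integrity checking ever does start rejecting sectors
(the tampering or bad-media case above), the recovery attempt I have
in mind is nothing more than:

    # zpool scrub tank
    # zpool status -v tank

With a mirror or raidz vdev, ZFS should be able to rewrite the
affected blocks from a good copy; with a single-disk pool it can only
tell me which files were hit.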
Best wishes,
Matthew

-- 
I FIGHT FOR THE USERS