From owner-freebsd-hackers@FreeBSD.ORG Sat May 30 20:29:19 2009
Date: Sat, 30 May 2009 16:27:44 -0400
From: Mike Meyer <mwm-keyword-freebsdhackers2.e313df@mired.org>
To: xorquewasp@googlemail.com
Cc: freebsd-hackers@freebsd.org
Subject: Re: Request for opinions - gvinum or ccd?
Message-ID: <20090530162744.5d77e9d1@bhuda.mired.org>
In-Reply-To: <20090530191840.GA68514@logik.internal.network>
References: <20090530175239.GA25604@logik.internal.network>
 <20090530144354.2255f722@bhuda.mired.org>
 <20090530191840.GA68514@logik.internal.network>
Organization: Meyer Consulting

On Sat, 30 May 2009 20:18:40 +0100 xorquewasp@googlemail.com wrote:
> > If you're running a 7.X 64-bit system with a couple of GIG of ram,
> > expect it to be in service for years without having to reformat the
> > disks, and can afford another drive, I'd recommend going to raidz on a
> > three-drive system. That will give you close to the size/performance
> > of your RAID0 system, but let you lose a disk without losing data. The
> > best you can do with zfs on two disks is a mirror, which means write
> > throughput will suffer.
>
> Certainly a lot to think about.
>
> The system has 12gb currently, with room to upgrade. I currently have
> two 500gb drives and one 1tb drive. I wanted the setup to be essentially
> two drives striped, backed up onto one larger one nightly. I wanted the
> large backup drive to be as "isolated" as possible, e.g., in the event of
> some catastrophic hardware failure, I can remove it and place it in
> another machine without a lot of stressful configuration to recover the
> data (not possible with a RAID configuration involving all three drives,
> as far as I'm aware).

The last bit is wrong. Moving a zfs pool between two systems is pretty
straightforward. The configuration information is on the drives; you
just do "zpool import <poolname>" after plugging them in, and if the
mount point exists, it'll mount it. If the system crashed with the zfs
pool active, you might have to add -f to force the import.
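For example, assuming the pool is named "tank" (the name is just a
placeholder), the whole move amounts to:

    # on the old machine, if it's still healthy:
    zpool export tank

    # on the new machine, after plugging the drives in:
    zpool import           # lists the pools found on the attached disks
    zpool import tank      # imports and mounts the pool
    zpool import -f tank   # same, forced, if the old system crashed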
Geom is pretty much the same way, except you can configure it not to
write the config data to disk, thus forcing you to do the setup
manually (the behavior you were expecting). I'm not sure geom is as
smart if the drives change names, though.

RAID support and volume management have come a long way from the days
of ccd and vinum; zfs in particular is a major advance. If you aren't
aware of its advantages, take the time to read the zfs & zpool man
pages, at the very least, before committing to geom (not that geom
isn't pretty slick in and of itself, but zfs solves a more pressing
problem).

Hmm. Come to think of it, you ought to be able to use gstripe to
stripe your disks, then put a zpool on that, which should get you the
advantages of zfs with a striped disk. But that does seem odd to me.

http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
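P.S. If you do try the gstripe + zpool idea, the setup would look
something like this (an untested sketch; the device names, stripe size
and pool name are all just examples):

    # glue the two 500gb drives into one striped provider
    gstripe label -s 131072 data ad4 ad6

    # then build the pool on top of the resulting device
    zpool create tank /dev/stripe/data

Though note that zfs will stripe across multiple disks on its own
("zpool create tank ad4 ad6"), which is part of why layering it on
gstripe seems odd.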