From: John Nielsen <lists@jnielsen.net>
To: freebsd-questions@freebsd.org
Date: Thu, 21 Sep 2006 14:52:38 -0400
Subject: Re: geom - help ...
In-Reply-To: <45122531.6010503@infracaninophile.co.uk>
Message-Id: <200609211452.39110.lists@jnielsen.net>

On Thursday 21 September 2006 01:37, Matthew Seaman wrote:
> Marc G.
> Fournier wrote:
> > So, again, if I'm reading through things correctly, I'll have to do
> > something like:
> >
> > gstripe st1 da1 da2
> > gstripe st2 da3 da4
> > gmirror drive st1 st2
> > newfs drive
>
> That's the wrong way round, I think. If you lose a drive, then you've
> lost the whole of one of your stripes and have no resilience. Shouldn't
> you rather stripe the mirrors:
>
> gmirror gm0 da1 da2
> gmirror gm1 da3 da4
> gstripe gs0 gm0 gm1
> newfs gs0
>
> This way if you lose a drive then only one of your gmirrors loses
> resilience and the other half of your disk space is unaffected.

I would recommend the 1+0 approach as well. In addition to increasing your
odds of surviving a multi-disk failure, it makes replacing a failed
component easier and faster--you only need to rebuild the affected
component mirror (which involves one command and duplication of half of the
total volume) instead of recreating a component stripe and then rebuilding
the whole mirror (which involves at least two commands and duplication of
the entire volume).

Regarding the spare, I think you're right that there isn't (yet) a way to
configure a system-wide hot spare, but it would not be hard to write a
monitoring script that gives you essentially the same thing. Assuming the
1+0 approach: every N seconds, check the health of both mirrors (using
"gmirror status" or similar). If volume V is degraded, do a
"gmirror forget V; gmirror insert V sparedev", e-mail the administrator,
and mark the spare as unavailable. After the failed drive is replaced, the
script (or better, a knob that the script knows how to check) should be
updated with the device name of the new spare.

For a 50% chance of having zero time-to-recovery (at the cost of more
expensive writes), you could also add the spare as a third member of one of
the mirror sets. If a member of that set fails, you still have a redundant
mirror.
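To make the watch-and-insert idea concrete, here is a minimal sketch. The volume names, the spare device, and the canned status text are all made up for illustration; a real script would feed live "gmirror status" output through the same filter and run the real commands instead of echoing them.

```shell
#!/bin/sh
# Print the name of every mirror whose Status column reads DEGRADED.
degraded_volumes() {
    awk '$2 == "DEGRADED" { sub("^mirror/", "", $1); print $1 }'
}

# Canned text in the general shape that "gmirror status" prints
# (illustrative only -- not captured from a real system):
sample='      Name    Status  Components
mirror/gm0  COMPLETE  da1 (ACTIVE)
                      da2 (ACTIVE)
mirror/gm1  DEGRADED  da3 (ACTIVE)'

# da8 is a hypothetical spare device.
for v in $(echo "$sample" | degraded_volumes); do
    echo "would run: gmirror forget $v && gmirror insert $v da8"
done
```

Run it from cron every minute or so; in a real deployment you would replace the echo with the actual gmirror commands plus a mail(1) to the administrator, and touch a flag file so the spare isn't inserted twice.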
If a member of the other set fails, you just do a "gmirror remove" to free
the spare from the 3-way mirror and then add it to the failed set.

From my own experience, I've been very happy with both gmirror and
gstripe, and in fact I just finished setting up a rather unorthodox volume
on my desktop at work. I have three drives (two of which were scavenged
from other machines): one 60GB and two 40GB. I wanted fault tolerance for
both / and /usr, I wanted /usr to be as big as possible, and I wanted
reasonable performance. I ruled out graid3 and gvinum raid5 since I want
to be able to boot easily from / and performance would be poor since the
40GB drives share a controller. I made / a mirror of two 10GB partitions
on the 40GB drives, made a stripe out of the remaining 30GB from the 40GB
drives, and added the stripe into a mirror set with the 60GB drive. It's
working quite nicely so far.

JN
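P.S. In case it's useful, the three-drive setup above boils down to
something like the following. The ad0/ad1/ad2 device and partition names
are hypothetical stand-ins, so treat this as a sketch rather than a
recipe:

```shell
# ad0 = 60GB drive; ad1, ad2 = 40GB drives (names assumed for illustration)
gmirror label -v gmroot /dev/ad1s1a /dev/ad2s1a        # / on two 10GB partitions
gstripe label -v gsusr  /dev/ad1s1d /dev/ad2s1d        # remaining 30GB + 30GB
gmirror label -v gmusr  /dev/ad0s1d /dev/stripe/gsusr  # mirror the stripe with the 60GB
newfs /dev/mirror/gmroot
newfs /dev/mirror/gmusr
```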