From: Matthias Buelow <mkb@mukappabeta.de>
To: freebsd-questions@freebsd.org
Date: Thu, 09 Dec 2004 02:52:27 +0100
Subject: Re: Has anybody EVER successfully recovered VINUM?

> Yes, I was too -- however, I wasn't as impressed with the fact that I
> had parity errors afterwards.  Have you run 'vinum checkparity' after
> these rebuilds?  In my case I suffered data corruption...
> AFAIK the only way to guarantee a consistent rebuild is to do it
> offline (at least in 4.x; I haven't tested gvinum in 5.x yet).

>> To play it safe you might want to unmount the volume before starting.

If this is indeed true, which I find a bit hard to believe, it should be
fixed ASAP.  I've never seen a RAID that had to be taken _offline_ to
rebuild parity onto a failed and replaced drive.  I've triggered rebuilds
on a few setups so far, including hardware RAID, RAIDframe and the Linux
software RAID (raid*) driver, and they have always worked fine while
there was heavy load on the volume (with reduced performance during the
rebuild, of course).

--
  Matthias Buelow;  mkb@{mukappabeta,informatik.uni-wuerzburg}.de
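
For readers wanting to verify parity after a rebuild, a rough sketch of the
vinum(8) subcommands mentioned above follows.  The plex name "raid5.p0" is
purely illustrative (vinum names plexes volume.pN); take the real name from
'vinum list' on your own system:

    # show volumes, plexes and subdisks, and note the RAID-5 plex name
    vinum list

    # read-only check of the parity blocks on the plex (illustrative name)
    vinum checkparity raid5.p0

    # if checkparity reports errors, rewrite the parity blocks
    vinum rebuildparity raid5.p0

Both checkparity and rebuildparity operate on one plex at a time and can take
a while on a large array, since they walk the entire plex.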