From: David Nugent <davidn@datalinktech.com.au>
Date: Thu, 18 May 2006 09:53:09 +1000
To: Daniel O'Connor
Cc: freebsd-stable@freebsd.org
Subject: Re: RAID rebuild problem
Message-ID: <446BB765.9000800@datalinktech.com.au>
In-Reply-To: <200605171323.19970.doconnor@gsoft.com.au>

Daniel O'Connor wrote:
> and rebuilt the array..
> sudo atacontrol rebuild ar0
>
> However the status stayed at 0%.

On the rare occasions I've needed to do this, going back to the 5.1 days, atacontrol rebuild has stayed at 0%; the last time I tried was on 6.1-PRERELEASE around mid-February. The first time I gave up waiting after two days; now I'm not prepared to be that patient and won't bother again until I hear it has been fixed.

Instead, I boot from a live CD, delete the RAID, dd-copy the "good" disk to the degraded disk(s), redefine the RAID, and reboot (roughly the sequence sketched below). That still takes some hours depending on the size and speed of the disks, but it at least gets the RAID back up. Since hot-swap isn't currently supported, the requirement to boot into single-user mode isn't a severe limitation, but the downtime should be unnecessary.
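
In concrete terms, the workaround is something like the following. This is only a sketch: the ar0/ad0/ad2 device names are examples (ad0 standing in for the surviving member, ad2 for the stale one) and will differ per system.

  # from the live CD, with the array not mounted:
  # dissolve the existing ar0 array
  atacontrol delete ar0
  # raw-copy the good disk onto the degraded one
  dd if=/dev/ad0 of=/dev/ad2 bs=1m
  # recreate the mirror from the now-identical members
  atacontrol create RAID1 ad0 ad2
  reboot

The dd pass is what takes the hours; once the members are identical, recreating the array itself is quick.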