From owner-freebsd-questions Mon Jan 21 14:50:18 2002
Date: Mon, 21 Jan 2002 16:50:10 -0600
From: Doug Poland
To: Tony Landells
Cc: tony, questions@FreeBSD.ORG
Subject: Re: faulty vinum plex, need help please
Message-ID: <20020121165010.B26558@polands.org>
In-Reply-To: <200201212213.JAA29808@tungsten.austclear.com.au>; from ahl@austclear.com.au on Tue, Jan 22, 2002 at 09:13:15AM +1100

On Tue, Jan 22, 2002 at 09:13:15AM +1100, Tony Landells wrote:
> doug@polands.org said:
> > Did you put the 0 in the vinum.conf?
>
> Yes, the 0 would go in vinum.conf as the subdisk length.
>
> However, you usually get some sort of error when you attach a plex
> to a mirror, which reflects that the data hasn't been copied (yet).
>
> Have you tried telling vinum to start the subdisks?
>
>     vinum start dataraid.p1.s0
>     vinum start dataraid.p1.s1
>     vinum start dataraid.p1.s2
>     vinum start dataraid.p1.s3
>
> If all the subdisks start successfully, the plex should come up as
> a result--at the moment it's faulty because its subdisks aren't up.
>
I issued the commands as suggested.
Each replied with:

    Reviving dataraid.p1.s0 in the background
    Reviving dataraid.p1.s1 in the background
    Reviving dataraid.p1.s2 in the background
    Reviving dataraid.p1.s3 in the background

vinum list reports:

    # vinum list
    4 drives:
    D a                     State: up       Device /dev/da0s2e      Avail: 0/8628 MB (0%)
    D b                     State: up       Device /dev/da1s2e      Avail: 0/8628 MB (0%)
    D c                     State: up       Device /dev/da2s2e      Avail: 0/8628 MB (0%)
    D d                     State: up       Device /dev/da3s2e      Avail: 0/8628 MB (0%)

    1 volumes:
    V dataraid              State: up       Plexes:       2 Size:         16 GB

    2 plexes:
    P dataraid.p0         S State: up       Subdisks:     4 Size:         16 GB
    P dataraid.p1         S State: faulty   Subdisks:     4 Size:         16 GB

    8 subdisks:
    S dataraid.p0.s0        State: up       PO:        0  B Size:       4314 MB
    S dataraid.p0.s1        State: up       PO:      512 kB Size:       4314 MB
    S dataraid.p0.s2        State: up       PO:     1024 kB Size:       4314 MB
    S dataraid.p0.s3        State: up       PO:     1536 kB Size:       4314 MB
    S dataraid.p1.s0        State: R 4%     PO:        0  B Size:       4314 MB
    S dataraid.p1.s1        State: R 3%     PO:      512 kB Size:       4314 MB
    S dataraid.p1.s2        State: R 3%     PO:     1024 kB Size:       4314 MB
    S dataraid.p1.s3        State: R 3%     PO:     1536 kB Size:       4314 MB

So once the subdisks are revived, will the plex come up automatically, or must I start it myself? Could you also speculate as to why this happened? I'd like to understand what went wrong and how to avoid or fix it in the future.

Many thanks for your help so far,

--
Regards,
Doug
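[While the revive runs, progress can be watched by re-running "vinum list" and looking for subdisks still in the "R n%" state. A minimal sketch of that check follows; the check_revive helper is not part of vinum, it is a hypothetical wrapper that parses listing output in the format shown above.]

```shell
#!/bin/sh
# Sketch (not from the original thread): scan a captured "vinum list"
# for subdisks that are still reviving.
check_revive() {
    # Subdisk lines look like:
    #   S dataraid.p1.s0   State: R 4%   PO: ...   Size: ...
    #   S dataraid.p0.s0   State: up     PO: ...   Size: ...
    # so field 4 is the state ("R" while a revive is in progress)
    # and field 5 is the percentage completed.
    awk '
        $1 == "S" && $4 == "R" { n++; printf "%s reviving (%s)\n", $2, $5 }
        END { if (n) printf "%d subdisk(s) still reviving\n", n
              else   print "all subdisks up" }
    '
}

# Example with an abridged copy of the listing from this message:
check_revive <<'EOF'
S dataraid.p0.s0        State: up       PO:        0  B Size:       4314 MB
S dataraid.p1.s0        State: R 4%     PO:        0  B Size:       4314 MB
S dataraid.p1.s1        State: R 3%     PO:      512 kB Size:       4314 MB
EOF
```

Against a live system the same check would be fed directly, e.g. "vinum list | check_revive", repeated until no subdisk reports an "R" state.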