From owner-freebsd-fs@FreeBSD.ORG Wed Apr 23 12:03:11 2014
Date: Wed, 23 Apr 2014 14:03:07 +0200
From: Johan Hendriks
To: Hugo Lombard, freebsd-fs@freebsd.org
Subject: Re: ZFS unable to import pool
Message-ID: <5357ABFB.9060702@gmail.com>
In-Reply-To: <20140423120042.GK2830@sludge.elizium.za.net>

On 23-04-14 14:00, Hugo Lombard wrote:
> On Wed, Apr 23, 2014 at 12:18:37PM +0200, Johan Hendriks wrote:
>> Did you add an extra disk to the pool at some point in the past?
>> That could explain the whole issue, as the pool is missing a whole vdev.
>>
> I agree that there's a vdev missing...
>
> I was able to "simulate" the current problematic import state (sans
> failed "disk7", since that doesn't seem to be the stumbling block) by
> adding 5 disks [1] to get to here:
>
>   # zpool status test
>     pool: test
>    state: ONLINE
>     scan: none requested
>   config:
>
>           NAME        STATE     READ WRITE CKSUM
>           test        ONLINE       0     0     0
>             raidz1-0  ONLINE       0     0     0
>               md3     ONLINE       0     0     0
>               md4     ONLINE       0     0     0
>               md5     ONLINE       0     0     0
>               md6     ONLINE       0     0     0
>               md7     ONLINE       0     0     0
>             raidz1-2  ONLINE       0     0     0
>               md8     ONLINE       0     0     0
>               md9     ONLINE       0     0     0
>               md10    ONLINE       0     0     0
>               md11    ONLINE       0     0     0
>               md12    ONLINE       0     0     0
>           logs
>             md1s1     ONLINE       0     0     0
>           cache
>             md1s2     ONLINE       0     0     0
>
>   errors: No known data errors
>   #
>
> Then exporting it, and removing md8-md12, which results in:
>
>   # zpool import
>      pool: test
>        id: 8932371712846778254
>     state: UNAVAIL
>    status: One or more devices are missing from the system.
>    action: The pool cannot be imported. Attach the missing
>            devices and try again.
>       see: http://illumos.org/msg/ZFS-8000-6X
>    config:
>
>           test        UNAVAIL  missing device
>             raidz1-0  ONLINE
>               md3     ONLINE
>               md4     ONLINE
>               md5     ONLINE
>               md6     ONLINE
>               md7     ONLINE
>           cache
>             md1s2
>           logs
>             md1s1     ONLINE
>
>         Additional devices are known to be part of this pool, though their
>         exact configuration cannot be determined.
>   #
>
> One more data point: In the 'zdb -l' output on the log device it shows
>
>     vdev_children: 2
>
> for the pool consisting of raidz1 + log + cache, but it shows
>
>     vdev_children: 3
>
> for the pool with raidz1 + raidz1 + log + cache.  The pool in the
> problem report also shows 'vdev_children: 3' [2]
>
>
> [1] Trying to add a single device resulted in zpool add complaining
>     with:
>
>       mismatched replication level: pool uses raidz and new vdev is disk
>
>     and trying it with three disks said:
>
>       mismatched replication level: pool uses 5-way raidz and new vdev uses 3-way raidz
>
> [2] http://lists.freebsd.org/pipermail/freebsd-fs/2014-April/019340.html
>

But you can force it....  If you force it, it will add a vdev that is not the
same as the current vdevs, so you will have a raidz1 vdev and a single-disk
vdev with no parity in the same pool.  If you then destroy that single-disk
vdev, you are left with a pool that cannot be repaired, as far as I know.

regards
Johan
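
P.S.  A rough, illustrative sketch of the forced add described above,
continuing Hugo's md-backed test setup (the md13 device name is made up here;
'zpool add -f' is the standard way to override the replication-level check):

   # zpool add test md13        # refused: "mismatched replication level:
                                #  pool uses raidz and new vdev is disk" [1]
   # zpool add -f test md13     # forced: md13 is added as a single-disk,
                                #  non-redundant top-level vdev next to raidz1-0

After that, every top-level vdev (including the lone md13) has to be present
for the pool to import, so losing md13 later leaves the pool in the same
"missing device" state shown above.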