Date:      Wed, 23 Apr 2014 07:10:27 -0700
From:      Gena Guchin <ggulchin@icloud.com>
To:        Johan Hendriks <joh.hendriks@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS unable to import pool
Message-ID:  <A178B102-BB8C-4FB8-8960-7822A5042105@icloud.com>
In-Reply-To: <5357ABFB.9060702@gmail.com>
References:  <B493BD80-CDC2-4BA6-AC14-AE16B373A051@icloud.com> <20140423064203.GD2830@sludge.elizium.za.net> <B1024D84-EBBE-4A9B-82C4-5C19B5A66B60@icloud.com> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com> <20140423120042.GK2830@sludge.elizium.za.net> <5357ABFB.9060702@gmail.com>

looks like this is what I did :(

On Apr 23, 2014, at 5:03 AM, Johan Hendriks <joh.hendriks@gmail.com> wrote:

> 
> On 23-04-14 14:00, Hugo Lombard wrote:
>> On Wed, Apr 23, 2014 at 12:18:37PM +0200, Johan Hendriks wrote:
>>> Did you in the past add an extra disk to the pool?
>>> This could explain the whole issue as the pool is missing a whole vdev.
>>> 
>> I agree that there's a vdev missing...
>> 
>> I was able to "simulate" the current problematic import state (sans
>> failed "disk7", since that doesn't seem to be the stumbling block) by
>> adding 5 disks [1] to get to here:
>> 
>>   # zpool status test
>>     pool: test
>>    state: ONLINE
>>     scan: none requested
>>   config:
>>   	  NAME        STATE     READ WRITE CKSUM
>> 	  test        ONLINE       0     0     0
>> 	    raidz1-0  ONLINE       0     0     0
>> 	      md3     ONLINE       0     0     0
>> 	      md4     ONLINE       0     0     0
>> 	      md5     ONLINE       0     0     0
>> 	      md6     ONLINE       0     0     0
>> 	      md7     ONLINE       0     0     0
>> 	    raidz1-2  ONLINE       0     0     0
>> 	      md8     ONLINE       0     0     0
>> 	      md9     ONLINE       0     0     0
>> 	      md10    ONLINE       0     0     0
>> 	      md11    ONLINE       0     0     0
>> 	      md12    ONLINE       0     0     0
>> 	  logs
>> 	    md1s1     ONLINE       0     0     0
>> 	  cache
>> 	    md1s2     ONLINE       0     0     0
>>      errors: No known data errors
>>   #
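>> 
>> (For reference, getting to that state would be something along these
>> lines -- the file-backed md devices, their 128m size, and md1 being an
>> md device already carved into two slices are my assumptions; only the
>> pool layout matters:)
>> 
>>   # for i in $(jot 10 3); do truncate -s 128m /tmp/disk$i && \
>>       mdconfig -a -t vnode -f /tmp/disk$i -u $i; done
>>   # zpool create test raidz1 md3 md4 md5 md6 md7 log md1s1 cache md1s2
>>   # zpool add test raidz1 md8 md9 md10 md11 md12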
>> 
>> Then exporting it, and removing md8-md12, which results in:
>> 
>>   # zpool import
>>      pool: test
>>        id: 8932371712846778254
>>     state: UNAVAIL
>>    status: One or more devices are missing from the system.
>>    action: The pool cannot be imported. Attach the missing
>> 	  devices and try again.
>>      see: http://illumos.org/msg/ZFS-8000-6X
>>    config:
>>   	  test         UNAVAIL  missing device
>> 	    raidz1-0   ONLINE
>> 	      md3      ONLINE
>> 	      md4      ONLINE
>> 	      md5      ONLINE
>> 	      md6      ONLINE
>> 	      md7      ONLINE
>> 	  cache
>> 	    md1s2
>> 	  logs
>> 	    md1s1      ONLINE
>>   	  Additional devices are known to be part of this pool, though their
>> 	  exact configuration cannot be determined.
>>   #
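>> 
>> (i.e. something like:)
>> 
>>   # zpool export test
>>   # for i in $(jot 5 8); do mdconfig -d -u $i; done
>>   # zpool import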
>> 
>> One more data point:  In the 'zdb -l' output on the log device it shows
>> 
>>   vdev_children: 2
>> 
>> for the pool consisting of raidz1 + log + cache, but it shows
>> 
>>   vdev_children: 3
>> 
>> for the pool with raidz1 + raidz1 + log + cache.  The pool in the
>> problem report also shows 'vdev_children: 3' [2]
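>> 
>> (The check itself is just something like this, run against the log
>> device; the value is repeated once per label:)
>> 
>>   # zdb -l /dev/md1s1 | grep vdev_children
>>       vdev_children: 3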
>> 
>> 
>> 
>> [1] Trying to add a single device resulted in zpool add complaining
>> with:
>> 
>>   mismatched replication level: pool uses raidz and new vdev is disk
>> 
>> and trying it with three disks said:
>> 
>>   mismatched replication level: pool uses 5-way raidz and new vdev uses 3-way raidz
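>> 
>> (With a hypothetical extra disk md13, the full message is roughly:)
>> 
>>   # zpool add test md13
>>   invalid vdev specification
>>   use '-f' to override the following errors:
>>   mismatched replication level: pool uses raidz and new vdev is disk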
>> 
>> 
>> [2] http://lists.freebsd.org/pipermail/freebsd-fs/2014-April/019340.html
>> 
> But you can force it....
> If you force it, ZFS adds a vdev that does not match the existing vdev, so you end up with a raidz1 vdev and a single-disk vdev with no parity in the same pool. If that single-disk vdev is then lost or destroyed, you are left with a pool that cannot be repaired, as far as I know.
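> 
> (For example, with Hugo's test pool, forcing in a hypothetical extra
> disk md13 would look roughly like this, and 'zpool status' would then
> show md13 as its own top-level vdev with no parity:)
> 
>   # zpool add -f test md13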
> 
> regards
> Johan
> 
> 
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"



