Date:      Wed, 23 Apr 2014 07:09:35 -0700
From:      Gena Guchin <ggulchin@icloud.com>
To:        Johan Hendriks <joh.hendriks@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS unable to import pool
Message-ID:  <72E79259-3DB1-48B7-8E5E-19CC2145A464@icloud.com>
In-Reply-To: <5357937D.4080302@gmail.com>
References:  <B493BD80-CDC2-4BA6-AC14-AE16B373A051@icloud.com> <20140423064203.GD2830@sludge.elizium.za.net> <B1024D84-EBBE-4A9B-82C4-5C19B5A66B60@icloud.com> <20140423080056.GE2830@sludge.elizium.za.net> <20140423091852.GH2830@sludge.elizium.za.net> <20140423100126.GJ2830@sludge.elizium.za.net> <5357937D.4080302@gmail.com>

Johan,

Looking through the history, I DID add that disk ada7 (!) to the pool, but I
added it as a separate disk. I wanted to re-add the disk to the storage pool,
but it was added as a new disk…
This does help a little.


Is there anything I can do now?
Can I remove that vdev?
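
In case it is useful, this is a rough sketch of what I am thinking of checking
next ('storage' is just a placeholder for my pool name, and I have not run any
of this yet):

   # zdb -l /dev/ada7                      # dump the on-disk labels: does ada7
                                           # claim to be its own top-level vdev?
   # zpool import -o readonly=on storage   # if it imports at all, try read-only first
   # zpool remove storage ada7             # I expect this to be refused, since
                                           # zpool remove only handles spares,
                                           # cache and log devices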


thanks!
On Apr 23, 2014, at 3:18 AM, Johan Hendriks <joh.hendriks@gmail.com> wrote:

>
> On 23-04-14 12:01, Hugo Lombard wrote:
>> Hello
>>
>> In your original 'zpool import' output, it shows the following:
>>
>>        Additional devices are known to be part of this pool, though their
>>        exact configuration cannot be determined.
>>
>> I'm thinking your problem might be related to devices that are supposed
>> to be part of the pool but are not shown in the import.
>>
>> For instance, here's my attempt at recreating your scenario:
>>
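>> (The throw-away pool used below was built from md(4) memory disks, roughly
>> like this; the exact sizes and slicing here are guesses:)
>>
>>   # for u in 1 3 4 5 6 7; do mdconfig -a -t swap -s 256m -u $u; done
>>   # gpart create -s mbr md1
>>   # gpart add -t freebsd -s 100m md1   # md1s1, later used as log
>>   # gpart add -t freebsd md1           # md1s2, later used as cache
>>   # zpool create t raidz1 md3 md4 md5 md6 md7 log md1s1 cache md1s2
>>   # zpool export t
>>   # mdconfig -d -u 7                   # detach md7 to simulate the missing member
>>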
>>   # zpool import
>>      pool: t
>>        id: 15230454775812525624
>>     state: DEGRADED
>>    status: One or more devices are missing from the system.
>>    action: The pool can be imported despite missing or damaged devices.  The
>> 	  fault tolerance of the pool may be compromised if imported.
>>      see: http://illumos.org/msg/ZFS-8000-2Q
>>    config:
>>   	  t                        DEGRADED
>> 	    raidz1-0               DEGRADED
>> 	      md3                  ONLINE
>> 	      md4                  ONLINE
>> 	      md5                  ONLINE
>> 	      md6                  ONLINE
>> 	      3421664295019948379  UNAVAIL  cannot open
>> 	  cache
>> 	    md1s2
>> 	  logs
>> 	    md1s1                  ONLINE
>>   #
>>
>> As you can see, the pool status is 'DEGRADED' instead of 'UNAVAIL', and
>> I don't have the 'Additional devices...' message.
>>
>> The pool imports OK:
>>=20
>>   # zpool import t
>>   # zpool status t
>>     pool: t
>>    state: DEGRADED
>>   status: One or more devices could not be opened.  Sufficient replicas exist for
>> 	  the pool to continue functioning in a degraded state.
>>   action: Attach the missing device and online it using 'zpool online'.
>>      see: http://illumos.org/msg/ZFS-8000-2Q
>>     scan: none requested
>>   config:
>>   	  NAME                     STATE     READ WRITE CKSUM
>> 	  t                        DEGRADED     0     0     0
>> 	    raidz1-0               DEGRADED     0     0     0
>> 	      md3                  ONLINE       0     0     0
>> 	      md4                  ONLINE       0     0     0
>> 	      md5                  ONLINE       0     0     0
>> 	      md6                  ONLINE       0     0     0
>> 	      3421664295019948379  UNAVAIL      0     0     0  was /dev/md7
>> 	  logs
>> 	    md1s1                  ONLINE       0     0     0
>> 	  cache
>> 	    md1s2                  ONLINE       0     0     0
>>      errors: No known data errors
>>   #
>>
>> As a further test, let's see what happens when the cache disk
>> disappears:
>>=20
>>   # zpool export t
>>   # gpart delete -i 2 md1
>>   md1s2 deleted
>>   # zpool import
>>      pool: t
>>        id: 15230454775812525624
>>     state: DEGRADED
>>    status: One or more devices are missing from the system.
>>    action: The pool can be imported despite missing or damaged devices.  The
>> 	  fault tolerance of the pool may be compromised if imported.
>>      see: http://illumos.org/msg/ZFS-8000-2Q
>>    config:
>>   	  t                        DEGRADED
>> 	    raidz1-0               DEGRADED
>> 	      md3                  ONLINE
>> 	      md4                  ONLINE
>> 	      md5                  ONLINE
>> 	      md6                  ONLINE
>> 	      3421664295019948379  UNAVAIL  cannot open
>> 	  cache
>> 	    7736388725784014558
>> 	  logs
>> 	    md1s1                  ONLINE
>>   # zpool import t
>>   # zpool status t
>>     pool: t
>>    state: DEGRADED
>>   status: One or more devices could not be opened.  Sufficient replicas exist for
>> 	  the pool to continue functioning in a degraded state.
>>   action: Attach the missing device and online it using 'zpool online'.
>>      see: http://illumos.org/msg/ZFS-8000-2Q
>>     scan: none requested
>>   config:
>>   	  NAME                     STATE     READ WRITE CKSUM
>> 	  t                        DEGRADED     0     0     0
>> 	    raidz1-0               DEGRADED     0     0     0
>> 	      md3                  ONLINE       0     0     0
>> 	      md4                  ONLINE       0     0     0
>> 	      md5                  ONLINE       0     0     0
>> 	      md6                  ONLINE       0     0     0
>> 	      3421664295019948379  UNAVAIL      0     0     0  was /dev/md7
>> 	  logs
>> 	    md1s1                  ONLINE       0     0     0
>> 	  cache
>> 	    7736388725784014558    UNAVAIL      0     0     0  was /dev/md1s2
>>      errors: No known data errors
>>   #
>>
>> So even with a missing raidz component and a missing cache device, the
>> pool still imports.
>>
>> I think some crucial piece of information is missing to complete the
>> picture.
>>
> Did you add an extra disk to the pool at some point in the past?
> That could explain the whole issue, as the pool is missing a whole vdev.
>
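> For example, the difference is roughly this (a sketch against a throw-away
> pool called 'tank', not your real pool):
>
>   # zpool replace tank ada7   # puts ada7 back into the raidz vdev it belonged to
>   # zpool add -f tank ada7    # makes ada7 a brand new single-disk top-level vdev
>
> A pool that is missing a whole top-level data vdev cannot be imported at all.
>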
> regards
> Johan
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"



