Date:      Thu, 26 Nov 2009 11:30:06 GMT
From:      kot@softlynx.ru
To:        freebsd-fs@FreeBSD.org
Subject:   Re: kern/140888: [zfs] boot fail from zfs root while the pool  resilvering
Message-ID:  <200911261130.nAQBU65B077048@freefall.freebsd.org>

The following reply was made to PR kern/140888; it has been noted by GNATS.

From: kot@softlynx.ru
To: bug-followup@FreeBSD.org
Cc:  
Subject: Re: kern/140888: [zfs] boot fail from zfs root while the pool 
     resilvering
Date: Thu, 26 Nov 2009 14:02:12 +0300 (MSK)

 I found that booting keeps failing if the pool has at least one device
 that is not ONLINE and the pool state is DEGRADED.
 
 For instance
 
 [root@livecd8:/]# zpool status
   pool: tank0
  state: DEGRADED
  scrub: none requested
 config:
 
         NAME                        STATE     READ WRITE CKSUM
         tank0                       DEGRADED     0     0     0
           raidz1                    DEGRADED     0     0     0
             replacing               DEGRADED     0     0     0
               12996219703647995136  UNAVAIL      0   298     0  was /dev/gpt/QM00002
               gpt/SN023432          ONLINE       0     0     0
             gpt/SN091234            ONLINE       0     0     0
 
 errors: No known data errors
 
 The pool is still considered degraded even though gpt/QM00002 has been replaced with the new gpt/SN023432.
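 For reference, the in-progress replacement shown in the status output above
 would have been started with something like the following; the exact command
 is not recorded in this report, so treat it as a sketch using the device
 names from the output:

```shell
# Replace the failed device with the new one; ZFS creates the
# "replacing" vdev seen in the status output and resilvers the
# data onto gpt/SN023432.
zpool replace tank0 gpt/QM00002 gpt/SN023432

# Watch resilver progress; the old device remains attached (and the
# pool remains DEGRADED) until the resilver finishes or the stale
# component is detached.
zpool status tank0
```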
 
 Detaching the UNAVAIL component turns the pool back to the ONLINE state.
 
  [root@livecd8:/]# zpool detach tank0 12996219703647995136
  [root@livecd8:/]# zpool status
    pool: tank0
   state: ONLINE
   scrub: none requested
  config:
 
          NAME              STATE     READ WRITE CKSUM
          tank0             ONLINE       0     0     0
            raidz1          ONLINE       0     0     0
              gpt/SN023432  ONLINE       0     0     0
              gpt/SN091234  ONLINE       0     0     0
 
  errors: No known data errors
 
 In this case, booting from tank0 works.
 
 Booting also works fine when a component has been manually turned to the
 OFFLINE state, in any combination, for instance:
 
 [root@fresh-inst:~]# zpool status
   pool: tank0
  state: DEGRADED
 status: One or more devices has experienced an unrecoverable error.  An
         attempt was made to correct the error.  Applications are unaffected.
 action: Determine if the device needs to be replaced, and clear the errors
         using 'zpool clear' or replace the device with 'zpool replace'.
    see: http://www.sun.com/msg/ZFS-8000-9P
  scrub: none requested
 config:
 
         NAME              STATE     READ WRITE CKSUM
         tank0             DEGRADED     0     0     0
           raidz1          DEGRADED     0     0     0
             gpt/SN023432  ONLINE       0     0     0
             gpt/SN091234  OFFLINE      0   921     0
 
 errors: No known data errors
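 The OFFLINE state above was produced by hand; a minimal sketch of the
 commands involved, assuming the same pool and device names as in the
 status output:

```shell
# Take the device offline; the pool drops to DEGRADED but keeps serving I/O.
zpool offline tank0 gpt/SN091234

# Bring it back later; ZFS resilvers whatever changed while it was out.
zpool online tank0 gpt/SN091234
```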
 
 


