Date:      Sat, 31 Dec 2011 13:40:59 -0800
From:      Drew Tomlinson <drew@mykitchentable.net>
To:        freebsd-questions@freebsd.org
Subject:   Help Recovering FBSD 8 ZFS System
Message-ID:  <4EFF816B.60705@mykitchentable.net>

I have a FBSD 8 system that won't boot.  Console messages suggest I lost
my ad6 drive.

When I created the system, I built root on a ZFS mirror and a raidz1 pool
for data.  I've managed to boot into an 8.2 Fixit shell and have been able
to mount root on /mnt/root.  However, my raidz1 pool (data) can't be
imported.  I'm *really* hoping there is some way to fix this.
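
(For reference, getting ZFS going under Fixit took roughly the steps
below.  The module paths are from memory, and the livefs mount point may
be /dist rather than /mnt2 depending on how the Fixit shell was started.)

# load the ZFS modules from the livefs media
kldload /mnt2/boot/kernel/opensolaris.ko
kldload /mnt2/boot/kernel/zfs.ko
# import the root pool under an alternate root so it mounts at /mnt/root
mkdir -p /mnt/root
zpool import -f -R /mnt/root root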

From the fixit shell, here is what I get when I run 'zpool import':

# zpool import
   pool: data
     id: 10923029512613632741
  state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
    see: http://www.sun.com/msg/ZFS-8000-EY
config:

         data                                            UNAVAIL  insufficient replicas
           raidz1                                        UNAVAIL  insufficient replicas
             gptid/1be11998-6c47-11de-ae82-001b21361de7  ONLINE
             ad6p4                                       UNAVAIL  cannot open
             gptid/fc514567-6c46-11de-ae82-001b21361de7  ONLINE
             gptid/0f45dd02-6c47-11de-ae82-001b21361de7  UNAVAIL  cannot open

However, I'm not exactly sure what this is telling me other than
something "bad".  If I understand raidz1 correctly, it can survive the
loss of only one member, and here two of the four members (ad6p4 and one
of the gptid devices) are UNAVAIL, which I take to be what "insufficient
replicas" means.  I did capture this information when I upgraded to
zpool version 15:

# 11/26/2010 - DLT
# http://www.freebsddiary.org/zfs-upgrade.php
#
# After upgrading to zpool v15 and learning about upgrading zfs file systems,
# I decided to take a snapshot of disks.  Interestingly, the zpool history
# output shows different physical disks than what are currently shown.
# I don't know why.

ad6: 715404MB <Seagate ST3750330AS SD15> at ata3-master UDMA100 SATA 3Gb/s
ad10: 476940MB <Hitachi HDP725050GLA360 GM4OA52A> at ata5-master UDMA100 SATA 3Gb/s
acd0: DVDR <LITE-ON DVDRW LH-20A1L/BL05> at ata6-master UDMA100 SATA 1.5Gb/s
ad14: 476940MB <Hitachi HDP725050GLA360 GM4OA52A> at ata7-master UDMA100 SATA 3Gb/s
ad16: 476940MB <Hitachi HDP725050GLA360 GM4OA52A> at ata8-master UDMA100 SATA 3Gb/s

vm# zpool history data
History for 'data':
2009-07-12.08:56:22 zpool create data raidz1 ad8p4 ad6p4 ad12p2 ad14p2
2009-07-12.08:57:37 zfs create data/usr
2009-07-12.08:57:41 zfs create data/var
2009-07-12.08:57:59 zfs create data/home
2009-07-12.09:00:46 zfs set mountpoint=none data
2009-07-12.09:19:27 zfs set mountpoint=/usr data/usr
2009-07-12.09:19:37 zfs set mountpoint=/home data/home
2009-07-12.09:19:46 zfs set mountpoint=/var data/var
2009-07-12.10:00:29 zpool import -f data
2009-07-12.10:19:52 zpool import -f data
2009-07-12.10:35:33 zpool import -f data
2009-07-12.11:34:30 zpool import -f data
2009-07-15.13:02:10 zpool import -f data
2009-07-15.13:24:10 zpool import -f data
2009-07-17.12:48:39 zpool import -f data
2009-07-17.12:49:49 zpool upgrade data
2009-07-17.13:17:44 zpool import -f data
2009-07-17.13:48:19 zpool export data
2009-07-17.13:48:24 zpool import data
2009-07-22.16:05:37 zfs create data/archive
2009-07-22.16:07:15 zfs set mountpoint=/archive data/archive
2009-07-24.23:18:18 zfs create -V 9G data/test
2009-07-24.23:35:54 zfs destroy data/test
2010-06-05.16:39:26 zpool upgrade data
2010-07-05.06:20:09 zpool import -f data
2010-07-05.06:27:09 zfs set mountpoint=/mnt data
2010-11-28.07:39:01 zpool upgrade -a
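
I wonder whether reading the on-disk vdev labels would explain the
naming differences.  Assuming zdb is usable from the Fixit shell,
perhaps something like this against one of the members that still shows
ONLINE:

# dump the ZFS vdev labels; the 'path' and 'guid' fields show what the
# pool last recorded for each member
zdb -l /dev/gptid/1be11998-6c47-11de-ae82-001b21361de7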

From dmesg output, these are the drives as seen after booting to Fixit:

ad6: 715404MB <Seagate ST3750330AS SD15> at ata3-master UDMA100 SATA 3Gb/s
ad10: 476940MB <Hitachi HDP725050GLA360 GM4OA52A> at ata5-master UDMA100 SATA 3Gb/s
ad14: 476940MB <Hitachi HDP725050GLA360 GM4OA52A> at ata7-master UDMA100 SATA 3Gb/s

Thus it appears I am missing the ad16 drive that I used to have.  My data
zpool held the bulk of my system, over 600 GB of files and things I'd
like to have back.  I thought that by creating a raidz1 I could avoid
having to back up such a large pool and avoid this grief.  However, it
appears I have lost 2 disks at the same time.  :(
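
Before giving up, I plan to at least confirm what the Fixit kernel can
actually see, and to retry the import by pool id against the gptid label
directory.  A rough sketch (assuming these tools are all present in the
Fixit environment):

# list partition tables and active GEOM labels
gpart show
glabel status
ls /dev/gptid
# retry the import, searching only /dev/gptid and using the pool id
# reported by 'zpool import' above
zpool import -d /dev/gptid -f 10923029512613632741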

Any thoughts before I just give up on recovering my data pool?

And regarding my root pool, my system can't mount root and finish
booting.  What do I need to do to boot from my degraded root pool?
Here's the current status:

# zpool status
   pool: root
  state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
         the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
    see: http://www.sun.com/msg/ZFS-8000-2Q
  scrub: none requested
config:

         NAME                                            STATE     READ WRITE CKSUM
         root                                            DEGRADED     0     0     0
           mirror                                        DEGRADED     0     0     0
             gptid/5b623854-6c46-11de-ae82-001b21361de7  ONLINE       0     0     0
             12032653780322685599                        UNAVAIL      0     0     0  was /dev/ad6p3

Do I just need to do a 'zpool detach root /dev/ad6p3' to remove it from
the pool and get it to boot?  And then, once I replace the disk, a
'zpool attach root <new partition>' to fix it?
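
In other words, something like the following.  (Since the /dev/ad6p3
node no longer exists, I assume I need the numeric guid from the status
output for the detach; the new partition name in the attach is just a
placeholder for whatever the replacement ends up being.)

# drop the dead half of the mirror, referring to it by guid
zpool detach root 12032653780322685599
# later, after installing the replacement disk and partitioning it:
zpool attach root gptid/5b623854-6c46-11de-ae82-001b21361de7 /dev/ad6p3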

Thanks for your time.

Cheers,

Drew

-- 
Like card tricks?

Visit The Alchemist's Warehouse to
learn card magic secrets for free!

http://alchemistswarehouse.com