Date:      Sat, 11 Jul 2015 15:19:21 +0200
From:      Holm Tiffe <holm@freibergnet.de>
To:        freebsd-stable@freebsd.org
Subject:   ZFS Woes... Guru needed
Message-ID:  <20150711131921.GA34566@beast.freibergnet.de>


Hi Guys,

I have a FreeBSD 9.3-stable system with 3x 68GB IBM disks (Cheetah)
configured with GPT and ZFS as pool zroot. In the last days one of the
disks was dying, and I disconnected the failing drive to connect
another one. The "new" disk is physically identical, but replacing the bad
disk with the new one failed, since the new disk reported only 65 instead
of 68 GByte of free space (different firmware, IBMAS400 in the inquiry
string).
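
For comparison, the sizes and inquiry strings can be checked with
something like this (da2 standing for the replacement disk here):

# diskinfo -v /dev/da2      # mediasize in bytes and in sectors
# camcontrol inquiry da2    # prints the vendor/product/firmware string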

So I looked at the bad disks, found one with defective electronics
and one with defective platters, and swapped the controller boards (all
disks are, in real life, ST373453LC disks, originally with a 534-byte
sector size or so).

I rebooted the system, set up a camcontrol command to change the sector
size to 512 bytes, and began to format the disk with camcontrol format.
After formatting for a while, the system panicked with an Adaptec register
dump and wouldn't boot anymore with the infamous "ZFS: i/o error - all block
copies unavailable"; the kernel booted, but mountroot failed.
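
For the record, the sector size change was roughly the following, a MODE
SELECT(6) writing an 8-byte block descriptor with block length 0x200 = 512
(byte values from memory, so please double-check before reusing them):

# camcontrol cmd da2 -v -c "15 10 0 0 c 0" \
    -o 12 "0 0 0 8 0 0 0 0 0 0 2 0"
# camcontrol format da2 -y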

I found out that I can boot the system when I manually load the zfs.ko
module at the loader prompt.
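
That is, escaping to the loader's "OK" prompt and typing:

OK load zfs
OK boot

After that the kernel finds the root pool and mountroot succeeds.
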
The filesystem is somewhat broken now: /boot/loader.conf isn't readable,
and neither are other files (/etc/rc.conf, for example); /usr/home is empty.

I tried to reformat the disk a second time, but it never finished. I tried
the next disk and this one worked, then tried a replace with zpool (exact
commands below), but I have an unclear state here:

# zpool status -v
  pool: zroot
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 42.4G in 1h25m with 2 errors on Sat Jul 11 12:28:53 2015
config:

        NAME                       STATE     READ WRITE CKSUM
        zroot                      DEGRADED     0     0    86
          raidz1-0                 DEGRADED     0     0   344
            gpt/disk0              ONLINE       0     0     0
            gpt/disk1              ONLINE       0     0     0
            replacing-2            DEGRADED     0     0     0
              6329731321389372496  OFFLINE      0     0     0  was /dev/gpt/disk2/old
              gpt/disk2            ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        zroot:<0x0>
        zroot/var/db:<0x0>
#
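
For completeness, the replace itself was done roughly like this (partition
index 3 being the ZFS partition in my layout, copied over from a good disk):

# gpart backup da0 | gpart restore -F da2
# gpart modify -i 3 -l disk2 da2
# zpool replace zroot 6329731321389372496 gpt/disk2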

I'm unable to remove the disk with that id 6329731321389372496 and don't
know what replacing-2 could mean at all.
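
What I tried, without success, was detaching the old half of that
replacing vdev by its guid:

# zpool detach zroot 6329731321389372496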

I do have a tar backup of my /home from almost a week ago, but in the
meantime I've done some serious work with KiCad (PCBs) that I really don't
want to lose. The backups of the system itself are really old...

How can I repair this damage? Can I get my data back at all?
zpool scrub ran several times, resilvering too. I can't remove that
loader.conf file or the others (cat /etc/rc.conf fails with "unknown
error 122").
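
The scrubs were just the plain commands, repeated:

# zpool scrub zroot
# zpool status -v zroot    # checked again after each scrub finished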

I think I need a ZFS guru here... please help.

Regards,

Holm

 -- 
      Technik Service u. Handel Tiffe, www.tsht.de, Holm Tiffe, 
     Freiberger Straße 42, 09600 Oberschöna, USt-Id: DE253710583
  www.tsht.de, info@tsht.de, Fax +49 3731 74200, Mobil: 0172 8790 741



