Date: Wed, 17 Apr 2013 22:15:17 -0700 (PDT)
From: Beeblebrox <zaphod@berentweb.com>
To: freebsd-fs@freebsd.org
Subject: [ZFS] recover destroyed zpool with ZDB
Message-ID: <1366262117117-5804714.post@n5.nabble.com>
In-Reply-To: <CA%2BtpaK1UL4RMKFoJdaPLDnfa14xyPjbafVbD-79dDEFVmxtPMg@mail.gmail.com>
References: <1366221907838-5804517.post@n5.nabble.com>
            <CA%2BtpaK24z2uF7yWp1wmJjYhN4ZFydqRRVSfKNGNQbhmf-P8%2BDw@mail.gmail.com>
            <1366226180639-5804603.post@n5.nabble.com>
            <CA%2BtpaK1UL4RMKFoJdaPLDnfa14xyPjbafVbD-79dDEFVmxtPMg@mail.gmail.com>
Thanks, but that document does not appear very relevant to my situation. Also, the issue is not as straightforward as it seems. The DEFAULTED status of the zpool was a 'false positive', because:

A- The "present pool" did not accept any zpool commands and always gave a message like "no such pool or dataset ... recover the pool from a backup source".

B- The more relevant on-disk metadata showed, and still shows, this:

# zdb -l /dev/ada0p2  => all 4 labels intact, and:
    pool_guid: 12018916494219117471
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 17860002997423999070

while the pool showing up in "zpool list" was/is clearly in a worse state than the above pool:

# zdb -l /dev/ada0  => only label 2 intact, and:
    pool_guid: 16018525702691588432

In my opinion, this problem is closer to a "Resolving a Missing Device" scenario than to data corruption. Unfortunately, the missing-device repair documentation focuses on mirrored setups, and there is no decent document on a missing device in a single-HDD pool.

-----
10-Current-amd64-using ccache-portstree merged with marcuscom.gnome3 & xorg.devel
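[Editor's note: since all four labels on ada0p2 are intact and report a consistent pool_guid, one avenue worth sketching is to point zpool directly at that partition and import the destroyed pool by its GUID. This is a hedged sketch, not a command sequence from the thread: the device path and GUID are taken from the zdb output above, but whether -D/-F recovery succeeds depends on the pool's actual on-disk state, so treat the flags as a starting point.]

```shell
# List pools importable from the partition that carries all four labels.
# -d restricts the search to a specific device or directory;
# -D also shows pools that have been marked destroyed.
zpool import -d /dev/ada0p2 -D

# Attempt to import the destroyed pool by the pool_guid zdb reported.
# Read-only first to avoid further damage; -f forces the import,
# -F tries rewind recovery by discarding the last few transactions.
zpool import -d /dev/ada0p2 -D -f -F -o readonly=on 12018916494219117471
```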