Date:      Thu, 22 Dec 2016 14:11:37 +0500
From:      "Eugene M. Zheganin" <emz@norma.perm.ru>
To:        freebsd-stable <freebsd-stable@freebsd.org>
Subject:   cannot detach vdev from zfs pool
Message-ID:  <585B98C9.4070607@norma.perm.ru>

Hi,

Recently I decided to remove the bogus zfs-inside-geli-inside-zvol pool,
since it's now officially unsupported. So I needed to reslice my disk,
and hence to detach one of the disks from a mirrored pool. I issued
'zpool detach zroot gpt/zroot1' and my system livelocked almost
immediately, so I pressed reset. Now I get this:

# zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 687G in 5h26m with 0 errors on Sat Oct 17 19:41:49 2015
config:

        NAME                     STATE     READ WRITE CKSUM
        zroot                    DEGRADED     0     0     0
          mirror-0               DEGRADED     0     0     0
            gpt/zroot0           ONLINE       0     0     0
            1151243332124505229  OFFLINE      0     0     0  was /dev/gpt/zroot1

errors: No known data errors

This isn't a big deal by itself, since I was able to create a second zfs
pool and I'm now relocating my data to it. Still, I should say this is a
very disturbing sequence of events, because I'm now unable to even
delete the OFFLINE vdev from the pool. I tried to boot from a FreeBSD
USB stick and detach it there, but all I discovered was that the zfs
subsystem locks up on the command 'zpool detach zroot
1151243332124505229'. I waited for several minutes but nothing happened;
furthermore, subsequent zpool/zfs commands hang too.

Is this worth submitting a PR, or does it maybe need additional
investigation first? In general I intend to destroy this pool after
relocating its data, but I'm afraid someone (or even myself again) could
step on this later. Both disks are healthy, and I don't see any
complaints in dmesg. I'm running FreeBSD 11.0-RELEASE-p5 here. The pool
was initially created somewhere under 9.0, I guess.

Thanks.
Eugene.
