Date:      Wed, 15 Jun 2011 07:00:03 -0600
From:      "Justin T. Gibbs" <gibbs@scsiguy.com>
To:        Pawel Jakub Dawidek <pjd@FreeBSD.org>
Cc:        fs@FreeBSD.org
Subject:   Re: [CFR][ZFS] Add "zpool labelclear" command.
Message-ID:  <4DF8ACD3.1070202@scsiguy.com>
In-Reply-To: <20110615120524.GI1975@garage.freebsd.pl>
References:  <4DF7CDD0.8040108@scsiguy.com> <20110615120524.GI1975@garage.freebsd.pl>

On 6/15/11 6:05 AM, Pawel Jakub Dawidek wrote:
>  On Tue, Jun 14, 2011 at 03:08:32PM -0600, Justin T. Gibbs wrote:
> > ZFS rightfully has a lot of safety belts in place to ward off unintended
> > data loss. But in some scenarios, the safety belts are so restrictive,
> > the only way to proceed is to wipe the label information off of a drive.
> >
> > Here's an example:
> >
> > Pull a drive that is active in a pool on one system and stick it into
> > another system. ZFS will correctly reject this drive as a member of
> > a new pool or as the argument of a replace command. But if you really
> > want to use that drive, how do you clear its "potentially active"
> > status? If the pool were imported, you could destroy it, but ZFS won't
> > allow you to import a pool unless there are sufficient members for it
> > to serve I/O (I know about the undocumented -F option for import,
> > but users aren't going to find that). You can use dd to wipe the label
> > data off, but where exactly does ZFS keep its four copies of the label?
>
>  In most cases like that you can use the -f switch, e.g. you can create
>  a pool or replace a vdev using one that is active when you use that switch.
>  I'm sure you are aware of this, so I guess it doesn't always work for you?

Most of my testing has been on v15, so perhaps the situation is
better on v28?  On v15, "replace -f" certainly didn't work.  Even
if "replace -f" does work in v28 (or is made to work), what would
be the correct way to just delete the label off of such a drive in
the current zpool command set?  At Spectra Logic, we've found it
very useful in our drive fault testing to be able to easily restore
a drive to an unlabeled state in order to verify that ZFSD does the
right thing with both labeled and unlabeled drives.
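
For the record, the by-hand fallback amounts to zeroing the label areas
with dd.  A rough sketch (the device name is just a placeholder, and the
offsets assume the usual four 256 KiB labels, two at each end of the
vdev; diskinfo(8) is FreeBSD-specific):

    # Placeholder device; substitute the drive you actually mean to wipe.
    disk=/dev/da1
    # Media size in bytes (third field of diskinfo output on FreeBSD).
    mediasize=$(diskinfo ${disk} | awk '{print $3}')
    # Front labels (L0, L1): first 512 KiB of the device.
    dd if=/dev/zero of=${disk} bs=256k count=2
    # Back labels (L2, L3): last 512 KiB; assumes the device size is a
    # multiple of 256 KiB, which is close enough for a by-hand wipe.
    dd if=/dev/zero of=${disk} bs=256k count=2 \
        seek=$(( mediasize / 262144 - 2 ))

That's exactly the kind of error-prone incantation "zpool labelclear"
is meant to replace.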

If our use case is considered rare, I don't need to push this change
back into FreeBSD.  However, a quick search indicates that at least some
Solaris users have desired a similar command:

http://opensolaris.org/jive/thread.jspa?messageID=462337
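
For clarity, the proposed subcommand just takes a vdev and an optional
force flag; the invocation would look roughly like this (exact syntax
is of course open to review):

    # proposed usage
    zpool labelclear [-f] <vdev>
    # e.g. force-clear the labels on a drive pulled from another pool
    zpool labelclear -f da1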

--
Justin



