Date:      Tue, 7 Jul 2009 15:32:27 -0700
From:      Freddie Cash <fjwcash@gmail.com>
To:        freebsd-stable@freebsd.org
Subject:   Re: ZFS: drive replacement performance
Message-ID:  <b269bc570907071532ub95af78i6ad3a09e8c6887d7@mail.gmail.com>
In-Reply-To: <20090707222631.GA70750@martini.nu>
References:  <20090707195614.GA24326@martini.nu> <b269bc570907071354r36015689ha362ba83413efc46@mail.gmail.com> <20090707222631.GA70750@martini.nu>

On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith <mahlon@martini.nu> wrote:

> On Tue, Jul 07, 2009, Freddie Cash wrote:
> >
> > This is why we've started using glabel(8) to label our drives, and then
> > add the labels to the pool:
> >   # zpool create store raidz1 label/disk01 label/disk02 label/disk03
> >
> > That way, it doesn't matter where the kernel detects the drives or what
> > the physical device node is called: GEOM picks up the label, and ZFS uses
> > the label.
>
> Ah, slick.  I'll definitely be doing that moving forward.  Wonder if I
> could do it piecemeal now via a shell game, labeling and replacing each
> individual drive?  Will put that on my "try it" list.
>

Yes, this can be done piecemeal, after the fact, on an already configured
pool.  That's how I did it on one of our servers.  It was originally
configured using the device node names (da0, da1, etc.).  Then I set up the
second server, but used labels.  Then I went back to the first server,
labelled the drives, and did "zpool replace storage da0 label/disk01" for
each drive.  Doesn't take long to resilver, as it knows it's the same
device.
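In case it helps, here's a rough sketch of that per-drive loop as a dry run
that just prints the commands before you run them for real.  The pool name
"storage", the da0-da2 device names, and the disk01-disk03 label names are
assumptions; substitute your own:

```shell
#!/bin/sh
# Dry run: print the label/replace sequence for each drive in the pool.
# Pool name, device nodes, and label names below are assumed examples.
POOL=storage
i=1
for dev in da0 da1 da2; do
    label=$(printf 'disk%02d' "$i")            # disk01, disk02, ...
    echo "glabel label $label $dev"            # write GEOM label to the drive
    echo "zpool replace $POOL $dev label/$label"  # swap node name for label
    i=$((i + 1))
done
```

Drop the echo's once the printed sequence looks right; do one drive at a
time and wait for each resilver to finish before moving to the next.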


>
>
> > > Once I swapped drives, I issued a 'zpool replace'.
> > >
> > See comment at the end:  what's the replace command that you used?
>
>
> After the reboot that shuffled device order, the 'da2' changed to that
> ID number.  To have it accept the replace command, I had to use the
> number itself -- I couldn't use 'da2' since that was now elsewhere, in
> use, on the raidz1.  Surprisingly, it worked.  Or at least, it appeared
> to.
>
>    % zpool replace store 2025342973333799752 da8
>

Hmm, you might be able to use glabel here, to label this new drive, and then
do the replace command using the label.

I think (never tried) you can use "zpool scrub -s store" to stop the
resilver.  If not, you should be able to re-do the replace command.
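Untested sketch of what I mean, assuming the drive is da8 and you pick
"disk08" as its label (the GUID is the one from your zpool status output):

```shell
glabel label disk08 da8                # label the replacement drive first
zpool scrub -s store                   # if supported: stop the running resilver/scrub
zpool replace store 2025342973333799752 label/disk08
zpool status store                     # watch the resilver against the label
```

If "zpool scrub -s" doesn't stop it, re-issuing the replace with the label
as the new device may be enough.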


>
>
> > There's something wrong here.  It definitely should be incrementing.
> > Even when we did the foolish thing of creating a 24-drive raidz2 vdev
> > and had to replace a drive, the progress bar did change.  Never got
> > above 39% as it kept restarting, but it did increment.
>
> Strangely, the ETA is jumping all over the place, from 50 hours to 2000+
> hours.  Never seen the percent complete over 0.01% done, but then it
> goes back to 0.00%.
>

Hrm, odd.

-- 
Freddie Cash
fjwcash@gmail.com


