From: Freddie Cash <fjwcash@gmail.com>
To: freebsd-stable@freebsd.org
Date: Tue, 7 Jul 2009 15:32:27 -0700
Subject: Re: ZFS: drive replacement performance
In-Reply-To: <20090707222631.GA70750@martini.nu>
References: <20090707195614.GA24326@martini.nu> <20090707222631.GA70750@martini.nu>

On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith wrote:

> On Tue, Jul 07, 2009, Freddie Cash wrote:
> >
> > This is why we've started using glabel(8) to label our drives, and
> > then add the labels to the pool:
> >
> >   # zpool create store raidz1 label/disk01 label/disk02 label/disk03
> >
> > That way, it doesn't matter where the kernel detects the drives or
> > what the physical device node is called: GEOM picks up the label, and
> > ZFS uses the label.
>
> Ah, slick.  I'll definitely be doing that moving forward.  Wonder if I
> could do it piecemeal now via a shell game, labeling and replacing each
> individual drive?  Will put that on my "try it" list.

Yes, this can be done piecemeal, after the fact, on an already-configured
pool.  That's how I did it on one of our servers.  It was originally
configured using the device node names (da0, da1, etc.).  Then I set up
the second server, but used labels.  Then I went back to the first
server, labelled the drives, and did "zpool replace storage da0
label/disk01" for each drive.  The resilver doesn't take long, as ZFS
knows it's the same device.
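
For the archives, the per-drive shuffle looks roughly like this (just a
sketch, using the same "storage" pool and disk names as above; note that
glabel(8) stores its metadata in the last sector of the provider, so
/dev/label/disk01 comes out one sector smaller than da0):

  # glabel label disk01 da0
  # zpool replace storage da0 label/disk01
  # zpool status storage

Do one drive at a time, watching "zpool status" and letting each resilver
finish before starting the next.  Since the label is the same physical
disk, the resilver goes quickly.
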
> > > Once I swapped drives, I issued a 'zpool replace'.
> >
> > See comment at the end: what's the replace command that you used?
>
> After the reboot that shuffled device order, the 'da2' changed to that
> ID number.  To have it accept the replace command, I had to use the
> number itself -- I couldn't use 'da2' since that was now elsewhere, in
> use, on the raidz1.  Surprisingly, it worked.  Or at least, it appeared
> to.
>
>   % zpool replace store 2025342973333799752 da8

Hmm, you might be able to use glabel here: label this new drive, and
then do the replace command using the label.

I think (never tried) you can use "zpool scrub -s store" to stop the
resilver.  If not, you should be able to re-do the replace command.

> > There's something wrong here.  It definitely should be incrementing.
> > Even when we did the foolish thing of creating a 24-drive raidz2 vdev
> > and had to replace a drive, the progress bar did change.  Never got
> > above 39% as it kept restarting, but it did increment.
>
> Strangely, the ETA is jumping all over the place, from 50 hours to
> 2000+ hours.  I've never seen the percent complete go above 0.01%, and
> then it drops back to 0.00%.

Hrm, odd.

-- 
Freddie Cash
fjwcash@gmail.com