From owner-freebsd-questions@FreeBSD.ORG Fri Jan  8 11:10:30 2010
From: krad
To: Wes Morgan
Cc: Steve Bertrand, freebsd-questions@freebsd.org
Date: Fri, 8 Jan 2010 11:10:23 +0000
Subject: Re: Replacing disks in a ZFS pool

> > Also, I've been loosely following some of the GPT threads, and I like
> > the idea of using this type of label instead of the disk names
> > themselves.
>
> I personally haven't run into any bad problems using the full device, but
> I suppose it could be a problem. (Side note - geom should learn how to
> parse zfs labels so it could create something like /dev/zfs/ for
> device nodes instead of using other trickery)
>
> > How should I proceed? I'm assuming something like this:
> >
> > - add the new 1.5TB drives into the existing, running system
> > - GPT label them
> > - use 'zpool replace' to replace one drive at a time, allowing the pool
> >   to rebuild after each drive is replaced
> > - once all four drives are complete, shut down the system, remove the
> >   four original drives, and connect the four new ones where the old ones
> >   were
>
> If you have enough ports to bring all eight drives online at once, I would
> recommend using 'zfs send' rather than the replacement. That way you'll
> get something like a "burn-in" on your new drives, and I believe it will
> probably be faster than the replacement process. Even on an active system,
> you can use a couple of incremental snapshots and reduce the downtime to a
> bare minimum.

Surely it would be better to attach the drives either individually or as a
matching vdev (assuming they can all run at once), then break the mirror
after it's resilvered. Far less work and far less likely to miss something.
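Roughly, and from memory so treat it as a sketch (this only applies if the
pool is built from plain disks or mirrors - you can't 'zpool attach' to a
raidz vdev - and the pool/device names below are just placeholders):

  zpool attach tank gpt/disk0-old gpt/disk0-new   # repeat for each old/new pair
  zpool status tank                               # wait for the resilver to finish
  zpool detach tank gpt/disk0-old                 # then drop the old half of each mirror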
What I have done with my system is label each drive with a coloured sticker and then create a glabel for the device. I then add the glabels to the zpool, which makes it very easy to identify the drives.
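Something along these lines (the names are just examples, and remember that
glabel keeps its metadata in the last sector of the provider, so label the
disk before it goes into the pool):

  glabel label red0 /dev/ada2     # "red0" matches the red sticker on the drive
  glabel label green0 /dev/ada3
  zpool create tank mirror label/red0 label/green0

After that 'zpool status' shows label/red0 and so on, so a failed drive maps
straight back to a sticker.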