From: Steve Bertrand <steve@ibctech.ca>
Date: Fri, 08 Jan 2010 11:48:38 -0500
To: krad
Cc: Wes Morgan, freebsd-questions@freebsd.org
Subject: Re: Replacing disks in a ZFS pool

krad wrote:
>>> the idea of using this type of label instead of the disk names
>>> themselves.
>>
>> I personally haven't run into any bad problems using the full device, but
>> I suppose it could be a problem. (Side note - geom should learn how to
>> parse zfs labels so it could create something like /dev/zfs/ for
>> device nodes instead of using other trickery)
>>
>>> How should I proceed? I'm assuming something like this:
>>>
>>> - add the new 1.5TB drives into the existing, running system
>>> - GPT label them
>>> - use 'zpool replace' to replace one drive at a time, allowing the pool
>>>   to rebuild after each drive is replaced
>>> - once all four drives are complete, shut down the system, remove the
>>>   four original drives, and connect the four new ones where the old
>>>   ones were
>>
>> If you have enough ports to bring all eight drives online at once, I would
>> recommend using 'zfs send' rather than the replacement. That way you'll
>> get something like a "burn-in" on your new drives, and I believe it will
>> probably be faster than the replacement process. Even on an active system,
>> you can use a couple of incremental snapshots and reduce the downtime to a
>> bare minimum.
>
> Surely it would be better to attach the drives either individually or as a
> matching vdev (assuming they can all run at once), then break the mirror
> after it's resilvered. Far less work, and far less likely to miss
> something.
>
> What I have done with my system is label the drives with a coloured
> sticker, then create a glabel for each device. I then add the glabels to
> the zpool. It makes it very easy to identify the drives.

OK. Unfortunately, the box only has four SATA ports.

Can I:

- shut down
- replace a single existing drive with a new one (breaking the RAID)
- boot back up
- GPT label the new disk
- add the new GPT-labelled disk back into the pool
- rebuild the array
- rinse and repeat, three more times

If so, is there anything I should do prior to the initial drive
replacement, or will simulating a drive failure be OK?

Steve
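
For reference, the 'zfs send' route Wes describes might look roughly
like this. It is an untested sketch; the pool names 'tank' and
'newtank' and the snapshot names are placeholders, not anything from
the thread:

    # take a recursive snapshot and copy everything to the new pool
    zfs snapshot -r tank@migrate1
    zfs send -R tank@migrate1 | zfs receive -F newtank

    # later, after quiescing writes, a small incremental snapshot
    # catches the new pool up, keeping the final downtime minimal
    zfs snapshot -r tank@migrate2
    zfs send -R -i tank@migrate1 tank@migrate2 | zfs receive -F newtank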
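
krad's attach-and-detach approach, combined with his glabels, might
look something like the following sketch (the pool name 'tank', the
devices ad0/ad4, and the label name 'red0' are all assumptions):

    # give the new disk a human-readable GEOM label; it shows up
    # under /dev/label/
    glabel label red0 /dev/ad4

    # mirror the new disk against an existing pool member
    zpool attach tank ad0 label/red0

    # watch the resilver; once it completes, drop the old disk
    zpool status tank
    zpool detach tank ad0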
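
And the one-disk-at-a-time swap Steve is proposing might, per drive,
go roughly like so (again untested; 'tank', ad0, and the GPT label
'disk01' are made-up names):

    # after shutting down, swapping the drive, and booting back up:
    gpart create -s gpt ad0
    gpart add -t freebsd-zfs -l disk01 ad0  # appears as /dev/gpt/disk01

    # resilver the pool onto the freshly labelled disk
    zpool replace tank ad0 gpt/disk01

    # wait for 'zpool status' to show the resilver has finished
    # before repeating with the next drive
    zpool status tank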