Date:      Fri, 08 Jan 2010 11:48:38 -0500
From:      Steve Bertrand <steve@ibctech.ca>
To:        krad <kraduk@googlemail.com>
Cc:        Wes Morgan <morganw@chemikals.org>, "freebsd-questions@freebsd.org Questions -" <freebsd-questions@freebsd.org>
Subject:   Re: Replacing disks in a ZFS pool
Message-ID:  <4B4761E6.3000904@ibctech.ca>
In-Reply-To: <d36406631001080310p1877ceb4w9753c2e6cac38491@mail.gmail.com>
References:  <4B451FE9.6040501@ibctech.ca>	<alpine.BSF.2.00.1001062106200.76339@ibyngvyr> <d36406631001080310p1877ceb4w9753c2e6cac38491@mail.gmail.com>

krad wrote:

>>> the idea of using this type of label instead of the disk names
>>> themselves.
>>
>> I personally haven't run into any bad problems using the full device, but
>> I suppose it could be a problem. (Side note - geom should learn how to
>> parse zfs labels so it could create something like /dev/zfs/<uuid> for
>> device nodes instead of using other trickery)
>>
>>> How should I proceed? I'm assuming something like this:
>>>
>>> - add the new 1.5TB drives into the existing, running system
>>> - GPT label them
>>> - use 'zpool replace' to replace one drive at a time, allowing the pool
>>> to rebuild after each drive is replaced
>>> - once all four drives are complete, shut down the system, remove the
>>> four original drives, and connect the four new ones where the old ones
>>> were
>>
>> If you have enough ports to bring all eight drives online at once, I would
>> recommend using 'zfs send' rather than the replacement. That way you'll
>> get something like a "burn-in" on your new drives, and I believe it will
>> probably be faster than the replacement process. Even on an active system,
>> you can use a couple of incremental snapshots and reduce the downtime to a
>> bare minimum.
>>
>>
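
If I end up going the send/receive route described above, I'm guessing it
looks roughly like this; the pool names 'tank'/'newtank' and the snapshot
names are just placeholders:

    zfs snapshot -r tank@migrate1
    zfs send -R tank@migrate1 | zfs receive -F newtank

then, once writes are quiesced, ship only the delta before the final cutover:

    zfs snapshot -r tank@migrate2
    zfs send -R -I @migrate1 tank@migrate2 | zfs receive -F newtank
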
> Surely it would be better to attach the drives either individually or as a
> matching vdev (assuming they can all run at once), then break the mirror
> after it's resilvered. Far less work and far less likely to miss something.
> 
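
If I follow the attach/detach idea, I take it to mean something like the
following for each disk, though as far as I know it only works if the vdevs
are mirrors rather than raidz; device names here are placeholders:

    zpool attach tank ad4 ad8    (mirror the old disk onto the new one)
    zpool status tank            (wait for the resilver to complete)
    zpool detach tank ad4        (drop the old disk out of the mirror)
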
> What I have done with my system is label the drives with a coloured
> sticker, then create a glabel for each device. I then add the glabels to the
> zpool. Makes it very easy to identify the drives.
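
For the glabel approach, I take it you mean something along these lines, with
the colour names, device names and raidz layout only as examples:

    glabel label red0 da1        (disk then shows up as /dev/label/red0)
    glabel label blue0 da2       (...and so on for each disk)
    zpool create tank raidz label/red0 label/blue0 label/green0 label/yellow0

so the pool members match the stickers rather than the raw device nodes.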

Ok. Unfortunately, the box only has four SATA ports.

Can I:

- shut down
- replace a single existing drive with a new one (leaving the pool degraded)
- boot back up
- GPT label the new disk
- 'zpool replace' the missing drive with the new, GPT-labelled disk (rough
  commands sketched below)
- let the pool resilver
- rinse, repeat three more times
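
Per disk, I'm picturing roughly the following; the pool name 'tank', device
'ada0' and label 'disk0' are placeholders, and the exact gpart flags may vary
by release:

    gpart create -s gpt ada0
    gpart add -t freebsd-zfs -l disk0 ada0    (label shows up as /dev/gpt/disk0)
    zpool replace tank <old/missing device> gpt/disk0
    zpool status tank                         (wait for the resilver to finish)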

If so, is there anything I should do prior to the initial drive
replacement, or will simulating the drive failure be ok?

Steve


