Date:      Fri, 08 Jan 2010 13:04:13 -0500
From:      Steve Bertrand <steve@ibctech.ca>
To:        krad <kraduk@googlemail.com>
Cc:        Wes Morgan <morganw@chemikals.org>, "freebsd-questions@freebsd.org Questions -" <freebsd-questions@freebsd.org>
Subject:   Re: Replacing disks in a ZFS pool
Message-ID:  <4B47739D.1090206@ibctech.ca>
In-Reply-To: <4B4761E6.3000904@ibctech.ca>
References:  <4B451FE9.6040501@ibctech.ca>	<alpine.BSF.2.00.1001062106200.76339@ibyngvyr>	<d36406631001080310p1877ceb4w9753c2e6cac38491@mail.gmail.com> <4B4761E6.3000904@ibctech.ca>

Steve Bertrand wrote:
> krad wrote:
> 
>>>> the idea of using this type of label instead of the disk names
>>> themselves.
>>>
>>> I personally haven't run into any bad problems using the full device, but
>>> I suppose it could be a problem. (Side note - geom should learn how to
>>> parse zfs labels so it could create something like /dev/zfs/<uuid> for
>>> device nodes instead of using other trickery)
>>>
>>>> How should I proceed? I'm assuming something like this:
>>>>
>>>> - add the new 1.5TB drives into the existing, running system
>>>> - GPT label them
>>>> - use 'zpool replace' to replace one drive at a time, allowing the pool
>>>> to rebuild after each drive is replaced
>>>> - once all four drives are complete, shut down the system, remove the
>>>> four original drives, and connect the four new ones where the old ones
>>> were
>>>
>>> If you have enough ports to bring all eight drives online at once, I would
>>> recommend using 'zfs send' rather than the replacement. That way you'll
>>> get something like a "burn-in" on your new drives, and I believe it will
>>> probably be faster than the replacement process. Even on an active system,
>>> you can use a couple of incremental snapshots and reduce the downtime to a
>>> bare minimum.
>>>
>>>
>> Surely it would be better to attach the drives either individually or as a
>> matching vdev (assuming they can all run at once), then break the mirror
>> after it's resilvered. Far less work, and far less likely to miss something.
>>
>> What I have done with my system is label the drives up with a coloured
>> sticker then create a glabel for the device. I then add the glabels to the
>> zpool. Makes it very easy to identify the drives.
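The glabel approach krad describes might look roughly like this (the device
names, label names, and raidz layout here are assumptions for illustration,
not commands from the thread):

```shell
# glabel writes a small label to the disk's last sector; the pool then
# references /dev/label/<name> instead of the bare device node, so the
# pool survives device renumbering.
glabel label disk1 ad4        # repeat for each drive, matching the sticker
glabel label disk2 ad6

# Build the pool on the labels rather than on raw device names:
zpool create storage raidz label/disk1 label/disk2 label/disk3 label/disk4
```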
> 
> Ok. Unfortunately, the box only has four SATA ports.
> 
> Can I:
> 
> - shut down
> - replace a single existing drive with a new one (breaking the RAID)
> - boot back up
> - gpt label the new disk
> - import the new gpt labelled disk
> - rebuild array
> - rinse, repeat three more times
> 

This seems to work ok:

# zpool offline storage ad6
  (halt, swap the physical disk, and start the machine)
# zpool online storage ad6
# zpool replace storage ad6
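One thing worth adding to the sequence above: each `zpool replace` kicks off
a resilver, and the next disk shouldn't be pulled until it completes. A
sketch of the repeat cycle (the device name ad8 is an assumption; only ad6
appears in the thread):

```shell
# Watch the resilver started by `zpool replace`; wait for it to report
# completion before touching the next drive.
zpool status -v storage

# Then repeat the same offline/swap/online/replace cycle per disk:
# zpool offline storage ad8
#   (halt, swap the physical disk, boot)
# zpool online storage ad8
# zpool replace storage ad8
```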

I don't know enough about gpt/gpart to be able to work that into the
mix. I would much prefer to have gpt labels as opposed to disk names,
but alas.

FWIW, can I label an entire disk (such as ad6) with gpt, without having
to install boot blocks etc.?

I was hoping it would be as easy as:

# gpt create -f ad6
# gpt label -l disk1 ad6

...but it doesn't work.

Neither does:

# gpart create -s gpt ad6
# gpart add -t freebsd-zfs -l disk1 ad6

I'd like to do this so I don't have to manually specify a size to use. I
just want the system to Do The Right Thing, which in this case, would be
to just use the entire disk.
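For what it's worth, on gpart versions that insist on an explicit offset and
size for `gpart add`, the free range can be read from `gpart show` and passed
by hand; GPT boot blocks are only needed if the system boots from that disk,
not for labelling alone. A sketch (the block numbers are illustrative, not
from this machine):

```shell
# Create the GPT and inspect the free space on the disk:
gpart create -s gpt ad6
gpart show ad6                 # note the start block and length of the free range

# Hand that range to `gpart add`; -l attaches the GPT label:
gpart add -b 34 -s 2930277101 -t freebsd-zfs -l disk1 ad6

# The labelled partition then appears as /dev/gpt/disk1 for `zpool replace`.
```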

Steve




> If so, is there anything I should do prior to the initial drive
> replacement, or will simulating the drive failure be ok?
> 
> Steve
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"



