Date:      Sat, 1 Aug 2009 10:50:16 +0200
From:      Marius Nünnerich <marius@nuenneri.ch>
To:        Ludwig Pummer <ludwigp@chip-web.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS raidz1 pool unavailable from losing 1 device
Message-ID:  <b649e5e0908010150u2f1538dayfda799ec8fba0870@mail.gmail.com>
In-Reply-To: <4A73A096.5050106@chip-web.com>
References:  <4A712290.9030308@chip-web.com> <46899.11156.qm@web37301.mail.mud.yahoo.com> <4A714B03.6050704@chip-web.com> <4A73A096.5050106@chip-web.com>

On Sat, Aug 1, 2009 at 03:55, Ludwig Pummer <ludwigp@chip-web.com> wrote:
> Ludwig Pummer wrote:
>>
>> Simun Mikecin wrote:
>>>
>>> Ludwig Pummer wrote:
>>>
>>>
>>>>
>>>> My system is 7.2-STABLE Jul 27, amd64, 4GB memory, just upgraded from
>>>> 6.4-STABLE from last year. I just set up a ZFS raidz volume to replace a
>>>> graid5 volume I had been using. I had it successfully set up using
>>>> partitions across 4 disks, ad{6,8,10,12}s1e. Then I wanted to expand the
>>>> raidz volume by merging the space from the adjacent disk partition. I
>>>> thought I could just fail out the partition device in ZFS, edit the
>>>> bsdlabel, and re-add the larger partition, ZFS would resilver, repeat until
>>>> done. That's when I found out that ZFS doesn't let you fail out a device in
>>>> a raidz volume. No big deal, I thought, I'll just go to single user mode and
>>>> mess with the partition when ZFS isn't looking. When it comes back up it
>>>> should notice that one of the devices is gone, and I can do a 'zpool
>>>> replace' and continue my plan.
>>>>
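(For readers following along, a minimal sketch of the one-disk-at-a-time cycle intended above, assuming the pool is named "storage" and the device names from this thread; a raidz member cannot be detached, but it can be taken offline and replaced:)

    # take one member offline; the pool stays DEGRADED but usable
    zpool offline storage ad12s1e
    # repartition that disk to the larger size (bsdlabel/fdisk), then
    # have ZFS resilver onto the new, larger partition
    zpool replace storage ad12s1e ad12s1d
    # wait for the resilver to finish before touching the next disk
    zpool status storage
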
>>>> Well, after rebooting to single user mode, combining partitions ad12s1d
>>>> and ad12s1e (removed the d partition), "zfs volinit", then "zpool status"
>>>> just hung (Ctrl-C didn't kill it, so I rebooted). I thought this was a bit
>>>> odd so I thought perhaps ZFS is confused by the ZFS metadata left on
>>>> ad12s1e, so I blanked it out with "dd". That didn't help. I changed the name
>>>> of the partition to ad12s1d thinking perhaps that would help. After that,
>>>> "zfs volinit; zfs mount -a; zpool status" showed my raidz pool UNAVAIL with
>>>> the message "insufficient replicas", ad{6,8,10}s1e ONLINE, and ad12s1e
>>>> UNAVAIL "cannot open", and a more detailed message pointing me to
>>>> http://www.sun.com/msg/ZFS-8000-3C. I tried doing a "zpool replace storage
>>>> ad12s1e ad12s1d" but it refused, saying my zpool ("storage") was
>>>> unavailable. Ditto for pretty much every zpool command I tried. "zpool
>>>> clear" gave me a "permission denied" error.
>>>>
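(A sketch of how leftover vdev labels on a repartitioned provider can be inspected and wiped; zdb -l reads the on-disk labels, and the dd offsets below assume ZFS's usual layout of two 256 KB labels at the front and two at the back of the device:)

    # show any stale ZFS labels still present on the old partition
    zdb -l /dev/ad12s1e
    # zero the two front labels (first 512 KB)
    dd if=/dev/zero of=/dev/ad12s1e bs=512k count=1
    # zero the two back labels (last 512 KB); PART_SECTORS is a placeholder
    # for the partition size in 512-byte sectors as reported by bsdlabel
    dd if=/dev/zero of=/dev/ad12s1e bs=512 count=1024 seek=$((PART_SECTORS - 1024))
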
>>>
>>> Was your pool imported while you were repartitioning in single user mode?
>>>
>>
>> Yes, I guess you could say it was. ZFS wasn't loaded while I was doing the
>> repartitioning, though.
>>
>> --Ludwig
>>
>
> Well, I figured out my problem. I didn't actually have a raidz1 volume. I
> missed the magic word "raidz" when I performed the "zpool create" so I
> created a JBOD. Removing one disk legitimately destroyed my zpool :(
>
> --Ludwig
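(For anyone else bitten by this: the difference is one word on the create line. Without "raidz" the vdevs are simply striped, and losing any single disk loses the pool. A sketch using the device names from this thread:)

    # what was apparently created: a plain stripe, no redundancy
    zpool create storage ad6s1e ad8s1e ad10s1e ad12s1e
    # what was intended: single-parity raidz, which survives one missing member
    zpool create storage raidz ad6s1e ad8s1e ad10s1e ad12s1e
    # "zpool status storage" tells them apart: the second form shows a raidz1
    # vdev with the four partitions underneath it, not four top-level disks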

That's bad, but it doesn't explain why the disk names changed. I guess
there is a race in tasting either the original ad* providers or the
one-sector-smaller label/foo providers. May I suggest that you, or
other people reading this, try to use GPT labels in the future, as
they definitely show up only _after_ GPT has been tasted. Sadly they
are only available in 8-CURRENT right now.
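(A sketch of what that would look like with gpart on 8-CURRENT; the "diskN" label names are only examples:)

    # partition a disk with GPT and give the ZFS partition a label
    gpart create -s gpt ad6
    gpart add -t freebsd-zfs -l disk0 ad6
    # the labeled provider appears under /dev/gpt/ and is independent of
    # ata device renumbering
    zpool create storage raidz gpt/disk0 gpt/disk1 gpt/disk2 gpt/disk3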


