Date:      Wed, 17 Nov 2010 18:16:07 -0800
From:      Rumen Telbizov <telbizov@gmail.com>
To:        jhell <jhell@dataix.net>
Cc:        freebsd-stable@freebsd.org, Artem Belevich <fbsdlist@src.cx>
Subject:   Re: Degraded zpool cannot detach old/bad drive
Message-ID:  <AANLkTikUX0%2BRUgoUv6wVJekpJtAksTFeW_XQenqaZBnU@mail.gmail.com>
In-Reply-To: <4CE36054.6010503@DataIX.net>
References:  <AANLkTi=EWfVyZjKEYe=c0x6QvsdUcHGo2-iqGr4OaVG7@mail.gmail.com> <AANLkTi=h6ZJtbRHeUOpKX17uOD5_XyYmu01ZTTCCKw=_@mail.gmail.com> <AANLkTikPqgoxuYp7D88Dp0t5LvjXQeO3mCXdFw6onEZN@mail.gmail.com> <AANLkTimMM82=rqMQQfZZYTcaM_CU%2B01xPeZUGAik8H3v@mail.gmail.com> <AANLkTinKpMLeJOd_V7uxyAFqcStoGwV9PfTJDLDPq3By@mail.gmail.com> <AANLkTiktrL7LHkh3HLGqZeZx7ve6arBrs8ZE57NwtfN1@mail.gmail.com> <AANLkTinc1yQrwVsf%2Bk9LW5J50twbtcQ-d1SV_rny06Su@mail.gmail.com> <AANLkTimD_f1pZHy7cq4jA%2BSZwdQRmotndSiukpNvwi6Y@mail.gmail.com> <AANLkTikJp=1An8G%2BzTBbXBPyq8--Kq=dNN=_A3TkmsjE@mail.gmail.com> <AANLkTikg6SM7jHwEYXFAUT%2BD=ScFXjtR-Sa6fZe0Vbv=@mail.gmail.com> <AANLkTinj_Ty%2B7cfof34YHyA7K_O21bmhOqr-UKuZu5fZ@mail.gmail.com> <AANLkTim1pF3Cik5mMUJVtUqqSHFuWhTPGp%2BK3G6vUrZ-@mail.gmail.com> <AANLkTi=zq-JdZVnZ6dfySfV3whhQABMf6OmEgC61mNKj@mail.gmail.com> <AANLkTimrxrTmtRGkw0jTWME3zgE%2BF07OoFqWv4Khty-U@mail.gmail.com> <4CD6243B.90707@DataIX.net> <AANLkTik2WoLYgcB78%2B9_cCagWh1NVk6az_P-4VhE-jFt@mail.gmail.com> <4CE36054.6010503@DataIX.net>

Hi jhell, everyone,

Thanks for your feedback and support, everyone.
Indeed, after successfully disabling the /dev/gptid/* entries, ZFS found
all the gpt/ labels without a problem and the array looked exactly the
way it did in the very beginning. So at that point I can say I was able
to fully recover the array, without data loss, to exactly the state it
was in when it was first created. Not without some adventure, though ;)
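
For anyone hitting the same problem, the recovery boiled down to roughly
the following steps. This is only a sketch: the pool name "tank" is a
placeholder, and the export/import commands are how I would write it up
rather than a transcript of my exact session.

  # In /boot/loader.conf, hide the gptid providers so only the
  # gpt/ labels are exposed; this takes effect on the next reboot.
  kern.geom.label.gptid.enable="0"

  # After rebooting, re-import the pool so it picks up the gpt/ labels.
  zpool export tank
  zpool import -d /dev/gpt tank
  zpool status tank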

Ironically, for unrelated reasons, just after fully recovering it I had
to destroy the pool and rebuild it from scratch with raidz2 vdevs of 8
disks rather than raidz1 vdevs of 4 disks ;)
Basically I need better redundancy, so that a vdev can survive a double
disk failure. The chance of a second disk failing while the pool
resilvers for some 15 hours on those 2TB disks seems quite significant.
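
For reference, the new layout was created along these lines (the pool
name and the gpt labels below are made up for illustration; adjust to
your own setup):

  # Two raidz2 vdevs of 8 disks each, referenced by their GPT labels
  zpool create tank \
    raidz2 gpt/disk00 gpt/disk01 gpt/disk02 gpt/disk03 \
           gpt/disk04 gpt/disk05 gpt/disk06 gpt/disk07 \
    raidz2 gpt/disk08 gpt/disk09 gpt/disk10 gpt/disk11 \
           gpt/disk12 gpt/disk13 gpt/disk14 gpt/disk15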

I wonder if this conversion will cut the IOPS of the pool roughly in half ...
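
Back of the envelope, using the usual rule of thumb that each raidz vdev
delivers roughly the random IOPS of a single member disk, and assuming
the same total number of disks (say 24, purely for illustration):

  24 disks as 6 x raidz1 (4 disks each)  ->  ~6x single-disk random IOPS
  24 disks as 3 x raidz2 (8 disks each)  ->  ~3x single-disk random IOPS

So halving the number of vdevs should roughly halve random IOPS, while
sequential throughput should be much less affected.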

Anyway, thank you once again; it is highly appreciated. I hope this
thread turns out to be useful for other people running into similar
problems.

Cheers,
Rumen Telbizov



On Tue, Nov 16, 2010 at 8:55 PM, jhell <jhell@dataix.net> wrote:

> On 11/16/2010 16:15, Rumen Telbizov wrote:
> > It seems like *kern.geom.label.gptid.enable: 0* does not work
> > anymore? I am pretty sure I was able to hide all the /dev/gptid/*
> > entries with this sysctl variable before, but now it doesn't quite
> > work for me.
>
> I could be wrong, but I believe that is more of a loader tunable than
> a sysctl that should be modified at run-time. Rebooting with this set
> to 0 will disable showing the /dev/gptid directory.
>
> This makes me wonder if those sysctls should be marked read-only at
> run-time. Though you could even rm -rf /dev/gptid ;)
>
> --
>
>  jhell,v
>



-- 
Rumen Telbizov
http://telbizov.com


