Date:      Tue, 26 Jan 2010 13:07:11 +0900
From:      Tommi Lätti <sty@iki.fi>
To:        Steven Schlansker <stevenschlansker@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: slight zfs problem after playing with WDIDLE3 and WDTLER
Message-ID:  <f43ef3191001252007j4fb54a96l843f4515ad87bedd@mail.gmail.com>
In-Reply-To: <3F785019-DB0E-4385-97EB-7CE69A11647A@gmail.com>
References:  <f43ef3191001251043n3a2d2780jfb2aa24be5f5371d@mail.gmail.com> <3F785019-DB0E-4385-97EB-7CE69A11647A@gmail.com>

2010/1/26 Steven Schlansker <stevenschlansker@gmail.com>:
>
> On Jan 25, 2010, at 10:43 AM, Tommi Lätti wrote:
>> After checking the logs carefully, it seems that the ada1 device
>> permanently lost some sectors. Before twiddling with the parameters,
>> it was 1953525168 sectors (953869MB), now it reports 1953523055
>> (953868MB). So, would removing it and maybe export/import get me back
>> to degraded state and then I could just replace the now
>> suddenly-lost-some-sectors drive?
>
> That will probably work. I had a similar problem a bit
> ago where suddenly my drives were too small, causing the UNAVAIL
> corrupted-data problem. I managed to fix it by using gconcat to stitch
> an extra MB of space from the boot drive onto it. Not a very good solution,
> but the best I found until FreeBSD gets shrink support (which sadly seems
> like it may be a long while)
>
> Failing that, you could use OpenSolaris to import it (as it does have minimal
> support for opening mismatched sized vdevs), copy the data off, destroy, and restore.
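
(Noting down what that would look like on my box in case I try it. The
pool name and device names below are just placeholders, and ada0p9
stands in for whatever small spare partition would donate the missing
megabyte; this is only my reading of your gconcat stitch, not something
I've run:

  # how many sectors the drive reports now
  diskinfo -v ada1

  # stitch the shrunken drive together with a small spare partition so
  # the concat is at least as large as the original vdev was
  gconcat label -v ada1pad ada1 ada0p9   # creates /dev/concat/ada1pad

  # see whether the pool now shows up intact with the padded device
  zpool import
)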

Forgot to reply-all...

--clip--
I just tried booting up the system without the 'reduced' drive to see
if the pool would automatically go to a degraded state. What I got
after booting up was that one of the labels had vanished, and now it
seems the drives have been mixed up. GRR.

I guess that's an easy one to recover from, although now I suspect
that ZFS is writing to the disks while it scans for the pools, making
my life harder at the same time.

Maybe I'll just go the OpenSolaris way and try it from there. Copying
the data locally is quite fast.
--clip--
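
Before the next import attempt I'll probably just look at what is
actually left on the disks first. As far as I know these only read
(device and pool names here are just examples):

  zdb -l /dev/ada1   # dump whatever vdev labels survive on the disk
  zpool import       # scan and list importable pools without importing anything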

After thinking about it overnight, I'm a bit curious why the failure of
that single vdev caused the loss of the whole pool. Shouldn't ZFS just
disregard the disk and go to a degraded state? I've had normal
catastrophic disk failures on this setup before, and the usual
replace-drive-and-resilver has worked just fine.
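
By the usual replace I mean roughly this sequence (the pool name is
made up here):

  zpool offline tank ada1   # take the dying disk out of service
  # swap in the new disk, then:
  zpool replace tank ada1   # resilver onto the replacement
  zpool status tank         # watch the resilver progress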

Maybe a bug?

-- 
br,
Tommi


