Date:      Fri, 9 Jan 2015 10:18:50 -0900
From:      Henrik Hudson <lists@rhavenn.net>
To:        Da Rock <freebsd-questions@herveybayaustralia.com.au>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: ZFS replacing drive issues
Message-ID:  <20150109191850.GA58984@vash.rhavenn.local>
In-Reply-To: <54AB25A7.4040901@herveybayaustralia.com.au>
References:  <54A9D9E6.2010008@herveybayaustralia.com.au> <54A9E3CC.1010009@hiwaay.net> <54AB25A7.4040901@herveybayaustralia.com.au>

On Tue, 06 Jan 2015, Da Rock wrote:

> On 05/01/2015 11:07, William A. Mahaffey III wrote:
> > On 01/04/15 18:25, Da Rock wrote:
> >> I haven't seen anything specifically on this when googling, but I'm 
> >> having a strange issue in replacing a degraded drive in ZFS.
> >>
> >> The drive has been REMOVED from ZFS pool, and so I ran 'zpool replace 
> >> <pool> <old device> <new device>'. This normally just works, and I 
> >> have checked that I have removed the correct drive via serial number.
> >>
> >> After resilvering, it still shows that it is in a degraded state, and 
> >> that the old and the new drive have been REMOVED.
> >>
> >> No matter what I do, I can't seem to get the zfs system online and in 
> >> a good state.
> >>
> >> I'm running a raidz1 on 9.1 and zfs is v28.
> >>
> >> Cheers
> >
> > Someone posted a similar problem a few weeks ago; rebooting fixed it 
> > for them (as opposed to trying to get ZFS to fix itself w/ management 
> > commands). You might try that if feasible .... $0.02, no more, no less ....
> >
> Sorry, that didn't work, unfortunately. I had to wait a bit until I could 
> fit the reboot in between the resilver attempts and the workload. It came 
> online at first, but went back to REMOVED when I checked again later.
> 
> Any other diagnostics I can run? I've already run smartctl on all the drives 
> (5hrs+) and they've come back clean. There's not much to go on in the 
> logs either. Do some drives just naturally throw errors when placed in a 
> RAID?

a) Try a 'zpool clear <pool>' to force it to clear the error state, but
to be safe I'd still do (c) below.
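
For example (pool and device names below are only placeholders; use
whatever 'zpool status' actually reports for your pool):

    # show the pool layout and the exact vdev names ZFS is tracking
    zpool status -v tank
    # clear error counters / error state on the whole pool
    zpool clear tank
    # or clear just the one device
    zpool clear tank ada3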

b) Did you physically remove the old drive and replace it and then
run a zpool replace? Did the devices have the same device ID or did
you use GPT ids?
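
If you're not sure what the pool is keyed on, something like this will
show the raw device names versus any GPT labels (device names here are
examples only):

    # how ZFS currently names the vdevs
    zpool status tank
    # disks the kernel actually sees
    camcontrol devlist
    # GPT partition labels / rawuuids, if you used gpt/ or gptid/ names
    gpart list ada3 | grep -E 'label|rawuuid'
    glabel status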

c) If it's a mirror, try just detaching the device ('zpool detach <pool>
<device>') and then re-attaching it via 'zpool attach'.
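
Roughly (mirror case only; names are placeholders, and this doesn't apply
to a raidz1 vdev, where 'zpool replace' is the only path):

    # drop the failed disk out of the mirror
    zpool detach tank ada3
    # attach the new disk alongside the surviving member and let it resilver
    zpool attach tank ada2 ada3
    # watch the resilver progress
    zpool status tank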

henrik

-- 
Henrik Hudson
lists@rhavenn.net
-----------------------------------------
"God, root, what is difference?" Pitr; UF 



