Date:      Tue, 06 Jan 2015 10:00:39 +1000
From:      Da Rock <>
Subject:   Re: ZFS replacing drive issues
Message-ID:  <>
In-Reply-To: <>
References:  <> <>

On 05/01/2015 11:07, William A. Mahaffey III wrote:
> On 01/04/15 18:25, Da Rock wrote:
>> I haven't seen anything specifically on this when googling, but I'm 
>> having a strange issue in replacing a degraded drive in ZFS.
>> The drive has been REMOVED from ZFS pool, and so I ran 'zpool replace 
>> <pool> <old device> <new device>'. This normally just works, and I 
>> have checked that I have removed the correct drive via serial number.
>> After resilvering, it still shows that it is in a degraded state, and 
>> that the old and the new drive have been REMOVED.
>> No matter what I do, I can't seem to get the zfs system online and in 
>> a good state.
>> I'm running a raidz1 on 9.1 and zfs is v28.
>> Cheers
> Someone posted a similar problem a few weeks ago; rebooting fixed it 
> for them (as opposed to trying to get ZFS to fix itself with management 
> commands). Might try that if feasible .... $0.02, no more, no less ....
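For reference, the normal replace sequence described earlier in the thread looks roughly like this; the pool and device names below (tank, ada2, ada3) are placeholders, not the actual ones from this system:

```shell
# Placeholder names: pool "tank", removed disk ada2, replacement ada3.

# Confirm which physical disk is REMOVED and verify its serial number
# before pulling it
zpool status -v tank
smartctl -i /dev/ada2 | grep -i 'serial'

# Swap the physical disk, then tell ZFS to resilver onto the new one
zpool replace tank ada2 ada3

# Watch the resilver; the vdev should return to ONLINE when it completes
zpool status -v tank
```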
Sorry, that didn't work, unfortunately. I had to wait a while for a window 
between resilver attempts and workload before I could reboot. The pool came 
online at first, but had gone back to REMOVED when I checked again later.

Are there any other diagnostics I can run? I've already run smartctl on all 
the drives (5+ hours each) and they've come back clean. There's not much to 
go on in the logs either. Do some drives just naturally error when placed 
in a RAID, or something?
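For anyone following along, a few diagnostics worth trying beyond SMART; the pool and device names below are placeholders, and this is a sketch rather than commands taken from the thread:

```shell
# Placeholder names: pool "tank", suspect disk ada3.

# Per-vdev read/write/checksum error counters, and the pool's own
# suggested action line
zpool status -v tank

# Clear transient error state and see whether the device stays ONLINE
zpool clear tank

# Look for bus-level problems: CAM errors, timeouts, device resets
dmesg | grep -i -E 'ada3|cam|timeout|error'

# Long self-test plus the drive's own error log, in case the earlier
# short tests missed something
smartctl -t long /dev/ada3
smartctl -l error /dev/ada3
```

One hedged observation: ZFS reports a device as REMOVED (rather than FAULTED or DEGRADED) when the device disappears from the bus, which often points at cabling, backplane, or controller trouble rather than the disk media itself, so checking dmesg for detach/reattach events may be more telling than SMART here.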

Want to link to this message? Use this URL: <>