Date:      Mon, 1 Feb 2021 17:09:11 -0800
From:      David Christensen <dpchrist@holgerdanske.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: Pool I/O failure, zpool=$pool error=$6
Message-ID:  <33b0e84d-ab50-1512-8778-a1c03fe504bf@holgerdanske.com>
In-Reply-To: <02ea055d-e4d5-15f4-e16a-356d9392d84c@studiokaraoke.co.id>
References:  <02ea055d-e4d5-15f4-e16a-356d9392d84c@studiokaraoke.co.id>

On 2021-02-01 05:38, Budi Janto wrote:
> Hi,
> 
> I need help fixing a ZFS disk failure that appeared after "zpool scrub 
> pool", which runs once a week.
> 
> # uname -mv
> FreeBSD 12.2-STABLE r368820 GENERIC  amd64
> 
> # zcat /var/log/messages.1.bz2 | grep ZFS | more
> Feb  1 10:17:51 SMD-DB-P1 ZFS[9243]: pool I/O failure, zpool=$pool error=$6
> Feb  1 10:17:51 SMD-DB-P1 ZFS[9244]: catastrophic pool I/O failure, zpool=$pool
> Feb  1 10:21:58 SMD-DB-P1 ZFS[9278]: pool I/O failure, zpool=$pool error=$6
> Feb  1 10:21:58 SMD-DB-P1 ZFS[9279]: catastrophic pool I/O failure, zpool=$pool
> Feb  1 11:08:28 SMD-DB-P1 kernel: ZFS filesystem version: 5
> Feb  1 11:08:28 SMD-DB-P1 kernel: ZFS storage pool version: features support (5000)
> Feb  1 11:08:28 SMD-DB-P1 ZFS[818]: vdev state changed, pool_guid=$1316963245586799881 vdev_guid=$5993430306208938633
> Feb  1 11:08:28 SMD-DB-P1 ZFS[820]: vdev state changed, pool_guid=$1316963245586799881 vdev_guid=$3420027568210384620
> [... the same pair of "vdev state changed" messages repeats dozens of times for the same two vdev GUIDs ...]
> 
> After restarting the machine, the HDD was gone from the BIOS 
> (undetected).  I tried a different SATA port on the motherboard and a 
> different SATA cable, but the problem persists (there is a delay in 
> the boot process).  My question is: does ZFS scrubbing cause this 
> problem, or is it simply a bad hard drive?
> 
> FYI, my pool is two 4 TB IronWolf drives in striped mode.  Thanks

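As to your question: a scrub reads and verifies every allocated block 
(repairing what it can), so it does not damage a healthy drive, but the 
hours of sustained reads can expose a drive that is already failing. 
For reference, if you are using the stock periodic(8) mechanism, a 
weekly scrub corresponds to roughly the following in /etc/periodic.conf 
(the pool name is an assumption, since your logs only show the 
unexpanded $pool variable):

# Hypothetical /etc/periodic.conf excerpt; substitute your pool's name:
daily_scrub_zfs_enable="YES"
daily_scrub_zfs_pools="pool"
daily_scrub_zfs_default_threshold="7"	# days between scrubs
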
First, backup your data.

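If the pool will still import, a minimal evacuation sketch is a 
recursive snapshot piped through zfs send (the destination pool 
"backup" is an assumption; adjust the names to whatever you have):

# zfs snapshot -r pool@evac
# zfs send -R pool@evac | zfs receive -duv backup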

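It is also worth capturing what the OS and the pool currently think of 
the devices, for example (again substituting your pool name):

# camcontrol devlist
# zpool status -v pool
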
Please download, install, and run Seagate's "SeaTools" diagnostic (your 
IronWolf drives are Seagate), and post what it reports (unfortunately, 
the SeaTools for Windows build requires a computer with Microsoft 
Windows):

https://www.seagate.com/support/downloads/seatools/

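Alternatively, if the drive still enumerates, the smartmontools package 
can read the drive's own health data directly from FreeBSD (adjust the 
adaX device node to match camcontrol's output):

# pkg install smartmontools
# smartctl -a /dev/ada0		# identity, SMART attributes, error log
# smartctl -t long /dev/ada0	# start the drive's long self-test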

David


