Date:      Sun, 9 Jan 2011 07:22:35 -0500
From:      Rich <rincebrain@gmail.com>
To:        Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc:        freebsd-fs@freebsd.org, freebsd-stable@freebsd.org
Subject:   Re: New ZFSv28 patchset for 8-STABLE
Message-ID:  <AANLkTimLCcOxngAC_5op4GdmwQVi8S_HyDKvYqaoFMxc@mail.gmail.com>
In-Reply-To: <20110109121800.GA37231@icarus.home.lan>
References:  <4D0A09AF.3040005@FreeBSD.org> <4D297943.1040507@fsn.hu> <4D29A0C7.8050002@fsn.hu> <20110109121800.GA37231@icarus.home.lan>

Once upon a time, this was a known problem with the arcmsr driver not
correctly interacting with ZFS, resulting in this behavior.

Since I presume the arcmsr driver update that was intended to fix this
behavior (in my case, at least) is already in your nightly build, it's
probably worth pinging the arcmsr driver maintainer about this.
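
For what it's worth, you can double-check which arcmsr revision the
nightly actually carries before mailing him; something like the
following should show it (the exact probe text and sysctl layout vary
by release, so treat this as a sketch rather than gospel):

# dmesg | grep -i arcmsr
# sysctl dev.arcmsr.0.%desc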

- Rich

On Sun, Jan 9, 2011 at 7:18 AM, Jeremy Chadwick
<freebsd@jdc.parodius.com> wrote:
> On Sun, Jan 09, 2011 at 12:49:27PM +0100, Attila Nagy wrote:
>> On 01/09/2011 10:00 AM, Attila Nagy wrote:
>> > On 12/16/2010 01:44 PM, Martin Matuska wrote:
>> >>Hi everyone,
>> >>
>> >>following the announcement of Pawel Jakub Dawidek (pjd@FreeBSD.org) I am
>> >>providing a ZFSv28 testing patch for 8-STABLE.
>> >>
>> >>Link to the patch:
>> >>
>> >>http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101215.patch.xz
>> >>
>> >>
>> >I've got an I/O hang with dedup enabled (not sure it's related;
>> >I've started rewriting all the data on the pool, which creates a
>> >heavy load):
>> >
>> >The processes are in various states:
>> >65747 1001        1  54   10 28620K 24360K tx->tx  0   6:58  0.00% cvsup
>> >80383 1001        1  54   10 40616K 30196K select  1   5:38  0.00% rsync
>> > 1501 www         1  44    0  7304K  2504K zio->i  0   2:09  0.00% nginx
>> > 1479 www         1  44    0  7304K  2416K zio->i  1   2:03  0.00% nginx
>> > 1477 www         1  44    0  7304K  2664K zio->i  0   2:02  0.00% nginx
>> > 1487 www         1  44    0  7304K  2376K zio->i  0   1:40  0.00% nginx
>> > 1490 www         1  44    0  7304K  1852K zfs     0   1:30  0.00% nginx
>> > 1486 www         1  44    0  7304K  2400K zfsvfs  1   1:05  0.00% nginx
>> >
>> >And everything that wants to touch the pool is, or becomes, dead.
>> >
>> >Procstat says about one process:
>> ># procstat -k 1497
>> >  PID    TID COMM             TDNAME           KSTACK
>> > 1497 100257 nginx            -                mi_switch
>> >sleepq_wait __lockmgr_args vop_stdlock VOP_LOCK1_APV null_lock
>> >VOP_LOCK1_APV _vn_lock nullfs_root lookup namei vn_open_cred
>> >kern_openat syscallenter syscall Xfast_syscall
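
(As an aside: when the machine wedges like this, grabbing kernel stacks
for everything at once is usually more telling than picking single PIDs.
Something like this, run from a shell that was already open, since any
new command that touches the hung pool will block as well:

# procstat -a -k

The -a flag covers all processes and -k prints each thread's kernel
stack, so the threads stuck in tx->tx / zio->i stand out immediately.)
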
>> No, it's not related. One of the disks in the RAIDZ2 pool went bad:
>> (da4:arcmsr0:0:4:0): READ(6). CDB: 8 0 2 10 10 0
>> (da4:arcmsr0:0:4:0): CAM status: SCSI Status Error
>> (da4:arcmsr0:0:4:0): SCSI status: Check Condition
>> (da4:arcmsr0:0:4:0): SCSI sense: MEDIUM ERROR asc:11,0 (Unrecovered
>> read error)
>> and it seems it froze the whole zpool. Removing the disk by hand
>> solved the problem.
>> I've seen this previously on other machines with ciss.
>> I wonder why ZFS didn't throw it out of the pool.
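
(For anyone who hits the same thing, the "by hand" recovery is usually
some variant of the following -- the pool and device names here are only
examples, adjust them to your layout:

# zpool status -x
# zpool offline tank da4
  ... swap the physical disk ...
# zpool replace tank da4
# zpool status tank

With a RAIDZ2 vdev the pool stays usable while degraded, so the real
question is why the I/O hung instead of the disk simply being faulted.)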
>
> Hold on a minute. An unrecoverable read error does not necessarily mean
> the drive is bad, it could mean that the individual LBA that was
> attempted to be read resulted in ASC 0x11 (MEDIUM ERROR) (e.g. a bad
> block was encountered). I would check SMART stats on the disk (since
> these are probably SATA given use of arcmsr(4)) and provide those.
> *That* will tell you if the disk is bad. I'll help you decode the
> attribute values if you provide them.
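
Getting SMART data through the Areca firmware needs smartmontools'
controller pass-through syntax; roughly the following, where the slot
number is only an example and older smartmontools builds may not have
Areca support on FreeBSD at all:

# smartctl -a -d areca,5 /dev/arcmsr0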
>
> My understanding is that a single LBA read failure should not warrant
> ZFS marking the disk UNAVAIL in the pool. It should have incremented
> the READ error counter and that's it. Did you receive a *single* error
> for the disk and then things went catatonic?
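
(The READ counter you mention is visible in plain "zpool status", e.g.:

# zpool status -v tank

where "tank" stands in for the pool name. If the da4 row only shows one
or two READ errors and the whole pool still wedged, that points back at
the driver rather than at ZFS's error handling.)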
>
> If the entire system got wedged (a soft wedge, e.g. kernel is still
> alive but nothing's happening in userland), that could be a different
> problem -- either with ZFS or arcmsr(4). Does ZFS have some sort of
> timeout value internal to itself where it will literally mark a disk
> UNAVAIL in the case that repeated I/O transactions take "too long"?
> What is its error recovery methodology?
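
(As far as I know, the only pool-level knob in that area is the
"failmode" property, and it only governs catastrophic pool failure: the
default "wait" blocks I/O until the devices come back, "continue"
returns EIO to new write requests, and "panic" panics the box. A single
slow or erroring disk in a redundant vdev isn't covered by it -- ZFS
just waits for the driver to complete or fail the I/O, so there's no
internal timeout of its own.) It's cheap to check, though; "tank" below
is just a placeholder:

# zpool get failmode tank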
>
> Speaking strictly about Solaris 10 and ZFS: I have seen many, many times
> a system "soft wedge" after repeated I/O errors (read or write) are
> spewed out on the console for a single SATA disk (via AHCI), but only
> when the disk is used as a sole root filesystem disk (no mirror/raidz).
> My impression is that ZFS isn't the problem in this scenario. In most
> cases, post-mortem debugging on my part shows that disks encountered
> some CRC errors (indicating cabling issues, etc.), sometimes as few as
> 2, but "something else" went crazy -- or possibly ZFS couldn't mark the
> disk UNAVAIL (if it has that logic) because it's a single disk
> associated with root. Hardware in this scenario is Hitachi SATA disks
> with an ICH ESB2 controller, software is Solaris 10 (Generic_142901-06)
> with ZFS v15.
>
> --
> | Jeremy Chadwick                                  jdc@parodius.com |
> | Parodius Networking                      http://www.parodius.com/ |
> | UNIX Systems Administrator                 Mountain View, CA, USA |
> | Making life hard for others since 1977.              PGP 4BD6C0CB |
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>


