Date:      Mon, 2 Apr 2018 14:47:50 +0100
From:      Bob Bishop <rb@gid.co.uk>
To:        FreeBSD Stable <freebsd-stable@freebsd.org>
Subject:   ZFS panic, ARC compression?
Message-ID:  <AE288662-5857-47FE-B04F-594DF7DB079E@gid.co.uk>

Hi,

Can anyone offer any suggestions about this?

kernel: panic: solaris assert: arc_decompress(buf) == 0 (0x5 == 0x0), file: /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c, line: 4923
kernel: cpuid = 1
kernel: KDB: stack backtrace:
kernel: #0 0xffffffff80aadac7 at kdb_backtrace+0x67
kernel: #1 0xffffffff80a6bba6 at vpanic+0x186
kernel: #2 0xffffffff80a6ba13 at panic+0x43
kernel: #3 0xffffffff8248023c at assfail3+0x2c
kernel: #4 0xffffffff8218e2e0 at arc_read+0x9f0
kernel: #5 0xffffffff82198e5e at dbuf_read+0x69e
kernel: #6 0xffffffff821b3db4 at dnode_hold_impl+0x194
kernel: #7 0xffffffff821a11dd at dmu_bonus_hold+0x1d
kernel: #8 0xffffffff8220fb05 at zfs_zget+0x65
kernel: #9 0xffffffff82227d42 at zfs_dirent_lookup+0x162
kernel: #10 0xffffffff82227e07 at zfs_dirlook+0x77
kernel: #11 0xffffffff8223fcea at zfs_lookup+0x44a
kernel: #12 0xffffffff822403fd at zfs_freebsd_lookup+0x6d
kernel: #13 0xffffffff8104b963 at VOP_CACHEDLOOKUP_APV+0x83
kernel: #14 0xffffffff80b13816 at vfs_cache_lookup+0xd6
kernel: #15 0xffffffff8104b853 at VOP_LOOKUP_APV+0x83
kernel: #16 0xffffffff80b1d151 at lookup+0x701
kernel: #17 0xffffffff80b1c606 at namei+0x486

Roughly 24 hours earlier (during the scrub), there was:

ZFS: vdev state changed, pool_guid=11921811386284628759 vdev_guid=1644286782598989949
ZFS: vdev state changed, pool_guid=11921811386284628759 vdev_guid=17800276530669255627

% uname -a
FreeBSD xxxxxxxxxxx 11.1-RELEASE-p4 FreeBSD 11.1-RELEASE-p4 #0: Tue Nov 14 06:12:40 UTC 2017     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64
%
% zpool status
  pool: zroot
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 15.7M in 2h37m with 1 errors on Sun Apr  1 09:44:39 2018
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    ada0p4  ONLINE       0     0     0
	    ada1p4  ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list
%

The affected file (in a snapshot) is unimportant.
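(For reference, the file was identified with `zpool status -v`; the path it reported is not reproduced here. Since the only copy of the bad block is in a snapshot, a plausible cleanup, which I have not verified addresses the panic itself, would be along these lines. The snapshot name below is made up for illustration:)

```shell
# List the full path of the file with the unrecoverable error.
zpool status -v zroot

# Hypothetical cleanup: destroy the snapshot holding the bad block,
# then re-scrub so the error count is re-evaluated and cleared.
# 'zroot/backup@2018-03-15' is a placeholder, not the real snapshot.
zfs destroy zroot/backup@2018-03-15
zpool scrub zroot
```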

This pool is a daily rsync backup and contains about 120 snapshots.

No device or SMART errors were logged.

--
Bob Bishop
rb@gid.co.uk
