Date: Tue, 16 Feb 2016 01:15:26 +0000
From: Steven Hartland <killing@multiplay.co.uk>
To: DemIS <demis@yandex.ru>, "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject: Re: Kernel panic zio.c, line: 270 FreeBSD 10.2 (or 10.3)
Message-ID: <56C2782E.2010404@multiplay.co.uk>
In-Reply-To: <1076701455583595@web2g.yandex.ru>
References: <1061671455578760@web3g.yandex.ru> <56C2655F.9010809@multiplay.co.uk> <1076701455583595@web2g.yandex.ru>
You don't need the system live, just the kernel and the crash dump, to get
those values.

On 16/02/2016 00:46, DemIS wrote:
> Today I fell back to version 10.2. Users work on the server, and
> reinstalling 10.3 takes me four hours, so that is only possible on
> weekends. But the same thing happens on version 10.2.
>
> 16.02.2016, 02:55, "Steven Hartland" <killing@multiplay.co.uk>:
>> That sounds like you have some pool data corruption. From the 10.3
>> version dump, can you print out the following:
>> 1. frame 8: bp and size
>> 2. frame 6: buf->b_hdr
>>
>> On 15/02/2016 23:26, DemIS wrote:
>>> Does anyone know about this problem?
>>> Server: SuperMicro Model: SYS-6026T-6RF+, MB: X8DTU-6F+, RAM 24 GB DDR3, two XEONs
>>> RAM: KVR1333D3E9S/4G - DDR3, 1333MHz, ECC, CL9, X8, 1.5V, Unbuffered, DIMM
>>> Version: uname -a
>>>
>>> FreeBSD teo.some.loc 10.2-RELEASE-p12 FreeBSD 10.2-RELEASE-p12 #0: Sat Feb 13 18:04:04 MSK 2016 demis@teo.some.loc:/usr/obj/usr/src/sys/TEO amd64
>>> (this persists on both GENERIC and custom kernel configs!)
>>>
>>> Memtest86+ v.4.40 (ECC mode) test - OK.
>>> Every disk was checked too (physically with mhdd, logically with zpool scrub, plus an additional check at an external disk-recovery company). No errors.
>>>
>>> Part of df -H:
>>> Filesystem    Size    Used   Avail  Capacity  Mounted on
>>> hdd/usr/wf    6,6T    4,1T    2,5T       62%  /hdd/usr/wf
>>>
>>> zpool status hdd
>>>   pool: hdd
>>>  state: ONLINE
>>> status: Some supported features are not enabled on the pool. The pool can
>>>         still be used, but some features are unavailable.
>>> action: Enable all features using 'zpool upgrade'. Once this is done,
>>>         the pool may no longer be accessible by software that does not support
>>>         the features. See zpool-features(7) for details.
>>>   scan: scrub repaired 0 in 14h57m with 0 errors on Thu Feb 11 03:35:43 2016
>>> config:
>>>
>>>         NAME         STATE     READ WRITE CKSUM
>>>         hdd          ONLINE       0     0     0
>>>           raidz2-0   ONLINE       0     0     0
>>>             mfid1p1  ONLINE       0     0     0
>>>             mfid2p1  ONLINE       0     0     0
>>>             mfid3p1  ONLINE       0     0     0
>>>             mfid4p1  ONLINE       0     0     0
>>>             mfid5p1  ONLINE       0     0     0
>>>
>>> errors: No known data errors
>>>
>>> hdd is my ZFS volume.
>>> When I run a command like:
>>> rm /hdd/usr/some/path/to/file
>>> or
>>> rm /hdd/usr/some/path/to/folder
>>> or
>>> chown root:wheel /hdd/usr/some/path/to/file
>>> or
>>> chown root:wheel /hdd/usr/some/path/to/folder
>>> or
>>> setfacl ... to /hdd/usr/some/path/to/file
>>>
>>> I get a kernel panic:
>>> GNU gdb 6.1.1 [FreeBSD]
>>> Copyright 2004 Free Software Foundation, Inc.
>>> GDB is free software, covered by the GNU General Public License, and you are
>>> welcome to change it and/or distribute copies of it under certain conditions.
>>> Type "show copying" to see the conditions.
>>> There is absolutely no warranty for GDB. Type "show warranty" for details.
>>> This GDB was configured as "amd64-marcel-freebsd"...
>>>
>>> Unread portion of the kernel message buffer:
>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 270
>>> cpuid = 9
>>> KDB: stack backtrace:
>>> #0 0xffffffff80984ef0 at kdb_backtrace+0x60
>>> #1 0xffffffff80948aa6 at vpanic+0x126
>>> #2 0xffffffff80948973 at panic+0x43
>>> #3 0xffffffff81c0222f at assfail3+0x2f
>>> #4 0xffffffff81aa9d40 at zio_buf_alloc+0x50
>>> #5 0xffffffff81a2b9f8 at arc_get_data_buf+0x358
>>> #6 0xffffffff81a2e20a at arc_read+0x1ea
>>> #7 0xffffffff81a3669c at dbuf_read+0x6ac
>>> #8 0xffffffff81a3d8bf at dmu_spill_hold_existing+0xbf
>>> #9 0xffffffff81a70dd7 at sa_attr_op+0x167
>>> #10 0xffffffff81a72ffb at sa_lookup+0x4b
>>> #11 0xffffffff81abc82a at zfs_rmnode+0x2ba
>>> #12 0xffffffff81ada58e at zfs_freebsd_reclaim+0x4e
>>> #13 0xffffffff80e73537 at VOP_RECLAIM_APV+0xa7
>>> #14 0xffffffff809ec5b4 at vgonel+0x1b4
>>> #15 0xffffffff809eca49 at vrecycle+0x59
>>> #16 0xffffffff81ada52d at zfs_freebsd_inactive+0xd
>>> #17 0xffffffff80e73427 at VOP_INACTIVE_APV+0xa7
>>> Uptime: 9m31s
>>> Dumping 1286 out of 24543 MB:..2%..12%..22%..32%..42%..51%..61%..71% (CTRL-C to abort) ..81%..91%
>>>
>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/zfs.ko.symbols
>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols
>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols
>>> Reading symbols from /boot/kernel/ums.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/ums.ko.symbols
>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols
>>> #0 doadump (textdump=<value optimized out>) at pcpu.h:219
>>> 219     pcpu.h: No such file or directory.
>>>         in pcpu.h
>>> (kgdb) bt
>>> #0  doadump (textdump=<value optimized out>) at pcpu.h:219
>>> #1  0xffffffff80948702 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451
>>> #2  0xffffffff80948ae5 in vpanic (fmt=<value optimized out>, ap=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:758
>>> #3  0xffffffff80948973 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687
>>> #4  0xffffffff81c0222f in assfail3 (a=<value optimized out>, lv=<value optimized out>, op=<value optimized out>, rv=<value optimized out>,
>>>     f=<value optimized out>, l=<value optimized out>) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91
>>> #5  0xffffffff81aa9d40 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:270
>>> #6  0xffffffff81a2b9f8 in arc_get_data_buf (buf=<value optimized out>)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2898
>>> #7  0xffffffff81a2e20a in arc_read (pio=0xfffff80011791730, spa=0xfffff80011579000, bp=0xfffffe000aee7980, done=0xffffffff81a3a2d0 <dbuf_read_done>,
>>>     private=0xfffff8002244b000, priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-528866606, arc_flags=0xfffffe06727fb3c4, zb=0xffffffff81a3a2d0)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1551
>>> #8  0xffffffff81a3669c in dbuf_read (db=0xfffff8002244b000, zio=0x0, flags=6)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:573
>>> #9  0xffffffff81a3d8bf in dmu_spill_hold_existing (bonus=0xfffff800223bed20, tag=0x0, dbp=0xfffff800919966b8)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333
>>> #10 0xffffffff81a70dd7 in sa_attr_op (hdl=0xfffff80091996690, bulk=0xfffffe06727fb528, count=1, data_op=SA_LOOKUP, tx=0x0)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:310
>>> #11 0xffffffff81a72ffb in sa_lookup (hdl=0xfffff80091996690, attr=<value optimized out>, buf=<value optimized out>, buflen=<value optimized out>)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1441
>>> #12 0xffffffff81abc82a in zfs_rmnode (zp=0xfffff80091993730) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633
>>> #13 0xffffffff81ada58e in zfs_freebsd_reclaim (ap=<value optimized out>)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6569
>>> #14 0xffffffff80e73537 in VOP_RECLAIM_APV (vop=<value optimized out>, a=<value optimized out>) at vnode_if.c:2019
>>> #15 0xffffffff809ec5b4 in vgonel (vp=0xfffff800111733b0) at vnode_if.h:830
>>> #16 0xffffffff809eca49 in vrecycle (vp=0xfffff800111733b0) at /usr/src/sys/kern/vfs_subr.c:2703
>>> #17 0xffffffff81ada52d in zfs_freebsd_inactive (ap=<value optimized out>)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6540
>>> #18 0xffffffff80e73427 in VOP_INACTIVE_APV (vop=<value optimized out>, a=<value optimized out>) at vnode_if.c:1953
>>> #19 0xffffffff809eb382 in vinactive (vp=0xfffff800111733b0, td=0xfffff800113d8000) at vnode_if.h:807
>>> #20 0xffffffff809eb772 in vputx (vp=0xfffff800111733b0, func=2) at /usr/src/sys/kern/vfs_subr.c:2306
>>> #21 0xffffffff809f401e in kern_rmdirat (td=<value optimized out>, fd=<value optimized out>, path=<value optimized out>, pathseg=<value optimized out>)
>>>     at /usr/src/sys/kern/vfs_syscalls.c:3842
>>> #22 0xffffffff80d4b3e7 in amd64_syscall (td=0xfffff800113d8000, traced=0) at subr_syscall.c:134
>>> #23 0xffffffff80d30acb in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396
>>> #24 0x00000008008914ea in ?? ()
>>> Previous frame inner to this frame (corrupt stack?)
>>> Current language: auto; currently minimal
>>>
>>> If I install FreeBSD 10.3-BETA (GENERIC or custom kernel config):
>>> Copyright 2004 Free Software Foundation, Inc.
>>> GDB is free software, covered by the GNU General Public License, and you are
>>> welcome to change it and/or distribute copies of it under certain conditions.
>>> Type "show copying" to see the conditions.
>>> There is absolutely no warranty for GDB. Type "show warranty" for details.
>>> This GDB was configured as "amd64-marcel-freebsd"...
>>>
>>> Unread portion of the kernel message buffer:
>>> panic: solaris assert: c < (1ULL << 24) >> 9 (0x7fffffffffffff < 0x8000), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c, line: 273
>>> cpuid = 13
>>> KDB: stack backtrace:
>>> #0 0xffffffff8098f000 at kdb_backtrace+0x60
>>> #1 0xffffffff80951d06 at vpanic+0x126
>>> #2 0xffffffff80951bd3 at panic+0x43
>>> #3 0xffffffff81e0022f at assfail3+0x2f
>>> #4 0xffffffff81cacc70 at zio_buf_alloc+0x50
>>> #5 0xffffffff81c2b8f2 at arc_get_data_buf+0x262
>>> #6 0xffffffff81c2b657 at arc_buf_alloc+0xc7
>>> #7 0xffffffff81c2d601 at arc_read+0x1c1
>>> #8 0xffffffff81c36ce9 at dbuf_read+0x6b9
>>> #9 0xffffffff81c3e415 at dmu_spill_hold_existing+0xc5
>>> #10 0xffffffff81c73707 at sa_attr_op+0x167
>>> #11 0xffffffff81c75972 at sa_lookup+0x52
>>> #12 0xffffffff81cbf8da at zfs_rmnode+0x2ba
>>> #13 0xffffffff81cdd75e at zfs_freebsd_reclaim+0x4e
>>> #14 0xffffffff80e81c27 at VOP_RECLAIM_APV+0xa7
>>> #15 0xffffffff809f9581 at vgonel+0x221
>>> #16 0xffffffff809f9a19 at vrecycle+0x59
>>> #17 0xffffffff81cdd6fd at zfs_freebsd_inactive+0xd
>>> Uptime: 11m11s
>>> Dumping 1368 out of 24542 MB:..2%..11%..22%..31%..41%..51%..61%..71%..81%..91%
>>>
>>> Reading symbols from /boot/kernel/if_lagg.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/if_lagg.ko.symbols
>>> Reading symbols from /boot/kernel/aio.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/aio.ko.symbols
>>> Reading symbols from /boot/kernel/ichsmb.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/ichsmb.ko.symbols
>>> Reading symbols from /boot/kernel/smbus.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/smbus.ko.symbols
>>> Reading symbols from /boot/kernel/ipmi.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/ipmi.ko.symbols
>>> Reading symbols from /boot/kernel/zfs.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/zfs.ko.symbols
>>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols
>>> Reading symbols from /boot/kernel/ums.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/ums.ko.symbols
>>> Reading symbols from /boot/kernel/ipfw.ko.symbols...done.
>>> Loaded symbols for /boot/kernel/ipfw.ko.symbols
>>> #0 doadump (textdump=<value optimized out>) at pcpu.h:219
>>> 219     pcpu.h: No such file or directory.
>>>         in pcpu.h
>>> (kgdb) backtrace
>>> #0  doadump (textdump=<value optimized out>) at pcpu.h:219
>>> #1  0xffffffff80951962 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:486
>>> #2  0xffffffff80951d45 in vpanic (fmt=<value optimized out>, ap=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:889
>>> #3  0xffffffff80951bd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:818
>>> #4  0xffffffff81e0022f in assfail3 (a=<value optimized out>, lv=<value optimized out>, op=<value optimized out>, rv=<value optimized out>, f=<value optimized out>,
>>>     l=<value optimized out>) at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:91
>>> #5  0xffffffff81cacc70 in zio_buf_alloc (size=0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:273
>>> #6  0xffffffff81c2b8f2 in arc_get_data_buf (buf=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:3880
>>> #7  0xffffffff81c2b657 in arc_buf_alloc (spa=<value optimized out>, size=<value optimized out>, tag=0x0, type=<value optimized out>)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2057
>>> #8  0xffffffff81c2d601 in arc_read (pio=0xfffff8000fad03b0, spa=0xfffff8000f63d000, bp=0xfffffe000e509980, done=0xffffffff81c3aed0 <dbuf_read_done>, private=0xfffff8000fdd6360,
>>>     priority=ZIO_PRIORITY_SYNC_READ, zio_flags=-2117882160, arc_flags=0xfffffe02925483c4, zb=0xfffff8000fdd6360)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:4397
>>> #9  0xffffffff81c36ce9 in dbuf_read (db=0xfffff8000fdd6360, zio=0x0, flags=<value optimized out>)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:682
>>> #10 0xffffffff81c3e415 in dmu_spill_hold_existing (bonus=0xfffff8001f312438, tag=0x0, dbp=0xfffff80062d4e7d0)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:333
>>> #11 0xffffffff81c73707 in sa_attr_op (hdl=0xfffff80062d4e770, bulk=0xfffffe0292548528, count=1, data_op=SA_LOOKUP, tx=0x0)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:305
>>> #12 0xffffffff81c75972 in sa_lookup (hdl=0xfffff80062d4e770, attr=<value optimized out>, buf=<value optimized out>, buflen=<value optimized out>)
>>>     at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/sa.c:1443
>>> #13 0xffffffff81cbf8da in zfs_rmnode (zp=0xfffff80062d4c8a0) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:633
>>> #14 0xffffffff81cdd75e in zfs_freebsd_reclaim (ap=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6619
>>> #15 0xffffffff80e81c27 in VOP_RECLAIM_APV (vop=<value optimized out>, a=<value optimized out>) at vnode_if.c:2019
>>> #16 0xffffffff809f9581 in vgonel (vp=0xfffff8000f1beb10) at vnode_if.h:830
>>> #17 0xffffffff809f9a19 in vrecycle (vp=0xfffff8000f1beb10) at /usr/src/sys/kern/vfs_subr.c:2951
>>> #18 0xffffffff81cdd6fd in zfs_freebsd_inactive (ap=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:6590
>>> #19 0xffffffff80e81b17 in VOP_INACTIVE_APV (vop=<value optimized out>, a=<value optimized out>) at vnode_if.c:1953
>>> #20 0xffffffff809f8322 in vinactive (vp=0xfffff8000f1beb10, td=0xfffff8000f9f34b0) at vnode_if.h:807
>>> #21 0xffffffff809f8712 in vputx (vp=0xfffff8000f1beb10, func=2) at /usr/src/sys/kern/vfs_subr.c:2547
>>> #22 0xffffffff80a0137e in kern_rmdirat (td=<value optimized out>, fd=<value optimized out>, path=<value optimized out>, pathseg=<value optimized out>)
>>>     at /usr/src/sys/kern/vfs_syscalls.c:3964
>>> #23 0xffffffff80d574bf in amd64_syscall (td=0xfffff8000f9f34b0, traced=0) at subr_syscall.c:141
>>> #24 0xffffffff80d3c72b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396
>>> #25 0x000000080089458a in ?? ()
>>> Previous frame inner to this frame (corrupt stack?)
>>> Current language: auto; currently minimal
>>>
>>> The crashing folders (or files) have strange permissions:
>>> d---------+ 3 anna domain users 3 10 Dec 10:32 01-Projcts
>>> d---------+ 2 anna domain users 2  8 Feb 21:46 02-Text
>>>
>>> How can I fix this kernel panic?
>>>
>>> _______________________________________________
>>> freebsd-fs@freebsd.org mailing list
>>> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
>>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"