Date:      Fri, 4 Dec 2009 21:20:47 +0100
From:      Thomas Backman <serenity@exscape.org>
To:        Stefan Bethke <stb@lassitu.de>
Cc:        FreeBSD Stable <freebsd-stable@freebsd.org>
Subject:   Re: Fatal trap 9 triggered by zfs?
Message-ID:  <A990FF66-A5FC-401F-923B-D802C20251EF@exscape.org>
In-Reply-To: <06D8F596-649B-4478-8A2F-F9EA133B8DDC@lassitu.de>
References:  <831421F9-6344-4E68-BD64-9C013EB86523@lassitu.de> <06D8F596-649B-4478-8A2F-F9EA133B8DDC@lassitu.de>

On Dec 4, 2009, at 8:56 PM, Stefan Bethke wrote:

> On 04.12.2009 at 17:52, Stefan Bethke wrote:
>
>> I'm getting panics like this every so often (couple of weeks, sometimes
>> just a few days.) A second machine that has identical hardware and is
>> running the same source has no such problems.
>>
>> FreeBSD XXX.hanse.de 8.0-STABLE FreeBSD 8.0-STABLE #16: Tue Dec  1
>> 14:30:54 UTC 2009     root@XXX.hanse.de:/usr/obj/usr/src/sys/EISENBOOT  amd64
>>
>> # zpool status
>> pool: tank
>> state: ONLINE
>> scrub: none requested
>> config:
>>
>> 	NAME        STATE     READ WRITE CKSUM
>> 	tank        ONLINE       0     0     0
>> 	  ad4s1d    ONLINE       0     0     0
>> # cat /boot/loader.conf
>> vfs.zfs.arc_max="512M"
>> vfs.zfs.prefetch_disable="1"
>> vfs.zfs.zil_disable="1"
>
> Got another, different one.  Any tuning suggestions or similar?
>
>
> #6  0xffffffff80586c7a in vm_map_entry_splay (addr=Variable "addr" is not available.
> )
>    at /usr/src/sys/vm/vm_map.c:771
> #7  0xffffffff80587f37 in vm_map_lookup_entry (map=0xffffff00010000e8,
>    address=18446743523979624448, entry=0xffffff80625db170)
>    at /usr/src/sys/vm/vm_map.c:1021
> #8  0xffffffff80588aa3 in vm_map_delete (map=0xffffff00010000e8,
>    start=18446743523979624448, end=18446743523979689984)
>    at /usr/src/sys/vm/vm_map.c:2685
> #9  0xffffffff80588e61 in vm_map_remove (map=0xffffff00010000e8,
>    start=18446743523979624448, end=18446743523979689984)
>    at /usr/src/sys/vm/vm_map.c:2774
> #10 0xffffffff8057db85 in uma_large_free (slab=0xffffff005fcc7000)
>    at /usr/src/sys/vm/uma_core.c:3021
> #11 0xffffffff80325987 in free (addr=0xffffff80018b0000,
>    mtp=0xffffffff80ac61e0) at /usr/src/sys/kern/kern_malloc.c:471
> #12 0xffffffff80a36d03 in vdev_cache_evict (vc=0xffffff0001723ce0,
>    ve=0xffffff003dd52200)
>    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_cache.c:151
> #13 0xffffffff80a372ad in vdev_cache_read (zio=0xffffff005f5ca2d0)
>    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_cache.c:182

Bad RAM/motherboard? My first thought when I read your first mail (re:
identical hardware) was bad hardware, and this seems to point towards
that too, no?

Have you tried memtest86+?
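
(For what it's worth, here is a rough sketch of what I'd try next; the
exact paths are assumptions based on your EISENBOOT kernel config and a
stock 8.x crash-dump setup, so adjust as needed. With dumpdev set in
/etc/rc.conf, savecore(8) should leave a vmcore under /var/crash after
the next panic, and kgdb can then pull a full trace out of it:

  # grep dumpdev /etc/rc.conf
  dumpdev="AUTO"
  # kgdb /usr/obj/usr/src/sys/EISENBOOT/kernel.debug /var/crash/vmcore.0
  (kgdb) bt full

For the RAM side, memtest86+ is meant to be booted on its own, from a CD
or USB stick, rather than run under FreeBSD; letting it do a few full
passes overnight usually settles the bad-RAM question one way or the
other.)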

Regards,
Thomas


