Date:      Thu, 15 Oct 2015 09:27:28 -0500
From:      Karl Denninger <karl@denninger.net>
To:        freebsd-fs@freebsd.org
Subject:   Re: Panic in ZFS during zfs recv (while snapshots being destroyed)
Message-ID:  <561FB7D0.4080107@denninger.net>
In-Reply-To: <55DF76AA.3040103@denninger.net>
References:  <55BB443E.8040801@denninger.net> <55CF7926.1030901@denninger.net> <55DF7191.2080409@denninger.net> <sig.0681f4fd27.ADD991B6-BCF2-4B11-A5D6-EF1DB585AA33@chittenden.org> <55DF76AA.3040103@denninger.net>

Got another one of these this morning, after a long absence...

Same symptom -- it happened during a backup (send/receive) and it's in
the same place -- when the snapshot is unmounted.  I have a clean dump,
and this kernel is built from quite a recent checkout, so the most
recently rolled-forward ZFS changes are in it.
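
For context, the backup script is the usual incremental send/receive
pipeline; schematically it does something like this (pool and snapshot
names here are illustrative, not the actual script):

  zfs snapshot -r zroot@backup-new
  zfs send -R -I @backup-prev zroot@backup-new | zfs recv -Fdu backup/zroot
  zfs destroy -r zroot@backup-prev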


Fatal trap 12: page fault while in kernel mode
cpuid = 14; apic id = 34
fault virtual address   = 0x378
fault code              = supervisor read data, page not present
instruction pointer     = 0x20:0xffffffff80940ae0
stack pointer           = 0x28:0xfffffe066796c680
frame pointer           = 0x28:0xfffffe066796c700
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 81187 (zfs)
trap number             = 12
panic: page fault
cpuid = 14
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe066796c160
kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe066796c210
vpanic() at vpanic+0x126/frame 0xfffffe066796c250
panic() at panic+0x43/frame 0xfffffe066796c2b0
trap_fatal() at trap_fatal+0x36b/frame 0xfffffe066796c310
trap_pfault() at trap_pfault+0x2ed/frame 0xfffffe066796c3b0
trap() at trap+0x47a/frame 0xfffffe066796c5c0
calltrap() at calltrap+0x8/frame 0xfffffe066796c5c0
--- trap 0xc, rip = 0xffffffff80940ae0, rsp = 0xfffffe066796c680, rbp = 0xfffffe066796c700 ---
__mtx_lock_sleep() at __mtx_lock_sleep+0x1b0/frame 0xfffffe066796c700
dounmount() at dounmount+0x58/frame 0xfffffe066796c780
zfs_unmount_snap() at zfs_unmount_snap+0x114/frame 0xfffffe066796c7a0
dsl_dataset_user_release_impl() at dsl_dataset_user_release_impl+0x103/frame 0xfffffe066796c920
dsl_dataset_user_release_onexit() at dsl_dataset_user_release_onexit+0x5c/frame 0xfffffe066796c940
zfs_onexit_destroy() at zfs_onexit_destroy+0x56/frame 0xfffffe066796c970
zfsdev_close() at zfsdev_close+0x52/frame 0xfffffe066796c990
devfs_fpdrop() at devfs_fpdrop+0xa9/frame 0xfffffe066796c9b0
devfs_close_f() at devfs_close_f+0x45/frame 0xfffffe066796c9e0
_fdrop() at _fdrop+0x29/frame 0xfffffe066796ca00
closef() at closef+0x21e/frame 0xfffffe066796ca90
closefp() at closefp+0x98/frame 0xfffffe066796cad0
amd64_syscall() at amd64_syscall+0x35d/frame 0xfffffe066796cbf0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe066796cbf0
--- syscall (6, FreeBSD ELF64, sys_close), rip = 0x801a01f5a, rsp = 0x7fffffffd728, rbp = 0x7fffffffd740 ---
Uptime: 3d15h37m26s
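
The dump reads fine with kgdb if anyone wants to poke at it; something
like this (core number assumed) reproduces the backtrace above:

  kgdb /boot/kernel/kernel /var/crash/vmcore.0
  (kgdb) bt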


On 8/27/2015 15:44, Karl Denninger wrote:
> No, but that does sound like it might be involved.....
>
> And yeah, this did start when I moved the root pool to a mirrored pair
> of Intel 530s off a pair of spinning-rust WD RE4s.... (The 530s are
> darn nice performance-wise, reasonably inexpensive and thus very
> suitable for a root filesystem drive and they also pass the "pull the
> power cord" test, incidentally.)
>
> You may be onto something -- I'll try shutting it off, but because I
> can't make this happen on demand and it's an "every week or two"
> panic -- though ALWAYS when the zfs send | zfs recv is running, AND
> always on the same filesystem -- it will be a fair while before I
> know if it's fixed (like over a month, given the usual pattern here,
> as that would be 4 "average" periods without a panic).....
>
> I also wonder if I could tune this out with some of the other TRIM
> parameters instead of losing it entirely.
>
> vfs.zfs.trim.max_interval: 1
> vfs.zfs.trim.timeout: 30
> vfs.zfs.trim.txg_delay: 32
> vfs.zfs.trim.enabled: 1
> vfs.zfs.vdev.trim_max_pending: 10000
> vfs.zfs.vdev.trim_max_active: 64
> vfs.zfs.vdev.trim_min_active: 1
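
Side note on the above: if TRIM does have to go off entirely, my
understanding is that vfs.zfs.trim.enabled is a boot-time tunable
rather than a sysctl that can be changed on the fly, so turning it off
means a line in /boot/loader.conf and a reboot:

  vfs.zfs.trim.enabled="0"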
>
> That it's panic'ing on an mtx_lock_sleep might point this way.... the
> trace shows it coming from a zfs_onexit_destroy, which ends up calling
> zfs_unmount_snap() and then it blows in dounmount() while executing
> mtx_lock_sleep().
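
If it happens again I may leave a DTrace probe on that path running
during the backup window, something like this one-liner (assuming fbt
can attach there), to confirm the call path on the live system:

  dtrace -n 'fbt::zfs_unmount_snap:entry { stack(); }'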
>
> I do wonder if I'm begging for new and innovative performance issues
> if I run with TRIM off for an extended period of time, however..... :-)
>
>
> On 8/27/2015 15:30, Sean Chittenden wrote:
>> Have you tried disabling TRIM?  We recently ran into an issue where
>> a `zfs delete` on a large dataset caused the host to panic because
>> TRIM was tripping over the ZFS deadman timer.  Disabling TRIM worked
>> as a valid workaround for us.  You mentioned a recent move to SSDs,
>> so this can happen, esp. after the drive has experienced a little
>> bit of actual work.  -sc
>>
>>
>> --
>> Sean Chittenden
>> sean@chittenden.org
>>
>>
>>> On Aug 27, 2015, at 13:22, Karl Denninger <karl@denninger.net> wrote:
>>>
>>> On 8/15/2015 12:38, Karl Denninger wrote:
>>>> Update:
>>>>
>>>> This /appears/ to be related to attempting to send or receive a
>>>> /cloned/ snapshot.
>>>>
>>>> I use /beadm/ to manage boot environments and the crashes have all
>>>> come while send/recv-ing the root pool, which is the one where
>>>> these clones get created.  It is /not/ consistent within a given
>>>> snapshot when it crashes and a second attempt (which does a
>>>> "recovery" send/receive) succeeds every time -- I've yet to have
>>>> it panic twice sequentially.
>>>>
>>>> I surmise that the problem comes about when a file in the cloned
>>>> snapshot is modified, but this is a guess at this point.
>>>>
>>>> I'm going to try to force replication of the problem on my test
>>>> system.
>>>>
>>>> On 7/31/2015 04:47, Karl Denninger wrote:
>>>>> I have an automated script that runs zfs send/recv copies to bring
>>>>> a backup data set into congruence with the running copies nightly.
>>>>> The source has automated snapshots running on a fairly frequent
>>>>> basis through zfs-auto-snapshot.
>>>>>
>>>>> Recently I have started having a panic show up about once a week
>>>>> during the backup run, but it's inconsistent.  It is in the same
>>>>> place, but I cannot force it to repeat.
>>>>>
>>>>> The trap itself is a page fault in kernel mode in the ZFS code at
>>>>> zfs_unmount_snap(); here's the traceback from the KVM (sorry for
>>>>> the image link, but I don't have a better option right now.)
>>>>>
>>>>> I'll try to get a dump; this is a production machine with
>>>>> encrypted swap, so it's not normally turned on.
>>>>>
>>>>> Note that the pool that appears to be involved (the backup pool)
>>>>> has passed a scrub and thus I would assume the on-disk structure
>>>>> is ok.... but that might be an unfair assumption.  It is always
>>>>> occurring in the same dataset although there are a half-dozen
>>>>> that are sync'd -- if this one (the first one) successfully
>>>>> completes during the run then all the rest will as well (that is,
>>>>> whenever I restart the process it has always failed here.)  The
>>>>> source pool is also clean and passes a scrub.
>>>>>
>>>>> The traceback is at http://www.denninger.net/kvmimage.png;
>>>>> apologies for the image traceback, but this is coming from a
>>>>> remote KVM.
>>>>>
>>>>> I first saw this on 10.1-STABLE and it is still happening on
>>>>> FreeBSD 10.2-PRERELEASE #9 r285890M, which I updated to in an
>>>>> attempt to see if the problem was something that had been
>>>>> addressed.
>>>>>
>>>>>
>>>> -- 
>>>> Karl Denninger
>>>> karl@denninger.net
>>>> /The Market Ticker/
>>>> /[S/MIME encrypted email preferred]/
>>> Second update: I have now taken another panic on 10.2-STABLE, same
>>> deal, but without any cloned snapshots in the source image.  I had
>>> thought that removing cloned snapshots might eliminate the issue;
>>> that is now out the window.
>>>
>>> It ONLY happens on this one filesystem (the root one, incidentally),
>>> which was fairly recently created when I moved this machine from
>>> spinning rust to SSDs for the OS and root pool -- and only when it
>>> is being backed up using zfs send | zfs recv (with the receive going
>>> to a different pool in the same machine.)  I have yet to be able to
>>> provoke it when using zfs send to copy to a different machine on the
>>> same LAN, but given that it cannot be reproduced on demand I can't
>>> be certain whether it's timing related (e.g. performance between the
>>> two pools in question) or I just haven't hit the unlucky
>>> combination.
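
(For completeness, the cross-machine case is the same pipeline pushed
through ssh -- schematically, with "backuphost" standing in for the
real target:

  zfs send -R -I @backup-prev zroot@backup-new | \
      ssh backuphost zfs recv -Fdu backup/zroot

so the only real difference is the network in the middle.)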
>>>
>>> This looks like some sort of race condition, and I will continue to
>>> see if I can craft a case to make it occur "on demand".
>>>
>>> -- 
>>> Karl Denninger
>>> karl@denninger.net
>>> /The Market Ticker/
>>> /[S/MIME encrypted email preferred]/
>>

-- 
Karl Denninger
karl@denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/
