Date:      Fri, 18 Nov 2016 14:30:39 +0200
From:      Andriy Gapon <avg@FreeBSD.org>
To:        Henri Hennebert <hlh@restart.be>, Konstantin Belousov <kib@FreeBSD.org>
Cc:        freebsd-stable@FreeBSD.org
Subject:   Re: Freebsd 11.0 RELEASE - ZFS deadlock
Message-ID:  <7c932021-ff99-9ef9-7042-4f267fb0b955@FreeBSD.org>
In-Reply-To: <80f65c86-1015-c409-1bf6-c01a5fe569c8@restart.be>
References:  <0c223160-b76f-c635-bb15-4a068ba7efe7@restart.be> <9d1f9a76-5a8d-6eca-9a50-907d55099847@FreeBSD.org> <6bc95dce-31e1-3013-bfe3-7c2dd80f9d1e@restart.be> <e4878992-a362-3f12-e743-8efa1347cabf@FreeBSD.org> <23a66749-f138-1f1a-afae-c775f906ff37@restart.be> <8e7547ef-87f7-7fab-6f45-221e8cea1989@FreeBSD.org> <6d991cea-b420-531e-12cc-001e4aeed66b@restart.be> <67f2e8bd-bff0-f808-7557-7dabe5cad78c@FreeBSD.org> <1cb09c54-5f0e-2259-a41a-fefe76b4fe8b@restart.be> <d25c8035-b710-5de9-ebe3-7990b2d0e3b1@FreeBSD.org> <9f20020b-e2f1-862b-c3fc-dc6ff94e301e@restart.be> <c1b7aa94-1f1d-7edd-8764-adb72fdc053c@FreeBSD.org> <599c5a5b-aa08-2030-34f3-23ff19d09a9b@restart.be> <32686283-948a-6faf-7ded-ed8fcd23affb@FreeBSD.org> <cf0fc1e3-b621-074e-1351-4dd89d980ddd@restart.be> <af4e0c2b-00f8-bbaa-bcb7-d97062a393b8@FreeBSD.org> <26512d69-94c2-92da-e3ea-50aebf17e3a0@restart.be> <f406ad95-bd3f-710c-5a2c-cc526d1a9812@FreeBSD.org> <80f65c86-1015-c409-1bf6-c01a5fe569c8@restart.be>

On 14/11/2016 14:00, Henri Hennebert wrote:
> On 11/14/2016 12:45, Andriy Gapon wrote:
>> Okay.  Luckily for us, it seems that 'm' is available in frame 5.  It also
>> happens to be the first field of 'struct faultstate'.  So, could you please go
>> to frame and print '*m' and '*(struct faultstate *)m' ?
>>
> (kgdb) fr 4
> #4  0xffffffff8089d1c1 in vm_page_busy_sleep (m=0xfffff800df68cd40, wmesg=<value
> optimized out>) at /usr/src/sys/vm/vm_page.c:753
> 753        msleep(m, vm_page_lockptr(m), PVM | PDROP, wmesg, 0);
> (kgdb) print *m
> $1 = {plinks = {q = {tqe_next = 0xfffff800dc5d85b0, tqe_prev =
> 0xfffff800debf3bd0}, s = {ss = {sle_next = 0xfffff800dc5d85b0},
>       pv = 0xfffff800debf3bd0}, memguard = {p = 18446735281313646000, v =
> 18446735281353604048}}, listq = {tqe_next = 0x0,
>     tqe_prev = 0xfffff800dc5d85c0}, object = 0xfffff800b62e9c60, pindex = 11,
> phys_addr = 3389358080, md = {pv_list = {
>       tqh_first = 0x0, tqh_last = 0xfffff800df68cd78}, pv_gen = 426, pat_mode =
> 6}, wire_count = 0, busy_lock = 6, hold_count = 0,
>   flags = 0, aflags = 2 '\002', oflags = 0 '\0', queue = 0 '\0', psind = 0 '\0',
> segind = 3 '\003', order = 13 '\r',
>   pool = 0 '\0', act_count = 0 '\0', valid = 0 '\0', dirty = 0 '\0'}

If I interpret this correctly, the page is in the 'exclusive busy' state.
Unfortunately, I can't tell much beyond that.
But I am confident that this is the root cause of the lock-up.

> (kgdb) print *(struct faultstate *)m
> $2 = {m = 0xfffff800dc5d85b0, object = 0xfffff800debf3bd0, pindex = 0, first_m =
> 0xfffff800dc5d85c0,
>   first_object = 0xfffff800b62e9c60, first_pindex = 11, map = 0xca058000, entry
> = 0x0, lookup_still_valid = -546779784,
>   vp = 0x6000001aa}
> (kgdb)

I was wrong on this one: 'm' is actually a pointer, so the above is not
correct.  Maybe 'info reg' in frame 5 would give a clue to the value of 'fs'.

I am not sure how to proceed from here.
The only thing I can think of is a lock order reversal between the vnode lock
and the page busying quasi-lock.  But examining the code I cannot spot it.
Another possibility is a leak of a busy page, but that's hard to debug.

How hard is it to reproduce the problem?

Maybe Konstantin would have some ideas or suggestions.

-- 
Andriy Gapon
