From: Henri Hennebert <hlh@restart.be>
Subject: Re: Freebsd 11.0 RELEASE - ZFS deadlock
To: Andriy Gapon, freebsd-stable@FreeBSD.org
Cc: Konstantin Belousov
Message-ID: <1cb09c54-5f0e-2259-a41a-fefe76b4fe8b@restart.be>
In-Reply-To: <67f2e8bd-bff0-f808-7557-7dabe5cad78c@FreeBSD.org>
Date: Thu, 10 Nov 2016 20:41:55 +0100
List-Id: freebsd-stable (production branch of FreeBSD source code)

On 11/10/2016 19:40, Andriy Gapon wrote:
> On 10/11/2016 19:55, Henri Hennebert wrote:
>>
>> On 11/10/2016 18:33, Andriy Gapon wrote:
>>> On 10/11/2016 18:12, Henri Hennebert wrote:
>>>> On 11/10/2016 16:54, Andriy Gapon wrote:
>>>>> On 10/11/2016 17:20, Henri Hennebert wrote:
>>>>>> On 11/10/2016 15:00, Andriy Gapon wrote:
>>>>>>> Interesting. I cannot spot any suspicious thread that would hold the vnode
>>>>>>> lock. Could you please run kgdb (just like that, no arguments), then execute
>>>>>>> the 'bt' command and then select a frame where _vn_lock is called with the
>>>>>>> 'fr N' command. Then please 'print *vp' and share the result.
>>>>>>>
>>>>>> I think I missed something in your request:
>>>>>
>>>>> Oh, sorry! The very first step should be 'tid 101112' to switch to the correct
>>>>> context.
>>>>>
>>>> (kgdb) fr 7
>>>> #7  0xffffffff8063c5b3 in _vn_lock (vp=<value optimized out>, flags=2121728,
>>>
>>> "value optimized out" - not good
>>>
>>>>     file=<value optimized out>, line=<value optimized out>) at vnode_if.h:859
>>>> 859     vnode_if.h: No such file or directory.
>>>>         in vnode_if.h
>>>> (kgdb) print *vp
>>>
>>> I am not sure if this output is valid, because of the message above.
>>> Could you please try to navigate to nearby frames and see if vp itself has a
>>> valid value there. If you can find such a frame please do 'print *vp' there.
>>>
>> Does this seem better?
>
> Yes!
>
>> (kgdb) fr 8
>> #8  0xffffffff8062a5f7 in vget (vp=0xfffff80049c2c000, flags=2121728,
>>     td=0xfffff80009ba0500) at /usr/src/sys/kern/vfs_subr.c:2523
>> 2523            if ((error = vn_lock(vp, flags)) != 0) {
>> (kgdb) print *vp
>> $1 = {v_tag = 0xffffffff813be535 "zfs", v_op = 0xffffffff813d0f70,
>>   v_data = 0xfffff80049c1f420, v_mount = 0xfffff800093aa660,
>>   v_nmntvnodes = {tqe_next = 0xfffff80049c2c938, tqe_prev = 0xfffff80049c2bb30},
>>   v_un = {vu_mount = 0x0, vu_socket = 0x0, vu_cdev = 0x0, vu_fifoinfo = 0x0},
>>   v_hashlist = {le_next = 0x0, le_prev = 0x0}, v_cache_src = {lh_first = 0x0},
>>   v_cache_dst = {tqh_first = 0xfffff800bfc8e3f0, tqh_last = 0xfffff800bfc8e410},
>>   v_cache_dd = 0x0,
>>   v_lock = {lock_object = {lo_name = 0xffffffff813be535 "zfs",
>>       lo_flags = 117112832, lo_data = 0, lo_witness = 0x0}, lk_lock = 23,
>>     lk_exslpfail = 0, lk_timo = 51, lk_pri = 96},
>>   v_interlock = {lock_object = {lo_name = 0xffffffff8099e9e0 "vnode interlock",
>>       lo_flags = 16973824, lo_data = 0, lo_witness = 0x0}, mtx_lock = 4},
>>   v_vnlock = 0xfffff80049c2c068,
>>   v_actfreelist = {tqe_next = 0xfffff80049c2c938, tqe_prev = 0xfffff80049ae9bd0},
>>   v_bufobj = {bo_lock = {lock_object = {lo_name = 0xffffffff8099e9f0 "bufobj interlock",
>>         lo_flags = 86179840, lo_data = 0, lo_witness = 0x0}, rw_lock = 1},
>>     bo_ops = 0xffffffff80c4bf70, bo_object = 0xfffff800b62e9c60,
>>     bo_synclist = {le_next = 0x0, le_prev = 0x0},
>>     bo_private = 0xfffff80049c2c000, __bo_vnode = 0xfffff80049c2c000,
>>     bo_clean = {bv_hd = {tqh_first = 0x0, tqh_last = 0xfffff80049c2c120},
>>       bv_root = {pt_root = 0}, bv_cnt = 0},
>>     bo_dirty = {bv_hd = {tqh_first = 0x0, tqh_last = 0xfffff80049c2c140},
>>       bv_root = {pt_root = 0}, bv_cnt = 0},
>>     bo_numoutput = 0, bo_flag = 0, bo_bsize = 131072},
>>   v_pollinfo = 0x0, v_label = 0x0, v_lockf = 0x0,
>>   v_rl = {rl_waiters = {tqh_first = 0x0, tqh_last = 0xfffff80049c2c188},
>>     rl_currdep = 0x0}, v_cstart = 0, v_lasta = 0, v_lastw = 0, v_clen = 0,
>>   v_holdcnt = 9, v_usecount = 6, v_iflag = 512, v_vflag = 32,
>>   v_writecount = 0, v_hash = 4833984, v_type = VREG}
>> (kgdb)
>
> flags=2121728 = 0x206000 = LK_SHARED | LK_VNHELD | LK_NODDLKTREAT
> lk_lock = 23 = 0x17 = LK_ONE_SHARER | LK_EXCLUSIVE_WAITERS | LK_SHARED_WAITERS |
> LK_SHARE
>
> So, here's what we have here: this thread tries to get a shared lock on the
> vnode, the vnode is already locked in shared mode, but there is an exclusive
> waiter (or, perhaps, multiple waiters). So, this thread cannot get the lock
> because of the exclusive waiter. And I do not see an easy way to identify that
> waiter.
>
> In the procstat output that you provided earlier there was no other thread in
> vn_lock. Hmm, I see this:
> procstat: sysctl: kern.proc.kstack: 14789: Device busy
> procstat: sysctl: kern.proc.kstack: 82034: Device busy
>
> Could you please check what those two processes are (if they are still running)?
> Perhaps try procstat for each of the pids several times.
>

These two processes are the two instances of the innd daemon (news server),
which seems in accordance with the directory /usr/local/news/bin.
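The bit decoding above is plain integer arithmetic and can be sanity-checked
outside the kernel. A minimal Python sketch follows; the numeric constants are
an assumption taken from FreeBSD 11's sys/lockmgr.h, not quoted in this thread:

```python
# Decode the lockmgr state seen in the kgdb dump.
# Constant values are assumed from FreeBSD 11's sys/lockmgr.h.
LK_SHARED      = 0x200000   # vn_lock() request: shared lock
LK_VNHELD      = 0x004000   # assumption: caller holds the vnode
LK_NODDLKTREAT = 0x002000   # assumption: no deadlock-avoidance treatment

LK_SHARE             = 0x01  # lk_lock bit: lock held in shared mode
LK_SHARED_WAITERS    = 0x02  # lk_lock bit: shared waiters queued
LK_EXCLUSIVE_WAITERS = 0x04  # lk_lock bit: exclusive waiters queued
LK_SHARERS_SHIFT     = 4     # sharer count lives above the flag bits

flags = 2121728              # as printed by kgdb
assert flags == LK_SHARED | LK_VNHELD | LK_NODDLKTREAT  # 0x206000

lk_lock = 23                 # 0x17 from the vnode's v_lock
held_shared = bool(lk_lock & LK_SHARE)
excl_waiter = bool(lk_lock & LK_EXCLUSIVE_WAITERS)
sharers = lk_lock >> LK_SHARERS_SHIFT

# A shared request blocks when an exclusive waiter is already queued.
print(hex(flags), held_shared, excl_waiter, sharers)
# prints: 0x206000 True True 1
```

This reproduces the diagnosis: one sharer holds the lock, both waiter bits are
set, so the new shared request must queue behind the exclusive waiter.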
[root@avoriaz ~]# procstat 14789
  PID  PPID  PGID   SID  TSID THR LOGIN    WCHAN     EMUL          COMM
14789 29165 29165 29165     0   1 root     zfs       FreeBSD ELF64 innd
[root@avoriaz ~]# procstat 82034
  PID  PPID  PGID   SID  TSID THR LOGIN    WCHAN     EMUL          COMM
82034 29165 29165 29165     0   1 root     zfs       FreeBSD ELF64 innd
[root@avoriaz ~]# procstat -f 14789
procstat: kinfo_getfile(): Device busy
  PID COMM                FD T V FLAGS    REF  OFFSET PRO NAME
[root@avoriaz ~]# procstat -f 14789
procstat: kinfo_getfile(): Device busy
  PID COMM                FD T V FLAGS    REF  OFFSET PRO NAME
[root@avoriaz ~]# procstat -f 14789
procstat: kinfo_getfile(): Device busy
  PID COMM                FD T V FLAGS    REF  OFFSET PRO NAME
[root@avoriaz ~]# procstat -f 14789
procstat: kinfo_getfile(): Device busy
  PID COMM                FD T V FLAGS    REF  OFFSET PRO NAME
[root@avoriaz ~]# procstat -kk 14789
  PID    TID COMM             TDNAME           KSTACK
procstat: sysctl: kern.proc.kstack: 14789: Device busy
[root@avoriaz ~]# procstat -kk 14789
  PID    TID COMM             TDNAME           KSTACK
procstat: sysctl: kern.proc.kstack: 14789: Device busy
[root@avoriaz ~]# procstat -kk 14789
  PID    TID COMM             TDNAME           KSTACK
procstat: sysctl: kern.proc.kstack: 14789: Device busy
[root@avoriaz ~]# procstat -kk 14789
  PID    TID COMM             TDNAME           KSTACK
procstat: sysctl: kern.proc.kstack: 14789: Device busy
[root@avoriaz ~]#

Same result for 82034.

By the way, the news server seems OK:

[root@avoriaz ~]# ctlinnd mode
Server running
Allowing remote connections
Parameters c 14 i 50 (0) l 5000000 o 1010 t 300 H 2 T 60 X 0 normal specified
Not reserved
Readers follow enabled
Perl filtering enabled
[root@avoriaz ~]#

I can access the news server with thunderbird.

I can restart the news server if you ask.

Henri