From: "Steven Hartland"
To: "Fabian Keil", freebsd-current@freebsd.org
Cc: Alexander Motin
Subject: Re: ZFS-related panic: "possible" spa->spa_errlog_lock deadlock
Date: Sun, 7 Sep 2014 17:26:45 +0100
List-Id: Discussions about the use of FreeBSD-current
----- Original Message -----
From: "Xin Li"
To: "Fabian Keil"; freebsd-current@freebsd.org
Cc: "Alexander Motin"
Sent: Sunday, September 07, 2014 4:56 PM
Subject: Re: ZFS-related panic: "possible" spa->spa_errlog_lock deadlock

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA512
>
> On 9/7/14 11:23 PM, Fabian Keil wrote:
>> Xin Li wrote:
>>
>>> On 9/7/14 9:02 PM, Fabian Keil wrote:
>>>> Using a kernel built from FreeBSD 11.0-CURRENT r271182 I got
>>>> the following panic yesterday:
>>>>
>>>> [...] Unread portion of the kernel message buffer: [6880]
>>>> panic: deadlkres: possible deadlock detected for
>>>> 0xfffff80015289490, blocked for 1800503 ticks
>>>
>>> Any chance to get all backtraces (e.g. thread apply all bt full
>>> 16)? I think a different thread that held the lock has been
>>> blocked, probably related to your disconnected vdev.
>>
>> Output of "thread apply all bt full 16" is available at:
>> http://www.fabiankeil.de/tmp/freebsd/kgdb-output-spa_errlog_lock-deadlock.txt
>>
>> A lot of the backtraces prematurely end with "Cannot access memory
>> at address", therefore I also added "thread apply all bt" output.
>>
>> Apparently there are at least two additional threads blocking below
>> spa_get_stats():
>>
>> Thread 1182 (Thread 101989):
>> #0  sched_switch (td=0xfffff800628cc490, newtd=<value optimized out>,
>>     flags=<value optimized out>) at /usr/src/sys/kern/sched_ule.c:1932
>> #1  0xffffffff805a23c1 in mi_switch (flags=260, newtd=0x0)
>>     at /usr/src/sys/kern/kern_synch.c:493
>> #2  0xffffffff805e4bca in sleepq_wait (wchan=0x0, pri=0)
>>     at /usr/src/sys/kern/subr_sleepqueue.c:631
>> #3  0xffffffff80539f10 in _cv_wait (cvp=0xfffff80025534a50,
>>     lock=0xfffff80025534a30) at /usr/src/sys/kern/kern_condvar.c:139
>> #4  0xffffffff811721db in zio_wait (zio=<value optimized out>)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1442
>> #5  0xffffffff81102111 in dbuf_read (db=<value optimized out>,
>>     zio=<value optimized out>, flags=<value optimized out>)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:649
>> #6  0xffffffff81108e6d in dmu_buf_hold (os=<value optimized out>,
>>     object=<value optimized out>, offset=<value optimized out>, tag=0x0,
>>     dbp=0xfffffe00955c6648, flags=<value optimized out>)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:172
>> #7  0xffffffff81163986 in zap_lockdir (os=0xfffff8002b7ab000, obj=92,
>>     tx=0x0, lti=RW_READER, fatreader=1, adding=0, zapp=<value optimized out>)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zap_micro.c:467
>> #8  0xffffffff811644ad in zap_count (os=0x0, zapobj=0,
>>     count=0xfffffe00955c66d8)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zap_micro.c:712
>> #9  0xffffffff8114a6dc in spa_get_errlog_size (spa=0xfffff800062ed000)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa_errlog.c:149
>> #10 0xffffffff8113f549 in spa_get_stats (name=0xfffffe0044cac000 "spaceloop",
>>     config=0xfffffe00955c68e8, altroot=0xfffffe0044cac430 "", buflen=2048)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:3287
>> #11 0xffffffff81189a45 in zfs_ioc_pool_stats (zc=0xfffffe0044cac000)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:1656
>> #12 0xffffffff81187290 in zfsdev_ioctl (dev=<value optimized out>,
>>     zcmd=<value optimized out>, arg=<value optimized out>,
>>     flag=<value optimized out>, td=<value optimized out>)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_ioctl.c:6136
>> #13 0xffffffff80464a55 in devfs_ioctl_f (fp=0xfffff80038bd00a0,
>>     com=3222821381, data=0xfffff800067b80a0, cred=<value optimized out>,
>>     td=0xfffff800628cc490) at /usr/src/sys/fs/devfs/devfs_vnops.c:757
>> #14 0xffffffff805f3c3d in kern_ioctl (td=0xfffff800628cc490,
>>     fd=<value optimized out>, com=0) at file.h:311
>> #15 0xffffffff805f381c in sys_ioctl (td=0xfffff800628cc490,
>>     uap=0xfffffe00955c6b80) at /usr/src/sys/kern/sys_generic.c:702
>> #16 0xffffffff8085c2db in amd64_syscall (td=0xfffff800628cc490, traced=0)
>>     at subr_syscall.c:133
>> #17 0xffffffff8083f90b in Xfast_syscall ()
>>     at /usr/src/sys/amd64/amd64/exception.S:390
>> #18 0x00000008019fc3da in ?? ()
>> Previous frame inner to this frame (corrupt stack?)
>
> Yes, thread 1182 owned the lock and is waiting for the zio to be done.
> Other threads that want the lock would have to wait.
>
> I don't have much clue why the system entered this state, however, as
> the operations should have errored out (the GELI device was gone at
> 21:44:56 based on your log, which suggests all references were closed)
> instead of waiting.
>
> Adding mav@ as he may have some idea.

We've seen a disk drop invalidate a pool before, which should fail all
reads/writes, but processes have instead just wedged in the kernel.

From experience I'd say it happens ~5% of the time, so it's quite hard
to catch. Unfortunately I never managed to get a dump of it.

Regards
Steve