From: bugzilla-noreply@freebsd.org
To: virtualization@FreeBSD.org
Subject: [Bug 231117] I/O lockups inside bhyve vms
Date: Thu, 14 Mar 2019 16:53:29 +0000
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=231117

roel@qsp.nl changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |roel@qsp.nl

--- Comment #18 from roel@qsp.nl ---
Just had this occur again on a VM running under bhyve with 12.0-STABLE,
checked out and compiled 6 days ago (r344917). The VM host is running the
exact same kernel. The modifications in zfs_znode.c are present, but we
still had an issue after the system had been running for a couple of days.

The VM has arc_max set above 4 GB, so the workaround described by Kristian
doesn't work for us:

vfs.zfs.arc_min: 903779840
vfs.zfs.arc_max: 7230238720

Hypervisor:

vfs.zfs.arc_min: 8216929792
vfs.zfs.arc_max: 65735438336
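(For context: the workaround referenced above appears to amount to capping
the guest's ARC below 4 GB. A minimal sketch of how such a cap could be set,
assuming the stock ZFS tunables; the 3 GB figure is an illustrative value,
not taken from this report:

    # /boot/loader.conf inside the guest -- cap the ARC below 4 GB
    # (3221225472 bytes = 3 GB; illustrative value only)
    vfs.zfs.arc_max="3221225472"

    # on FreeBSD 12 the same knob should also be writable at runtime:
    # sysctl vfs.zfs.arc_max=3221225472

Our arc_max above is roughly 6.7 GB, which is why this doesn't help in our
case.)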
Procstat -kk on the bhyve process:

root@cloud02:/home/roel # procstat -kk 18178
  PID    TID COMM     TDNAME        KSTACK
18178 101261 bhyve    mevent        mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a kqueue_kevent+0x297 kern_kevent+0xb5 kern_kevent_generic+0x70 sys_kevent+0x61 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101731 bhyve    vtnet-2:0 tx  mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133 do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101732 bhyve    blk-3:0:0-0   mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133 do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101733 bhyve    blk-3:0:0-1   mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133 do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101734 bhyve    blk-3:0:0-2   mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133 do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101735 bhyve    blk-3:0:0-3   mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133 do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101736 bhyve    blk-3:0:0-4   mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133 do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101737 bhyve    blk-3:0:0-5   mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133 do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101738 bhyve    blk-3:0:0-6   mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133 do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101739 bhyve    blk-3:0:0-7   mi_switch+0xe2 sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133 do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d fast_syscall_common+0x101
18178 101740 bhyve    vcpu 0
18178 101741 bhyve    vcpu 1        mi_switch+0xe2 sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101742 bhyve    vcpu 2        mi_switch+0xe2 sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101743 bhyve    vcpu 3        mi_switch+0xe2 sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101744 bhyve    vcpu 4        mi_switch+0xe2 sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101745 bhyve    vcpu 5        mi_switch+0xe2 sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
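(To confirm the threads are genuinely wedged rather than briefly sleeping,
the process can be sampled a few times and the stacks compared; a small sh
sketch, using the PID from above:

    # sample the stuck bhyve process three times, 10 s apart; identical
    # umtx/vm_run stacks across all samples indicate a real lockup
    for i in 1 2 3; do
        procstat -kk 18178
        sleep 10
    done

In our case the stacks did not change between samples.)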