Date:      Thu, 14 Mar 2019 16:53:29 +0000
From:      bugzilla-noreply@freebsd.org
To:        virtualization@FreeBSD.org
Subject:   [Bug 231117] I/O lockups inside bhyve vms
Message-ID:  <bug-231117-27103-LcgrmbQDn3@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-231117-27103@https.bugs.freebsd.org/bugzilla/>
References:  <bug-231117-27103@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=231117

roel@qsp.nl changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |roel@qsp.nl

--- Comment #18 from roel@qsp.nl ---
Just had this occur again on a VM running under bhyve with 12.0-STABLE, checked
out and compiled 6 days ago (r344917). The VM host is running the exact same
kernel. The modifications in zfs_znode.c are present, but we still had an issue
after the system had been running for a couple of days.

The VM has arc_max set above 4 GB (so the workaround described by Kristian
doesn't work for us):

vfs.zfs.arc_min: 903779840
vfs.zfs.arc_max: 7230238720
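
(For context: as I understand it, the workaround mentioned above amounts to
capping the guest's ZFS ARC. A minimal sketch of that, assuming a hypothetical
4 GB cap set as a boot-time tunable in the guest's /boot/loader.conf:

# Hypothetical guest setting, not what we run -- caps the ARC at 4 GB
vfs.zfs.arc_max="4294967296"

The exact threshold and whether it needs to be set at boot rather than via
sysctl at runtime are assumptions on my part.)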

Hypervisor:

vfs.zfs.arc_min: 8216929792
vfs.zfs.arc_max: 65735438336

Procstat -kk on the bhyve process:

root@cloud02:/home/roel # procstat -kk 18178
  PID    TID COMM                TDNAME              KSTACK
18178 101261 bhyve               mevent              mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a kqueue_kevent+0x297
kern_kevent+0xb5 kern_kevent_generic+0x70 sys_kevent+0x61 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101731 bhyve               vtnet-2:0 tx        mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101732 bhyve               blk-3:0:0-0         mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101733 bhyve               blk-3:0:0-1         mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101734 bhyve               blk-3:0:0-2         mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101735 bhyve               blk-3:0:0-3         mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101736 bhyve               blk-3:0:0-4         mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101737 bhyve               blk-3:0:0-5         mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101738 bhyve               blk-3:0:0-6         mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101739 bhyve               blk-3:0:0-7         mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101740 bhyve               vcpu 0              <running>
18178 101741 bhyve               vcpu 1              mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101742 bhyve               vcpu 2              mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101743 bhyve               vcpu 3              mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101744 bhyve               vcpu 4              mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101745 bhyve               vcpu 5              mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
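
(In case it helps anyone else reproducing this: the same kernel-stack snapshot
can be taken for every bhyve process in one go, e.g.

procstat -kk $(pgrep bhyve)

pgrep and procstat are both in base; the one-liner is just a convenience and
assumes at least one bhyve process is running.)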

-- 
You are receiving this mail because:
You are the assignee for the bug.


