Date:      Fri, 26 Feb 2016 22:17:36 +0000
From:      bugzilla-noreply@freebsd.org
To:        freebsd-amd64@FreeBSD.org
Subject:   [Bug 207446] Hang bringing up vtnet(4) on >8 cpu GCE VMs
Message-ID:  <bug-207446-6-ZU4gYW9xGE@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-207446-6@https.bugs.freebsd.org/bugzilla/>
References:  <bug-207446-6@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=207446

--- Comment #6 from Andy Carrel <wac@google.com> ---
After further investigation it looks like the driver is accidentally
using the driver's vtnet_max_vq_pairs*2 + 1 for the control virtqueue
instead of the device's max_virtqueue_pairs*2 + 1.
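
To illustrate the distinction, here is a minimal sketch (hypothetical
names, not the actual if_vtnet.c code): the slot for the control
virtqueue has to be derived from the pair count the device advertises,
not from the (possibly smaller) number of pairs the driver decides to
drive.

    #include <stdint.h>

    /*
     * Sketch only; names are hypothetical.  The device advertises
     * max_virtqueue_pairs in its config space.  Even if the driver
     * activates fewer pairs, the control virtqueue sits after the
     * device's full set of RX/TX queues per the virtio spec.
     */
    static int
    ctrl_vq_slot(uint16_t device_max_vq_pairs)
    {
            /* 2 queues (RX + TX) per advertised pair, control vq after them */
            return (device_max_vq_pairs * 2 + 1);
    }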

I'm about to attach a patch to current which propagates the device's
max_virtqueue_pairs number (as "vt_device_max_vq_pairs") in order to
make sure the control virtqueue winds up in the correct place per the
virtio spec. The patch also exposes this as a read-only sysctl,
dev.vtnet.X.device_max_vq_pairs.
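
For context, a rough sketch of how such a read-only sysctl can be
attached under the device's sysctl tree (assuming "dev" is the
device_t, "sc" the softc, and "vt_device_max_vq_pairs" an int field
holding the device-advertised maximum; the actual patch may differ):

    struct sysctl_ctx_list *ctx = device_get_sysctl_ctx(dev);
    struct sysctl_oid_list *child =
        SYSCTL_CHILDREN(device_get_sysctl_tree(dev));

    /* Read-only view of the device-advertised queue pair limit. */
    SYSCTL_ADD_INT(ctx, child, OID_AUTO, "device_max_vq_pairs",
        CTLFLAG_RD, &sc->vt_device_max_vq_pairs, 0,
        "Maximum virtqueue pairs advertised by the device");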

e.g. # sysctl -a | grep vq_pair
dev.vtnet.0.act_vq_pairs: 3
dev.vtnet.0.max_vq_pairs: 3
dev.vtnet.0.device_max_vq_pairs: 16

I've tested the patch successfully with a VM that supports 16
max_virtqueue_pairs with vtnet_max_vq_pairs at the default of 8, as
well as with hw.vtnet.mq_max_pairs=3, and with hw.vtnet.mq_disable=1.
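
For anyone reproducing those configurations: both knobs are loader
tunables, so they can be set in /boot/loader.conf before boot, e.g.

    hw.vtnet.mq_max_pairs="3"
    hw.vtnet.mq_disable="1"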

It'd be nice to include the original patch that raises
VTNET_MAX_QUEUE_PAIRS as well though, since that should have some
performance advantages on VMs with many CPUs.

-- 
You are receiving this mail because:
You are on the CC list for the bug.


