From: bugzilla-noreply@freebsd.org
To: freebsd-amd64@FreeBSD.org
Subject: [Bug 207446] Hang bringing up vtnet(4) on >8 cpu GCE VMs
Date: Tue, 23 Feb 2016 23:07:49 +0000
X-Bugzilla-Who: jonolson@google.com
X-Bugzilla-Status: New
X-Bugzilla-Assigned-To: bryanv@FreeBSD.org

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=207446

Jon Olson changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jonolson@google.com

--- Comment #4 from Jon Olson ---
As far as I know we're in compliance with the VIRTIO spec here. We do
advertise max queues == number of vCPUs, but there is no requirement that a
guest configure/use all of them to take advantage of multiqueue support.

From the (draft) spec change introducing multi-queue (edited for readability
from
https://github.com/rustyrussell/virtio-spec/commit/67023431c8796bc430ec0a79b15bab57e2e0f1f6):

"""
Only receiveq0, transmitq0 and controlq are used by default. To use more
queues the driver must negotiate the VIRTIO_NET_F_MQ feature; initialize up
to `max_virtqueue_pairs` of each of the transmit and receive queues; execute
the VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command specifying the number of
transmit and receive queues that are going to be used; and wait until the
device consumes the controlq buffer and acks this command.
"""

This is what we do -- by default we service only rx0, tx0, and cq. Upon
receiving VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET we begin servicing the number of
queues requested, up to the value of `max_virtqueue_pairs`. Larger values
(and zero) should be NAK'd by the virtual device (and no change in the
number of queues serviced will occur).

If a guest kernel does not wish to use more than, e.g., 8 tx/rx virtqueue
pairs, it need not configure more than the first eight and the control queue
(always at index 2*max_queue_pairs + 1, per the spec).

As I said, I think we're in compliance with the spec here, but certainly if
not I'll treat it as a bug.

-- 
You are receiving this mail because:
You are on the CC list for the bug.