Date:      Sat, 2 Feb 2019 15:29:09 -0500
From:      Patrick Kelsey <pkelsey@freebsd.org>
To:        Yuri Pankov <yuripv@yuripv.net>
Cc:        src-committers@freebsd.org, svn-src-all@freebsd.org,  svn-src-head@freebsd.org
Subject:   Re: svn commit: r343291 - in head/sys: dev/vmware/vmxnet3 net
Message-ID:  <CAD44qMW4p4c61+ntC1hAsUuZzT66Fk8-sQoJjRxnGE-FOCTmKA@mail.gmail.com>
In-Reply-To: <2c0e0179-63e7-1e2f-ba7f-6a373c927d88@yuripv.net>
References:  <201901220111.x0M1BHS1025063@repo.freebsd.org> <2c0e0179-63e7-1e2f-ba7f-6a373c927d88@yuripv.net>

On Sat, Feb 2, 2019 at 9:28 AM Yuri Pankov <yuripv@yuripv.net> wrote:

> Patrick Kelsey wrote:
> > Author: pkelsey
> > Date: Tue Jan 22 01:11:17 2019
> > New Revision: 343291
> > URL: https://svnweb.freebsd.org/changeset/base/343291
> >
> > Log:
> >   Convert vmx(4) to being an iflib driver.
> >
> >   Also, expose IFLIB_MAX_RX_SEGS to iflib drivers and add
> >   iflib_dma_alloc_align() to the iflib API.
> >
> >   Performance is generally better with the tunable/sysctl
> >   dev.vmx.<index>.iflib.tx_abdicate=1.
> >
> >   Reviewed by:        shurd
> >   MFC after:  1 week
> >   Relnotes:   yes
> >   Sponsored by:       RG Nets
> >   Differential Revision:      https://reviews.freebsd.org/D18761
>
> This breaks vmx interfaces for me in ESXi 6.7 (output below).  The
> review mentions setting hw.pci.honor_msi_blacklist="0" and it helps
> indeed -- worth mentioning in UPDATING?
>
> vmx0: <VMware VMXNET3 Ethernet Adapter> port 0x3000-0x300f mem
> 0xfe903000-0xfe903fff,0xfe902000-0xfe902fff,0xfe900000-0xfe901fff at
> device 0.0 on pci3
> vmx0: Using 512 tx descriptors and 256 rx descriptors
> vmx0: msix_init qsets capped at 8
> vmx0: intr CPUs: 20 queue msgs: 24 admincnt: 1
> vmx0: Using 8 rx queues 8 tx queues
> vmx0: attempting to allocate 9 MSI-X vectors (25 supported)
> vmx0: failed to allocate 9 MSI-X vectors, err: 6 - using MSI
> vmx0: attempting to allocate 1 MSI vectors (1 supported)
> msi: routing MSI IRQ 25 to local APIC 6 vector 48
> vmx0: using IRQ 25 for MSI
> vmx0: Using an MSI interrupt
> msi: Assigning MSI IRQ 25 to local APIC 25 vector 48
> msi: Assigning MSI IRQ 25 to local APIC 24 vector 48
> vmx0: bpf attached
> vmx0: Ethernet address: 00:00:00:00:00:33
> vmx0: netmap queues/slots: TX 1/512, RX 1/512
> vmx0: device enable command failed!
> vmx0: link state changed to UP
> vmx0: device enable command failed!
>
>
Setting hw.pci.honor_msi_blacklist="0" should only be necessary if you want
to operate with more than one queue.  If hw.pci.honor_msi_blacklist="0" is
not set, MSI-X will not be available and MSI will be used instead, which
limits the number of queues that can be configured to one.  That
single-queue case should work correctly.
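For anyone who does want multiple queues: hw.pci.honor_msi_blacklist is a
boot-time tunable, so it has to be set from the loader rather than with
sysctl(8) after boot.  An illustrative /boot/loader.conf entry would be:

# allow MSI-X even though the hypervisor is on the PCI MSI blacklist
hw.pci.honor_msi_blacklist="0"

The dev.vmx.<index>.iflib.tx_abdicate knob from the commit log is, per that
log, exposed as both a tunable and a sysctl, so it can also be changed at
runtime, e.g. sysctl dev.vmx.0.iflib.tx_abdicate=1 (0 here is just an
example interface index).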

I am able to reproduce the behavior you described above on ESXi 6.7 using
the latest snapshot release (based on r343598).  The error that appears in
the ESXi logs will be similar to:

2019-02-02T15:14:02.986Z| vcpu-1| I125: VMXNET3 user: failed to activate
'Ethernet0', status: 0xbad0001

which vaguely means 'the device did not like something about the
configuration it was given'.  I will see if I can determine the root
cause.  Given that enabling MSI-X seems to work around the problem, and
based on other issues I encountered during development, I currently suspect
there is a problem with the interrupt index that is being configured for
the transmit queue in the device configuration structure when using MSI.
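To sketch the kind of bug I have in mind (this is only an illustration, not
the actual if_vmx.c or iflib code -- every name below is made up): the
interrupt index written into each queue's shared configuration has to refer
to a vector the device was actually given.  Under MSI-X that can be one
vector per queue, but with a single MSI vector every queue has to use index
0, and a stale per-queue index would be enough to make the enable command
fail.

#include <stdint.h>

/* Hypothetical stand-ins for the real shared-memory and softc structures. */
struct sketch_txq_shared {
        uint8_t intr_idx;       /* interrupt index the device validates on ENABLE */
};

enum sketch_intr_mode { SKETCH_INTR_MSIX, SKETCH_INTR_MSI };

struct sketch_softc {
        enum sketch_intr_mode     intr_mode;
        int                       ntxq;
        struct sketch_txq_shared *txq;  /* one shared struct per TX queue */
};

/*
 * With MSI-X each queue points at its own vector; with a single MSI
 * vector everything must collapse to index 0.  Writing a per-queue
 * index (>= 1) in MSI mode would be rejected by the device and show
 * up as the "device enable command failed!" lines above.
 */
static void
sketch_set_tx_intr_idx(struct sketch_softc *sc)
{
        int q;

        for (q = 0; q < sc->ntxq; q++) {
                if (sc->intr_mode == SKETCH_INTR_MSIX)
                        sc->txq[q].intr_idx = (uint8_t)q;
                else
                        sc->txq[q].intr_idx = 0;
        }
}

If it does turn out to be something along those lines, the fix should be
confined to how those indexes are chosen when falling back to MSI.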

-Patrick


