From owner-freebsd-net@freebsd.org Sun Aug 16 11:56:33 2015
From: Yonghyeon PYUN <pyunyh@gmail.com>
Date: Sun, 16 Aug 2015 20:56:23 +0900
To: Kim Culhan
Cc: sbruno@freebsd.org, freebsd-net@freebsd.org
Subject: Re: RE not working on 10.2-RELEASE #0 r286731M
Message-ID: <20150816115623.GA1288@michelle.fasterthan.com>
List-Id: Networking and TCP/IP with FreeBSD

On Fri, Aug 14, 2015 at 06:29:08PM -0400, Kim Culhan wrote:
[...]
> > On 08/14/15 13:34, Kim Culhan wrote:
> >> RE on 10.2-RELEASE #0 r286731M appears to pass only arp traffic.
> >>
> >> Replaced if_re.c with version from 273757, appears to work
> >> normally.
> >>
> >> The diff:
> >>
> >> 34c34
> >> < __FBSDID("$FreeBSD: stable/10/sys/dev/re/if_re.c 273757 2014-10-28 00:43:00Z yongari $");
> >> ---
> >> > __FBSDID("$FreeBSD: releng/10.2/sys/dev/re/if_re.c 285177 2015-07-05 20:16:38Z marius $");
> >> 3198,3202d3197
> >> <      * Enable transmit and receive.
> >> <      */
> >> <     CSR_WRITE_1(sc, RL_COMMAND, RL_CMD_TX_ENB|RL_CMD_RX_ENB);
> >> <
> >> <     /*
> >> 3227a3223,3227
> >> >     /*
> >> >      * Enable transmit and receive.
> >> >      */
> >> >     CSR_WRITE_1(sc, RL_COMMAND, RL_CMD_TX_ENB | RL_CMD_RX_ENB);
> >> >
> >> 3251,3254d3250
> >> < #ifdef notdef
> >> <     /* Enable receiver and transmitter. */
> >> <     CSR_WRITE_1(sc, RL_COMMAND, RL_CMD_TX_ENB|RL_CMD_RX_ENB);
> >> < #endif
> >>
> >> Let me know what additional info I can provide.
[...]
> > I'm running -current with all changes in place, I'm not seeing the
> > issues noted here with my hardware. Can you post your hardware from
> > pciconf -lv?
> >
> > re0@pci0:3:0:0: class=0x020000 card=0x84321043 chip=0x816810ec rev=0x06 hdr=0x00
> >     vendor = 'Realtek Semiconductor Co., Ltd.'
> >     device = 'RTL8111/8168B PCI Express Gigabit Ethernet controller'
> >     class = network
> >     subclass = ethernet
> > re1@pci0:4:5:0: class=0x020000 card=0x43021186 chip=0x43021186 rev=0x10 hdr=0x00
> >     vendor = 'D-Link System Inc'
> >     device = 'DGE-530T Gigabit Ethernet Adapter (rev.C1) [Realtek RTL8169]'
> >     class = network
> >     subclass = ethernet
> >
> > sean
>
> pciconf -lv
>
> re0@pci0:2:0:0: class=0x020000 card=0x83671043 chip=0x816810ec rev=0x02 hdr=0x00
>     vendor = 'Realtek Semiconductor Co., Ltd.'
>     device = 'RTL8111/8168B PCI Express Gigabit Ethernet controller'
>     class = network
>     subclass = ethernet
> re1@pci0:6:0:0: class=0x020000 card=0x816910ec chip=0x816910ec rev=0x10 hdr=0x00
>     vendor = 'Realtek Semiconductor Co., Ltd.'
>     device = 'RTL8169 PCI Gigabit Ethernet Controller'
>     class = network
>     subclass = ethernet
> re2@pci0:6:1:0: class=0x020000 card=0x4c001186 chip=0x43001186 rev=0x10 hdr=0x00
>     vendor = 'D-Link System Inc'
>     device = 'DGE-528T Gigabit Ethernet Adapter'
>     class = network
>     subclass = ethernet
>
> The problem was noted on re2; re0 and re1 appeared to be working normally.

Hmm, it seems your PCI controller does not work. I can't explain why
Sean's re1 still works, though. Would you try the attached patch?

BTW, it would be better to see the re(4)-related dmesg output. The
driver shows the Chip/MAC revision, and that is the only way to
identify each MAC revision.

[attachment: re.pci_mac.diff]

Index: sys/dev/re/if_re.c
===================================================================
--- sys/dev/re/if_re.c	(revision 286823)
+++ sys/dev/re/if_re.c	(working copy)
@@ -3197,6 +3197,12 @@ re_init_locked(struct rl_softc *sc)
 	    ~0x00080000);
 
 	/*
+	 * Enable transmit and receive for non-PCIe controllers.
+	 * RX/TX MACs should be enabled before RX/TX configuration.
+	 */
+	if ((sc->rl_flags & RL_FLAG_PCIE) == 0)
+		CSR_WRITE_1(sc, RL_COMMAND, RL_CMD_TX_ENB | RL_CMD_RX_ENB);
+	/*
 	 * Set the initial TX configuration.
 	 */
 	if (sc->rl_testmode) {
@@ -3223,9 +3229,11 @@ re_init_locked(struct rl_softc *sc)
 	}
 
 	/*
-	 * Enable transmit and receive.
+	 * Enable transmit and receive for PCIe controllers.
+	 * RX/TX MACs should be enabled after RX/TX configuration.
 	 */
-	CSR_WRITE_1(sc, RL_COMMAND, RL_CMD_TX_ENB | RL_CMD_RX_ENB);
+	if ((sc->rl_flags & RL_FLAG_PCIE) != 0)
+		CSR_WRITE_1(sc, RL_COMMAND, RL_CMD_TX_ENB | RL_CMD_RX_ENB);
 
 #ifdef DEVICE_POLLING
 	/*

From owner-freebsd-net@freebsd.org Sun Aug 16 12:56:31 2015
From: Kim Culhan <w8hdkim@gmail.com>
Date: Sun, 16 Aug 2015 08:56:30 -0400
To: pyunyh@gmail.com
Cc: Sean Bruno, freebsd-net@freebsd.org
Subject: Re: RE not working on 10.2-RELEASE #0 r286731M
In-Reply-To: <20150816115623.GA1288@michelle.fasterthan.com>

The dmesg output:

pci4: on pcib3
em0: port 0xdc00-0xdc1f mem 0xfe9e0000-0xfe9fffff,0xfe900000-0xfe97ffff,0xfe9dc000-0xfe9dffff irq 17 at device 0.0 on pci4
em0: Using MSIX interrupts with 3 vectors
em0: Ethernet address: 68:05:ca:12:91:cd
[snip]
pcib5: irq 16 at device 28.5 on pci0
pci2: on pcib5
re0: port 0xb800-0xb8ff mem 0xf8fff000-0xf8ffffff,0xf8fe0000-0xf8feffff irq 17 at device 0.0 on pci2
re0: Using 1 MSI-X message
re0: Chip rev. 0x3c000000
re0: MAC rev. 0x00400000
miibus0: on re0
rgephy0: PHY 1 on miibus0
rgephy0: none, 10baseT, 10baseT-FDX, 10baseT-FDX-flow, 100baseTX, 100baseTX-FDX, 100baseTX-FDX-flow, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, 1000baseT-FDX-flow, 1000baseT-FDX-flow-master, auto, auto-flow
re0: Using defaults for TSO: 65518/35/2048
re0: Ethernet address: 90:e6:ba:c8:52:df
[snip]
pcib6: at device 30.0 on pci0
pci6: on pcib6
re1: port 0xe800-0xe8ff mem 0xfebffc00-0xfebffcff irq 16 at device 0.0 on pci6
re1: Chip rev. 0x10000000
re1: MAC rev. 0x00000000
miibus1: on re1
rgephy1: PHY 1 on miibus1
rgephy1: none, 10baseT, 10baseT-FDX, 10baseT-FDX-flow, 100baseTX, 100baseTX-FDX, 100baseTX-FDX-flow, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, 1000baseT-FDX-flow, 1000baseT-FDX-flow-master, auto, auto-flow
re1: Using defaults for TSO: 65518/35/2048
re1: Ethernet address: d8:eb:97:91:12:d7
re2: port 0xe400-0xe4ff mem 0xfebff800-0xfebff8ff irq 17 at device 1.0 on pci6
re2: Chip rev. 0x10000000
re2: MAC rev. 0x00000000
miibus2: on re2
rgephy2: PHY 1 on miibus2
rgephy2: none, 10baseT, 10baseT-FDX, 10baseT-FDX-flow, 100baseTX, 100baseTX-FDX, 100baseTX-FDX-flow, 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master, 1000baseT-FDX-flow, 1000baseT-FDX-flow-master, auto, auto-flow
re2: Using defaults for TSO: 65518/35/2048
re2: Ethernet address: c4:12:f5:32:57:76

I will need to try the patch tomorrow when I am located with the
machine, thanks

-kim

On Sun, Aug 16, 2015 at 7:56 AM, Yonghyeon PYUN wrote:
[...]
> Hmm, it seems your PCI controller does not work.
> I can't explain why Sean's re1 still works though.
> Would you try attached patch?
>
> BTW, it would be better to see the re(4) related dmesg output.
> Driver will show Chip/MAC revision and that is the only way to
> identify each MAC revision.
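[Editor's note] The difference the attached re.pci_mac.diff addresses is purely one of ordering: older PCI parts want the TX/RX MACs enabled before the RX/TX configuration registers are programmed, while PCIe parts want them enabled after. A standalone sketch of that decision follows; the flag and command-bit values here are illustrative stand-ins, not the driver's real definitions from if_rlreg.h.

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-ins for the driver's softc flag and command bits. */
#define RL_FLAG_PCIE   0x0001
#define RL_CMD_TX_ENB  0x04
#define RL_CMD_RX_ENB  0x08

static char order[64];          /* records the simulated register writes */

static void csr_write(const char *what)
{
    strncat(order, what, sizeof(order) - strlen(order) - 1);
}

/*
 * Sketch of the init ordering in the patch: non-PCIe MACs get
 * RL_CMD_TX_ENB|RL_CMD_RX_ENB before the RX/TX configuration registers
 * are written, PCIe MACs get it after (the r285177 position).
 */
static const char *re_init_order(int rl_flags)
{
    order[0] = '\0';
    if ((rl_flags & RL_FLAG_PCIE) == 0)
        csr_write("enable,");           /* old, pre-r285177 position */
    csr_write("txcfg,");                /* set TX configuration */
    csr_write("rxcfg,");                /* set RX configuration */
    if ((rl_flags & RL_FLAG_PCIE) != 0)
        csr_write("enable,");           /* position introduced in r285177 */
    return order;
}
```

With the patch applied, Kim's PCI-attached re1/re2 (Chip rev. 0x10000000) should take the first path, while the PCIe re0 keeps the r285177 ordering.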
From owner-freebsd-net@freebsd.org Sun Aug 16 13:54:53 2015
From: Julian Elischer <julian@freebsd.org>
Date: Sun, 16 Aug 2015 21:54:36 +0800
To: James Lott, freebsd-net@freebsd.org
Subject: Re: Ethernet tunneling options under FreeBSD
Message-ID: <55D0961C.7090107@freebsd.org>
In-Reply-To: <3236701.dypBHjs8Lg@arch_project>

On 8/15/15 10:40 AM, James Lott wrote:
>> you haven't really described the network well enough..
>> try an ascii-art diagram (don't forget to set fixed width font :-)
>> a VPN requires two ends.. one is FreeBSD... what's the other?
>
> The thing is, the "other" could be any number of operating systems. I'm
> looking for a tunneling protocol with good cross-platform representation,
> but the higher priority is ensuring it tunnels ethernet frames.
>
> For the sake of example we can say the other end is a FreeBSD host, since
> FreeBSD is looking like the "lowest common denominator" on this topic.
>
>> if both ends are FreeBSD there are dozens of possibilities..
>> for example:
>> ng_eif->netgraph->ppp->ipsec->ppp->netgraph->ng_eif
>>
>> ng_eif->ng_ksock(udp)->IPsec->ng_ksock->ng_eif
>
> I'm not overly concerned with the host side interfaces. What I'm really
> concerned with is the tunneling protocol, since that's what will need
> support on all of my platforms. Thus, a solution requiring netgraph on
> both ends is not an option in my case.
>
>> tap->ppp->ppp->tap
>
> I have not found any ppp implementations under FreeBSD which support BCP.
> To my understanding, that's the only method by which ethernet frames can
> be tunneled over ppp... if I'm wrong, please do correct me! I would love
> nothing more than to be wrong about that :)

I have, in the past, used UDP packets to encapsulate ethernet frames and
tunnelled them over a PPP link using mpd. I don't have the specifics any
more. I think there may be support in OpenVPN for what you want, but
I've never tried it.

> On Friday, August 14, 2015 23:16:41 Julian Elischer wrote:
[...]

From owner-freebsd-net@freebsd.org Sun Aug 16 14:05:03 2015
From: Julian Elischer <julian@freebsd.org>
Date: Sun, 16 Aug 2015 22:04:52 +0800
To: James Lott, freebsd-net@freebsd.org
Subject: Re: Ethernet tunneling options under FreeBSD
Message-ID: <55D09884.7010102@freebsd.org>
In-Reply-To: <2628655.0T22OuP5Ng@arch_project>

On 8/15/15 11:32 AM, James Lott wrote:
> n2n honestly looks wonderful, but it also appears to be dead... I'm
> trying to stay as close to the OS layer as possible with my options, so
> I would prefer to limit the role of comprehensive software like OpenVPN
> or what ZeroTierOne appears to be.
>
> I actually found this interesting github project, which provides a
> simple solution for what I'm trying to do...
>
> https://github.com/vsergeev/tinytaptunnel

You can do this on FreeBSD with no added software; look at
/usr/share/examples/netgraph, in particular the ether.bridge,
virtual.lan and udp.tunnel examples. You should be able to create a
script that will tunnel two ethernet bridges together using elements
from each script. I suspect you could make it totally compatible with
tinytaptunnel.

> Unfortunately, it's written for Linux... and... in go... but the README
> at least gave me a couple more ideas to look into.
>
> Feel free to keep coming with the suggestions if anyone has any more!
> This is great stuff
>
> On Saturday, August 15, 2015 13:05:17 Outback Dingo wrote:
[...]
>> theres also N2N which is pretty nice, and well ZeroTierOne which is
>> somewhat unique
[...]

From owner-freebsd-net@freebsd.org Sun Aug 16 14:21:25 2015
From: James Lott <james@lottspot.com>
Date: Sun, 16 Aug 2015 07:20:17 -0700
To: freebsd-net@freebsd.org
Subject: Re: Ethernet tunneling options under FreeBSD
Message-ID: <4557283.4pSJrcFaUO@arch_project>
In-Reply-To: <55D0961C.7090107@freebsd.org>

> I have, in the past used UDP packets to encapsulate ethernet frames,
> and tunnelled them over a PPP link using mpd.
> I don't have specifics any more. I think there may be support in
> Openvpn for what you want but I've never tried it.

How interesting.. That is definitely something worth looking into, then.
OpenVPN is fine, and I will probably use it as a component in the big
picture of my solution, but it's honestly not my favorite solution to
manage, so I would prefer to have as few clients on it as possible.

Although I really was gunning for a pure kernel-space solution, what I
think I'm going to end up using as the centerpiece of this network is
tinc. Its mesh networking is really what won me over.
If I could find a decent way to secure vxlans over the open internet, I
would probably have gone that route instead.

On Sunday, August 16, 2015 21:54:36 Julian Elischer wrote:
[...]
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"

--
James Lott

From owner-freebsd-net@freebsd.org Sun Aug 16 14:26:40 2015
From: James Lott <james@lottspot.com>
Date: Sun, 16 Aug 2015 07:25:40 -0700
To: freebsd-net@freebsd.org
Subject: Re: Ethernet tunneling options under FreeBSD
Message-ID: <2049148.2xMuIgxkh4@arch_project>
In-Reply-To: <55D09884.7010102@freebsd.org>

> you can do this on freebsd with no added software
> look at /usr/share/examples/netgraph. In particular the ether.bridge,
> virtual.lan and the udp.tunnel examples.
> You should be able to create a script that will tunnel two ethernet
> bridges together using elements from each script.

Ah, ok, I'm understanding your original suggestion better now. If that
is the case, I will definitely be checking out the netgraph examples.
Having simple tunnel connections for tap devices in this manner is
something I've been after for a while, and I think it will be desirable
for certain hosts I intend to connect to the VPN. Thank you for this
great suggestion!

On Sunday, August 16, 2015 22:04:52 Julian Elischer wrote:
> On 8/15/15 11:32 AM, James Lott wrote:
> > n2n honestly looks wonderful, but it also appears to be dead...
I'm trying > > to stay as close to the OS layer as possible with my options, so I would > > prefer to limit the role of comprehensive software like OpenVPN or what > > ZeroTierOne appears to be. > > > > I actually found this interesting github project, which provides a simple > > solution for what I'm trying to do... > > > > https://github.com/vsergeev/tinytaptunnel > > you can do this on freebsd with no added software > look at /usr/share/examples/netgraph. In particular the ether.bridge, > virtual.lan and the udp.tunnel > examples. > You should be able to create a script that will tunnel two ethernet > bridges together using elements from each script. > > I suspect you could make it totally compatible with tinytaptunnel. > > > Unfortunately, it's written for Linux... and... in go... but the README at > > least gave me a couple more ideas to look into. > > > > Feel free to keep coming with the suggestions if anyone has anymore! This > > is great stuff > > > > On Saturday, August 15, 2015 13:05:17 Outback Dingo wrote: > >> On Sat, Aug 15, 2015 at 12:40 PM, James Lott > > > > wrote: > >>>> you haven't really described the network well enough.. > >>>> try an ascii-art diagram (don't forget to set fixed width font :-) > >>>> a VPN required two ends.. one is FreeBSD... what's the other? > >>> > >>> The thing is, the "other" could be any number of operating systems. I'm > >>> looking for a tunneling protocol with good cross-platform > >>> representation, > >>> but > >>> the higher priority it enduring it tunnels ethernet frames. > >>> > >>> For the sake of example we can say the other end is a FreeBSD host, > >>> since > >>> FreeBSD is looking like the "lowest common denominator" on this topic. > >>> > >>>> if both ends are FreeBSD there are dozens of possibilities.. 
> >>>> for example: > >>>> ng_eif->netgraph->ppp->ipsec->ppp->netgraph->ng_eif > >>>> > >>>> ng_eif->ng_ksock(udp)->IPsec->ng_ksock->ng_eif > >>> > >>> I'm not overly concerned with the host side interfaces. What I'm really > >>> concerned with is the tunneling protocol since that's what will need > >>> support > >>> on all of my platforms. Thus, a solution requiring netgraph on both ends > >>> is > >>> not an option in my case. > >>> > >>>> tap->ppp->ppp->tap > >>> > >>> I have not found any ppp implementations under FreeBSD which support > > > > BCP. > > > >>> To my understanding, that's the only method by which ethernet frames can > >>> be > >>> tunneled over ppp... if I'm wrong, please do correct me! I would love > >>> nothing > >>> more than to be wrong about that :) > >>> > >>> On Friday, August 14, 2015 23:16:41 Julian Elischer wrote: > >>>> On 8/14/15 6:40 AM, James Lott wrote: > >>>>> Hello list, > >>>>> > >>>>> I am in the process of planning a build out of a L2 VPN, in which > >>>>> I'd like to have my primary "switch" and DHCP server be a FreeBSD > >>>>> system. I would like to join each new host to the VPN by > >>>>> establishing an IP tunnel with the primary "switch" which transports > >>>>> ethernet frames over the tunnel. > >>>> > >>>> you haven't really described the network well enough.. > >>>> try an ascii-art diagram (don't forget to set fixed width font :-) > >>>> a VPN required two ends.. one is FreeBSD... what's the other? > >>>> > >>>>> So far, the only protocol I have found supported by FreeBSD which > >>>>> seems capable of this is EtherIP. As far as I can tell, it doesn't > >>>>> look like there is any support for L2TPv3, and none of the PPP > >>>>> implementations available appear to support BCP. > >>>>> > >>>>> I'm not completely opposed to using EtherIP, but if there is > >>>>> something more modern which will meet my needs, I would probably > > > > try > > > >>>>> that first. 
So my question becomes: > >>>>> > >>>>> * Does anyone know of a method supported under FreeBSD (other than > >>>>> EtherIP) for tunneling ethernet over IP that they may be able to > >>>>> suggest I check out? > >>>> > >>>> if both ends are FreeBSD there are dozens of possibilities.. > >>>> for example: > >>>> ng_eif->netgraph->ppp->ipsec->ppp->netgraph->ng_eif > >>>> > >>>> ng_eif->ng_ksock(udp)->IPsec->ng_ksock->ng_eif > >>>> > >>>> tap->ppp->ppp->tap > >>>> > >>>>> Thanks for any suggestions! > >> > >> theres also N2N which is pretty nice, and well ZeroTierOne which is > >> somewhat unique > >> > >>>>> _______________________________________________ > >>>>> freebsd-net@freebsd.org mailing list > >>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net > >>>>> To unsubscribe, send any mail to "freebsd-net- > > > > unsubscribe@freebsd.org" > > > >>>> _______________________________________________ > >>>> freebsd-net@freebsd.org mailing list > >>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net > >>>> To unsubscribe, send any mail to "freebsd-net- unsubscribe@freebsd.org" > >>> > >>> -- > >>> James Lott > >>> _______________________________________________ > >>> freebsd-net@freebsd.org mailing list > >>> https://lists.freebsd.org/mailman/listinfo/freebsd-net > >>> To unsubscribe, send any mail to "freebsd-net- unsubscribe@freebsd.org" > > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" -- James Lott From owner-freebsd-net@freebsd.org Sun Aug 16 19:07:21 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B1D5E9BB1CF for ; Sun, 16 Aug 2015 19:07:21 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org 
(kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 9C8EF169D for ; Sun, 16 Aug 2015 19:07:21 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7GJ7LV4097188 for ; Sun, 16 Aug 2015 19:07:21 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 161277] [em] [patch] BMC cannot receive IPMI traffic after loading or enabling the if_em driver Date: Sun, 16 Aug 2015 19:07:21 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: unspecified X-Bugzilla-Keywords: IntelNetworking X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: commit-hook@freebsd.org X-Bugzilla-Status: In Progress X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-net@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 16 Aug 2015 19:07:21 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=161277 --- Comment #5 from commit-hook@freebsd.org --- A commit references this bug: Author: sbruno Date: Sun Aug 16 19:06:24 UTC 2015 New revision: 286829 URL: https://svnweb.freebsd.org/changeset/base/286829 Log: Add capability to disable CRC stripping. 
This breaks IPMI/BMC capabilities on certain adapters. Linux has been doing the exact same thing since 2008 https://github.com/torvalds/linux/commit/eb7c3adb1ca92450870dbb0d347fc986cd5e2af4 PR: 161277 Differential Revision: https://reviews.freebsd.org/D3282 Submitted by: Fravadona@gmail.com Reviewed by: erj wblock MFC after: 2 weeks Relnotes: yes Sponsored by: Limelight Networks Changes: head/share/man/man4/em.4 head/sys/dev/e1000/if_em.c -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-net@freebsd.org Sun Aug 16 19:07:40 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A6CA19BB1EF for ; Sun, 16 Aug 2015 19:07:40 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 8F91E1743 for ; Sun, 16 Aug 2015 19:07:40 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7GJ7eQu097806 for ; Sun, 16 Aug 2015 19:07:40 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 161277] [em] [patch] BMC cannot receive IPMI traffic after loading or enabling the if_em driver Date: Sun, 16 Aug 2015 19:07:40 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: unspecified X-Bugzilla-Keywords: IntelNetworking X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: sbruno@FreeBSD.org X-Bugzilla-Status: Closed X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: 
freebsd-net@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 16 Aug 2015 19:07:40 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=161277 Sean Bruno changed: What |Removed |Added ---------------------------------------------------------------------------- Status|In Progress |Closed Resolution|--- |FIXED -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-net@freebsd.org Sun Aug 16 19:44:26 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id BD8F39BB80A for ; Sun, 16 Aug 2015 19:44:26 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A8D5F1B8D for ; Sun, 16 Aug 2015 19:44:26 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7GJiQXZ010604 for ; Sun, 16 Aug 2015 19:44:26 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 200221] em0 watchdog timeout under load Date: Sun, 16 Aug 2015 19:44:24 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed 
X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: IntelNetworking, needs-qa, patch X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: commit-hook@freebsd.org X-Bugzilla-Status: In Progress X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-net@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 16 Aug 2015 19:44:26 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=200221 --- Comment #14 from commit-hook@freebsd.org --- A commit references this bug: Author: sbruno Date: Sun Aug 16 19:43:45 UTC 2015 New revision: 286831 URL: https://svnweb.freebsd.org/changeset/base/286831 Log: Increase EM_MAX_SCATTER to 64 such that the size of em_xmit()::segs[EM_MAX_SCATTER] doesn't get overrun by things like NFS that can and do shove more than 32 segs when being used with em(4) and TSO4. Update tso handling code in em_xmit() with update from jhb@ in email thread: https://lists.freebsd.org/pipermail/freebsd-net/2014-July/039306.html set ifp->if_hw_tsomax, ifp->if_hw_tsomaxsegcount & ifp->if_hw_tsomaxsegsize to appropriate values. Define a TSO workaround "magic" number of 4 that is used to avoid an alignment issue in hardware. Change a couple of integer values that were used as booleans to actual bool types. Ensure that em_enable_intr() enables the appropriate mask of interrupts and not just a hardcoded define of values. 
PR: 200221 199174 195078 Differential Revision: https://reviews.freebsd.org/D3192 Reviewed by: erj jhb hiren MFC after: 2 weeks Sponsored by: Limelight Networks Changes: head/sys/dev/e1000/if_em.c head/sys/dev/e1000/if_em.h -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-net@freebsd.org Sun Aug 16 19:44:28 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id CD8959BB81C for ; Sun, 16 Aug 2015 19:44:28 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B8EDB1B94 for ; Sun, 16 Aug 2015 19:44:28 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7GJiSVR010627 for ; Sun, 16 Aug 2015 19:44:28 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 199174] em tx and rx hang Date: Sun, 16 Aug 2015 19:44:28 +0000 X-Bugzilla-Reason: CC X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-STABLE X-Bugzilla-Keywords: IntelNetworking X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: commit-hook@freebsd.org X-Bugzilla-Status: In Progress X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: sbruno@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: mfc-stable9? mfc-stable10? 
X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 16 Aug 2015 19:44:28 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=199174 --- Comment #33 from commit-hook@freebsd.org --- A commit references this bug: Author: sbruno Date: Sun Aug 16 19:43:45 UTC 2015 New revision: 286831 URL: https://svnweb.freebsd.org/changeset/base/286831 Log: Increase EM_MAX_SCATTER to 64 such that the size of em_xmit()::segs[EM_MAX_SCATTER] doesn't get overrun by things like NFS that can and do shove more than 32 segs when being used with em(4) and TSO4. Update tso handling code in em_xmit() with update from jhb@ in email thread: https://lists.freebsd.org/pipermail/freebsd-net/2014-July/039306.html set ifp->if_hw_tsomax, ifp->if_hw_tsomaxsegcount & ifp->if_hw_tsomaxsegsize to appropriate values. Define a TSO workaround "magic" number of 4 that is used to avoid an alignment issue in hardware. Change a couple of integer values that were used as booleans to actual bool types. Ensure that em_enable_intr() enables the appropriate mask of interrupts and not just a hardcoded define of values. PR: 200221 199174 195078 Differential Revision: https://reviews.freebsd.org/D3192 Reviewed by: erj jhb hiren MFC after: 2 weeks Sponsored by: Limelight Networks Changes: head/sys/dev/e1000/if_em.c head/sys/dev/e1000/if_em.h -- You are receiving this mail because: You are on the CC list for the bug. 
From owner-freebsd-net@freebsd.org Sun Aug 16 19:44:30 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 62A269BB81F for ; Sun, 16 Aug 2015 19:44:30 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 4E1121BCD for ; Sun, 16 Aug 2015 19:44:30 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7GJiUjL010679 for ; Sun, 16 Aug 2015 19:44:30 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 195078] em tx_dma_fails and dropped packets Date: Sun, 16 Aug 2015 19:44:30 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 9.2-RELEASE X-Bugzilla-Keywords: IntelNetworking, easy, regression X-Bugzilla-Severity: Affects Many People X-Bugzilla-Who: commit-hook@freebsd.org X-Bugzilla-Status: Closed X-Bugzilla-Priority: Normal X-Bugzilla-Assigned-To: freebsd-net@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 16 Aug 2015 
19:44:30 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=195078 --- Comment #5 from commit-hook@freebsd.org --- A commit references this bug: Author: sbruno Date: Sun Aug 16 19:43:45 UTC 2015 New revision: 286831 URL: https://svnweb.freebsd.org/changeset/base/286831 Log: Increase EM_MAX_SCATTER to 64 such that the size of em_xmit()::segs[EM_MAX_SCATTER] doesn't get overrun by things like NFS that can and do shove more than 32 segs when being used with em(4) and TSO4. Update tso handling code in em_xmit() with update from jhb@ in email thread: https://lists.freebsd.org/pipermail/freebsd-net/2014-July/039306.html set ifp->if_hw_tsomax, ifp->if_hw_tsomaxsegcount & ifp->if_hw_tsomaxsegsize to appropriate values. Define a TSO workaround "magic" number of 4 that is used to avoid an alignment issue in hardware. Change a couple of integer values that were used as booleans to actual bool types. Ensure that em_enable_intr() enables the appropriate mask of interrupts and not just a hardcoded define of values. PR: 200221 199174 195078 Differential Revision: https://reviews.freebsd.org/D3192 Reviewed by: erj jhb hiren MFC after: 2 weeks Sponsored by: Limelight Networks Changes: head/sys/dev/e1000/if_em.c head/sys/dev/e1000/if_em.h -- You are receiving this mail because: You are the assignee for the bug. 
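The scatter-gather arithmetic behind the commit log above can be checked directly: a maximum-size 64 KB TSO payload carved into standard 2 KB mbuf clusters already needs 32 mapping entries before any header mbufs are counted, so the old 32-entry segs[] array left no headroom. The constants below are the usual FreeBSD values, used here only for illustration:

```shell
# ceil(IP_MAXPACKET / MCLBYTES): data segments needed for one full TSO burst
IP_MAXPACKET=65535   # largest TSO payload the stack can hand to the driver
MCLBYTES=2048        # standard mbuf cluster size
DATA_SEGS=$(( (IP_MAXPACKET + MCLBYTES - 1) / MCLBYTES ))
echo "$DATA_SEGS"    # 32 -- data alone fills a 32-entry segment array
```

Bumping EM_MAX_SCATTER to 64 leaves room for the extra header mbufs that callers such as NFS prepend to a TSO chain.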
From owner-freebsd-net@freebsd.org Sun Aug 16 21:00:09 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 98FAB9BB1FB for ; Sun, 16 Aug 2015 21:00:09 +0000 (UTC) (envelope-from bugzilla-noreply@FreeBSD.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 726CA1AA8 for ; Sun, 16 Aug 2015 21:00:09 +0000 (UTC) (envelope-from bugzilla-noreply@FreeBSD.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7GL09Qq080667 for ; Sun, 16 Aug 2015 21:00:09 GMT (envelope-from bugzilla-noreply@FreeBSD.org) Message-Id: <201508162100.t7GL09Qq080667@kenobi.freebsd.org> From: bugzilla-noreply@FreeBSD.org To: freebsd-net@FreeBSD.org Subject: Problem reports for freebsd-net@FreeBSD.org that need special attention X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 Date: Sun, 16 Aug 2015 21:00:09 +0000 Content-Type: text/plain; charset="UTF-8" X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 16 Aug 2015 21:00:09 -0000 To view an individual PR, use: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=(Bug Id). The following is a listing of current problems submitted by FreeBSD users, which need special attention. These represent problem reports covering all versions including experimental development code and obsolete releases. 
Status | Bug Id | Description ------------+-----------+--------------------------------------------------- Open | 194515 | Fatal Trap 12 Kernel with vimage Open | 199136 | [if_tap] Added down_on_close sysctl variable to t 2 problems total for which you should take action. From owner-freebsd-net@freebsd.org Mon Aug 17 07:33:29 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9A3769BA9EA; Mon, 17 Aug 2015 07:33:29 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from kabab.cs.huji.ac.il (kabab.cs.huji.ac.il [132.65.116.210]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 54ACA1131; Mon, 17 Aug 2015 07:33:28 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from mbpro-w.cs.huji.ac.il ([132.65.80.91]) by kabab.cs.huji.ac.il with esmtp id 1ZREpZ-000LVA-N9; Mon, 17 Aug 2015 10:27:41 +0300 From: Daniel Braniss Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Subject: ix(intel) vs mlxen(mellanox) 10Gb performance Date: Mon, 17 Aug 2015 10:27:41 +0300 Message-Id: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> Cc: FreeBSD Net To: FreeBSD stable Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2102\)) X-Mailer: Apple Mail (2.2102) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 07:33:29 -0000 hi, I have a host (Dell R730) with both cards, connected to an HP8200 switch at 10Gb. when writing to the same storage (netapp) this is what I get: ix0: ~130MGB/s mlxen0 ~330MGB/s this is via nfs/tcpv3 I can get similar (bad) performance with the mellanox if I increase the file size to 512MGB. 
so at face value, it seems the mlxen does a better use of resources than the intel. Any ideas how to improve ix/intel's performance? cheers, danny From owner-freebsd-net@freebsd.org Mon Aug 17 08:14:36 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4FCB99BB567 for ; Mon, 17 Aug 2015 08:14:36 +0000 (UTC) (envelope-from emeric.poupon@stormshield.eu) Received: from work.netasq.com (gwlille.netasq.com [91.212.116.1]) by mx1.freebsd.org (Postfix) with ESMTP id 19E4112EC for ; Mon, 17 Aug 2015 08:14:35 +0000 (UTC) (envelope-from emeric.poupon@stormshield.eu) Received: from work.netasq.com (localhost.localdomain [127.0.0.1]) by work.netasq.com (Postfix) with ESMTP id 3DF4D2705ED3 for ; Mon, 17 Aug 2015 10:07:48 +0200 (CEST) Received: from localhost (localhost.localdomain [127.0.0.1]) by work.netasq.com (Postfix) with ESMTP id DE1A62705CE3 for ; Mon, 17 Aug 2015 10:07:47 +0200 (CEST) Received: from work.netasq.com ([127.0.0.1]) by localhost (work.netasq.com [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id IcrynqMuwwYA for ; Mon, 17 Aug 2015 10:07:47 +0200 (CEST) Received: from work.netasq.com (localhost.localdomain [127.0.0.1]) by work.netasq.com (Postfix) with ESMTP id 4CA4F2705C9C for ; Mon, 17 Aug 2015 10:07:47 +0200 (CEST) Date: Mon, 17 Aug 2015 10:07:45 +0200 (CEST) From: Emeric POUPON To: FreeBSD Net Message-ID: <868621474.11105551.1439798865541.JavaMail.zimbra@stormshield.eu> In-Reply-To: <2101280536.11100114.1439798033324.JavaMail.zimbra@stormshield.eu> Subject: IPsec: question on the sysctl preferred_oldsa MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit Thread-Topic: IPsec: question on the sysctl preferred_oldsa Thread-Index: IeXRZTKnQSSas6XdJUVl2KQ2WmNtSg== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking 
and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 08:14:36 -0000 Hello, I have some questions about the sysctl "net.key.preferred_oldsa": https://svnweb.freebsd.org/base/head/sys/netipsec/key.c?view=markup#l971 When I set the net.key.preferred_oldsa to 0 (similar to Linux's behavior, according to what I have read so far): - why does the kernel itself delete the old SA? Why not just select the newest one? - why does it delete the old SA only if it was created in a different "second" of time? strongSwan does not expect that behavior and I can see a lot of errors in its logs: the SA has been deleted but it does not know about that (strongSwan wants to control the SA installation/deletion itself). Two pairs of SAs may be negotiated and installed at the same time due to high load, bidirectional traffic. It seems to be quite questionable to delete the old one in that case. What do you think? Emeric From owner-freebsd-net@freebsd.org Mon Aug 17 09:41:55 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 05B179BBBDF; Mon, 17 Aug 2015 09:41:55 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from zxy.spb.ru (zxy.spb.ru [195.70.199.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B7E051C15; Mon, 17 Aug 2015 09:41:54 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from slw by zxy.spb.ru with local (Exim 4.84 (FreeBSD)) (envelope-from ) id 1ZRGvJ-000HwX-Ji; Mon, 17 Aug 2015 12:41:45 +0300 Date: Mon, 17 Aug 2015 12:41:45 +0300 From: Slawa Olhovchenkov To: Daniel Braniss Cc: FreeBSD stable , FreeBSD Net Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150817094145.GB3158@zxy.spb.ru> References: 
<1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> MIME-Version: 1.0 Content-Type: text/plain; charset=koi8-r Content-Disposition: inline In-Reply-To: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> User-Agent: Mutt/1.5.23 (2014-03-12) X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: slw@zxy.spb.ru X-SA-Exim-Scanned: No (on zxy.spb.ru); SAEximRunCond expanded to false X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 09:41:55 -0000 On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: > hi, > I have a host (Dell R730) with both cards, connected to an HP8200 switch at 10Gb. > when writing to the same storage (netapp) this is what I get: > ix0: ~130MGB/s > mlxen0 ~330MGB/s > this is via nfs/tcpv3 > > I can get similar (bad) performance with the mellanox if I increase the file size > to 512MGB. Looks like the Mellanox has an internal buffer for caching and does ACK acceleration. > so at face value, it seems the mlxen does a better use of resources than the intel. > Any ideas how to improve ix/intel's performance? Are you sure about netapp performance?
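The numbers in this exchange are consistent with a single-stream TCP window/latency bound rather than a link-speed limit: a sender can keep at most one window of data in flight per round trip. The sketch below illustrates that bound; the window size and RTT values are illustrative assumptions, not measurements from this thread.

```python
def tcp_throughput_bytes_per_s(line_rate_bps, window_bytes, rtt_s):
    # One TCP stream can have at most one window in flight per round trip,
    # and can never exceed the raw line rate of the link.
    return min(line_rate_bps / 8, window_bytes / rtt_s)

LINE_RATE = 10e9          # 10 Gb/s link
WINDOW = 256 * 1024       # assumed effective send window (256 KiB)
for rtt_us in (100, 500, 2000):          # assumed round-trip times
    bw = tcp_throughput_bytes_per_s(LINE_RATE, WINDOW, rtt_us / 1e6)
    print(f"rtt={rtt_us:>4} us -> {bw / 1e6:7.1f} MB/s")
```

With a 256 KiB window, an effective RTT of about 2 ms caps one stream near 130 MB/s; a NIC or firmware that ACKs out of its own buffer shortens the effective RTT and could raise apparent NFS write throughput without a faster link.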
From owner-freebsd-net@freebsd.org Mon Aug 17 10:35:14 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 1B52C9BBA94; Mon, 17 Aug 2015 10:35:14 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from kabab.cs.huji.ac.il (kabab.cs.huji.ac.il [132.65.116.210]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C77E11812; Mon, 17 Aug 2015 10:35:13 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from mbpro-w.cs.huji.ac.il ([132.65.80.91]) by kabab.cs.huji.ac.il with esmtp id 1ZRHkw-000Pr5-D5; Mon, 17 Aug 2015 13:35:06 +0300 Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2102\)) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Daniel Braniss In-Reply-To: <20150817094145.GB3158@zxy.spb.ru> Date: Mon, 17 Aug 2015 13:35:06 +0300 Cc: FreeBSD stable , FreeBSD Net Content-Transfer-Encoding: quoted-printable Message-Id: <197995E2-0C11-43A2-AB30-FBB0FB8CE2C5@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> To: Slawa Olhovchenkov X-Mailer: Apple Mail (2.2102) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 10:35:14 -0000 > On Aug 17, 2015, at 12:41 PM, Slawa Olhovchenkov = wrote: >=20 > On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: >=20 >> hi, >> I have a host (Dell R730) with both cards, connected to an = HP8200 switch at 10Gb. 
>> when writing to the same storage (netapp) this is what I get: >> ix0: ~130MGB/s >> mlxen0 ~330MGB/s >> this is via nfs/tcpv3 >> >> I can get similar (bad) performance with the mellanox if I increase the file size >> to 512MGB. > > Looks like the Mellanox has an internal buffer for caching and does ACK acceleration. whatever they are doing, it's impressive :-) > >> so at face value, it seems the mlxen does a better use of resources than the intel. >> Any ideas how to improve ix/intel's performance? > > Are you sure about netapp performance? yes, and why should it act differently if the request is coming from the same host? in any case the numbers are quite consistent since I have measured them from several hosts, and at different times. danny From owner-freebsd-net@freebsd.org Mon Aug 17 10:41:54 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 3DD409BBD5E; Mon, 17 Aug 2015 10:41:54 +0000 (UTC) (envelope-from csforgeron@gmail.com) Received: from mail-io0-x22d.google.com (mail-io0-x22d.google.com [IPv6:2607:f8b0:4001:c06::22d]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 0A3171CBC; Mon, 17 Aug 2015 10:41:54 +0000 (UTC) (envelope-from csforgeron@gmail.com) Received: by iodt126 with SMTP id t126so146649979iod.2; Mon, 17 Aug 2015 03:41:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=+vFQ/mA59UR3Ela9yMxgaI2A+vddpfy13o0C64efML4=; b=PA62iMuS76ELRZCw5Z+OKwSBhg5yW0f4KL4io6R6F1J3CsPqAHRsMq5dGMwRfz5N5C ugakzYcXPr030nLuwLC6fnyHlYCWCXidjQLPcMfJrAbVE8Exs3Au6Zzf/U1+tzG3qC1U
cne/0fdY5Fn/ig9cWyPbDIFnt0AsCxBfHztdn7p/Ci9WwWcPooMhag8DBIazJoxYS7Bf ItWhnlKLXrTSYMTilxcgp2CpZVrxt7xZ2/ATrmrX5DYUaSErIAJJrG+ewYnRBDFF8cuR vevpmROUtPiXiJz4qznsS/hQp31wtyVuFiINuu12YY4aHTbTlx98hxy3NfZZQpnqzUgB KMtA== MIME-Version: 1.0 X-Received: by 10.107.14.84 with SMTP id 81mr603388ioo.188.1439808113168; Mon, 17 Aug 2015 03:41:53 -0700 (PDT) Received: by 10.36.34.77 with HTTP; Mon, 17 Aug 2015 03:41:53 -0700 (PDT) In-Reply-To: <20150817094145.GB3158@zxy.spb.ru> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> Date: Mon, 17 Aug 2015 07:41:53 -0300 Message-ID: Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Christopher Forgeron To: Slawa Olhovchenkov Cc: Daniel Braniss , FreeBSD Net , FreeBSD stable Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 10:41:54 -0000 FYI, I can regularly hit 9.3 Gib/s with my Intel X520-DA2's and FreeBSD 10.1. Before 10.1 it was less. I used to tweak the card settings, but now it's just stock. You may want to check your settings, the Mellanox may just have better defaults for your switch. On Mon, Aug 17, 2015 at 6:41 AM, Slawa Olhovchenkov wrote: > On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: > > > hi, > > I have a host (Dell R730) with both cards, connected to an HP8200 > switch at 10Gb. > > when writing to the same storage (netapp) this is what I get: > > ix0: ~130MGB/s > > mlxen0 ~330MGB/s > > this is via nfs/tcpv3 > > > > I can get similar (bad) performance with the mellanox if I > increase the file size > > to 512MGB. > > Look like mellanox have internal beffer for caching and do ACK acclerating. 
> > > so at face value, it seems the mlxen does a better use of > resources than the intel. > > Any ideas how to improve ix/intel's performance? > > Are you sure about netapp performance? > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Mon Aug 17 10:51:51 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C21519BBF07; Mon, 17 Aug 2015 10:51:51 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from kabab.cs.huji.ac.il (kabab.cs.huji.ac.il [132.65.116.210]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7591613CA; Mon, 17 Aug 2015 10:51:50 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from mbpro-w.cs.huji.ac.il ([132.65.80.91]) by kabab.cs.huji.ac.il with esmtp id 1ZRI13-00005x-IM; Mon, 17 Aug 2015 13:51:45 +0300 Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2102\)) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Daniel Braniss In-Reply-To: Date: Mon, 17 Aug 2015 13:51:45 +0300 Cc: Slawa Olhovchenkov , FreeBSD Net , FreeBSD stable Message-Id: <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> To: Christopher Forgeron X-Mailer: Apple Mail (2.2102) Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: 
Mon, 17 Aug 2015 10:51:52 -0000 > On Aug 17, 2015, at 1:41 PM, Christopher Forgeron wrote: > > FYI, I can regularly hit 9.3 Gib/s with my Intel X520-DA2's and FreeBSD 10.1. Before 10.1 it was less. > this is NOT iperf/3 where I do get close to wire speed, it's NFS writes, i.e., almost real work :-) > I used to tweak the card settings, but now it's just stock. You may want to check your settings, the Mellanox may just have better defaults for your switch. > > On Mon, Aug 17, 2015 at 6:41 AM, Slawa Olhovchenkov > wrote: > On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: > > > hi, > > I have a host (Dell R730) with both cards, connected to an HP8200 switch at 10Gb. > > when writing to the same storage (netapp) this is what I get: > > ix0: ~130MGB/s > > mlxen0 ~330MGB/s > > this is via nfs/tcpv3 > > > > I can get similar (bad) performance with the mellanox if I increase the file size > > to 512MGB. > > Looks like the Mellanox has an internal buffer for caching and does ACK acceleration. > > > so at face value, it seems the mlxen does a better use of resources than the intel. > > Any ideas how to improve ix/intel's performance? > > Are you sure about netapp performance?
> _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Mon Aug 17 11:24:11 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D27A29BB6E4 for ; Mon, 17 Aug 2015 11:24:11 +0000 (UTC) (envelope-from gpalmer@freebsd.org) Received: from mail.in-addr.com (mail.in-addr.com [IPv6:2a01:4f8:191:61e8::2525:2525]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 9AA3C1140; Mon, 17 Aug 2015 11:24:11 +0000 (UTC) (envelope-from gpalmer@freebsd.org) Received: from gjp by mail.in-addr.com with local (Exim 4.86 (FreeBSD)) (envelope-from ) id 1ZRIWO-0006UO-Gy; Mon, 17 Aug 2015 12:24:08 +0100 Date: Mon, 17 Aug 2015 12:24:08 +0100 From: Gary Palmer To: freebsd-net@freebsd.org Subject: RFC7084 "Basic Requirements for IPv6 Customer Edge Routers" Message-ID: <20150817112408.GB13503@in-addr.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: gpalmer@freebsd.org X-SA-Exim-Scanned: No (on mail.in-addr.com); SAEximRunCond expanded to false X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 11:24:11 -0000 Hi, Does anyone know if FreeBSD 9.3 is compliant with RFC7084?
Thanks, Gary From owner-freebsd-net@freebsd.org Mon Aug 17 11:39:27 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4908C9BB901; Mon, 17 Aug 2015 11:39:27 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from zxy.spb.ru (zxy.spb.ru [195.70.199.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id F0F8F16A8; Mon, 17 Aug 2015 11:39:26 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from slw by zxy.spb.ru with local (Exim 4.84 (FreeBSD)) (envelope-from ) id 1ZRIl9-000K58-9v; Mon, 17 Aug 2015 14:39:23 +0300 Date: Mon, 17 Aug 2015 14:39:23 +0300 From: Slawa Olhovchenkov To: Daniel Braniss Cc: FreeBSD stable , FreeBSD Net Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150817113923.GK1872@zxy.spb.ru> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <197995E2-0C11-43A2-AB30-FBB0FB8CE2C5@cs.huji.ac.il> MIME-Version: 1.0 Content-Type: text/plain; charset=koi8-r Content-Disposition: inline In-Reply-To: <197995E2-0C11-43A2-AB30-FBB0FB8CE2C5@cs.huji.ac.il> User-Agent: Mutt/1.5.23 (2014-03-12) X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: slw@zxy.spb.ru X-SA-Exim-Scanned: No (on zxy.spb.ru); SAEximRunCond expanded to false X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 11:39:27 -0000 On Mon, Aug 17, 2015 at 01:35:06PM +0300, Daniel Braniss wrote: > > > On Aug 17, 2015, at 12:41 PM, Slawa Olhovchenkov wrote: > > > > On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: > > > >> hi, > >> I have a host (Dell R730) with both cards, connected to an 
HP8200 switch at 10Gb. > >> when writing to the same storage (netapp) this is what I get: > >> ix0: ~130MGB/s > >> mlxen0 ~330MGB/s > >> this is via nfs/tcpv3 > >> > >> I can get similar (bad) performance with the mellanox if I increase the file size > >> to 512MGB. > > > > Look like mellanox have internal beffer for caching and do ACK acclerating. > what ever they are doing, it's impressive :-) > > > > >> so at face value, it seems the mlxen does a better use of resources than the intel. > >> Any ideas how to improve ix/intel's performance? > > > > Are you sure about netapp performance? > > yes, and why should it act differently if the request is coming from the same host? in any case > the numbers are quiet consistent since I have measured it from several hosts, and at different times. In any case, for 10Gb expect about 1200MGB/s. I see lesser speed. What netapp maximum performance? From other hosts, or local, any? From owner-freebsd-net@freebsd.org Mon Aug 17 11:49:28 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 572029BBB27; Mon, 17 Aug 2015 11:49:28 +0000 (UTC) (envelope-from haramrae@gmail.com) Received: from mail-io0-x236.google.com (mail-io0-x236.google.com [IPv6:2607:f8b0:4001:c06::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 243B31B23; Mon, 17 Aug 2015 11:49:28 +0000 (UTC) (envelope-from haramrae@gmail.com) Received: by iodt126 with SMTP id t126so148269275iod.2; Mon, 17 Aug 2015 04:49:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=9GXDOBqwjOHp180geY81O4hOqReO4F4gNo+V2nRhhSY=; 
b=EkpHRuSOXM/xXwBY11wO0UGT2r81htP3dcxr2O8GbA598NImLx9ONkxtloVjIr8yjQ y3KpPD3BdBhlcj7yz7fHnNndNEHdrRc+LaobiJ4VzSYTM+QGWqIYrnyc3/J8FvCIF+6E uckkCU66O4t2pLuUrYKXa7QkNMe/pxbyYmXYO+t8jSAmZIpyXRM/1qmHhDWuvLddNT9M CDxkJL3Xv0RbNlKUKWt9/SQBBTxLjweFGG3azq63wvD4MJ6JE0ONLOTfISDR896Cr5VZ V0cncw2VA/uG7I7alTMF8sdcaoZbzXVHG3a1/PfUGbtLP53K+ZfzEKgJB1ih60np1CD9 FlgQ== MIME-Version: 1.0 X-Received: by 10.107.169.215 with SMTP id f84mr1071456ioj.42.1439812167503; Mon, 17 Aug 2015 04:49:27 -0700 (PDT) Received: by 10.64.80.197 with HTTP; Mon, 17 Aug 2015 04:49:27 -0700 (PDT) In-Reply-To: <20150817113923.GK1872@zxy.spb.ru> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <197995E2-0C11-43A2-AB30-FBB0FB8CE2C5@cs.huji.ac.il> <20150817113923.GK1872@zxy.spb.ru> Date: Mon, 17 Aug 2015 13:49:27 +0200 Message-ID: Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Alban Hertroys To: Slawa Olhovchenkov Cc: Daniel Braniss , FreeBSD Net , FreeBSD stable Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 11:49:28 -0000 On 17 August 2015 at 13:39, Slawa Olhovchenkov wrote: > In any case, for 10Gb expect about 1200MGB/s. Your usage of units is confusing. Above you claim you expect 1200 million gigabytes per second, or 1.2 * 10^18 Bytes/s. I don't think any known network interface can do that, including highly experimental ones. I suspect you intended to claim that you expect 1.2GB/s (Gigabytes per second) over that 10Gb/s (Gigabits per second) network. That's still on the high side of what's possible. On TCP/IP there is some TCP overhead, so 1.0 GB/s is probably more realistic. WRT the actual problem you're trying to solve, I'm no help there. 
-- If you can't see the forest for the trees, Cut the trees and you'll see there is no forest. From owner-freebsd-net@freebsd.org Mon Aug 17 11:54:08 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 809BD9BBCF5; Mon, 17 Aug 2015 11:54:08 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from zxy.spb.ru (zxy.spb.ru [195.70.199.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 39F6F1F3D; Mon, 17 Aug 2015 11:54:08 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from slw by zxy.spb.ru with local (Exim 4.84 (FreeBSD)) (envelope-from ) id 1ZRIzN-000KLs-HV; Mon, 17 Aug 2015 14:54:05 +0300 Date: Mon, 17 Aug 2015 14:54:05 +0300 From: Slawa Olhovchenkov To: Alban Hertroys Cc: Daniel Braniss , FreeBSD Net , FreeBSD stable Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150817115405.GL1872@zxy.spb.ru> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <197995E2-0C11-43A2-AB30-FBB0FB8CE2C5@cs.huji.ac.il> <20150817113923.GK1872@zxy.spb.ru> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: slw@zxy.spb.ru X-SA-Exim-Scanned: No (on zxy.spb.ru); SAEximRunCond expanded to false X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 11:54:08 -0000 On Mon, Aug 17, 2015 at 01:49:27PM +0200, Alban Hertroys wrote: > On 17 August 2015 at 13:39, Slawa Olhovchenkov wrote: > > > In any case, for 10Gb expect about 1200MGB/s. 
> > Your usage of units is confusing. Above you claim you expect 1200 I am using the same unit as the topic starter; I mean MegaBytes per second > million gigabytes per second, or 1.2 * 10^18 Bytes/s. I don't think > any known network interface can do that, including highly experimental > ones. > > I suspect you intended to claim that you expect 1.2GB/s (Gigabytes per > second) over that 10Gb/s (Gigabits per second) network. > That's still on the high side of what's possible. On TCP/IP there is > some TCP overhead, so 1.0 GB/s is probably more realistic. TCP gives 5-7% overhead (including retransmits). 10 * 10^9 / 8 * 0.97 = 1.2125 GB/s From owner-freebsd-net@freebsd.org Mon Aug 17 12:21:21 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id CADDD9B982E; Mon, 17 Aug 2015 12:21:21 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 6F4F410BD; Mon, 17 Aug 2015 12:21:20 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:YM7nTxamSOtZlVGynGzpRfT/LSx+4OfEezUN459isYplN5qZpMi7bnLW6fgltlLVR4KTs6sC0LqN9fy6EjBbqb+681k8M7V0HycfjssXmwFySOWkMmbcaMDQUiohAc5ZX0Vk9XzoeWJcGcL5ekGA6ibqtW1aJBzzOEJPK/jvHcaK1oLsh7v0p8eYP14ArQH+SI0xBS3+lR/WuMgSjNkqAYcK4TyNnEF1ff9Lz3hjP1OZkkW0zM6x+Jl+73YY4Kp5pIZoGJ/3dKUgTLFeEC9ucyVsvJWq5lH+SxCS7C4cTnkOiUgPRAzE9w3hGJnrvybwreY73zOVesj/TLQxUDLl66ZwVB7uhiBAOSQ0/WvMholrkKtRpB/ymxsq74fSYYyRfNBkd6XcZshSEWZIWMBAfydaRIOhbYpJBuFHPOIO/KfnoF5blxq1BkGJDejszjJNzivs2KQx0OAsFCnb2wM9EtYWsDLfpYOmZ+8pTempwfyQnn34ZPRM1GK4sdCQfw== X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2BFAgDX0NFV/61jaINdDoNhaQaDHrpFAQmBawqFL0oCgWYUAQEBAQEBAQGBCYIdggYBAQEDAQEBASArIAsQAgEIDgoCAg0WAwICIQYBCRURAgQIBwQBHASHeAMKCA26CI9pDYVXAQEBAQEBBAEBAQEBARgEgSKKMIJPgWgBAQcVATMHgmmBQwWHIo17hQSFBnWDN5Eng0+DZQImgz9aIjMHfwgXI4EEAQEB
X-IronPort-AV: E=Sophos;i="5.15,694,1432612800"; d="scan'208";a="231258543" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 17 Aug 2015 08:21:14 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 3283815F577; Mon, 17 Aug 2015 08:21:14 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id PbWpkVNngp3n; Mon, 17 Aug 2015 08:21:13 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 9000215F578; Mon, 17 Aug 2015 08:21:13 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id N6GgvCEtRE1Y; Mon, 17 Aug 2015 08:21:13 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 6426915F577; Mon, 17 Aug 2015 08:21:13 -0400 (EDT) Date: Mon, 17 Aug 2015 08:21:12 -0400 (EDT) From: Rick Macklem To: Daniel Braniss Cc: Christopher Forgeron , FreeBSD Net , FreeBSD stable , Slawa Olhovchenkov Message-ID: <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: tmoW4T+6Z7dNXU5bN4I24LLlQtGA7w== X-BeenThere: 
freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 12:21:21 -0000 Daniel Braniss wrote: > > > On Aug 17, 2015, at 1:41 PM, Christopher Forgeron > > wrote: > > > > FYI, I can regularly hit 9.3 Gib/s with my Intel X520-DA2's and FreeBSD > > 10.1. Before 10.1 it was less. > > > > this is NOT iperf/3 where i do get close to wire speed, > it's NFS writes, i.e., almost real work :-) > > > I used to tweak the card settings, but now it's just stock. You may want to > > check your settings, the Mellanox may just have better defaults for your > > switch. > > Have you tried disabling TSO for the Intel? With TSO enabled, it will be copying every transmitted mbuf chain to a new chain of mbuf clusters via m_defrag() when TSO is enabled. (Assuming you aren't on an 82598 chip. Most seem to be the 82599 chip these days?) This has been fixed in the driver very recently, but those fixes won't be in 10.1. rick ps: If you could test with 10.2, it would be interesting to see how the ix does with the current driver fixes in it? > > On Mon, Aug 17, 2015 at 6:41 AM, Slawa Olhovchenkov > > wrote: > > On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: > > > > > hi, > > > I have a host (Dell R730) with both cards, connected to an HP8200 > > > switch at 10Gb. > > > when writing to the same storage (netapp) this is what I get: > > > ix0: ~130MGB/s > > > mlxen0 ~330MGB/s > > > this is via nfs/tcpv3 > > > > > > I can get similar (bad) performance with the mellanox if I increase > > > the file size > > > to 512MGB. > > > > Looks like the Mellanox has an internal buffer for caching and does ACK acceleration. > > > > > so at face value, it seems the mlxen does a better use of resources > > > than the intel. > > > Any ideas how to improve ix/intel's performance?
> >=20 > > Are you sure about netapp performance? > > _______________________________________________ > > freebsd-net@freebsd.org mailing list > > https://lists.freebsd.org/mailman/listinfo/freebsd-net > > > > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org > > " > >=20 >=20 > _______________________________________________ > freebsd-stable@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-stable > To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" From owner-freebsd-net@freebsd.org Mon Aug 17 12:28:38 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 621A49B998E; Mon, 17 Aug 2015 12:28:38 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from zxy.spb.ru (zxy.spb.ru [195.70.199.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 1FA7116AE; Mon, 17 Aug 2015 12:28:38 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from slw by zxy.spb.ru with local (Exim 4.84 (FreeBSD)) (envelope-from ) id 1ZRJWm-000Kyn-7S; Mon, 17 Aug 2015 15:28:36 +0300 Date: Mon, 17 Aug 2015 15:28:36 +0300 From: Slawa Olhovchenkov To: Daniel Braniss Cc: FreeBSD stable , FreeBSD Net Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150817122836.GC3158@zxy.spb.ru> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> MIME-Version: 1.0 Content-Type: text/plain; charset=koi8-r Content-Disposition: inline In-Reply-To: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> User-Agent: Mutt/1.5.23 (2014-03-12) X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: slw@zxy.spb.ru X-SA-Exim-Scanned: No (on zxy.spb.ru); SAEximRunCond expanded to false X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and 
TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 12:28:38 -0000 On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: > hi, > I have a host (Dell R730) with both cards, connected to an HP8200 switch at 10Gb. > when writing to the same storage (netapp) this is what I get: > ix0: ~130MGB/s > mlxen0 ~330MGB/s > this is via nfs/tcpv3 > > I can get similar (bad) performance with the mellanox if I increase the file size > to 512MGB. > so at face value, it seems the mlxen does a better use of resources than the intel. > Any ideas how to improve ix/intel's performance? Anyway, please show the OS version, /var/run/dmesg.boot, and what tuning was performed (loader.conf, sysctl.conf); also top -PHS in both cases, ifconfig -a in both cases, and netstat -rn in both cases. I don't know the netapp -- what is its hardware configuration (disks, etc.) and software tuning (MTU?). From owner-freebsd-net@freebsd.org Mon Aug 17 13:29:24 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2C6359BB3EF; Mon, 17 Aug 2015 13:29:24 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from kabab.cs.huji.ac.il (kabab.cs.huji.ac.il [132.65.116.210]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D417E1881; Mon, 17 Aug 2015 13:29:23 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from mbpro-w.cs.huji.ac.il ([132.65.80.91]) by kabab.cs.huji.ac.il with esmtp id 1ZRKTY-0002fM-G3; Mon, 17 Aug 2015 16:29:20 +0300 Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2102\)) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Daniel Braniss In-Reply-To: <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> Date: Mon, 17 Aug 2015
16:29:20 +0300 Cc: Christopher Forgeron , FreeBSD Net , FreeBSD stable , Slawa Olhovchenkov Content-Transfer-Encoding: quoted-printable Message-Id: <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> To: Rick Macklem X-Mailer: Apple Mail (2.2102) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 13:29:24 -0000 > On Aug 17, 2015, at 3:21 PM, Rick Macklem wrote: > > Daniel Braniss wrote: >> >>> On Aug 17, 2015, at 1:41 PM, Christopher Forgeron >>> wrote: >>> >>> FYI, I can regularly hit 9.3 Gib/s with my Intel X520-DA2's and FreeBSD >>> 10.1. Before 10.1 it was less. >>> >> >> this is NOT iperf/3 where i do get close to wire speed, >> it's NFS writes, i.e., almost real work :-) >> >>> I used to tweak the card settings, but now it's just stock. You may want to >>> check your settings, the Mellanox may just have better defaults for your >>> switch. >>> > Have you tried disabling TSO for the Intel? With TSO enabled, it will be copying > every transmitted mbuf chain to a new chain of mbuf clusters via m_defrag() when > TSO is enabled. (Assuming you aren't on an 82598 chip. Most seem to be the 82599 chip > these days?) > hi Rick, how can I check the chip? > This has been fixed in the driver very recently, but those fixes won't be in 10.1. > > rick > ps: If you could test with 10.2, it would be interesting to see how the ix does with > the current driver fixes in it? I knew TSO was involved! ok, firstly, it's 10.2 stable. with TSO enabled, ix is bad, around 64MGB/s.
disabling TSO it's better, around 130; still, mlxen0 is about 250, with and without TSO > >>> On Mon, Aug 17, 2015 at 6:41 AM, Slawa Olhovchenkov >> > wrote: >>> On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: >>> >>>> hi, >>>> I have a host (Dell R730) with both cards, connected to an HP8200 >>>> switch at 10Gb. >>>> when writing to the same storage (netapp) this is what I get: >>>> ix0: ~130MGB/s >>>> mlxen0 ~330MGB/s >>>> this is via nfs/tcpv3 >>>> >>>> I can get similar (bad) performance with the mellanox if I increase >>>> the file size >>>> to 512MGB. >>> >>> Looks like the Mellanox has an internal buffer for caching and does ACK acceleration. >>> >>>> so at face value, it seems the mlxen does a better use of resources >>>> than the intel. >>>> Any ideas how to improve ix/intel's performance? >>> >>> Are you sure about netapp performance? >>> _______________________________________________ >>> freebsd-net@freebsd.org mailing list >>> https://lists.freebsd.org/mailman/listinfo/freebsd-net >>> >>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" >>> >> >> _______________________________________________ >> freebsd-stable@freebsd.org mailing list >> https://lists.freebsd.org/mailman/listinfo/freebsd-stable >> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" From owner-freebsd-net@freebsd.org Mon Aug 17 15:44:38 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 33D049BBFC5; Mon, 17 Aug 2015 15:44:38 +0000 (UTC) (envelope-from haramrae@gmail.com) Received: from mail-io0-x233.google.com (mail-io0-x233.google.com [IPv6:2607:f8b0:4001:c06::233]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org
(Postfix) with ESMTPS id EF46F1647; Mon, 17 Aug 2015 15:44:37 +0000 (UTC) (envelope-from haramrae@gmail.com) Received: by iods203 with SMTP id s203so156015012iod.0; Mon, 17 Aug 2015 08:44:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=j/eS1ITatBCtcp/wsRl5hVp7ISn540RkaGwxK3hNflo=; b=VglJYEwaPEp3ZXnd/a6DatLJOosZE4TVCTgnRRnDRmiymDOwtP+csGGhprqQKap0yJ eVejujHCBVKa5F+BnHeUw6242XVbqTXkLrPenwGwnOZHRcrJPvb/jt4TIQW+4zYG0SDv jI5yquL8Mt5EAbUcZRoegYQXA2XFk0g4/8PTUDD9Pl3IfB3iR8XfuBk+Vd96QFjYnEAc p04Np8CdfuwQtpz4Y4pY+TUi69GuZRgxcmvAaXeAn1hvkLNDldltngUojoMS6oJKyhcr BocNqkNzOL4pRdPFI6+ka33rch4h5OnwXP4B5QMnR7v7h6HzlV8n44opaT0JgeQGqiBD OOxg== MIME-Version: 1.0 X-Received: by 10.107.169.215 with SMTP id f84mr2330734ioj.42.1439826277490; Mon, 17 Aug 2015 08:44:37 -0700 (PDT) Received: by 10.64.80.197 with HTTP; Mon, 17 Aug 2015 08:44:37 -0700 (PDT) In-Reply-To: <20150817115405.GL1872@zxy.spb.ru> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <197995E2-0C11-43A2-AB30-FBB0FB8CE2C5@cs.huji.ac.il> <20150817113923.GK1872@zxy.spb.ru> <20150817115405.GL1872@zxy.spb.ru> Date: Mon, 17 Aug 2015 17:44:37 +0200 Message-ID: Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Alban Hertroys To: Slawa Olhovchenkov Cc: Daniel Braniss , FreeBSD Net , FreeBSD stable Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 15:44:38 -0000 On 17 August 2015 at 13:54, Slawa Olhovchenkov wrote: > On Mon, Aug 17, 2015 at 01:49:27PM +0200, Alban Hertroys wrote: > >> On 17 August 2015 at 13:39, Slawa Olhovchenkov wrote: >> >> > In any case, for 10Gb expect about 1200MGB/s. 
>> >> Your usage of units is confusing. Above you claim you expect 1200 > > I am use as topic starter and expect MeGaBytes per second That's a highly unusual way of writing MB/s. There are standards for unit prefixes: k means kilo, M means Mega, G means Giga, etc. See: https://en.wikipedia.org/wiki/International_System_of_Units#Prefixes >> million gigabytes per second, or 1.2 * 10^18 Bytes/s. I don't think >> any known network interface can do that, including highly experimental >> ones. >> >> I suspect you intended to claim that you expect 1.2GB/s (Gigabytes per >> second) over that 10Gb/s (Gigabits per second) network. >> That's still on the high side of what's possible. On TCP/IP there is >> some TCP overhead, so 1.0 GB/s is probably more realistic. > > TCP give 5-7% overhead (include retrasmits). > 10^9/8*0.97 = 1.2125 In information science, Bytes are counted in multiples of 2, not 10. A kb is 1024 bits or 2^10 b. So 10 Gb is 10 * 2^30 bits. It's also not unusual to be more specific about that 2-base and use kib, Mib and Gib instead. Apparently you didn't know that... Also, if you take 5% off, you are left with (0.95 * 10 * 2^30) / 8 = 1.1875 B/s, not 0.97 * ... Your calculations were a bit optimistic. Now I have to admit I'm used to use a factor of 10 to convert from b/s to B/s (that's 20%!), but that's probably no longer correct, what with jumbo frames and all. -- If you can't see the forest for the trees, Cut the trees and you'll see there is no forest. 
From owner-freebsd-net@freebsd.org Mon Aug 17 16:01:49 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C4EE59BB506; Mon, 17 Aug 2015 16:01:49 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from zxy.spb.ru (zxy.spb.ru [195.70.199.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 7F4C1104D; Mon, 17 Aug 2015 16:01:49 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from slw by zxy.spb.ru with local (Exim 4.84 (FreeBSD)) (envelope-from ) id 1ZRMr3-0003cS-JL; Mon, 17 Aug 2015 19:01:45 +0300 Date: Mon, 17 Aug 2015 19:01:45 +0300 From: Slawa Olhovchenkov To: Alban Hertroys Cc: FreeBSD Net , FreeBSD stable Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150817160145.GE3158@zxy.spb.ru> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <197995E2-0C11-43A2-AB30-FBB0FB8CE2C5@cs.huji.ac.il> <20150817113923.GK1872@zxy.spb.ru> <20150817115405.GL1872@zxy.spb.ru> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From: slw@zxy.spb.ru X-SA-Exim-Scanned: No (on zxy.spb.ru); SAEximRunCond expanded to false X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 16:01:49 -0000 On Mon, Aug 17, 2015 at 05:44:37PM +0200, Alban Hertroys wrote: > On 17 August 2015 at 13:54, Slawa Olhovchenkov wrote: > > On Mon, Aug 17, 2015 at 01:49:27PM +0200, Alban Hertroys wrote: > > > >> On 17 August 2015 at 13:39, Slawa Olhovchenkov wrote: > >> > >> > In any 
case, for 10Gb expect about 1200MGB/s. > >> > >> Your usage of units is confusing. Above you claim you expect 1200 > > > > I am use as topic starter and expect MeGaBytes per second > > That's a highly unusual way of writing MB/s. I know. I do not care about this. > There are standards for unit prefixes: k means kilo, M means Mega, G > means Giga, etc. See: > https://en.wikipedia.org/wiki/International_System_of_Units#Prefixes > > >> million gigabytes per second, or 1.2 * 10^18 Bytes/s. I don't think > >> any known network interface can do that, including highly experimental > >> ones. > >> > >> I suspect you intended to claim that you expect 1.2GB/s (Gigabytes per > >> second) over that 10Gb/s (Gigabits per second) network. > >> That's still on the high side of what's possible. On TCP/IP there is > >> some TCP overhead, so 1.0 GB/s is probably more realistic. > > > > TCP give 5-7% overhead (include retransmits). > > 10^9/8*0.97 = 1.2125 > > In information science, Bytes are counted in multiples of 2, not 10. A > kb is 1024 bits or 2^10 b. So 10 Gb is 10 * 2^30 bits. Interface speeds are counted in multiples of 10. 10Mbit ethernet has a speed of 10^7 bit/s. 64Kbit ISDN has a speed of 64000 bit/s, not 65536. > It's also not unusual to be more specific about that 2-base and use > kib, Mib and Gib instead. > > Apparently you didn't know that... > > Also, if you take 5% off, you are left with (0.95 * 10 * 2^30) / 8 = > 1.1875 B/s, not 0.97 * ... Your calculations were a bit optimistic. May bug. 10^10/8*0.93 = 1162500000 = 1162.5 > Now I have to admit I'm used to use a factor of 10 to convert from b/s > to B/s (that's 20%!), but that's probably no longer correct, what with > jumbo frames and all. Ok, maybe the topic starter used a software-metered speed with MGBs as 1048576 bytes per second.
1162500000/1048576 = 1108.64 From owner-freebsd-net@freebsd.org Mon Aug 17 16:54:44 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 1394E9BB0BA for ; Mon, 17 Aug 2015 16:54:44 +0000 (UTC) (envelope-from sobomax@sippysoft.com) Received: from mail-ig0-f175.google.com (mail-ig0-f175.google.com [209.85.213.175]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id CEF071101 for ; Mon, 17 Aug 2015 16:54:43 +0000 (UTC) (envelope-from sobomax@sippysoft.com) Received: by igfj19 with SMTP id j19so62208975igf.0 for ; Mon, 17 Aug 2015 09:54:36 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:sender:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=Zh8KAxPjL6L+1YaNZ2REcOqnkh/SeV7M6yaOm6Tuawk=; b=AQDD0LvnRYH4XUcd4MTWhGd1JHz/yeIFtJeqM8UPOaSWiKWPbVn28ya6SKJIlSqj8y wsAR5BJNTkir7OwQXpcJZk2l8Lk0mt32BC9mufnujn4F3r+AHjZQiFy42NlE4g+Mxq8C /yA2p6aE2c8D5AaTtZfVwOGcTrnGHUrQY1wOzxcHnhedYoLogXY24hsrYLXmb8FjMBiA scZ6C6EagOWt2oN9lrFdcL0vV4XO+nGnhE/ad4+iWzUEyPsNjCnHp1O0ihHWfhV0NbuI LI1C7h3moLvtqRmdPceG8PmVwfmuDMvkx/+b1YDNl6oLaBa9zadp5xZL1CEkQ8aZuqsc YzJg== X-Gm-Message-State: ALoCoQk+yMjbDVFyFu7pyvlstlAqSlh4jrLN7tnV9e4qWhUEDnf+IRVeolJ1xB0KzpyWQqos3bQj MIME-Version: 1.0 X-Received: by 10.50.64.244 with SMTP id r20mr16634854igs.33.1439830476566; Mon, 17 Aug 2015 09:54:36 -0700 (PDT) Sender: sobomax@sippysoft.com Received: by 10.79.107.143 with HTTP; Mon, 17 Aug 2015 09:54:36 -0700 (PDT) In-Reply-To: References: <77171439377164@web21h.yandex.ru> <55CB2F18.40902@FreeBSD.org> Date: Mon, 17 Aug 2015 09:54:36 -0700 X-Google-Sender-Auth: ftvdksynj4flWblXyPjrxjex3so Message-ID: Subject: Re: Poor 
high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1 From: Maxim Sobolev To: Luigi Rizzo Cc: Babak Farrokhi , "Alexander V. Chernikov" , Olivier Cochard-Labbé , "freebsd@intel.com" , Jev Björsell , FreeBSD Net Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 16:54:44 -0000 I think we are getting better performance today with the IXGBE_FDIR switched off. It's not 100% decisive though, since we've only pushed it to a little bit below 200kpps. We'll push more traffic tomorrow and see how it goes. -Maxim On Fri, Aug 14, 2015 at 10:29 AM, Maxim Sobolev wrote: > Hi guys, unfortunately no, neither reduction of the number of queues from > 8 to 6 nor pinning interrupt rate at 20000 per queue has made any > difference. The card still goes kaboom at about 200Kpps no matter what. In > fact I've gone a bit further, and after the first spike went on and pushed > the interrupt rate even further down to 10000, but again no difference either, > it still blows at the same mark. Although it did have an effect on interrupt > rate reduction from 190K to some 130K according to the systat -vm, so that > the moderation itself seems to be working fine. We will try disabling IXGBE_FDIR > tomorrow and see if it helps.
> > http://sobomax.sippysoft.com/ScreenShot391.png <- systat -vm with > max_interrupt_rate = 20000 right before overload > > http://sobomax.sippysoft.com/ScreenShot392.png <- systat -vm during issue > unfolding (max_interrupt_rate = 10000) > > http://sobomax.sippysoft.com/ScreenShot394.png <- cpu/net monitoring, > first two spikes are with max_interrupt_rate = 20000, the third one max_interrupt_rate > = 10000 > > -Max > > On Wed, Aug 12, 2015 at 5:23 AM, Luigi Rizzo wrote: > >> As I was telling to maxim, you should disable aim because it only matches >> the max interrupt rate to the average packet size, which is the last thing >> you want. >> >> Setting the interrupt rate with sysctl (one per queue) gives you precise >> control on the max rate and (hence, extra latency). 20k interrupts/s give >> you 50us of latency, and the 2k slots in the queue are still enough to >> absorb a burst of min-sized frames hitting a single queue (the os will >> start dropping long before that level, but that's another story). >> >> Cheers >> Luigi >> >> On Wednesday, August 12, 2015, Babak Farrokhi >> wrote: >> >>> I ran into the same problem with almost the same hardware (Intel X520) >>> on 10-STABLE. HT/SMT is disabled and cards are configured with 8 queues, >>> with the same sysctl tunings as sobomax@ did. I am not using lagg, no >>> FLOWTABLE. >>> >>> I experimented with pmcstat (RESOURCE_STALLS) a while ago and here [1] >>> [2] you can see the results, including pmc output, callchain, flamegraph >>> and gprof output.
>>> >>> I am experiencing a huge number of interrupts with 200kpps load: >>> >>> # sysctl dev.ix | grep interrupt_rate >>> dev.ix.1.queue7.interrupt_rate: 125000 >>> dev.ix.1.queue6.interrupt_rate: 6329 >>> dev.ix.1.queue5.interrupt_rate: 500000 >>> dev.ix.1.queue4.interrupt_rate: 100000 >>> dev.ix.1.queue3.interrupt_rate: 50000 >>> dev.ix.1.queue2.interrupt_rate: 500000 >>> dev.ix.1.queue1.interrupt_rate: 500000 >>> dev.ix.1.queue0.interrupt_rate: 100000 >>> dev.ix.0.queue7.interrupt_rate: 500000 >>> dev.ix.0.queue6.interrupt_rate: 6097 >>> dev.ix.0.queue5.interrupt_rate: 10204 >>> dev.ix.0.queue4.interrupt_rate: 5208 >>> dev.ix.0.queue3.interrupt_rate: 5208 >>> dev.ix.0.queue2.interrupt_rate: 71428 >>> dev.ix.0.queue1.interrupt_rate: 5494 >>> dev.ix.0.queue0.interrupt_rate: 6250 >>> >>> [1] http://farrokhi.net/~farrokhi/pmc/6/ >>> [2] http://farrokhi.net/~farrokhi/pmc/7/ >>> >>> Regards, >>> Babak >>> >>> >>> Alexander V. Chernikov wrote: >>> > 12.08.2015, 02:28, "Maxim Sobolev" : >>> >> Olivier, keep in mind that we are not "kernel forwarding" packets, >>> but "app >>> >> forwarding", i.e. the packet goes full way >>> >> net->kernel->recvfrom->app->sendto->kernel->net, which is why we have >>> much >>> >> lower PPS limits and which is why I think we are actually benefiting >>> from >>> >> the extra queues. Single-thread sendto() in a loop is CPU-bound at >>> about >>> >> 220K PPS, and while running the test I am observing that outbound >>> traffic >>> >> from one thread is mapped into a specific queue (well, pair of queues >>> on >>> >> two separate adaptors, due to lagg load balancing action). And the >>> peak >>> >> performance of that test is at 7 threads, which I believe corresponds >>> to >>> >> the number of queues. We have plenty of CPU cores in the box (24) with >>> >> HTT/SMT disabled and one CPU is mapped to a specific queue. This >>> leaves us >>> >> with at least 8 CPUs fully capable of running our app.
If you look at >>> the >> CPU utilization, we are at about 10% when the issue hits. >>> > >>> > In any case, it would be great if you could provide some profiling >>> info since there could be >>> > plenty of problematic places starting from TX rings contention to some >>> locks inside udp or even >>> > (in)famous random entropy harvester.. >>> > e.g. something like pmcstat -TS instructions -w1 might be sufficient >>> to determine the reason >>> >> ix0: >> 2.5.15> port >>> >> 0x6020-0x603f mem 0xc7c00000-0xc7dfffff,0xc7e04000-0xc7e07fff irq 40 >>> at >>> >> device 0.0 on pci3 >>> >> ix0: Using MSIX interrupts with 9 vectors >>> >> ix0: Bound queue 0 to cpu 0 >>> >> ix0: Bound queue 1 to cpu 1 >>> >> ix0: Bound queue 2 to cpu 2 >>> >> ix0: Bound queue 3 to cpu 3 >>> >> ix0: Bound queue 4 to cpu 4 >>> >> ix0: Bound queue 5 to cpu 5 >>> >> ix0: Bound queue 6 to cpu 6 >>> >> ix0: Bound queue 7 to cpu 7 >>> >> ix0: Ethernet address: 0c:c4:7a:5e:be:64 >>> >> ix0: PCI Express Bus: Speed 5.0GT/s Width x8 >>> >> 001.000008 [2705] netmap_attach success for ix0 tx 8/4096 rx >>> >> 8/4096 queues/slots >>> >> ix1: >> 2.5.15> port >>> >> 0x6000-0x601f mem 0xc7a00000-0xc7bfffff,0xc7e00000-0xc7e03fff irq 44 >>> at >>> >> device 0.1 on pci3 >>> >> ix1: Using MSIX interrupts with 9 vectors >>> >> ix1: Bound queue 0 to cpu 8 >>> >> ix1: Bound queue 1 to cpu 9 >>> >> ix1: Bound queue 2 to cpu 10 >>> >> ix1: Bound queue 3 to cpu 11 >>> >> ix1: Bound queue 4 to cpu 12 >>> >> ix1: Bound queue 5 to cpu 13 >>> >> ix1: Bound queue 6 to cpu 14 >>> >> ix1: Bound queue 7 to cpu 15 >>> >> ix1: Ethernet address: 0c:c4:7a:5e:be:65 >>> >> ix1: PCI Express Bus: Speed 5.0GT/s Width x8 >>> >> 001.000009 [2705] netmap_attach success for ix1 tx 8/4096 rx >>> >> 8/4096 queues/slots >>> >> >>> >> On Tue, Aug 11, 2015 at 4:14 PM, Olivier Cochard-Labbé < >>> olivier@cochard.me> >>> >> wrote: >>> >> >>> >>> On Tue, Aug 11, 2015 at 11:18 PM, Maxim Sobolev < >>> sobomax@freebsd.org> >>> >>>
wrote: >>> >>> >>> >>>> Hi folks, >>> >>>> >>> >>>> Hi, >>> >>> >>> >>> >>> >>>> We're trying to migrate some of our high-PPS systems to new >>> hardware >>> >>>> that >>> >>>> has four X540-AT2 10G NICs and observed that interrupt time goes >>> through >>> >>>> the roof after we cross around 200K PPS in and 200K out (two ports in >>> LACP). >>> >>>> The previous hardware was stable up to about 350K PPS in and 350K >>> out. I >>> >>>> believe the old one was equipped with the I350 and had the >>> identical LACP >>> >>>> configuration. The new box also has better CPU with more cores >>> (i.e. 24 >>> >>>> cores vs. 16 cores before). CPU itself is 2 x E5-2690 v3. >>> >>> 200K PPS, and even 350K PPS, are very low values indeed. >>> >>> On an Intel Xeon L5630 (4 cores only) with one X540-AT2 >>> >>> >>> >>> (then 2 10Gigabit ports) I've reached about 1.8Mpps >>> (fastforwarding >>> >>> enabled) [1]. >>> >>> But my setup didn't use lagg(4): Can you disable the lagg configuration >>> and >>> >>> re-measure your performance without lagg? >>> >>> >>> >>> Do you let the Intel NIC drivers use 8 queues per port too? >>> >>> In my use case (forwarding smallest UDP packet size), I obtain >>> better >>> >>> behaviour by limiting NIC queues to 4 (hw.ix.num_queues or >>> >>> hw.ixgbe.num_queues, don't remember) if my system had 8 cores. And >>> this >>> >>> with Gigabit Intel[2] or Chelsio NIC [3]. >>> >>> >>> >>> Don't forget to disable TSO and LRO too.
>>> >>> Regards, >>> >>> Olivier >>> >>> [1] >>> >>> >>> http://bsdrp.net/documentation/examples/forwarding_performance_lab_of_an_ibm_system_x3550_m3_with_10-gigabit_intel_x540-at2#graphs >>> >>> [2] >>> >>> >>> http://bsdrp.net/documentation/examples/forwarding_performance_lab_of_a_superserver_5018a-ftn4#graph1 >>> >>> [3] >>> >>> >>> http://bsdrp.net/documentation/examples/forwarding_performance_lab_of_a_hp_proliant_dl360p_gen8_with_10-gigabit_with_10-gigabit_chelsio_t540-cr#reducing_nic_queues >>> >> _______________________________________________ >>> >> freebsd-net@freebsd.org mailing list >>> >> http://lists.freebsd.org/mailman/listinfo/freebsd-net >>> >> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org >>> " >>> > _______________________________________________ >>> > freebsd-net@freebsd.org mailing list >>> > http://lists.freebsd.org/mailman/listinfo/freebsd-net >>> > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" >>> _______________________________________________ >>> freebsd-net@freebsd.org mailing list >>> http://lists.freebsd.org/mailman/listinfo/freebsd-net >>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" >> >> >> >> -- >> -----------------------------------------+------------------------------- >> Prof. Luigi RIZZO, rizzo@iet.unipi.it . Dip. di Ing. dell'Informazione >> http://www.iet.unipi.it/~luigi/ . Universita` di Pisa >> TEL +39-050-2217533 . via Diotisalvi 2 >> Mobile +39-338-6809875 .
56122 PISA (Italy) >> -----------------------------------------+------------------------------= - >> >> > From owner-freebsd-net@freebsd.org Mon Aug 17 19:06:30 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 7CCCB9BC003 for ; Mon, 17 Aug 2015 19:06:30 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 695CD1883 for ; Mon, 17 Aug 2015 19:06:30 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7HJ6UBT025695 for ; Mon, 17 Aug 2015 19:06:30 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 200323] BPF userland misuse can crash the system Date: Mon, 17 Aug 2015 19:06:28 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: needs-qa, patch X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: commit-hook@freebsd.org X-Bugzilla-Status: Open X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-net@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: mfc-stable10+ X-Bugzilla-Changed-Fields: Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 19:06:30 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=200323 --- Comment #23 from commit-hook@freebsd.org --- A commit references this bug: Author: loos Date: Mon Aug 17 19:06:15 UTC 2015 New revision: 286859 URL: https://svnweb.freebsd.org/changeset/base/286859 Log: MFC r286260: Remove the mtx_sleep() from the kqueue f_event filter. The filter is called from the network hot path and must not sleep. The filter runs with the descriptor lock held and does not manipulate the buffers, so it is not necessary sleep when the hold buffer is in use. Just ignore the hold buffer contents when it is being copied to user space (when hold buffer in use is set). This fix the "Sleeping thread owns a non-sleepable lock" panic when the userland thread is too busy reading the packets from bpf(4). PR: 200323 Sponsored by: Rubicon Communications (Netgate) Changes: _U stable/10/ stable/10/sys/net/bpf.c -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-net@freebsd.org Mon Aug 17 19:08:22 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 045089BC044 for ; Mon, 17 Aug 2015 19:08:22 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id E46F01953 for ; Mon, 17 Aug 2015 19:08:21 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7HJ8LZe027911 for ; Mon, 17 Aug 2015 19:08:21 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 200323] BPF userland misuse can crash the system Date: Mon, 17 Aug 2015 19:08:21 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.1-RELEASE X-Bugzilla-Keywords: needs-qa, patch X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: loos@FreeBSD.org X-Bugzilla-Status: Closed X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-net@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: mfc-stable10+ X-Bugzilla-Changed-Fields: bug_status resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 
2015 19:08:22 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=200323 Luiz Otavio O Souza,+55 (14) 99772-1255 changed: What |Removed |Added ---------------------------------------------------------------------------- Status|Open |Closed Resolution|--- |FIXED -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-net@freebsd.org Mon Aug 17 21:50:22 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C9F8D9BB836; Mon, 17 Aug 2015 21:50:22 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 6384511F2; Mon, 17 Aug 2015 21:50:21 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:oguZsh8vk9kE7f9uRHKM819IXTAuvvDOBiVQ1KB82uMcTK2v8tzYMVDF4r011RmSDd6dt6kMotGVmp6jcFRI2YyGvnEGfc4EfD4+ouJSoTYdBtWYA1bwNv/gYn9yNs1DUFh44yPzahANS47AblHf6ke/8SQVUk2mc1ElfaKpQcb7tIee6aObw9XreQJGhT6wM/tZDS6dikHvjPQQmpZoMa0ryxHE8TNicuVSwn50dxrIx06vru/5xpNo8jxRtvQ97IYAFPyiJ+VrBYFeFyksZmAp+NXw516ESQqU+mBaXH8bnxBTD07C9h69W57wti7zsK152TKGPMv4Svc6Qzmv5bxnDQT0gS0DOm0E9nrKgJlwkL5Du0Dm4Bh+2JLPJo+POfd0Za+beskVAm9IX8JUXioGBoKnc4oJAe1GM/xVooPmqx4VsRK0AQT/OOS65jZOh3LylYcg2uIgChqOiAApGdQfmH/P6tXoNqZUWOvzza2enhvZaPYD4zb268DtexsipfyJFeZqdMPayk0iEivYiVqNpIj9P3We37Je4CCg8+N8WLf32CYcoAZrr23qn590hw== X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2BFAgBUGNJV/61jaINdDoNhaQaDHrpKAQmBawqFL0oCgWgUAQEBAQEBAQGBCYIdggYBAQEDAQEBASAEJyALBQsCAQgOCgICDRYDAgIhBgEJFRECBAgHBAEcBId4AwoIDbsukB0NhVcBAQEBAQEEAQEBAQEBGASBIoowgk+BaAEBBxUBMweCaYFDBYcijXuFBIUGdYM3kSeDT4NlAiaDP1oiMwd/CBcjgQQBAQE X-IronPort-AV: E=Sophos;i="5.15,697,1432612800"; d="scan'208";a="233068245" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 17 
Aug 2015 17:49:11 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id DD2B815F565; Mon, 17 Aug 2015 17:49:11 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id jAkikwheNEJB; Mon, 17 Aug 2015 17:49:11 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 05CB815F56D; Mon, 17 Aug 2015 17:49:11 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 6mbRortKRG-f; Mon, 17 Aug 2015 17:49:10 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id D914115F565; Mon, 17 Aug 2015 17:49:10 -0400 (EDT) Date: Mon, 17 Aug 2015 17:49:10 -0400 (EDT) From: Rick Macklem To: Daniel Braniss Cc: FreeBSD Net , Slawa Olhovchenkov , FreeBSD stable , Christopher Forgeron Message-ID: <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: eHOLT35UpIGat4XSkuNV2057nk2L2g== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with 
FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 21:50:23 -0000 Daniel Braniss wrote: > > > On Aug 17, 2015, at 3:21 PM, Rick Macklem wrote: > > > > Daniel Braniss wrote: > >> > >>> On Aug 17, 2015, at 1:41 PM, Christopher Forgeron > >>> wrote: > >>> > >>> FYI, I can regularly hit 9.3 Gib/s with my Intel X520-DA2's and FreeBSD > >>> 10.1. Before 10.1 it was less. > >>> > >> > >> this is NOT iperf/3 where i do get close to wire speed, > >> it's NFS writes, i.e., almost real work :-) > >> > >>> I used to tweak the card settings, but now it's just stock. You may want > >>> to > >>> check your settings, the Mellanox may just have better defaults for your > >>> switch. > >>> > > Have you tried disabling TSO for the Intel? With TSO enabled, it will be > > copying > > every transmitted mbuf chain to a new chain of mbuf clusters via > > m_defrag() when > > TSO is enabled. (Assuming you aren't an 82598 chip. Most seem to be the > > 82599 chip > > these days?) > > > > hi Rick > > how can i check the chip? > Haven't a clue. Does "dmesg" tell you? (To be honest, since disabling TSO helped, I'll bet you don't have an 82598.) > > This has been fixed in the driver very recently, but those fixes won't be > > in 10.1. > > > > rick > > ps: If you could test with 10.2, it would be interesting to see how the ix > > does with > > the current driver fixes in it? > > I knew TSO was involved! > ok, firstly, it's 10.2 stable. > with TSO enabled, ix is bad, around 64MGB/s. > disabling TSO it's better, around 130 > Hmm, could you check to see if these lines are in sys/dev/ixgbe/if_ix.c at around line#2500? /* TSO parameters */ 2572 ifp->if_hw_tsomax = 65518; 2573 ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER; 2574 ifp->if_hw_tsomaxsegsize = 2048; They are in stable/10. I didn't look at releng/10.2.
(And if they're in a #ifdef for FreeBSD11, take the #ifdef away.) If they are there and not ifdef'd, I can't explain why disabling TSO would help. Once TSO is fixed so that it handles the 64K transmit segments without copying all the mbufs, I suspect you might get better perf. with it enabled? Good luck with it, rick > still, mlxen0 is about 250! with and without TSO > > > > > >>> On Mon, Aug 17, 2015 at 6:41 AM, Slawa Olhovchenkov >>> > wrote: > >>> On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: > >>> > >>>> hi, > >>>> I have a host (Dell R730) with both cards, connected to an HP8200 > >>>> switch at 10Gb. > >>>> when writing to the same storage (netapp) this is what I get: > >>>> ix0: ~130MGB/s > >>>> mlxen0 ~330MGB/s > >>>> this is via nfs/tcpv3 > >>>> > >>>> I can get similar (bad) performance with the mellanox if I increase > >>>> the file size > >>>> to 512MGB. > >>> > >>> Look like mellanox have internal buffer for caching and do ACK > >>> accelerating. > >>> > >>>> so at face value, it seems the mlxen does a better use of resources > >>>> than the intel. > >>>> Any ideas how to improve ix/intel's performance? > >>> > >>> Are you sure about netapp performance?
> >>> _______________________________________________ > >>> freebsd-net@freebsd.org mailing list > >>> https://lists.freebsd.org/mailman/listinfo/freebsd-net > >>> > >>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org > >>> " > >>>=20 > >>=20 > >> _______________________________________________ > >> freebsd-stable@freebsd.org mailing list > >> https://lists.freebsd.org/mailman/listinfo/freebsd-stable > >> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.o= rg" >=20 > _______________________________________________ > freebsd-stable@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-stable > To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" From owner-freebsd-net@freebsd.org Mon Aug 17 23:10:31 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C28FA9BC6AC for ; Mon, 17 Aug 2015 23:10:31 +0000 (UTC) (envelope-from delphij@delphij.net) Received: from anubis.delphij.net (anubis.delphij.net [IPv6:2001:470:1:117::25]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "anubis.delphij.net", Issuer "StartCom Class 1 Primary Intermediate Server CA" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id AC6E21387; Mon, 17 Aug 2015 23:10:31 +0000 (UTC) (envelope-from delphij@delphij.net) Received: from zeta.ixsystems.com (unknown [12.229.62.2]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by anubis.delphij.net (Postfix) with ESMTPSA id 0A9371232A; Mon, 17 Aug 2015 16:10:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=delphij.net; s=anubis; t=1439853031; x=1439867431; bh=wl5LkbfYvL6NJtUF8LIbczHnnQVB+tEYHiu45S9lPTA=; h=Reply-To:To:Cc:From:Subject:Date; b=PVmEYcVYH/bddyMgeFxYAgCZCIZW4HE0ft6MVH8Imh3ifkcodLDAUwZK1tuxsCsXj 
esJDFsRdh2NAGuXNEzlJ+waful1xBRuz4Tku3kbKlOIlCNAVgyTR/ekKEheJNzYbee Ue258xmMEP8Pe1QlWb8FgR8iWlmcXsM8CyClj7DY= Reply-To: d@delphij.net To: "Alexander V. Chernikov" Cc: "freebsd-net@freebsd.org" From: Xin Li Subject: Panic with recent -CURRENT X-Enigmail-Draft-Status: N1110 Organization: The FreeBSD Project Message-ID: <55D269E1.8000307@delphij.net> Date: Mon, 17 Aug 2015 16:10:25 -0700 MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="ucXQOKKobDgKJXKsH1AjVk26qCXpwfMjd" X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 Aug 2015 23:10:32 -0000 This is an OpenPGP/MIME signed message (RFC 4880 and 3156) --ucXQOKKobDgKJXKsH1AjVk26qCXpwfMjd Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable Hi, Alexander, I'm seeing the following backtrace with kernel trap 12 at fault address of 0xf4, and the backtrace is: arpintr() at arpintr+0x85e netisr_dispatch_src() at netisr_dispatch_src+0x61 I have then read if_ether.c as of r286525. In line 611, la is initialized as NULL; in line 751, the test ifp->if_addrlen != ah->ar_hln takes the true path, so we would reach line 752: LLE_WUNLOCK(la); and that would cause the panic. Taking a closer look, it seems that we can't reach 'match:' with a known llentry, and can assert la == NULL in line 752. The unlock seems to be unneeded there and should be removed. Does the following patch look sane to you?
Index: sys/netinet/if_ether.c =================================================================== --- sys/netinet/if_ether.c (revision 286847) +++ sys/netinet/if_ether.c (working copy) @@ -749,7 +749,6 @@ match: } if (ifp->if_addrlen != ah->ar_hln) { - LLE_WUNLOCK(la); ARP_LOG(LOG_WARNING, "from %*D: addr len: new %d, " "i/f %d (ignored)\n", ifp->if_addrlen, (u_char *) ar_sha(ah), ":", ah->ar_hln, Cheers, -- Xin LI https://www.delphij.net/ FreeBSD - The Power to Serve! Live free or die --ucXQOKKobDgKJXKsH1AjVk26qCXpwfMjd Content-Type: application/pgp-signature; name="signature.asc" Content-Description: OpenPGP digital signature Content-Disposition: attachment; filename="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.1.7 (FreeBSD) iQIcBAEBCgAGBQJV0mnmAAoJEJW2GBstM+nsV+8P/3pCS7xDQc9u0dyS4PafYAPf l3Eibhlr62ppO88gadPo/FJlQRGQV08zIdE8pzx0Ewb+aa4LJpLAWF3G6fKmcnXS FSv/iBr3yBCYBxhm5fCTzsXEtt61yKtXQWR0dyFLjMJ0xc6m9Gm0L6LXdMNn2y9W 2VX32h8IygB2ihM4X2PtjR/r/WLFAWND6Jch22loIsoyGHy46iUxLkc4rPO1S6O5 25If8Qivp9RvBuPztIqn4Ak1FHlcy1tpdbt32T3N2TUP0A/uenJCNxxmVCH5fDa6 jZ/5VoQTLuT582e6HgK3NW417sj5Ue9h24bj5p1i5fOvGyZrSZ9JPyGy0qVQpzrl wxq39nRWQdt3hamitypllB+f0VSo823NaQdJAvpq6aiBVklummCPl4/dD7lW14Vd zpdo8diIxtdzNdriIpYJSg9pesbhlV9xcYZKWEZoiOKlK7Ruvri7gfq1uOXQTiFM CfXBbLXKIEXoyGqLJvywTAglOvnNiCQmVlgWonfRyKv54XnDGpj1yWhKJmQdrbeJ vWenmXGILhPL0uB3D/jT8Xol3v6c49SRT9rZfj/fCaCp7nvJXtkQOHlwiY4IdhwJ midKQDKd6cB2kur6MPMmW27P8e6kblB+dDe3iMK+TWZcWi/m7F3LYORAwncz88Hf uDK4O37QuoLf3sV6MK2S =m6Ld -----END PGP SIGNATURE----- --ucXQOKKobDgKJXKsH1AjVk26qCXpwfMjd-- From owner-freebsd-net@freebsd.org Tue Aug 18 06:17:52 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id EC4F09BC5E6 for ; Tue, 18 Aug 2015
06:17:51 +0000 (UTC) (envelope-from melifaro@ipfw.ru) Received: from forward16j.cmail.yandex.net (forward16j.cmail.yandex.net [5.255.227.235]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "forwards.mail.yandex.net", Issuer "Certum Level IV CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id A770B964 for ; Tue, 18 Aug 2015 06:17:51 +0000 (UTC) (envelope-from melifaro@ipfw.ru) Received: from web25j.yandex.ru (web25j.yandex.ru [5.45.198.66]) by forward16j.cmail.yandex.net (Yandex) with ESMTP id CBACF2165B; Tue, 18 Aug 2015 09:17:23 +0300 (MSK) Received: from 127.0.0.1 (localhost [127.0.0.1]) by web25j.yandex.ru (Yandex) with ESMTP id 1E3561320B56; Tue, 18 Aug 2015 09:17:22 +0300 (MSK) Received: by web25j.yandex.ru with HTTP; Tue, 18 Aug 2015 09:17:22 +0300 From: Alexander V. Chernikov Envelope-From: melifaro@ipfw.ru To: "d@delphij.net" Cc: "freebsd-net@freebsd.org" In-Reply-To: <55D269E1.8000307@delphij.net> References: null <55D269E1.8000307@delphij.net> Subject: Re: Panic with recent -CURRENT Message-Id: <2268691439878642@web25j.yandex.ru> X-Mailer: Yamail [ http://yandex.ru ] 5.0 Date: Tue, 18 Aug 2015 09:17:22 +0300 MIME-Version: 1.0 Content-Type: text/plain; charset="koi8-r" X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 06:17:52 -0000 From owner-freebsd-net@freebsd.org Tue Aug 18 07:17:32 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id BAE3D9A0522; Tue, 18 Aug 2015 07:17:32 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from kabab.cs.huji.ac.il (kabab.cs.huji.ac.il [132.65.116.210]) (using TLSv1 with cipher 
DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 44F84616; Tue, 18 Aug 2015 07:17:31 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from mbpro-w.cs.huji.ac.il ([132.65.80.91]) by kabab.cs.huji.ac.il with esmtp id 1ZRb99-0000T0-QC; Tue, 18 Aug 2015 10:17:23 +0300 Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2102\)) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Daniel Braniss In-Reply-To: <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> Date: Tue, 18 Aug 2015 10:07:23 +0300 Cc: FreeBSD Net , Slawa Olhovchenkov , FreeBSD stable , Christopher Forgeron Message-Id: <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> To: Rick Macklem X-Mailer: Apple Mail (2.2102) Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 07:17:32 -0000 > On Aug 18, 2015, at 12:49 AM, Rick Macklem = wrote: >=20 > Daniel Braniss wrote: >>=20 >>> On Aug 17, 2015, at 3:21 PM, Rick Macklem = wrote: >>>=20 >>> Daniel Braniss wrote: >>>>=20 >>>>> On Aug 17, 2015, at 1:41 PM, Christopher Forgeron = >>>>> wrote: >>>>>=20 >>>>> FYI, I can regularly hit 9.3 Gib/s with my Intel X520-DA2's and = FreeBSD >>>>> 10.1. Before 10.1 it was less. 
>>>>> >>>> >>>> this is NOT iperf/3 where i do get close to wire speed, >>>> it's NFS writes, i.e., almost real work :-) >>>> >>>>> I used to tweak the card settings, but now it's just stock. You may want >>>>> to >>>>> check your settings, the Mellanox may just have better defaults for your >>>>> switch. >>>>> >>> Have you tried disabling TSO for the Intel? With TSO enabled, it will be >>> copying >>> every transmitted mbuf chain to a new chain of mbuf clusters via >>> m_defrag() when >>> TSO is enabled. (Assuming you aren't an 82598 chip. Most seem to be the >>> 82599 chip >>> these days?) >>> >> >> hi Rick >> >> how can i check the chip? >> > Haven't a clue. Does "dmesg" tell you? (To be honest, since disabling TSO helped, > I'll bet you don't have an 82598.) > >>> This has been fixed in the driver very recently, but those fixes won't be >>> in 10.1. >>> >>> rick >>> ps: If you could test with 10.2, it would be interesting to see how the ix >>> does with >>> the current driver fixes in it? >> >> I knew TSO was involved! >> ok, firstly, it's 10.2 stable. >> with TSO enabled, ix is bad, around 64MB/s. >> disabling TSO it's better, around 130 >> > Hmm, could you check to see if these lines are in sys/dev/ixgbe/if_ix.c at > around > line#2500? > /* TSO parameters */ > 2572 ifp->if_hw_tsomax = 65518; > 2573 ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER; > 2574 ifp->if_hw_tsomaxsegsize = 2048; > > They are in stable/10. I didn't look at releng/10.2. (And if they're in a #ifdef > for FreeBSD11, take the #ifdef away.) > If they are there and not ifdef'd, I can't explain why disabling TSO would help. > Once TSO is fixed so that it handles the 64K transmit segments without copying all > the mbufs, I suspect you might get better perf. with it enabled? > this is 10.2: they are on lines 2509-2511 and I don't see any #ifdefs around it.
the plot thickens :-) danny > Good luck with it, rick > >> still, mlxen0 is about 250! with and without TSO >> >> >>> >>>>> On Mon, Aug 17, 2015 at 6:41 AM, Slawa Olhovchenkov >>>> > wrote: >>>>> On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: >>>>> >>>>>> hi, >>>>>> I have a host (Dell R730) with both cards, connected to an HP8200 >>>>>> switch at 10Gb. >>>>>> when writing to the same storage (netapp) this is what I get: >>>>>> ix0: ~130MB/s >>>>>> mlxen0: ~330MB/s >>>>>> this is via nfs/tcpv3 >>>>>> >>>>>> I can get similar (bad) performance with the mellanox if I increase >>>>>> the file size >>>>>> to 512MB. >>>>> >>>>> Looks like the mellanox has an internal buffer for caching and does ACK >>>>> acceleration. >>>>> >>>>>> so at face value, it seems the mlxen makes better use of resources >>>>>> than the intel. >>>>>> Any ideas how to improve ix/intel's performance? >>>>> >>>>> Are you sure about netapp performance? >>>>> _______________________________________________ >>>>> freebsd-net@freebsd.org mailing list >>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net >>>>> >>>>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org >>>>> " >>>>> >>>> >>>> _______________________________________________ >>>> freebsd-stable@freebsd.org mailing list >>>> https://lists.freebsd.org/mailman/listinfo/freebsd-stable >>>> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" >> >> _______________________________________________ >> freebsd-stable@freebsd.org mailing list >> https://lists.freebsd.org/mailman/listinfo/freebsd-stable >> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" From owner-freebsd-net@freebsd.org Tue Aug 18 08:16:51 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 489549B89A2
for ; Tue, 18 Aug 2015 08:16:51 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 6E84D1EF9 for ; Tue, 18 Aug 2015 08:16:49 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id LAA22004 for ; Tue, 18 Aug 2015 11:16:42 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1ZRc4X-00092N-KN for freebsd-net@freebsd.org; Tue, 18 Aug 2015 11:16:41 +0300 To: freebsd-net@FreeBSD.org From: Andriy Gapon Subject: pf and new interface Message-ID: <55D2E9B3.2040301@FreeBSD.org> Date: Tue, 18 Aug 2015 11:15:47 +0300 User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Mailman-Approved-At: Tue, 18 Aug 2015 11:05:07 +0000 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 08:16:51 -0000 I have the following rule in pf.conf: set skip on tap and even the following one: set skip on tap0 The rules are loaded at the system start-up time, but the tap interface may not be created until much later. When tap0 is first created the skip rules are not applied to it and the traffic gets filtered. If I reload the pf configuration, then the rules start working. Is there a way to make pf honor such rules for the dynamic interfaces? 
-- Andriy Gapon From owner-freebsd-net@freebsd.org Tue Aug 18 11:18:34 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 50E1C9BCFF8 for ; Tue, 18 Aug 2015 11:18:34 +0000 (UTC) (envelope-from artemrts@ukr.net) Received: from frv199.fwdcdn.com (frv199.fwdcdn.com [212.42.77.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 158FF3E3 for ; Tue, 18 Aug 2015 11:18:33 +0000 (UTC) (envelope-from artemrts@ukr.net) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=ukr.net; s=ffe; h=Content-Transfer-Encoding:Content-Type:MIME-Version:References:In-Reply-To:Message-Id:Cc:To:Subject:From:Date; bh=A0QgCvLFF12bwhsvQFWnSoR9J9DXRXSvf3KUJVz06qs=; b=AN5r/LPZeWI6joX9YdpzpRpZSb2xLP3E2Fqsxvryfu1y9P21tBpGctABonWzh82jI7e3y2/fqdwJk9ksxy2wsmoa7yO2czYIsmHSeywQKVUzzOPMQkL0y6wbXviMK0Hc0jEPPXRf2qrZTWXdU+FvP27yR7ZA8MVf3LLsJpH4kfA=; Received: from [10.10.10.34] (helo=frv34.fwdcdn.com) by frv199.fwdcdn.com with smtp ID 1ZReuL-000Gy2-VQ for freebsd-net@freebsd.org; Tue, 18 Aug 2015 14:18:21 +0300 Date: Tue, 18 Aug 2015 14:18:21 +0300 From: wishmaster Subject: Re: pf and new interface To: Andriy Gapon Cc: freebsd-net@freebsd.org X-Mailer: mail.ukr.net 5.0 Message-Id: <1439896563.102588062.s8ouf3nc@frv34.fwdcdn.com> In-Reply-To: <55D2E9B3.2040301@FreeBSD.org> References: <55D2E9B3.2040301@FreeBSD.org> X-Reply-Action: reply Received: from artemrts@ukr.net by frv34.fwdcdn.com; Tue, 18 Aug 2015 14:18:21 +0300 MIME-Version: 1.0 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: binary Content-Disposition: inline X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
X-List-Received-Date: Tue, 18 Aug 2015 11:18:34 -0000   --- Original message --- From: "Andriy Gapon" Date: 18 August 2015, 14:05:15 > I have the following rule in pf.conf: > set skip on tap > and even the following one: > set skip on tap0 > > The rules are loaded at the system start-up time, but the tap interface > may not be created until much later. When tap0 is first created the > skip rules are not applied to it and the traffic gets filtered. If I > reload the pf configuration, then the rules start working. > > Is there a way to make pf honor such rules for the dynamic interfaces? Hi, You should do it in your application, e.g. in mpd this is something like below:

set iface up-script /usr/local/etc/mpd5/link_up.sh
set iface down-script /usr/local/etc/mpd5/link_down.sh

in openvpn - see manuals. Cheers, Vitaliy From owner-freebsd-net@freebsd.org Tue Aug 18 11:35:35 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4E9519BC4A7 for ; Tue, 18 Aug 2015 11:35:35 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 8F58AEDE for ; Tue, 18 Aug 2015 11:35:34 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id OAA25347; Tue, 18 Aug 2015 14:35:31 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1ZRfAw-0009GY-M5; Tue, 18 Aug 2015 14:35:30 +0300 Subject: Re: pf and new interface To: wishmaster References: <55D2E9B3.2040301@FreeBSD.org> <1439896563.102588062.s8ouf3nc@frv34.fwdcdn.com> Cc: freebsd-net@FreeBSD.org From: Andriy Gapon Message-ID: <55D3184B.7050200@FreeBSD.org> Date:
Tue, 18 Aug 2015 14:34:35 +0300 User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: <1439896563.102588062.s8ouf3nc@frv34.fwdcdn.com> Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 11:35:35 -0000 On 18/08/2015 14:18, wishmaster wrote: > --- Original message --- > From: "Andriy Gapon" > Date: 18 August 2015, 14:05:15 > > >> I have the following rule in pf.conf: >> set skip on tap >> and even the following one: >> set skip on tap0 >> >> The rules are loaded at the system start-up time, but the tap interface >> may not be created until much later. When tap0 is first created the >> skip rules are not applied to it and the traffic gets filtered. If I >> reload the pf configuration, then the rules start working. >> >> Is there a way to make pf honor such rules for the dynamic interfaces?Hi, > > You should do it in your application, e.g. in mpd this is something like below > > set iface up-script /usr/local/etc/mpd5/link_up.sh > set iface down-script /usr/local/etc/mpd5/link_down.sh > > in openvpn - see manuals. That's a good suggestion. But how to add a single rule for pf? Reloading the whole configuration is disruptive to existing connections. 
-- Andriy Gapon From owner-freebsd-net@freebsd.org Tue Aug 18 11:55:38 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A9D769BC82A for ; Tue, 18 Aug 2015 11:55:38 +0000 (UTC) (envelope-from artemrts@ukr.net) Received: from frv197.fwdcdn.com (frv197.fwdcdn.com [212.42.77.197]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 6B4941BE2 for ; Tue, 18 Aug 2015 11:55:38 +0000 (UTC) (envelope-from artemrts@ukr.net) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=ukr.net; s=ffe; h=Content-Transfer-Encoding:Content-Type:MIME-Version:References:In-Reply-To:Message-Id:Cc:To:Subject:From:Date; bh=GnhKhWQCbqYqxLuVCdlRnUhpzYe0/isAmqx7t6WX5BI=; b=fV74pqDGF1TS/mEEZCwyQJl4rqaPgz6/Omm4Qx2q9ptY7fZKu7mGUWTwMYiTSq6JzawfDoC6Q5RpcP9eNpjM0/hujhxSxQw6CceO/YdJaR//KdmG6ya/G4CAPypQ+UuXUx3tdkIzd3KuSTCKyC2r4idJxbWXpS0E1KS2lJwomtU=; Received: from [10.10.10.34] (helo=frv34.fwdcdn.com) by frv197.fwdcdn.com with smtp ID 1ZRfUH-000Cdy-VP for freebsd-net@freebsd.org; Tue, 18 Aug 2015 14:55:29 +0300 Date: Tue, 18 Aug 2015 14:55:29 +0300 From: wishmaster Subject: Re[2]: pf and new interface To: Andriy Gapon Cc: freebsd-net@freebsd.org X-Mailer: mail.ukr.net 5.0 Message-Id: <1439898859.98223622.d5j81kl5@frv34.fwdcdn.com> In-Reply-To: <55D3184B.7050200@FreeBSD.org> References: <55D2E9B3.2040301@FreeBSD.org> <1439896563.102588062.s8ouf3nc@frv34.fwdcdn.com> <55D3184B.7050200@FreeBSD.org> X-Reply-Action: reply Received: from artemrts@ukr.net by frv34.fwdcdn.com; Tue, 18 Aug 2015 14:55:29 +0300 MIME-Version: 1.0 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: binary Content-Disposition: inline X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with 
FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 11:55:38 -0000 --- Original message --- From: "Andriy Gapon" Date: 18 August 2015, 14:35:36 > On 18/08/2015 14:18, wishmaster wrote: > > --- Original message --- > > From: "Andriy Gapon" > > Date: 18 August 2015, 14:05:15 > > > > > >> I have the following rule in pf.conf: > >> set skip on tap > >> and even the following one: > >> set skip on tap0 > >> > >> The rules are loaded at the system start-up time, but the tap interface > >> may not be created until much later. When tap0 is first created the > >> skip rules are not applied to it and the traffic gets filtered. If I > >> reload the pf configuration, then the rules start working. > >> > >> Is there a way to make pf honor such rules for the dynamic interfaces?Hi, > > > > You should do it in your application, e.g. in mpd this is something like below > > > > set iface up-script /usr/local/etc/mpd5/link_up.sh > > set iface down-script /usr/local/etc/mpd5/link_down.sh > > > > in openvpn - see manuals. > > That's a good suggestion. But how to add a single rule for pf? > Reloading the whole configuration is disruptive to existing connections. Use anchors. 
Small example:

# VPN Interface Up Script
#
# Script is called like this:
#
# script interface proto local-ip remote-ip authname
#        $1        $2    $3       $4        $5
#

anchor "ng-int/*"

# less if-up.sh
#!/bin/sh
echo "pass quick on $1 all" | pfctl -a ng-int/$1 -f -

# less if-down.sh
#!/bin/sh
pfctl -a ng-int/$1 -F rules

From owner-freebsd-net@freebsd.org Tue Aug 18 12:52:09 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A91C19BCD17 for ; Tue, 18 Aug 2015 12:52:09 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id E6D3C9B7 for ; Tue, 18 Aug 2015 12:52:08 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id PAA26486; Tue, 18 Aug 2015 15:52:07 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1ZRgN4-0009M9-PZ; Tue, 18 Aug 2015 15:52:06 +0300 Subject: Re: pf and new interface To: wishmaster References: <55D2E9B3.2040301@FreeBSD.org> <1439896563.102588062.s8ouf3nc@frv34.fwdcdn.com> <55D3184B.7050200@FreeBSD.org> <1439898859.98223622.d5j81kl5@frv34.fwdcdn.com> Cc: freebsd-net@FreeBSD.org From: Andriy Gapon Message-ID: <55D32A25.8070001@FreeBSD.org> Date: Tue, 18 Aug 2015 15:50:45 +0300 User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: <1439898859.98223622.d5j81kl5@frv34.fwdcdn.com> Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help:
List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 12:52:09 -0000 On 18/08/2015 14:55, wishmaster wrote: > --- Original message --- > From: "Andriy Gapon" > Date: 18 August 2015, 14:35:36 > > > >> On 18/08/2015 14:18, wishmaster wrote: >>> --- Original message --- >>> From: "Andriy Gapon" >>> Date: 18 August 2015, 14:05:15 >>> >>> >>>> I have the following rule in pf.conf: >>>> set skip on tap >>>> and even the following one: >>>> set skip on tap0 >>>> >>>> The rules are loaded at the system start-up time, but the tap interface >>>> may not be created until much later. When tap0 is first created the >>>> skip rules are not applied to it and the traffic gets filtered. If I >>>> reload the pf configuration, then the rules start working. >>>> >>>> Is there a way to make pf honor such rules for the dynamic interfaces?Hi, >>> >>> You should do it in your application, e.g. in mpd this is something like below >>> >>> set iface up-script /usr/local/etc/mpd5/link_up.sh >>> set iface down-script /usr/local/etc/mpd5/link_down.sh >>> >>> in openvpn - see manuals. >> >> That's a good suggestion. But how to add a single rule for pf? >> Reloading the whole configuration is disruptive to existing connections. > > > Use anchors. Thank you for the hint! 
> Small example: > > # VPN Interface Up Script > # > # Script is called like this: > # > # script interface proto local-ip remote-ip authname > # $1 $2 $3 $4 $5 > # > > anchor "ng-int/*" > > # less if-up.sh > #!/bin/sh > echo "pass quick on $1 all" | pfctl -a ng-int/$1 -f - > > # less if-down.sh > #!/bin/sh > pfctl -a ng-int/$1 -F rules > > > > -- Andriy Gapon From owner-freebsd-net@freebsd.org Tue Aug 18 12:53:07 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0927A9BCD30; Tue, 18 Aug 2015 12:53:07 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 8F812A62; Tue, 18 Aug 2015 12:53:06 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:12uXCRYSEKe6zyw54jqPidD/LSx+4OfEezUN459isYplN5qZo8q6bnLW6fgltlLVR4KTs6sC0LqN9fu+EUU7or+/81k6OKRWUBEEjchE1ycBO+WiTXPBEfjxciYhF95DXlI2t1uyMExSBdqsLwaK+i760zceF13FOBZvIaytQ8iJ35/xjL760qaQSjsLrQL1Wal1IhSyoFeZnegtqqwmFJwMzADUqGBDYeVcyDAgD1uSmxHh+pX4p8Y7oGx48sgs/M9YUKj8Y79wDfkBVGxnYCgI4tb2v0zDUReX/SlbFWEXiQZTRQbf4RzwRZu3tTH18e902S2fNMuxSbEvRTWk4aAsRgXlhS0cO3si7GjdjsEjsaRAvRj0pwBj25WGJ8aRNeFiZeXTZ94XT3FNGMFLWGtEC4K4aoIJSO4AJvpZqYf64FUUoBa0HgXpH//mwDtF1ULwxrAwhuQ9DRndjktnG9MVrG+Sos/4Oa0JXaay1qaPyDzCa/Zf33D56ZPUcxYvpraCR799e9HdjE8iC1D5iQC8oIrkMjfd/P4EtWmA9KI0WeupjX8PoBo3oiWtx4Elgc/IgtRG5ErD8HBDwY02bfixQ01/bNvsRIFVviqZM4Zzat4lTHxlvD46jLYP783oNBMWwYgqkkaMI8eMdJKFt1e6DL6c X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2AcAgBFKtNV/61jaINdDoNhaQaDHrpgAQmBbAqFL0oCgWUUAQEBAQEBAQGBCYIdggYBAQEDAQEBASAEJyALBQsCAQgOCgICDRYDAgIhBgEJFRECBAgHBAEcBId4AwoIDbpvkE8NhVcBAQEBAQEEAQEBAQEBGASBIoowgk+BaAEBBxUBMweCaYFDBYcijX6FBIUGdYM3kS+DT4NlAiaCDhyBFVoiMwd/CBcjgQQBAQE X-IronPort-AV: E=Sophos;i="5.15,701,1432612800"; d="scan'208";a="233131262" Received: from 
nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 18 Aug 2015 08:53:02 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 09B1F15F565; Tue, 18 Aug 2015 08:53:03 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id sT5Pw9aDTzj7; Tue, 18 Aug 2015 08:53:02 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 0844315F56D; Tue, 18 Aug 2015 08:53:02 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 3okoUCjgvaiq; Tue, 18 Aug 2015 08:53:01 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id DA91F15F565; Tue, 18 Aug 2015 08:53:01 -0400 (EDT) Date: Tue, 18 Aug 2015 08:53:01 -0400 (EDT) From: Rick Macklem To: Daniel Braniss Cc: FreeBSD Net , Christopher Forgeron , FreeBSD stable , Slawa Olhovchenkov Message-ID: <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 
(Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: HIzeCIwOEaU79f03Ql6Ay7y4PoXZQw== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 12:53:07 -0000 Daniel Braniss wrote: >=20 > > On Aug 18, 2015, at 12:49 AM, Rick Macklem wrote= : > >=20 > > Daniel Braniss wrote: > >>=20 > >>> On Aug 17, 2015, at 3:21 PM, Rick Macklem wrot= e: > >>>=20 > >>> Daniel Braniss wrote: > >>>>=20 > >>>>> On Aug 17, 2015, at 1:41 PM, Christopher Forgeron > >>>>> > >>>>> wrote: > >>>>>=20 > >>>>> FYI, I can regularly hit 9.3 Gib/s with my Intel X520-DA2's and Fre= eBSD > >>>>> 10.1. Before 10.1 it was less. > >>>>>=20 > >>>>=20 > >>>> this is NOT iperf/3 where i do get close to wire speed, > >>>> it=E2=80=99s NFS writes, i.e., almost real work :-) > >>>>=20 > >>>>> I used to tweak the card settings, but now it's just stock. You may > >>>>> want > >>>>> to > >>>>> check your settings, the Mellanox may just have better defaults for > >>>>> your > >>>>> switch. > >>>>>=20 > >>> Have you tried disabling TSO for the Intel? With TSO enabled, it will= be > >>> copying > >>> every transmitted mbuf chain to a new chain of mbuf clusters via. > >>> m_defrag() when > >>> TSO is enabled. (Assuming you aren't an 82598 chip. Most seem to be t= he > >>> 82599 chip > >>> these days?) > >>>=20 > >>=20 > >> hi Rick > >>=20 > >> how can i check the chip? > >>=20 > > Haven't a clue. Does "dmesg" tell you? (To be honest, since disabling T= SO > > helped, > > I'll bet you don't have a 82598.) > >=20 > >>> This has been fixed in the driver very recently, but those fixes won'= t be > >>> in 10.1. > >>>=20 > >>> rick > >>> ps: If you could test with 10.2, it would be interesting to see how t= he > >>> ix > >>> does with > >>> the current driver fixes in it? 
> >> > >> I knew TSO was involved! > >> ok, firstly, it's 10.2 stable. > >> with TSO enabled, ix is bad, around 64MB/s. > >> disabling TSO it's better, around 130 > >> > > Hmm, could you check to see if these lines are in sys/dev/ixgbe/if_ix.c at > > around > > line#2500? > > /* TSO parameters */ > > 2572 ifp->if_hw_tsomax = 65518; > > 2573 ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER; > > 2574 ifp->if_hw_tsomaxsegsize = 2048; > > > > They are in stable/10. I didn't look at releng/10.2. (And if they're in a > > #ifdef > > for FreeBSD11, take the #ifdef away.) > > If they are there and not ifdef'd, I can't explain why disabling TSO would > > help. > > Once TSO is fixed so that it handles the 64K transmit segments without > > copying all > > the mbufs, I suspect you might get better perf. with it enabled? > > > > this is 10.2: > they are on lines 2509-2511 and I don't see any #ifdefs around it. > > the plot thickens :-) > If this is just a test machine, maybe you could test with these lines (at about line #880) in sys/netinet/tcp_output.c commented out? (It looks to me like this will disable TSO for almost all the NFS writes.) - around line #880 in sys/netinet/tcp_output.c:

			/*
			 * In case there are too many small fragments
			 * don't use TSO:
			 */
			if (len <= max_len) {
				len = max_len;
				sendalot = 1;
				tso = 0;
			}

This was added along with the other stuff that did the if_hw_tsomaxsegcount, etc. and I never noticed it until now (not my patch). rick > danny > > > Good luck with it, rick > > > >> still, mlxen0 is about 250!
with and without TSO > >> > >> > >>> > >>>>> On Mon, Aug 17, 2015 at 6:41 AM, Slawa Olhovchenkov > >>>>> wrote: > >>>>> On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: > >>>>> > >>>>>> hi, > >>>>>> I have a host (Dell R730) with both cards, connected to an HP8200 > >>>>>> switch at 10Gb. > >>>>>> when writing to the same storage (netapp) this is what I get: > >>>>>> ix0: ~130MGB/s > >>>>>> mlxen0 ~330MGB/s > >>>>>> this is via nfs/tcpv3 > >>>>>> > >>>>>> I can get similar (bad) performance with the mellanox if I > >>>>>> increase > >>>>>> the file size > >>>>>> to 512MGB. > >>>>> > >>>>> Looks like mellanox have internal buffer for caching and do ACK > >>>>> accelerating. > >>>>> > >>>>>> so at face value, it seems the mlxen does a better use of > >>>>>> resources > >>>>>> than the intel. > >>>>>> Any ideas how to improve ix/intel's performance? > >>>>> > >>>>> Are you sure about netapp performance? > >>>>> _______________________________________________ > >>>>> freebsd-net@freebsd.org mailing list > >>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net > >>>>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > >>>> > >>>> _______________________________________________ > >>>> freebsd-stable@freebsd.org mailing list > >>>> https://lists.freebsd.org/mailman/listinfo/freebsd-stable > >>>> To unsubscribe, send any mail to > >>>> "freebsd-stable-unsubscribe@freebsd.org" > >> > >> _______________________________________________ > >> freebsd-stable@freebsd.org mailing list > >> https://lists.freebsd.org/mailman/listinfo/freebsd-stable > >> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" > > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" From
owner-freebsd-net@freebsd.org Tue Aug 18 13:21:23 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 02CC69BB8C1; Tue, 18 Aug 2015 13:21:23 +0000 (UTC) (envelope-from hps@selasky.org) Received: from mail.turbocat.net (mail.turbocat.net [IPv6:2a01:4f8:d16:4514::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id B7BE86AF; Tue, 18 Aug 2015 13:21:22 +0000 (UTC) (envelope-from hps@selasky.org) Received: from laptop015.home.selasky.org (cm-176.74.213.204.customer.telag.net [176.74.213.204]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.turbocat.net (Postfix) with ESMTPSA id B062F1FE023; Tue, 18 Aug 2015 15:21:20 +0200 (CEST) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance To: Rick Macklem , Daniel Braniss References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> Cc: FreeBSD Net , Slawa Olhovchenkov , FreeBSD stable , Christopher Forgeron From: Hans Petter Selasky Message-ID: <55D331A5.9050601@selasky.org> Date: Tue, 18 Aug 2015 15:22:45 +0200 User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org 
X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 13:21:23 -0000 On 08/18/15 14:53, Rick Macklem wrote: > If this is just a test machine, maybe you could test with these lines (at about #880) > in sys/netinet/tcp_output.c commented out? (It looks to me like this will disable TSO > for almost all the NFS writes.) > - around line #880 in sys/netinet/tcp_output.c: > /* > * In case there are too many small fragments > * don't use TSO: > */ > if (len <= max_len) { > len = max_len; > sendalot = 1; > tso = 0; > } > > This was added along with the other stuff that did the if_hw_tsomaxsegcount, etc and I > never noticed it until now (not my patch). FYI: These lines are needed by other hardware, like the mlxen driver. If you remove them mlxen will start doing m_defrag(). I believe if you set the correct parameters in the "struct ifnet" for the TSO size/count limits this problem will go away. If you print the "len" and "max_len" and also the cases where TSO limits are reached, you'll see what parameter is triggering it and needs to be increased. 
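[Editorial aside for readers of the archive: the check quoted above can be modelled outside the kernel. The following is a toy, self-contained C sketch — the struct, function name, and numeric values are invented for illustration; only the three-line body of the "if" mirrors the quoted sys/netinet/tcp_output.c code.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the check quoted from sys/netinet/tcp_output.c.
 * "len" stands for the payload remaining after the TSO
 * segment-count clamp; "max_len" stands for one ordinary TCP
 * segment's worth of data.  Names and values are illustrative,
 * not the kernel's.
 */
struct tso_decision {
	long len;      /* bytes to send in this burst */
	bool sendalot; /* loop again for the remainder */
	bool tso;      /* hand the burst to the NIC as TSO */
};

static struct tso_decision
tso_clamp(long len, long max_len)
{
	struct tso_decision d = { len, false, true };

	/*
	 * In case there are too many small fragments, don't use TSO:
	 * fall back to one normal-sized segment and keep looping.
	 */
	if (len <= max_len) {
		d.len = max_len;
		d.sendalot = true;
		d.tso = false;
	}
	return (d);
}
```

Read this way, any transmission whose remaining payload fits within one ordinary segment is sent without TSO and the output loop runs again — which matches Rick's observation that the check disables TSO for almost all the NFS writes.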
--HPS From owner-freebsd-net@freebsd.org Tue Aug 18 13:30:44 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2992F9BBBD8; Tue, 18 Aug 2015 13:30:44 +0000 (UTC) (envelope-from hps@selasky.org) Received: from mail.turbocat.net (mail.turbocat.net [IPv6:2a01:4f8:d16:4514::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id DF251FD1; Tue, 18 Aug 2015 13:30:43 +0000 (UTC) (envelope-from hps@selasky.org) Received: from laptop015.home.selasky.org (cm-176.74.213.204.customer.telag.net [176.74.213.204]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.turbocat.net (Postfix) with ESMTPSA id 92BB21FE023; Tue, 18 Aug 2015 15:30:41 +0200 (CEST) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance To: Rick Macklem , Daniel Braniss References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> Cc: FreeBSD Net , Slawa Olhovchenkov , FreeBSD stable , Christopher Forgeron From: Hans Petter Selasky Message-ID: <55D333D6.5040102@selasky.org> Date: Tue, 18 Aug 2015 15:32:06 +0200 User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org 
X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 13:30:44 -0000 On 08/18/15 14:53, Rick Macklem wrote: > 2572 ifp->if_hw_tsomax = 65518; >> >2573 ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER; >> >2574 ifp->if_hw_tsomaxsegsize = 2048; Hi, If IXGBE_82599_SCATTER is the maximum scatter/gather entries the hardware can do, remember to subtract one fragment for the TCP/IP-header mbuf! I think there is an off-by-one here: ifp->if_hw_tsomax = 65518; ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER - 1; ifp->if_hw_tsomaxsegsize = 2048; Refer to: > * > * NOTE: The TSO limits only apply to the data payload part of > * a TCP/IP packet. That means there is no need to subtract > * space for ethernet-, vlan-, IP- or TCP- headers from the > * TSO limits unless the hardware driver in question requires > * so. In sys/net/if_var.h Thank you! --HPS From owner-freebsd-net@freebsd.org Tue Aug 18 14:09:50 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 218D39BC532; Tue, 18 Aug 2015 14:09:50 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from kabab.cs.huji.ac.il (kabab.cs.huji.ac.il [132.65.116.210]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C6EE6118; Tue, 18 Aug 2015 14:09:48 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from chamsa.cs.huji.ac.il ([132.65.80.19]) by kabab.cs.huji.ac.il with esmtp id 1ZRhaA-000Ac9-LX; Tue, 18 Aug 2015 17:09:42 +0300 Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2104\)) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Daniel Braniss In-Reply-To: <55D333D6.5040102@selasky.org> Date: Tue, 
18 Aug 2015 17:09:41 +0300 Cc: Rick Macklem , FreeBSD Net , Slawa Olhovchenkov , FreeBSD stable , Christopher Forgeron Content-Transfer-Encoding: quoted-printable Message-Id: <47EC9292-082C-4801-B52F-4BD6B8310F99@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> To: Hans Petter Selasky X-Mailer: Apple Mail (2.2104) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 14:09:50 -0000
sorry, it's been a tough day, we had a major meltdown, caused by a faulty gbic :-( anyways, could you tell me what to do? comment out, fix the off by one? the machine is not yet production. thanks, danny > On 18 Aug 2015, at 16:32, Hans Petter Selasky wrote: > > On 08/18/15 14:53, Rick Macklem wrote: >> 2572 ifp->if_hw_tsomax = 65518; >>> >2573 ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER; >>> >2574 ifp->if_hw_tsomaxsegsize = 2048; > > Hi, > > If IXGBE_82599_SCATTER is the maximum scatter/gather entries the hardware can do, remember to subtract one fragment for the TCP/IP-header mbuf! > > I think there is an off-by-one here: > > ifp->if_hw_tsomax = 65518; > ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER - 1; > ifp->if_hw_tsomaxsegsize = 2048; > > Refer to: > > >> * > >> * NOTE: The TSO limits only apply to the data payload part of > >> * a TCP/IP packet.
That means there is no need to subtract >> * space for ethernet-, vlan-, IP- or TCP- headers from the >> * TSO limits unless the hardware driver in question requires >> * so. > > In sys/net/if_var.h > > Thank you! > > --HPS > From owner-freebsd-net@freebsd.org Tue Aug 18 14:18:51 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 1ADF79BC901; Tue, 18 Aug 2015 14:18:51 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from zxy.spb.ru (zxy.spb.ru [195.70.199.98]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C2ACCD7D; Tue, 18 Aug 2015 14:18:50 +0000 (UTC) (envelope-from slw@zxy.spb.ru) Received: from slw by zxy.spb.ru with local (Exim 4.84 (FreeBSD)) (envelope-from ) id 1ZRhio-0002d4-G8; Tue, 18 Aug 2015 17:18:38 +0300 Date: Tue, 18 Aug 2015 17:18:38 +0300 From: Slawa Olhovchenkov To: Daniel Braniss Cc: Hans Petter Selasky , Rick Macklem , FreeBSD Net , FreeBSD stable , Christopher Forgeron Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150818141838.GO1872@zxy.spb.ru> References: <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <47EC9292-082C-4801-B52F-4BD6B8310F99@cs.huji.ac.il> MIME-Version: 1.0 Content-Type: text/plain; charset=koi8-r Content-Disposition: inline In-Reply-To: <47EC9292-082C-4801-B52F-4BD6B8310F99@cs.huji.ac.il> User-Agent: Mutt/1.5.23 (2014-03-12) X-SA-Exim-Connect-IP: X-SA-Exim-Mail-From:
slw@zxy.spb.ru X-SA-Exim-Scanned: No (on zxy.spb.ru); SAEximRunCond expanded to false X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 14:18:51 -0000
On Tue, Aug 18, 2015 at 05:09:41PM +0300, Daniel Braniss wrote: > sorry, it's been a tough day, we had a major meltdown, caused by a faulty gbic :-( > anyways, could you tell me what to do? > comment out, fix the off by one? > > the machine is not yet production. Can you collect this information? https://lists.freebsd.org/pipermail/freebsd-stable/2015-August/083113.html And 'show interface' (or equivalent: error/collision/events counters) from both ports from HP8200.
From owner-freebsd-net@freebsd.org Tue Aug 18 17:59:17 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4816A9BD253 for ; Tue, 18 Aug 2015 17:59:17 +0000 (UTC) (envelope-from sobomax@sippysoft.com) Received: from mail-io0-f173.google.com (mail-io0-f173.google.com [209.85.223.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 12CA012C5 for ; Tue, 18 Aug 2015 17:59:16 +0000 (UTC) (envelope-from sobomax@sippysoft.com) Received: by iods203 with SMTP id s203so198720862iod.0 for ; Tue, 18 Aug 2015 10:59:16 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20130820; h=x-gm-message-state:mime-version:in-reply-to:references:date :message-id:subject:from:to:cc:content-type; bh=IEsgypL6ZaktT5pVnTZmwsOUq8b+rOnS0mxKFH3slJo=; b=Eog+ZRy210E8mcxriNGDGFDPMmnEOPf9wSRQPyYIS2TwxrPFTspGyYG76jvB6WPdaY
6c1wY2OPrB88xGThDjZxW3w8MVHv47SOuJSW/mYs0/bjrX1aUhYTZgrCdCcUI0Iu328d Hs48dZIh0bMKtrJtE2LiLvNtDNRS+1T2BWwoD91H0PsLhXi+ELYmm/udGf1Tpj8E7/CL 47tUwo62vPB1lujXlB62bGwL6sl6CYkRZbXSICwdbR3s0hvrqaxhB/oGT4Br2ofH7k5B V/fYnsZMNsPao0eMSi700qut//llAgm8Fob+DMOfR6dqP4BKlLceTtM5hGqsAm/maBGu wfrw== X-Gm-Message-State: ALoCoQnuWz+T/EH7ONojcWG+F38ICVYGz+M5cvxjIXZJrIOUQSGC5ZqdTsJtp2jiiqtGqH1BlY9N MIME-Version: 1.0 X-Received: by 10.107.152.81 with SMTP id a78mr8405242ioe.145.1439920756024; Tue, 18 Aug 2015 10:59:16 -0700 (PDT) Received: by 10.79.107.143 with HTTP; Tue, 18 Aug 2015 10:59:15 -0700 (PDT) Received: by 10.79.107.143 with HTTP; Tue, 18 Aug 2015 10:59:15 -0700 (PDT) In-Reply-To: References: <77171439377164@web21h.yandex.ru> <55CB2F18.40902@FreeBSD.org> Date: Tue, 18 Aug 2015 10:59:15 -0700 Message-ID: Subject: Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1 From: Maxim Sobolev To: Luigi Rizzo Cc: freebsd@intel.com, FreeBSD Net , =?UTF-8?Q?Olivier_Cochard=2DLabb=C3=A9?= , "Alexander V. Chernikov" , Babak Farrokhi , Jev Bjorsell Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 17:59:17 -0000 Yes, we've confirmed it's IXGBE_FDIR. That's good it comes disabled in 10.2. Thanks everyone for constructive input! 
-Max From owner-freebsd-net@freebsd.org Tue Aug 18 18:03:44 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 9C9B59BD3C8 for ; Tue, 18 Aug 2015 18:03:44 +0000 (UTC) (envelope-from adrian.chadd@gmail.com) Received: from mail-ig0-x22f.google.com (mail-ig0-x22f.google.com [IPv6:2607:f8b0:4001:c05::22f]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 64893182C; Tue, 18 Aug 2015 18:03:44 +0000 (UTC) (envelope-from adrian.chadd@gmail.com) Received: by igfj19 with SMTP id j19so86497893igf.1; Tue, 18 Aug 2015 11:03:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=RiPjQ2rvBqmxSRB2nhvL8Z0llCjRYrYlDdvcTKekDP0=; b=SIijSN0fsz7l9NGdxu+WvCCjuSZpNmAJtFHBP7H0fxnqIynEPyVc5oECfQIb3LjvNU OTinlqWa9GrzadFwy/qwrNKkppkT4gmarqxFF8sjAq16JBRBI1Vt/SFvkWWrgYmbzR35 Fcy+sEfwwOiqg5NQYaDTrTdT22BEj/2AM2iKUAjnxTnjXDkyu4EgaHJ0kTha5CZRwBQH YZT4A2K8wRGkQ04CvLeIj9sAqBmJUy7VQwocQUaWSI/vzQF2vVcLw7VGrSCGgEGSPfi9 eqYEU/fpInDhIPA59jwDRoP+/8VAXMnjeUo9swIAuOxOAUgCvjJ1Ox3TtIZLaudS1viv VHCQ== MIME-Version: 1.0 X-Received: by 10.50.61.144 with SMTP id p16mr22331193igr.22.1439921023805; Tue, 18 Aug 2015 11:03:43 -0700 (PDT) Received: by 10.36.38.133 with HTTP; Tue, 18 Aug 2015 11:03:43 -0700 (PDT) In-Reply-To: References: <77171439377164@web21h.yandex.ru> <55CB2F18.40902@FreeBSD.org> Date: Tue, 18 Aug 2015 11:03:43 -0700 Message-ID: Subject: Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1 From: Adrian Chadd To: Maxim Sobolev Cc: Luigi Rizzo , "Alexander V. 
Chernikov" , FreeBSD Net , Babak Farrokhi , freebsd@intel.com, Jev Bjorsell , =?UTF-8?Q?Olivier_Cochard=2DLabb=C3=A9?= Content-Type: text/plain; charset=UTF-8 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 18:03:44 -0000 you're welcome. Someone should really add a release errata to 10.1 or something. -a On 18 August 2015 at 10:59, Maxim Sobolev wrote: > Yes, we've confirmed it's IXGBE_FDIR. That's good it comes disabled in 10.2. > > Thanks everyone for constructive input! > > -Max > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" From owner-freebsd-net@freebsd.org Tue Aug 18 17:43:40 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 876EF9BCFB1 for ; Tue, 18 Aug 2015 17:43:40 +0000 (UTC) (envelope-from reko.turja@liukuma.net) Received: from cerebro.liukuma.net (cerebro.liukuma.net [IPv6:2a00:d1e0:1000:1b00::2]) by mx1.freebsd.org (Postfix) with ESMTP id 462EEB1E; Tue, 18 Aug 2015 17:43:40 +0000 (UTC) (envelope-from reko.turja@liukuma.net) Received: from cerebro.liukuma.net (localhost [127.0.0.1]) by cerebro.liukuma.net (Postfix) with ESMTP id 85EAB8A048A; Tue, 18 Aug 2015 20:43:38 +0300 (EEST) DKIM-Filter: OpenDKIM Filter v2.8.3 cerebro.liukuma.net 85EAB8A048A DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=liukuma.net; s=liukudkim; t=1439919818; bh=ovtvy6nyAIj1VMZd9ag3a921LDCVb8j56JV8bhufNOI=; h=From:To:References:In-Reply-To:Subject:Date; b=XSTI6SImvYea5INHI8AegpksHxsJ5Vp9gH7lPaM0zHzsE/dqSVydS77noWu8GI3Jc 
k84auySD70pXxXpRdMOm4VGLJjP3TG0n5qTnnJUI6pjmsfs90x+hSYR49epxgmJAAG 7/ED5gi2BZjljnpYO6uv6SqWSFHCYZEeEn0byY9g= X-Virus-Scanned: amavisd-new at liukuma.net Received: from cerebro.liukuma.net ([127.0.0.1]) by cerebro.liukuma.net (cerebro.liukuma.net [127.0.0.1]) (amavisd-new, port 10027) with LMTP id h-Yuo1y879SD; Tue, 18 Aug 2015 20:43:37 +0300 (EEST) Received: from Rivendell (dsl-kmibrasgw1-50dfdd-193.dhcp.inet.fi [80.223.221.193]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) (Authenticated sender: ignatz@cerebro.liukuma.net) by cerebro.liukuma.net (Postfix) with ESMTPSA id F3C4C8A010E; Tue, 18 Aug 2015 20:43:36 +0300 (EEST) DKIM-Filter: OpenDKIM Filter v2.8.3 cerebro.liukuma.net F3C4C8A010E Message-ID: <3FEB78C5597F471D94843F93EC1EC5CE@Rivendell> From: "Reko Turja" To: , "Andriy Gapon" References: <55D2E9B3.2040301@FreeBSD.org> In-Reply-To: <55D2E9B3.2040301@FreeBSD.org> Subject: Re: pf and new interface Date: Tue, 18 Aug 2015 20:43:31 +0300 MIME-Version: 1.0 Content-Type: text/plain; format=flowed; charset="iso-8859-1"; reply-type=original Content-Transfer-Encoding: 7bit X-Priority: 3 X-MSMail-Priority: Normal Importance: Normal X-Mailer: Microsoft Windows Live Mail 15.4.3555.308 X-MimeOLE: Produced By Microsoft MimeOLE V15.4.3555.308 X-Mailman-Approved-At: Tue, 18 Aug 2015 18:13:24 +0000 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 17:43:40 -0000 Hmm does the: set skip on (tap) syntax work in this case? Basically parentheses around the alias should tell pf that the IP is volatile and can be either activated at later time or it can be dynamic via dhcp etc. 
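[Editorial aside for readers of the archive: for context alongside Reko's suggestion, here is how the two related pf.conf idioms usually look. This is a generic sketch with invented interface names — whether "set skip" accepts the parenthesized form is exactly the open question in this thread, so verify against pf.conf(5) on your release.]

```
# pf.conf sketch -- illustrative only, not from this thread.

# Interface *group*: every tap(4) interface belongs to the "tap"
# group, including interfaces created after the ruleset is loaded.
set skip on tap

# Parentheses around an interface name in a rule tell pf to
# re-evaluate the interface's address(es) whenever they change,
# instead of resolving them once at ruleset load time:
pass out on em0 from (em0) to any
```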
-Reko From owner-freebsd-net@freebsd.org Tue Aug 18 18:25:04 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 350D29BD833 for ; Tue, 18 Aug 2015 18:25:04 +0000 (UTC) (envelope-from gjb@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by mx1.freebsd.org (Postfix) with ESMTP id 1E24297B; Tue, 18 Aug 2015 18:25:04 +0000 (UTC) (envelope-from gjb@FreeBSD.org) Received: from FreeBSD.org (freefall.freebsd.org [IPv6:2001:1900:2254:206c::16:87]) by freefall.freebsd.org (Postfix) with ESMTP id 9B53816DE; Tue, 18 Aug 2015 18:25:03 +0000 (UTC) (envelope-from gjb@FreeBSD.org) Date: Tue, 18 Aug 2015 18:25:02 +0000 From: Glen Barber To: hiren panchasara Cc: FreeBSD Net Subject: Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1 Message-ID: <20150818182502.GZ24069@FreeBSD.org> References: <77171439377164@web21h.yandex.ru> <55CB2F18.40902@FreeBSD.org> <20150818181833.GB94440@strugglingcoder.info> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha256; protocol="application/pgp-signature"; boundary="f/XZMZU9ST6S8MjH" Content-Disposition: inline In-Reply-To: <20150818181833.GB94440@strugglingcoder.info> X-Operating-System: FreeBSD 11.0-CURRENT amd64 X-SCUD-Definition: Sudden Completely Unexpected Dataloss X-SULE-Definition: Sudden Unexpected Learning Event X-PEKBAC-Definition: Problem Exists, Keyboard Between Admin/Computer User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 18:25:04 -0000 --f/XZMZU9ST6S8MjH Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable 
On Tue, Aug 18, 2015 at 11:18:33AM -0700, hiren panchasara wrote: > On 08/18/15 at 11:03P, Adrian Chadd wrote: > > you're welcome. > >=20 > > Someone should really add a release errata to 10.1 or something. >=20 > Yes, I strongly feel the same. Adding gjb@ here to see how that can be > done. >=20 Please send to re@. Glen --f/XZMZU9ST6S8MjH Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQIcBAEBCAAGBQJV03h+AAoJEAMUWKVHj+KTorgP+QF2DkIyS8VsPOACDA3ZoFDe mIsCCeNKuSH93vDa4IRsz/4txwFwbIlbmD18bNhFTvTqEJ+jO6duAQcV5pIM86bD vv7lCnfmIfdYfFG5vElIeAGn0VlhIknTzRc/aEXeu1nLySrFAYBtST1fHYPFA1JU bOYDskyBs/V7gl5VQo+4elpWzONnYL98iVrVttdR7yneG7pw2reWttoer3ye/Zlj jidqLrsvBTwz+AXxje+62KQKP0jm4Px+DzI4S4ITSwIAzXjaR27qzEuNkC+wDA/B 7/IfQDmPC3mRvvDOwUFLY9e2qTcCX7UnDJNKHmzXJQ5UDNcM3qWSj0EwFs6ilAE0 6w8D1tbLsY1QD3SAn0pTzkk0cS4Wvw7Fzy4bjdBXEFgI8/Y8mgsFW1a9CupnTkY0 yEJGdGcMU0iJvVWS0N1lT46RVXAAX23gpKZGu1KRle7q4av6nYiZuc9f4UWU573W agRJrzHdV3k/y3tSgVPgPvAhke8vuuXT4m4g9+pqT4u4+s1RbLJwfiEdiqeM8Hts lhkLE4ZwHsvcrHv1oi0tlCbUMVoDod/lPkABqD0x/tvDrh5kIF+fzP55pxxa0A4p jpGEWaMo/fojc0qsK9vmpCUYzGhLd1QeWGrwBsBAk4PB1CkeVKZ73nNNBnTTt1rW 3o8KqUMVJwDbA8q1SD6L =XII/ -----END PGP SIGNATURE----- --f/XZMZU9ST6S8MjH-- From owner-freebsd-net@freebsd.org Tue Aug 18 18:26:55 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id DB9709BD89C for ; Tue, 18 Aug 2015 18:26:55 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id 0023EA4B for ; Tue, 18 Aug 2015 18:26:54 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id VAA00787; Tue, 18 Aug 2015 21:26:52 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: 
from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1ZRlb2-0009fg-3I; Tue, 18 Aug 2015 21:26:52 +0300 Subject: Re: pf and new interface To: Reko Turja , freebsd-net@FreeBSD.org References: <55D2E9B3.2040301@FreeBSD.org> <3FEB78C5597F471D94843F93EC1EC5CE@Rivendell> From: Andriy Gapon Message-ID: <55D378B4.9030303@FreeBSD.org> Date: Tue, 18 Aug 2015 21:25:56 +0300 User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: <3FEB78C5597F471D94843F93EC1EC5CE@Rivendell> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 18:26:55 -0000 On 18/08/2015 20:43, Reko Turja wrote: > Hmm does the: > > set skip on (tap) > > syntax work in this case? Basically parentheses around the alias should > tell pf that the IP is volatile and can be either activated at later > time or it can be dynamic via dhcp etc. I will check and follow up. 
-- Andriy Gapon From owner-freebsd-net@freebsd.org Tue Aug 18 18:28:06 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id EDC719BD8F0 for ; Tue, 18 Aug 2015 18:28:06 +0000 (UTC) (envelope-from hiren@strugglingcoder.info) Received: from mail.strugglingcoder.info (strugglingcoder.info [65.19.130.35]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id CBD28B11; Tue, 18 Aug 2015 18:28:06 +0000 (UTC) (envelope-from hiren@strugglingcoder.info) Received: from localhost (unknown [10.1.1.3]) (Authenticated sender: hiren@strugglingcoder.info) by mail.strugglingcoder.info (Postfix) with ESMTPSA id 3DB7DE3B2; Tue, 18 Aug 2015 11:28:06 -0700 (PDT) Date: Tue, 18 Aug 2015 11:28:06 -0700 From: hiren panchasara To: Glen Barber Cc: FreeBSD Net Subject: Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1 Message-ID: <20150818182806.GC94440@strugglingcoder.info> References: <77171439377164@web21h.yandex.ru> <55CB2F18.40902@FreeBSD.org> <20150818181833.GB94440@strugglingcoder.info> <20150818182502.GZ24069@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="DIOMP1UsTsWJauNi" Content-Disposition: inline In-Reply-To: <20150818182502.GZ24069@FreeBSD.org> User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 18:28:07 -0000 --DIOMP1UsTsWJauNi Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 08/18/15 at 06:25P, Glen Barber wrote: > On Tue, Aug 18, 2015 at 
11:18:33AM -0700, hiren panchasara wrote: > > On 08/18/15 at 11:03P, Adrian Chadd wrote: > > > you're welcome. > > >=20 > > > Someone should really add a release errata to 10.1 or something. > >=20 > > Yes, I strongly feel the same. Adding gjb@ here to see how that can be > > done. > >=20 >=20 > Please send to re@. Will do. Thanks, Hiren --DIOMP1UsTsWJauNi Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (FreeBSD) iQF8BAEBCgBmBQJV03k0XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRBNEUyMEZBMUQ4Nzg4RjNGMTdFNjZGMDI4 QjkyNTBFMTU2M0VERkU1AAoJEIuSUOFWPt/l6dYIAIr28yEQYvTh4zW/HI66oAkA vvoO0j9//67ny5V+VMIFHft1K6/s019k+K0Nj1urcnR6oAQmiEl9q2NInFWSEgRI 9tCVOQpyq8PKgAKS7HVXNJ2klq2fZY5BcAImqTuMMUqSBXK0dHmbRM2CjnaYDaB8 SGFi1dew/X6/Ube35RqXARIsCzfpyHF2Gpxt2Gj5vuBicW39wsj9UFObeUJK5ROp pUnXIihr6Wrje1G3px/+5QxanwYf+CqD84dNU9syu63a+mw2I2Ztk5zXf9UJDRTT 9PRO3PSo7N7D37Ri3TR2XEshVsOgeVVLoHPZWwtAKP59Wd1Su9J/KAxlp/OOgFE= =YRhX -----END PGP SIGNATURE----- --DIOMP1UsTsWJauNi-- From owner-freebsd-net@freebsd.org Tue Aug 18 18:18:35 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B426D9BD656 for ; Tue, 18 Aug 2015 18:18:35 +0000 (UTC) (envelope-from hiren@strugglingcoder.info) Received: from mail.strugglingcoder.info (strugglingcoder.info [65.19.130.35]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 9AE488B; Tue, 18 Aug 2015 18:18:35 +0000 (UTC) (envelope-from hiren@strugglingcoder.info) Received: from localhost (unknown [10.1.1.3]) (Authenticated sender: hiren@strugglingcoder.info) by mail.strugglingcoder.info (Postfix) with ESMTPSA id D3447E28F; Tue, 18 Aug 2015 11:18:33 -0700 (PDT) Date: Tue, 18 Aug 2015 11:18:33 -0700 From: hiren panchasara To: Adrian Chadd , 
gjb@FreeBSD.org Cc: Maxim Sobolev , "Alexander V. Chernikov" , FreeBSD Net , Babak Farrokhi , freebsd@intel.com, Jev Bjorsell , Olivier Cochard-Labbé , Luigi Rizzo Subject: Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1 Message-ID: <20150818181833.GB94440@strugglingcoder.info> References: <77171439377164@web21h.yandex.ru> <55CB2F18.40902@FreeBSD.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="LpQ9ahxlCli8rRTG" Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.5.23 (2014-03-12) X-Mailman-Approved-At: Tue, 18 Aug 2015 19:40:01 +0000 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 18:18:35 -0000 --LpQ9ahxlCli8rRTG Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable
On 08/18/15 at 11:03P, Adrian Chadd wrote: > you're welcome. > > Someone should really add a release errata to 10.1 or something. Yes, I strongly feel the same. Adding gjb@ here to see how that can be done. Cheers, Hiren > > > -a > > > On 18 August 2015 at 10:59, Maxim Sobolev wrote: > > Yes, we've confirmed it's IXGBE_FDIR. That's good it comes disabled in 10.2. > > > > Thanks everyone for constructive input!
> > > > -Max > > _______________________________________________ > > freebsd-net@freebsd.org mailing list > > https://lists.freebsd.org/mailman/listinfo/freebsd-net > > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" --LpQ9ahxlCli8rRTG Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (FreeBSD) iQF8BAEBCgBmBQJV03b5XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRBNEUyMEZBMUQ4Nzg4RjNGMTdFNjZGMDI4 QjkyNTBFMTU2M0VERkU1AAoJEIuSUOFWPt/lB+wIAIWXAHOL4LIfnc5BoB71J2v5 NQFB2g0lGQ/wIiSYsDSZ5mSMpMr95HKKst/DCgwz1OdNWxjfIEyZuV7qlz8LGy+3 gGSbxsMbYCcEm7dT4rbx534KZGcGk74Yw+KxhZVWw+ZmdG2iO6o90KwiSl3Zjz0U eOtzmPK9RQOrFAjnYC3dMFEAEPKOcfrFzFgb+CE9qPXrEkYocbZokZyoIpfqyr4F wccVNlXoF2gzYMl+OloKX2TLyX1UISMwiGvA1LCP7TOHE+GZ4FHXys0ygPfuE8C1 4+xlgAQPgs7AKaN4QCl0xiD94Oh+flaWbGII+xQ2nmR6RFm++nkKK/x4I1Mawpo= =ADZJ -----END PGP SIGNATURE----- --LpQ9ahxlCli8rRTG-- From owner-freebsd-net@freebsd.org Tue Aug 18 21:54:49 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C14229BB92C; Tue, 18 Aug 2015 21:54:49 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 4CC721DA6; Tue, 18 Aug 2015 21:54:48 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 
9a23:LX178BFLw6jt9mxB25wmPZ1GYnF86YWxBRYc798ds5kLTJ75oMSwAkXT6L1XgUPTWs2DsrQf27GQ7/2rAT1IyK3CmU5BWaQEbwUCh8QSkl5oK+++Imq/EsTXaTcnFt9JTl5v8iLzG0FUHMHjew+a+SXqvnYsExnyfTB4Ov7yUtaLyZ/njKbuptaLMk1hv3mUX/BbFF2OtwLft80b08NJC50a7V/3mEZOYPlc3mhyJFiezF7W78a0+4N/oWwL46pyv+YJa6jxfrw5QLpEF3xmdjltvIy4/SXEGCuG4GBUamgKjhdSSzPI6BjhXYa55ivircJm1S2TJs7nC7cuVmLxwb1sTUrSiSwEfxsw+2LTh8k42LheqRmioxF665PTb5yYMOJ+OKjUK4BJDVFdV9pcAnQSSri3aJECWq9YZb5V X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2BAAgBNqdNV/61jaINdg29pBoMeumQBCYFtCoUxSgKBcxQBAQEBAQEBAYEJgh2CBgEBAQMBAQEBIAQnIAsFCwIBCBgCAg0ZAgInAQkmAgQIBwQBHASIBQgNu2yWHwEBAQEBAQEBAQEBAQEBAQEBARYEgSKKMYQyBgEBHDQHgmmBQwWVIYUEhQadDwImhBkiMwd/CBcjgQQBAQE X-IronPort-AV: E=Sophos;i="5.15,705,1432612800"; d="scan'208";a="233260254" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 18 Aug 2015 17:54:09 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 3C93B15F565; Tue, 18 Aug 2015 17:54:09 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id YF57cG3-UtGe; Tue, 18 Aug 2015 17:54:08 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 74D0115F56D; Tue, 18 Aug 2015 17:54:08 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id e_V1IkBsZs1R; Tue, 18 Aug 2015 17:54:08 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 53ACE15F565; Tue, 18 Aug 2015 17:54:08 -0400 (EDT) Date: Tue, 18 Aug 2015 17:54:08 -0400 (EDT) From: Rick Macklem To: Hans Petter Selasky Cc: Daniel Braniss , FreeBSD Net , Christopher Forgeron , FreeBSD 
stable , Slawa Olhovchenkov Message-ID: <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <55D333D6.5040102@selasky.org> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: 2kCu0WEXLG1Xa/qUNDzhyfV3vrvVUQ== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 21:54:49 -0000 Hans Petter Selasky wrote: > On 08/18/15 14:53, Rick Macklem wrote: > > 2572 ifp->if_hw_tsomax = 65518; > >> >2573 ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER; > >> >2574 ifp->if_hw_tsomaxsegsize = 2048; > > Hi, > > If IXGBE_82599_SCATTER is the maximum scatter/gather entries the > hardware can do, remember to subtract one fragment for the TCP/IP-header > mbuf! > Ouch! Yes, I now see that the code that counts the # of mbufs is before the code that adds the tcp/ip header mbuf. In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to whatever the driver provides - 1. It is not the driver's responsibility to know if a tcp/ip header mbuf will be added and is a lot less confusing than expecting the driver author to know to subtract one.
(I had mistakenly thought that tcp_output() had added the tcp/ip header mbuf before the loop that counts mbufs in the list. Btw, this tcp/ip header mbuf also has leading space for the MAC layer header.) > I think there is an off-by-one here: > > ifp->if_hw_tsomax = 65518; > ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER - 1; > ifp->if_hw_tsomaxsegsize = 2048; > > Refer to: > > > * > > * NOTE: The TSO limits only apply to the data payload part of > > * a TCP/IP packet. That means there is no need to subtract > > * space for ethernet-, vlan-, IP- or TCP- headers from the > > * TSO limits unless the hardware driver in question requires > > * so. > This comment suggests that the driver author doesn't need to do this. However, unless this is fixed in tcp_output(), the above patch should be applied to the driver. > In sys/net/if_var.h > > Thank you! > > --HPS > The problem I see is that, after doing the calculation of how many mbufs can be in the TSO segment, the code in tcp_output() will have calculated a value for "len" that will always be less than "tp->t_maxopd - optlen" when the if_hw_tsomaxsegcount limit has been hit (see where it does a "break;" out of the while loop). --> This does not imply "too many small fragments" for NFS, just that the driver's transmit segment limit has been reached, where most of them are mbuf clusters, but not the first ones. As such the code: /* * In case there are too many small fragments * don't use TSO: */ if (len <= max_len) { len = max_len; sendalot = 1; tso = 0; } Will always happen for this case and "tso" gets set to 0. Not what we want to happen, imho. The above code block was what I suggested should be commented out or deleted for the test. It appears you should also add the "- 1" in the driver sys/dev/ixgbe/if_ix.c.
rick > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Tue Aug 18 22:04:33 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 37FB59BBB99; Tue, 18 Aug 2015 22:04:33 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id C848C3E8; Tue, 18 Aug 2015 22:04:32 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:25oK8xU/RUE9GnOdjyMBtBKbuiPV8LGtZVwlr6E/grcLSJyIuqrYZhOPt8tkgFKBZ4jH8fUM07OQ6PC7HzBdqs7Q+Fk5M7VyFDY9wf0MmAIhBMPXQWbaF9XNKxIAIcJZSVV+9Gu6O0UGUOz3ZlnVv2HgpWVKQka3CwN5K6zPF5LIiIzvjqbpq8aVP1gD3Gv1SIgxBSv1hD2ZjtMRj4pmJ/R54TryiVwMRd5rw3h1L0mYhRf265T41pdi9yNNp6BprJYYAu3SNp41Rr1ADTkgL3t9pIiy7UGCHkOz4S48W2MN2iJFHxTI9lnBU5P4qSjr/r59wDKyJsDyRKs3SHKl9ag9GzHyjyJSDT8y8ynyg8dziK9e6Ea7ohV0wIrZZamIM/Vjc6fFfZURTDwSDY5qSyVdD9bkPMM0BO0bMLMd9tGlqg== X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2A9AgDBqtNV/61jaINdhF6DHrpkAQmFGoJYAoFzFAEBAQEBAQEBgQmCHYIHAQEEIwRSEAIBCBgCAg0ZAgJXAgSIQbt/lh8BAQEBAQEBAwEBAQEBARyBIooxhFY0B4JpgUMFlSGnGQImgg4cgW8igXuBBAEBAQ X-IronPort-AV: E=Sophos;i="5.15,705,1432612800"; d="scan'208";a="231546692" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 18 Aug 2015 18:04:25 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id D505315F565; Tue, 18 Aug 2015 18:04:25 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id KC3hLXnbHEvU; Tue, 
18 Aug 2015 18:04:25 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 489E015F56D; Tue, 18 Aug 2015 18:04:25 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id Ezlh0gWCO7WZ; Tue, 18 Aug 2015 18:04:25 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 276D315F565; Tue, 18 Aug 2015 18:04:25 -0400 (EDT) Date: Tue, 18 Aug 2015 18:04:25 -0400 (EDT) From: Rick Macklem To: Hans Petter Selasky Cc: Daniel Braniss , FreeBSD Net , Slawa Olhovchenkov , FreeBSD stable , Christopher Forgeron Message-ID: <805386587.25297673.1439935465127.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <55D331A5.9050601@selasky.org> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D331A5.9050601@selasky.org> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: Jou6U2JAhtGQyrGph78WF7y8CruWvQ== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 22:04:33 -0000 Hans Petter Selasky wrote: > On 
08/18/15 14:53, Rick Macklem wrote: > > If this is just a test machine, maybe you could test with these lines (at > > about #880) > > in sys/netinet/tcp_output.c commented out? (It looks to me like this will > > disable TSO > > for almost all the NFS writes.) > > - around line #880 in sys/netinet/tcp_output.c: > > /* > > * In case there are too many small fragments > > * don't use TSO: > > */ > > if (len <= max_len) { > > len = max_len; > > sendalot = 1; > > tso = 0; > > } > > > > This was added along with the other stuff that did the > > if_hw_tsomaxsegcount, etc and I > > never noticed it until now (not my patch). > > FYI: > > These lines are needed by other hardware, like the mlxen driver. If you > remove them mlxen will start doing m_defrag(). I believe if you set the > correct parameters in the "struct ifnet" for the TSO size/count limits > this problem will go away. If you print the "len" and "max_len" and also > the cases where TSO limits are reached, you'll see what parameter is > triggering it and needs to be increased. > Well, if the driver isn't setting if_hw_tsomaxsegcount correctly, then it is the driver that needs to be fixed. Having the above code block disable TSO for all of the NFS writes, including the ones that set if_hw_tsomaxsegcount correctly doesn't make sense to me. If the driver authors don't set these, the drivers do lots of m_defrag() calls. I have posted more than once to freebsd-net@ asking the driver authors to set these and some now have. (I can't do it, because I don't have the hardware to test it with.) I do think that most/all of them don't subtract 1 for the tcp/ip header and I don't think they should be expected to, since the driver isn't supposed to worry about the protocol at that level. --> I think tcp_output() should subtract one from the if_hw_tsomaxsegcount provided by the driver to handle this, since it chooses to count mbufs (the while() loop at around line #825 in sys/netinet/tcp_output.c.) 
before it prepends the tcp/ip header mbuf. rick > --HPS > From owner-freebsd-net@freebsd.org Tue Aug 18 23:20:15 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 181939BD9AE for ; Tue, 18 Aug 2015 23:20:15 +0000 (UTC) (envelope-from david@catwhisker.org) Received: from mailman.ysv.freebsd.org (mailman.ysv.freebsd.org [IPv6:2001:1900:2254:206a::50:5]) by mx1.freebsd.org (Postfix) with ESMTP id F3A5B9B0 for ; Tue, 18 Aug 2015 23:20:14 +0000 (UTC) (envelope-from david@catwhisker.org) Received: by mailman.ysv.freebsd.org (Postfix) id F0D9D9BD9AC; Tue, 18 Aug 2015 23:20:14 +0000 (UTC) Delivered-To: net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D7C019BD9AB; Tue, 18 Aug 2015 23:20:14 +0000 (UTC) (envelope-from david@catwhisker.org) Received: from albert.catwhisker.org (mx.catwhisker.org [198.144.209.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id F0E199A9; Tue, 18 Aug 2015 23:20:13 +0000 (UTC) (envelope-from david@catwhisker.org) Received: from albert.catwhisker.org (localhost [127.0.0.1]) by albert.catwhisker.org (8.15.2/8.15.2) with ESMTP id t7INK86g058120; Tue, 18 Aug 2015 16:20:08 -0700 (PDT) (envelope-from david@albert.catwhisker.org) Received: (from david@localhost) by albert.catwhisker.org (8.15.2/8.15.2/Submit) id t7INK7w9058119; Tue, 18 Aug 2015 16:20:07 -0700 (PDT) (envelope-from david) Date: Tue, 18 Aug 2015 16:20:07 -0700 From: David Wolfskill To: stable@freebsd.org, net@freebsd.org Subject: Panic [page fault] in _ieee80211_crypto_delkey(): stable/10/amd64 @r286878 Message-ID: <20150818232007.GN1189@albert.catwhisker.org> Mail-Followup-To: David Wolfskill , stable@freebsd.org, 
net@freebsd.org MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="V2Kn1ZfDiPlyQ6Qk" Content-Disposition: inline User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 Aug 2015 23:20:15 -0000 --V2Kn1ZfDiPlyQ6Qk Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable I was minding my own business in a staff meeting this afternoon, and my laptop rebooted; seems it got a panic. I've copied the core.txt.0 file to , along with a verbose dmesg.boot from this morning and output of "pciconf -l -v". This was running: FreeBSD localhost 10.2-STABLE FreeBSD 10.2-STABLE #122 r286878M/286880:1002500: Tue Aug 18 04:06:33 PDT 2015 root@g1-252.catwhisker.org:/common/S1/obj/usr/src/sys/CANARY amd64 Excerpts from core.txt.0: panic: page fault ... Unread portion of the kernel message buffer: panic: page fault cpuid = 2 KDB: stack backtrace: #0 0xffffffff80946e00 at kdb_backtrace+0x60 #1 0xffffffff8090a9e6 at vpanic+0x126 #2 0xffffffff8090a8b3 at panic+0x43 #3 0xffffffff80c8467b at trap_fatal+0x36b #4 0xffffffff80c8497d at trap_pfault+0x2ed #5 0xffffffff80c8401a at trap+0x47a #6 0xffffffff80c6a1b2 at calltrap+0x8 #7 0xffffffff809eff5e at ieee80211_crypto_delkey+0x1e #8 0xffffffff80a04d45 at ieee80211_ioctl_delkey+0x65 #9 0xffffffff80a03bd2 at ieee80211_ioctl_set80211+0x572 #10 0xffffffff80a2c323 at in_control+0x203 #11 0xffffffff809cd57b at ifioctl+0x15eb #12 0xffffffff8095ecf5 at kern_ioctl+0x255 #13 0xffffffff8095e9f0 at sys_ioctl+0x140 #14 0xffffffff80c84f97 at amd64_syscall+0x357 #15 0xffffffff80c6a49b at Xfast_syscall+0xfb Uptime: 9h45m0s ...
Loaded symbols for /usr/local/modules/rtc.ko #0 doadump (textdump=) at pcpu.h:219 219 pcpu.h: No such file or directory. in pcpu.h (kgdb) #0 doadump (textdump=) at pcpu.h:219 #1 0xffffffff8090a642 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451 #2 0xffffffff8090aa25 in vpanic (fmt=, ap=) at /usr/src/sys/kern/kern_shutdown.c:758 #3 0xffffffff8090a8b3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687 #4 0xffffffff80c8467b in trap_fatal (frame=, eva=) at /usr/src/sys/amd64/amd64/trap.c:851 #5 0xffffffff80c8497d in trap_pfault (frame=0xfffffe060d88b510, usermode=) at /usr/src/sys/amd64/amd64/trap.c:674 #6 0xffffffff80c8401a in trap (frame=0xfffffe060d88b510) at /usr/src/sys/amd64/amd64/trap.c:440 #7 0xffffffff80c6a1b2 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:236 #8 0xffffffff809f003a in _ieee80211_crypto_delkey () at /usr/src/sys/net80211/ieee80211_crypto.c:105 #9 0xffffffff809eff5e in ieee80211_crypto_delkey (vap=0xfffffe03d9070000, key=0xfffffe03d9070800) at /usr/src/sys/net80211/ieee80211_crypto.c:461 #10 0xffffffff80a04d45 in ieee80211_ioctl_delkey (vap=0xfffffe03d9070000, ireq=) at /usr/src/sys/net80211/ieee80211_ioctl.c:1252 #11 0xffffffff80a03bd2 in ieee80211_ioctl_set80211 () at /usr/src/sys/net80211/ieee80211_ioctl.c:2814 #12 0xffffffff80a2c323 in in_control (so=, cmd=9214790412651315593, data=0xfffffe060d88bb80 "", ifp=0x3, td=) at /usr/src/sys/netinet/in.c:308 #13 0xffffffff809cd57b in ifioctl (so=0xfffffe03d9070800, cmd=2149607914, data=0xfffffe060d88b8e0 "wlan0", td=0xfffff80170abb940) at /usr/src/sys/net/if.c:2770 #14 0xffffffff8095ecf5 in kern_ioctl (td=0xfffff80170abb940, fd=, com=18446741891212314624) at file.h:320 #15 0xffffffff8095e9f0 in sys_ioctl (td=0xfffff80170abb940, uap=0xfffffe060d88ba40) at /usr/src/sys/kern/sys_generic.c:718 #16 0xffffffff80c84f97 in amd64_syscall (td=0xfffff80170abb940, traced=0)
at subr_syscall.c:134 #17 0xffffffff80c6a49b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 #18 0x00000008012a2f9a in ?? () Previous frame inner to this frame (corrupt stack?) Current language: auto; currently minimal (kgdb) Physical 802.11 hardware is iwn(4). I can copy the vmcore.0 file itself after I get home -- it's ~625MB, and I'd rather not try to get that through over a WAN before I need to catch the shuttle to get home. :-} Peace, david -- David H. Wolfskill david@catwhisker.org Those who would murder in the name of God or prophet are blasphemous cowards. See http://www.catwhisker.org/~david/publickey.gpg for my public key. --V2Kn1ZfDiPlyQ6Qk Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQJ8BAEBCgBmBQJV072nXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXQ4RThEMDY4QTIxMjc1MDZFRDIzODYzRTc4 QTY3RjlDOERFRjQxOTNCAAoJEIpn+cje9Bk76qkP/jFmvBCveCG+FQCQw1CCux/k HDsAlV4C55R24EuTS3PC63WWH7nRl7soQkrwx2DJqpEAQOnV9DFRsfs+jwGLFAHF kflanGr/XXsT3PFV6dQkmfQhRVV2v4QYPVo0/5FtbqkwHucxKrH2RswoReRve3bD qmcrv80OHGsqnWApq7FetD4qDyeWoI6OIS4WZw87CQhqAv8fEvhrk/aeIvjUz2SC 9Bw2WcRGsyYs0MIL8pS5iaA9XUh7I2DER/odoIipPD/ZjhGqJG9+gM6yBJ27gidX pfcy33vApR6rT5tQlNvVubmEnSm/rH9K/xRSOvLvCeqgAKOWjhXtnLXqzfhq1JlM U0SmNNHZTOMAzNg4dwPgjxavjZAFSsGuBU3mHOF6P+ymSXi2vkYFJhmHr7Wvaqh+ dRhKQ8NjwhEMI5BUti5UwY3zAjD5phxOzqUnHaetPYTZcEJuob+5Fs3n0V1ji3Jx cGy8RMnl2Ly2vfD7f77domWd7SHTRf/1PEyKu1NnPaLImLyI1FNcZN6xjyiIxrrz hEhPRPgnfiyr3uGPxpk+QgxXfGysS089RIgGpagbXUIttAe5EjM0QI+tL++lD/tF M2fWm6rQ6WjDxKBmDac0ZtWgGHBcsYVv/YicH+gSCRgB7t5THwLlxJckIjh/tT7s SDDi4t5PwrrbabGur6m+ =ylI8 -----END PGP SIGNATURE----- --V2Kn1ZfDiPlyQ6Qk-- From owner-freebsd-net@freebsd.org Wed Aug 19 02:42:01 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id EB1929BD78A; Wed, 19 Aug 2015 02:42:01 +0000 (UTC)
(envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 74DB3191C; Wed, 19 Aug 2015 02:42:01 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:eIbFqhDlnn1Crzv6DF2DUyQJP3N1i/DPJgcQr6AfoPdwSPn6p8bcNUDSrc9gkEXOFd2CrakU0KyK7uu+AiQp2tWojjMrSNR0TRgLiMEbzUQLIfWuLgnFFsPsdDEwB89YVVVorDmROElRH9viNRWJ+iXhpQAbFhi3DwdpPOO9QteU1JTskbzvsMOIKyxzxxODIppKZC2sqgvQssREyaBDEY0WjiXzn31TZu5NznlpL1/A1zz158O34YIxu38I46Fp34d6XK77Z6U1S6BDRHRjajhtpZ6jiB/YUAHa5mcASn5E1V1MAhPZ91f0RJr8uDD28O1n126fNMzySLkyHjCj9LtqThHvzykdOjMz622SkdB5hqZW8y+nvAF1lo7IfJmOZr05eqLGYchcS3BMU8xKW2pGGIz7aoIOC+8IO6FcrpLhpl0AqlywHwShDvjjjyRUj3Xy0P4H1f88G1TGwBA4BIBJ93DVt8nucqkIXO2/16WOyi/MKPZf2DP44Y6PdhE6vfCKU7U3f9DcxEM0G0bDg0nDlYuwEzqT1+kJ+0KB5uxhTvnn32IurQdgijO0gMcxiIiPj4lTy1SSsW1ZyYAubeW1VFJ2e5afHZ9ZrCKLf992Wc4mSnprqQ400LALs4W3Oi8Qx8J06QTYbqm9coOLqjfqX+WVLDIw0Ghgcbm8gxu32VWnxfDxUtG0ll1D+HkW2uLQv2wAgkSAovOMTeFwqwL4gW6C X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2A6AgBW7NNV/61jaINdDoNhaQaDH7pfAQmBbQqFMUoCgXYUAQEBAQEBAQGBCYIdggcBAQQBAQEgBCcgCxACAQgOCgICDRYDAgIhBgEJFRECBAEHBwQBHASHeAMSDbp5kDQNhVcBAQEBAQEBAwEBAQEBAQEXBIEiijGCT4FiAQYBAQcVATMHgmmBQwWHI41/hQSFBnWDN5Evg0+DZQImgz9aIjMHfgEIFyOBBAEBAQ X-IronPort-AV: E=Sophos;i="5.15,706,1432612800"; d="scan'208";a="231577093" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 18 Aug 2015 22:41:59 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id A1A3F15F55D; Tue, 18 Aug 2015 22:41:59 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id wmVkoLeVSGkc; Tue, 18 Aug 2015 22:41:58 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 7FE1B15F571; Tue, 18 Aug 
2015 22:41:58 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id ZSabU6gBtopC; Tue, 18 Aug 2015 22:41:58 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 6054715F55D; Tue, 18 Aug 2015 22:41:58 -0400 (EDT) Date: Tue, 18 Aug 2015 22:41:58 -0400 (EDT) From: Rick Macklem To: Daniel Braniss , Hans Petter Selasky Cc: FreeBSD Net , Christopher Forgeron , FreeBSD stable , Slawa Olhovchenkov Message-ID: <333280926.25456572.1439952118371.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <20150817094145.GB3158@zxy.spb.ru> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Originating-IP: [172.17.95.12] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: zLdb+6CG04LhJxMghkR/pohBzDKCLQ== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 02:42:02 -0000 Daniel Braniss wrote: > > > On Aug 18, 2015, at 12:49 AM, Rick Macklem wrote: > > > > Daniel Braniss wrote: > >> > >>> On Aug 17, 2015, at 3:21 PM, Rick Macklem wrote: > >>> > >>> Daniel Braniss
wrote: > >>>> > >>>>> On Aug 17, 2015, at 1:41 PM, Christopher Forgeron > >>>>> > >>>>> wrote: > >>>>> > >>>>> FYI, I can regularly hit 9.3 Gib/s with my Intel X520-DA2's and FreeBSD > >>>>> 10.1. Before 10.1 it was less. > >>>>> > >>>> > >>>> this is NOT iperf/3 where i do get close to wire speed, > >>>> it's NFS writes, i.e., almost real work :-) > >>>> > >>>>> I used to tweak the card settings, but now it's just stock. You may > >>>>> want > >>>>> to > >>>>> check your settings, the Mellanox may just have better defaults for > >>>>> your > >>>>> switch. > >>>>> > >>> Have you tried disabling TSO for the Intel? With TSO enabled, it will be > >>> copying > >>> every transmitted mbuf chain to a new chain of mbuf clusters via > >>> m_defrag() when > >>> TSO is enabled. (Assuming you aren't an 82598 chip. Most seem to be the > >>> 82599 chip > >>> these days?) > >>> Oops, I think I screwed up. It looks like t_maxopd is limited to somewhat less than the mtu. If that is the case, the code block wouldn't do what I thought it would do. However, if_hw_tsomaxsegcount does need to be one less than the limit for the driver, since the tcp/ip header isn't yet prepended when it is counted. I think the code in tcp_output() should subtract 1, but you can change it in the driver to test this. Thanks for doing this, rick > >> > >> hi Rick > >> > >> how can i check the chip? > >> > > Haven't a clue. Does "dmesg" tell you? (To be honest, since disabling TSO > > helped, > > I'll bet you don't have a 82598.) > > > >>> This has been fixed in the driver very recently, but those fixes won't be > >>> in 10.1. > >>> > >>> rick > >>> ps: If you could test with 10.2, it would be interesting to see how the > >>> ix > >>> does with > >>> the current driver fixes in it? > >> > >> I knew TSO was involved! > >> ok, firstly, it's 10.2 stable. > >> with TSO enabled, ix is bad, around 64MGB/s.
> >> disabling TSO it's better, around 130 > >> > > Hmm, could you check to see if these lines are in sys/dev/ixgbe/if_ix.c at > > around > > line#2500? > > /* TSO parameters */ > > 2572 ifp->if_hw_tsomax = 65518; > > 2573 ifp->if_hw_tsomaxsegcount = IXGBE_82599_SCATTER; > > 2574 ifp->if_hw_tsomaxsegsize = 2048; > > > > They are in stable/10. I didn't look at releng/10.2. (And if they're in a > > #ifdef > > for FreeBSD11, take the #ifdef away.) > > If they are there and not ifdef'd, I can't explain why disabling TSO would > > help. > > Once TSO is fixed so that it handles the 64K transmit segments without > > copying all > > the mbufs, I suspect you might get better perf. with it enabled? > > > > this is 10.2 : > they are on lines 2509-2511 and I don't see any #ifdefs around it. > > the plot thickens :-) > > danny > > > Good luck with it, rick > > > >> still, mlxen0 is about 250! with and without TSO > >> > >> > >>> > >>>>> On Mon, Aug 17, 2015 at 6:41 AM, Slawa Olhovchenkov >>>>> > wrote: > >>>>> On Mon, Aug 17, 2015 at 10:27:41AM +0300, Daniel Braniss wrote: > >>>>> > >>>>>> hi, > >>>>>> I have a host (Dell R730) with both cards, connected to an HP8200 > >>>>>> switch at 10Gb. > >>>>>> when writing to the same storage (netapp) this is what I get: > >>>>>> ix0: ~130MGB/s > >>>>>> mlxen0 ~330MGB/s > >>>>>> this is via nfs/tcpv3 > >>>>>> > >>>>>> I can get similar (bad) performance with the mellanox if I > >>>>>> increase > >>>>>> the file size > >>>>>> to 512MGB. > >>>>> > >>>>> Looks like mellanox have internal buffer for caching and do ACK > >>>>> accelerating. > >>>>> > >>>>>> so at face value, it seems the mlxen does a better use of > >>>>>> resources > >>>>>> than the intel. > >>>>>> Any ideas how to improve ix/intel's performance?
> >>>>> Are you sure about netapp performance? > >>>>> _______________________________________________ > >>>>> freebsd-net@freebsd.org mailing list > >>>>> https://lists.freebsd.org/mailman/listinfo/freebsd-net > >>>>> > >>>>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org > >>>>> " > >>>>> > >>>> > >>>> _______________________________________________ > >>>> freebsd-stable@freebsd.org mailing list > >>>> https://lists.freebsd.org/mailman/listinfo/freebsd-stable > >>>> To unsubscribe, send any mail to > >>>> "freebsd-stable-unsubscribe@freebsd.org" > >> > >> _______________________________________________ > >> freebsd-stable@freebsd.org mailing list > >> https://lists.freebsd.org/mailman/listinfo/freebsd-stable > >> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" > > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" From owner-freebsd-net@freebsd.org Wed Aug 19 06:59:30 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id ABB269BD956; Wed, 19 Aug 2015 06:59:30 +0000 (UTC) (envelope-from hps@selasky.org) Received: from mail.turbocat.net (mail.turbocat.net [IPv6:2a01:4f8:d16:4514::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 67F8F1681; Wed, 19 Aug 2015 06:59:30 +0000 (UTC) (envelope-from hps@selasky.org) Received: from laptop015.home.selasky.org (cm-176.74.213.204.customer.telag.net [176.74.213.204]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.turbocat.net (Postfix) with ESMTPSA id 183721FE023; Wed, 19 Aug 2015 08:59:28 +0200 (CEST) Subject: Re: ix(intel) vs
mlxen(mellanox) 10Gb performance To: Rick Macklem References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> Cc: Daniel Braniss , FreeBSD Net , Christopher Forgeron , FreeBSD stable , Slawa Olhovchenkov From: Hans Petter Selasky Message-ID: <55D429A4.3010407@selasky.org> Date: Wed, 19 Aug 2015 09:00:52 +0200 User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 06:59:30 -0000 On 08/18/15 23:54, Rick Macklem wrote: > Ouch! Yes, I now see that the code that counts the # of mbufs is before the > code that adds the tcp/ip header mbuf. > > In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to whatever > the driver provides - 1. It is not the driver's responsibility to know if a tcp/ip > header mbuf will be added and is a lot less confusing that expecting the driver > author to know to subtract one. (I had mistakenly thought that tcp_output() had > added the tc/ip header mbuf before the loop that counts mbufs in the list. Btw, > this tcp/ip header mbuf also has leading space for the MAC layer header.) > Hi Rick, Your question is good. 
With the Mellanox hardware we have separate so-called inline data space for the TCP/IP headers, so if the TCP stack subtracts something, then we would need to add something to the limit, because then the scatter gather list is only used for the data part. Maybe it can be controlled by some kind of flag, if all the three TSO limits should include the TCP/IP/ethernet headers too. I'm pretty sure we want both versions. --HPS From owner-freebsd-net@freebsd.org Wed Aug 19 07:30:23 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 86E7D9BD560; Wed, 19 Aug 2015 07:30:23 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: from mail-pd0-x229.google.com (mail-pd0-x229.google.com [IPv6:2607:f8b0:400e:c02::229]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 51A5E1AB8; Wed, 19 Aug 2015 07:30:23 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: by pdob1 with SMTP id b1so22219525pdo.2; Wed, 19 Aug 2015 00:30:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:date:to:cc:subject:message-id:reply-to:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=qogDhQRuOFK/8Y4NwWfjhcQYE5bkkvb9vv4hdO8b688=; b=uBtsrv+nlh9+jpt1qEuc/lfvyHHI1MeApda4eX6HpBMMavTPwtPe+aC/Ir8H07nAsA agR/lb+IWyL/MDRccbRCzKE6pab6UX4MHU2xVLFlJ1MbBSYlRF6pDsF0/fjEHr/Ofymx 0iHNveprCGI2PeiKuT9b0Dya3CD5AAN2RwIq2ABSJ5zIFG56pMCTnB9q0NDtVWhh7nGB Zemq9Nm7e5V38NhSAApf7q5H1De9vW9G+L2/1YJQWIEb7AdJgCoJduCNE+vm8Zj/lQaA OzPmFzXpnkFssRIwSnEM9Bwmfjwi3xJEuussCv/U9vVF47ymT4k4YbQVFGMGySOow5/l ubdw== X-Received: by 10.70.44.228 with SMTP id h4mr21767238pdm.45.1439969422805; Wed, 19 Aug 2015 00:30:22 -0700 (PDT) Received: from pyunyh@gmail.com 
([106.247.248.2]) by smtp.gmail.com with ESMTPSA id gw3sm20428044pbc.46.2015.08.19.00.30.17 (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 19 Aug 2015 00:30:21 -0700 (PDT) From: Yonghyeon PYUN X-Google-Original-From: "Yonghyeon PYUN" Received: by pyunyh@gmail.com (sSMTP sendmail emulation); Wed, 19 Aug 2015 16:30:10 +0900 Date: Wed, 19 Aug 2015 16:30:10 +0900 To: Rick Macklem Cc: Hans Petter Selasky , Daniel Braniss , FreeBSD Net , Christopher Forgeron , FreeBSD stable , Slawa Olhovchenkov Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150819073010.GA964@michelle.fasterthan.com> Reply-To: pyunyh@gmail.com References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D331A5.9050601@selasky.org> <805386587.25297673.1439935465127.JavaMail.zimbra@uoguelph.ca> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <805386587.25297673.1439935465127.JavaMail.zimbra@uoguelph.ca> User-Agent: Mutt/1.4.2.3i X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 07:30:23 -0000 On Tue, Aug 18, 2015 at 06:04:25PM -0400, Rick Macklem wrote: > Hans Petter Selasky wrote: > > On 08/18/15 14:53, Rick Macklem wrote: > > > If this is just a test machine, maybe you could test with these lines (at > > > about #880) > > > in sys/netinet/tcp_output.c commented out? (It looks to me like this will > > > disable TSO > > > for almost all the NFS writes.) 
> > > - around line #880 in sys/netinet/tcp_output.c: > > > /* > > > * In case there are too many small fragments > > > * don't use TSO: > > > */ > > > if (len <= max_len) { > > > len = max_len; > > > sendalot = 1; > > > tso = 0; > > > } > > > > > > This was added along with the other stuff that did the > > > if_hw_tsomaxsegcount, etc and I > > > never noticed it until now (not my patch). > > > > FYI: > > > > These lines are needed by other hardware, like the mlxen driver. If you > > remove them mlxen will start doing m_defrag(). I believe if you set the > > correct parameters in the "struct ifnet" for the TSO size/count limits > > this problem will go away. If you print the "len" and "max_len" and also > > the cases where TSO limits are reached, you'll see what parameter is > > triggering it and needs to be increased. > > > Well, if the driver isn't setting if_hw_tsomaxsegcount correctly, then it > is the driver that needs to be fixed. > Having the above code block disable TSO for all of the NFS writes, including > the ones that set if_hw_tsomaxsegcount correctly doesn't make sense to me. > If the driver authors don't set these, the drivers do lots of m_defrag() > calls. I have posted more than once to freebsd-net@ asking the driver authors > to set these and some now have. (I can't do it, because I don't have the > hardware to test it with.) > Thanks for reminder. I have generated a diff against HEAD. https://people.freebsd.org/~yongari/tso.param.diff The diff restores optimal TSO parameters which were lost in r271946 for drivers that relied on sane default values. I'll commit it after some testing. > I do think that most/all of them don't subtract 1 for the tcp/ip header and > I don't think they should be expected to, since the driver isn't supposed to > worry about the protocol at that level. I agree. 
> --> I think tcp_output() should subtract one from the if_hw_tsomaxsegcount > provided by the driver to handle this, since it chooses to count mbufs > (the while() loop at around line #825 in sys/netinet/tcp_output.c.) > before it prepends the tcp/ip header mbuf. > > rick > > > --HPS From owner-freebsd-net@freebsd.org Wed Aug 19 07:42:25 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 019E09BDAF2; Wed, 19 Aug 2015 07:42:25 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: from mail-pa0-x22d.google.com (mail-pa0-x22d.google.com [IPv6:2607:f8b0:400e:c03::22d]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id C08CBE09; Wed, 19 Aug 2015 07:42:24 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: by paccq16 with SMTP id cq16so108134730pac.1; Wed, 19 Aug 2015 00:42:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:date:to:cc:subject:message-id:reply-to:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=S0ZZmoyRQiijWzna4W8TD4lopx6A6yFnDneAQwWxdZ0=; b=c6MrsZlVkTXk65TXNgwoj0qpu1Vn8yZQi7Iwh5AuPlHY12Fvjt+lc3qdrCzcBX5Z75 V8HHCQDe4XVxqluirLR3sOsREh5CITl1IaCdCmr6dQrmQDxRqkt+y5OPBYZWReMLyjq+ 7vCUP14XLg5z7GzZvx1+uZFQg+PyTUEGWTh1zQ85zzAxlhUn2xKURMlH6xcrhu9ku+oe +OE4qqeTy1FjqtgPkNc8H8jie6D2affav8JZFODIwK80OWG3LIpdDI0FFmFWckv0yt+L WuebY3qhtVuxBbzn+PjEn3tOzO8+0LyWw/e1/l0TGCWmjAqoG7s/4YEuB4gI0GjDIrDn pSVQ== X-Received: by 10.69.0.166 with SMTP id az6mr21689907pbd.168.1439970144299; Wed, 19 Aug 2015 00:42:24 -0700 (PDT) Received: from pyunyh@gmail.com ([106.247.248.2]) by smtp.gmail.com with ESMTPSA id r1sm2419947pdm.31.2015.08.19.00.42.19 (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 19 Aug 
2015 00:42:23 -0700 (PDT) From: Yonghyeon PYUN X-Google-Original-From: "Yonghyeon PYUN" Received: by pyunyh@gmail.com (sSMTP sendmail emulation); Wed, 19 Aug 2015 16:42:12 +0900 Date: Wed, 19 Aug 2015 16:42:12 +0900 To: Hans Petter Selasky Cc: Rick Macklem , Daniel Braniss , FreeBSD Net , Slawa Olhovchenkov , FreeBSD stable , Christopher Forgeron Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150819074212.GB964@michelle.fasterthan.com> Reply-To: pyunyh@gmail.com References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <55D429A4.3010407@selasky.org> User-Agent: Mutt/1.4.2.3i X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 07:42:25 -0000 On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > On 08/18/15 23:54, Rick Macklem wrote: > >Ouch! Yes, I now see that the code that counts the # of mbufs is before the > >code that adds the tcp/ip header mbuf. > > > >In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to > >whatever > >the driver provides - 1. It is not the driver's responsibility to know if > >a tcp/ip > >header mbuf will be added and is a lot less confusing that expecting the > >driver > >author to know to subtract one. 
(I had mistakenly thought that > >tcp_output() had > >added the tc/ip header mbuf before the loop that counts mbufs in the list. > >Btw, > >this tcp/ip header mbuf also has leading space for the MAC layer header.) > > > > Hi Rick, > > Your question is good. With the Mellanox hardware we have separate > so-called inline data space for the TCP/IP headers, so if the TCP stack > subtracts something, then we would need to add something to the limit, > because then the scatter gather list is only used for the data part. > I think all drivers in tree don't subtract 1 for if_hw_tsomaxsegcount. Probably touching Mellanox driver would be simpler than fixing all other drivers in tree. > Maybe it can be controlled by some kind of flag, if all the three TSO > limits should include the TCP/IP/ethernet headers too. I'm pretty sure > we want both versions. > Hmm, I'm afraid it's already complex. Drivers have to tell almost the same information to both bus_dma(9) and network stack. From owner-freebsd-net@freebsd.org Wed Aug 19 07:50:23 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 264AF9BDE74; Wed, 19 Aug 2015 07:50:23 +0000 (UTC) (envelope-from hps@selasky.org) Received: from mail.turbocat.net (heidi.turbocat.net [88.198.202.214]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id D8AEBA2C; Wed, 19 Aug 2015 07:50:22 +0000 (UTC) (envelope-from hps@selasky.org) Received: from laptop015.home.selasky.org (cm-176.74.213.204.customer.telag.net [176.74.213.204]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.turbocat.net (Postfix) with ESMTPSA id 995EE1FE023; Wed, 19 Aug 2015 09:50:19 +0200 (CEST) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance To: 
pyunyh@gmail.com References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> Cc: Rick Macklem , FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron , Daniel Braniss From: Hans Petter Selasky Message-ID: <55D43590.8050508@selasky.org> Date: Wed, 19 Aug 2015 09:51:44 +0200 User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: <20150819074212.GB964@michelle.fasterthan.com> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 07:50:23 -0000 On 08/19/15 09:42, Yonghyeon PYUN wrote: > On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: >> On 08/18/15 23:54, Rick Macklem wrote: >>> Ouch! Yes, I now see that the code that counts the # of mbufs is before the >>> code that adds the tcp/ip header mbuf. >>> >>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to >>> whatever >>> the driver provides - 1. It is not the driver's responsibility to know if >>> a tcp/ip >>> header mbuf will be added and is a lot less confusing that expecting the >>> driver >>> author to know to subtract one. 
(I had mistakenly thought that >>> tcp_output() had >>> added the tc/ip header mbuf before the loop that counts mbufs in the list. >>> Btw, >>> this tcp/ip header mbuf also has leading space for the MAC layer header.) >>> >> >> Hi Rick, >> >> Your question is good. With the Mellanox hardware we have separate >> so-called inline data space for the TCP/IP headers, so if the TCP stack >> subtracts something, then we would need to add something to the limit, >> because then the scatter gather list is only used for the data part. >> > > I think all drivers in tree don't subtract 1 for > if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > simpler than fixing all other drivers in tree. Hi, If you change the behaviour don't forget to update and/or add comments describing it. Maybe the amount of subtraction could be defined by some macro? Then drivers which inline the headers can subtract it? Your suggestion is fine by me. An attempt was made to preserve the initial TSO limits, and I believe the TSO limits never accounted for IP/TCP/ethernet/VLAN headers! > >> Maybe it can be controlled by some kind of flag, if all the three TSO >> limits should include the TCP/IP/ethernet headers too. I'm pretty sure >> we want both versions. >> > > Hmm, I'm afraid it's already complex. Drivers have to tell almost > the same information to both bus_dma(9) and network stack. You're right it's complicated. Not sure if bus_dma can provide an API for this though. 
--HPS From owner-freebsd-net@freebsd.org Wed Aug 19 07:52:34 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id EB7139BDF9E; Wed, 19 Aug 2015 07:52:34 +0000 (UTC) (envelope-from hps@selasky.org) Received: from mail.turbocat.net (heidi.turbocat.net [88.198.202.214]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A967BF8E; Wed, 19 Aug 2015 07:52:34 +0000 (UTC) (envelope-from hps@selasky.org) Received: from laptop015.home.selasky.org (cm-176.74.213.204.customer.telag.net [176.74.213.204]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.turbocat.net (Postfix) with ESMTPSA id 6EE091FE023; Wed, 19 Aug 2015 09:52:32 +0200 (CEST) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance To: pyunyh@gmail.com References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <17871443-E105-4434-80B1-6939306A865F@cs.huji.ac.il> <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> Cc: Rick Macklem , FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron , Daniel Braniss From: Hans Petter Selasky Message-ID: <55D43615.1030401@selasky.org> Date: Wed, 19 Aug 2015 09:53:57 +0200 User-Agent: Mozilla/5.0 (X11; FreeBSD amd64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0 MIME-Version: 1.0 In-Reply-To: 
<20150819074212.GB964@michelle.fasterthan.com> Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 07:52:35 -0000 On 08/19/15 09:42, Yonghyeon PYUN wrote: > On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: >> On 08/18/15 23:54, Rick Macklem wrote: >>> Ouch! Yes, I now see that the code that counts the # of mbufs is before the >>> code that adds the tcp/ip header mbuf. >>> >>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to >>> whatever >>> the driver provides - 1. It is not the driver's responsibility to know if >>> a tcp/ip >>> header mbuf will be added and is a lot less confusing that expecting the >>> driver >>> author to know to subtract one. (I had mistakenly thought that >>> tcp_output() had >>> added the tc/ip header mbuf before the loop that counts mbufs in the list. >>> Btw, >>> this tcp/ip header mbuf also has leading space for the MAC layer header.) >>> >> >> Hi Rick, >> >> Your question is good. With the Mellanox hardware we have separate >> so-called inline data space for the TCP/IP headers, so if the TCP stack >> subtracts something, then we would need to add something to the limit, >> because then the scatter gather list is only used for the data part. >> > > I think all drivers in tree don't subtract 1 for > if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > simpler than fixing all other drivers in tree. > >> Maybe it can be controlled by some kind of flag, if all the three TSO >> limits should include the TCP/IP/ethernet headers too. I'm pretty sure >> we want both versions. >> > > Hmm, I'm afraid it's already complex. Drivers have to tell almost > the same information to both bus_dma(9) and network stack. 
Don't forget that not all drivers in the tree set the TSO limits before if_attach(), so possibly the subtraction of one TSO fragment needs to go into ip_output() .... --HPS From owner-freebsd-net@freebsd.org Wed Aug 19 08:13:21 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 7A9E59BD96A; Wed, 19 Aug 2015 08:13:21 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: from mail-pa0-x22c.google.com (mail-pa0-x22c.google.com [IPv6:2607:f8b0:400e:c03::22c]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 456BA24B; Wed, 19 Aug 2015 08:13:21 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: by pawq9 with SMTP id q9so55586183paw.3; Wed, 19 Aug 2015 01:13:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:date:to:cc:subject:message-id:reply-to:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=CLkjp+SqMahrtwi6FLVeLyQrvmoXwkpvvLRb8VLmBnQ=; b=zRQlmWuUhdPEueQTUHWN6PlbAi3ALpkq1vpKv1dReKcTFrdBFEpwOHJe2MMWnO7hbD 5QRPq8nWkMVD90DbgF9CcIigUZwPK1/y1yV1cUWolyzJwT8BdrzrhC11CzxoDIxo9JC3 4L5KYMttH3TjFAFLmxenMGzRLse0u0ud9/RZqC8RuViyagib7OZqhi+4Vp7sGuHu3Cfg cE2AlxlyJaj7ZAgM3pgIR8Ny0SkB4gt5r3rrR0jiiWAcfW2T4FVNnsfpt6P8L7jaHq9u rH1f7Bg/5BFbiq4u9pBC9WxUs8QjU1tqQVTdvtAqcv2teKf2wm01IjlFod17o+4nrRRP O9dg== X-Received: by 10.68.250.98 with SMTP id zb2mr22368291pbc.40.1439972000644; Wed, 19 Aug 2015 01:13:20 -0700 (PDT) Received: from pyunyh@gmail.com ([106.247.248.2]) by smtp.gmail.com with ESMTPSA id d5sm20648024pdn.74.2015.08.19.01.13.15 (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 19 Aug 2015 01:13:19 -0700 (PDT) From: Yonghyeon PYUN X-Google-Original-From: "Yonghyeon PYUN" Received: by pyunyh@gmail.com 
(sSMTP sendmail emulation); Wed, 19 Aug 2015 17:13:08 +0900 Date: Wed, 19 Aug 2015 17:13:08 +0900 To: Hans Petter Selasky Cc: Rick Macklem , FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron , Daniel Braniss Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150819081308.GC964@michelle.fasterthan.com> Reply-To: pyunyh@gmail.com References: <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <7F892C70-9C04-4468-9514-EDBFE75CF2C6@cs.huji.ac.il> <805850043.24018217.1439848150695.JavaMail.zimbra@uoguelph.ca> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43590.8050508@selasky.org> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <55D43590.8050508@selasky.org> User-Agent: Mutt/1.4.2.3i X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 08:13:21 -0000 On Wed, Aug 19, 2015 at 09:51:44AM +0200, Hans Petter Selasky wrote: > On 08/19/15 09:42, Yonghyeon PYUN wrote: > >On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > >>On 08/18/15 23:54, Rick Macklem wrote: > >>>Ouch! Yes, I now see that the code that counts the # of mbufs is before > >>>the > >>>code that adds the tcp/ip header mbuf. > >>> > >>>In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to > >>>whatever > >>>the driver provides - 1. It is not the driver's responsibility to know if > >>>a tcp/ip > >>>header mbuf will be added and is a lot less confusing that expecting the > >>>driver > >>>author to know to subtract one. 
(I had mistakenly thought that > >>>tcp_output() had > >>>added the tc/ip header mbuf before the loop that counts mbufs in the > >>>list. > >>>Btw, > >>>this tcp/ip header mbuf also has leading space for the MAC layer header.) > >>> > >> > >>Hi Rick, > >> > >>Your question is good. With the Mellanox hardware we have separate > >>so-called inline data space for the TCP/IP headers, so if the TCP stack > >>subtracts something, then we would need to add something to the limit, > >>because then the scatter gather list is only used for the data part. > >> > > > >I think all drivers in tree don't subtract 1 for > >if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > >simpler than fixing all other drivers in tree. > > Hi, > > If you change the behaviour don't forget to update and/or add comments > describing it. Maybe the amount of subtraction could be defined by some > macro? Then drivers which inline the headers can subtract it? > I'm also ok with your suggestion. > Your suggestion is fine by me. > > The initial TSO limits were tried to be preserved, and I believe that > TSO limits never accounted for IP/TCP/ETHERNET/VLAN headers! > I guess FreeBSD used to follow MS LSOv1 specification with minor exception in pseudo checksum computation. If I recall correctly the specification says upper stack can generate up to IP_MAXPACKET sized packet. Other L2 headers like ethernet/vlan header size is not included in the packet and it's drivers responsibility to allocate additional DMA buffers/segments for L2 headers. > > > >>Maybe it can be controlled by some kind of flag, if all the three TSO > >>limits should include the TCP/IP/ethernet headers too. I'm pretty sure > >>we want both versions. > >> > > > >Hmm, I'm afraid it's already complex. Drivers have to tell almost > >the same information to both bus_dma(9) and network stack. > > You're right it's complicated. Not sure if bus_dma can provide an API > for this though. 
> > --HPS From owner-freebsd-net@freebsd.org Wed Aug 19 12:14:02 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B4A109BC02B; Wed, 19 Aug 2015 12:14:02 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 2C9B0979; Wed, 19 Aug 2015 12:14:01 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:GkC7YBd9aSdqXt8Roqe2oc6zlGMj4u6mDksu8pMizoh2WeGdxc6/Yx7h7PlgxGXEQZ/co6odzbGG6Oa8Bydcvt6oizMrTt9lb1c9k8IYnggtUoauKHbQC7rUVRE8B9lIT1R//nu2YgB/Ecf6YEDO8DXptWZBUiv2OQc9HOnpAIma153xjLDpvcGNKFkXzBOGIppMbzyO5T3LsccXhYYwYo0Q8TDu5kVyRuJN2GlzLkiSlRuvru25/Zpk7jgC86l5r50IeezAcq85Vb1VCig9eyBwvZWz9EqLcQza/moBVHQWuhVNCgnBqhr9W8TfqCz/49B80yrSGMT9TrQ5XHz29aJiQxzshSIvKjk27WzTksw2h6sN80HpnAB234OBONLdD/F5ZK6IOIpCHWc= X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2AnAgA0cdRV/61jaINdg29pBoMfuiQBCYFtCoUxSgKBehQBAQEBAQEBAYEJgh2CBwEBBAEBASArIAsQAgEIGAICDRkCAicBCSYCDAcEARwEiA0NuUuWGwEBAQEBAQEDAQEBAQEZBIEiijGEMQEGAQEcNAeCaYFDBZUjhQSFB4QskDeESINmAiaCDhyBbyIzB34BCBcjgQQBAQE X-IronPort-AV: E=Sophos;i="5.15,709,1432612800"; d="scan'208";a="233342498" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 19 Aug 2015 08:14:00 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 75F1715F574; Wed, 19 Aug 2015 08:14:00 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id 9c6wl2qMDweJ; Wed, 19 Aug 2015 08:13:59 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id A102F15F577; Wed, 19 Aug 2015 08:13:59 -0400 (EDT) X-Virus-Scanned: 
amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id p_9FsjkfQATs; Wed, 19 Aug 2015 08:13:59 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 7EE9215F574; Wed, 19 Aug 2015 08:13:59 -0400 (EDT) Date: Wed, 19 Aug 2015 08:13:59 -0400 (EDT) From: Rick Macklem To: pyunyh@gmail.com Cc: Hans Petter Selasky , FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron , Daniel Braniss Message-ID: <1154739904.25677089.1439986439408.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <20150819081308.GC964@michelle.fasterthan.com> References: <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43590.8050508@selasky.org> <20150819081308.GC964@michelle.fasterthan.com> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: wv7zo8RPDc9ayJBYkipemnefFyp/cg== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 12:14:02 -0000 Yonghyeon PYUN wrote: > On Wed, Aug 19, 2015 at 09:51:44AM +0200, Hans Petter Selasky wrote: > > On 08/19/15 09:42, Yonghyeon PYUN wrote: > > >On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > > >>On 08/18/15 23:54, Rick Macklem 
wrote: > > >>>Ouch! Yes, I now see that the code that counts the # of mbufs is before > > >>>the > > >>>code that adds the tcp/ip header mbuf. > > >>> > > >>>In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to > > >>>whatever > > >>>the driver provides - 1. It is not the driver's responsibility to know > > >>>if > > >>>a tcp/ip > > >>>header mbuf will be added and is a lot less confusing that expecting the > > >>>driver > > >>>author to know to subtract one. (I had mistakenly thought that > > >>>tcp_output() had > > >>>added the tc/ip header mbuf before the loop that counts mbufs in the > > >>>list. > > >>>Btw, > > >>>this tcp/ip header mbuf also has leading space for the MAC layer > > >>>header.) > > >>> > > >> > > >>Hi Rick, > > >> > > >>Your question is good. With the Mellanox hardware we have separate > > >>so-called inline data space for the TCP/IP headers, so if the TCP stack > > >>subtracts something, then we would need to add something to the limit, > > >>because then the scatter gather list is only used for the data part. > > >> > > > > > >I think all drivers in tree don't subtract 1 for > > >if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > > >simpler than fixing all other drivers in tree. > > > > Hi, > > > > If you change the behaviour don't forget to update and/or add comments > > describing it. Maybe the amount of subtraction could be defined by some > > macro? Then drivers which inline the headers can subtract it? > > > > I'm also ok with your suggestion. > > > Your suggestion is fine by me. > > > > > The initial TSO limits were tried to be preserved, and I believe that > > TSO limits never accounted for IP/TCP/ETHERNET/VLAN headers! > > > > I guess FreeBSD used to follow MS LSOv1 specification with minor > exception in pseudo checksum computation. If I recall correctly the > specification says upper stack can generate up to IP_MAXPACKET sized > packet. 
Other L2 headers like ethernet/vlan header size is not > included in the packet and it's drivers responsibility to allocate > additional DMA buffers/segments for L2 headers. > Yep. The default for if_hw_tsomax was reduced from IP_MAXPACKET to 32 * MCLBYTES - max_ethernet_header_size as a workaround/hack so that devices limited to 32 transmit segments would work (ie. the entire packet, including MAC header would fit in 32 MCLBYTE clusters). This implied that many drivers did end up using m_defrag() to copy the mbuf list to one made up of 32 MCLBYTE clusters. If a driver sets if_hw_tsomaxsegcount correctly, then it can set if_hw_tsomax to whatever it can handle as the largest TSO packet (without MAC header) the hardware can handle. If it can handle > IP_MAXPACKET, then it can set it to that. rick > > > > > >>Maybe it can be controlled by some kind of flag, if all the three TSO > > >>limits should include the TCP/IP/ethernet headers too. I'm pretty sure > > >>we want both versions. > > >> > > > > > >Hmm, I'm afraid it's already complex. Drivers have to tell almost > > >the same information to both bus_dma(9) and network stack. > > > > You're right it's complicated. Not sure if bus_dma can provide an API > > for this though. 
> > > > --HPS > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Wed Aug 19 12:22:39 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A83D19BC36F; Wed, 19 Aug 2015 12:22:39 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 0B1A410A9; Wed, 19 Aug 2015 12:22:38 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:Vxk/qxbJTY/CfjipoofHFNv/LSx+4OfEezUN459isYplN5qZpcuybnLW6fgltlLVR4KTs6sC0LqN9fy+EjBfqb+681k8M7V0HycfjssXmwFySOWkMmbcaMDQUiohAc5ZX0Vk9XzoeWJcGcL5ekGA6ibqtW1aJBzzOEJPK/jvHcaK1oLsh7v0p8OYP1oArQH+SI0xBS3+lR/WuMgSjNkqAYcK4TyNnEF1ff9Lz3hjP1OZkkW0zM6x+Jl+73YY4Kp5pIZoGJ/3dKUgTLFeEC9ucyVsvJWq5lH/Sl6v730HGl0bjgZFGUD+4RXzRZTg+n/6rvFVwySeNNb1XPYzQzv0vIlxTxq9siYMNHYc+WrUjsF1xPZBpRuqpBhyxqbJZ46IOf5mfuXWdIVJFiJ6Qs9NWnkZUcuHZIwVAr9EZL4Aog== X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2AnAgCWc9RV/61jaINdg29pBoMfuiQBCYFtCoUxSgKBehQBAQEBAQEBAYEJgh2CBwEBBAEBASArIAsQAgEIGAICDRkCAicBCSYCBAgHBAEcBIgNDblVlhsBAQEBAQEBAQEBAQEBAQEBARcEgSKKMYQyBgEBHDQHgmmBQwWVI4UEhQcxg3sVlGqDZgImhBkiMwd/CBcjgQQBAQE X-IronPort-AV: E=Sophos;i="5.15,709,1432612800"; d="scan'208";a="231624929" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 19 Aug 2015 08:22:37 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 1DFE315F574; Wed, 19 Aug 2015 08:22:37 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) 
(amavisd-new, port 10032) with ESMTP id oGHvcUGKoy3Z; Wed, 19 Aug 2015 08:22:36 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 76F4315F577; Wed, 19 Aug 2015 08:22:36 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id aicDNu_ZdPFR; Wed, 19 Aug 2015 08:22:36 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 5630215F574; Wed, 19 Aug 2015 08:22:36 -0400 (EDT) Date: Wed, 19 Aug 2015 08:22:36 -0400 (EDT) From: Rick Macklem To: Hans Petter Selasky Cc: pyunyh@gmail.com, FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron Message-ID: <160577762.25683294.1439986956328.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <55D43615.1030401@selasky.org> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: 3tLtiaT//pF56x9z31ShK3tEdfsYmQ== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 12:22:39 -0000 Hans Petter Selasky wrote: > On 
08/19/15 09:42, Yonghyeon PYUN wrote: > > On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > >> On 08/18/15 23:54, Rick Macklem wrote: > >>> Ouch! Yes, I now see that the code that counts the # of mbufs is before > >>> the > >>> code that adds the tcp/ip header mbuf. > >>> > >>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to > >>> whatever > >>> the driver provides - 1. It is not the driver's responsibility to know if > >>> a tcp/ip > >>> header mbuf will be added and is a lot less confusing that expecting the > >>> driver > >>> author to know to subtract one. (I had mistakenly thought that > >>> tcp_output() had > >>> added the tc/ip header mbuf before the loop that counts mbufs in the > >>> list. > >>> Btw, > >>> this tcp/ip header mbuf also has leading space for the MAC layer header.) > >>> > >> > >> Hi Rick, > >> > >> Your question is good. With the Mellanox hardware we have separate > >> so-called inline data space for the TCP/IP headers, so if the TCP stack > >> subtracts something, then we would need to add something to the limit, > >> because then the scatter gather list is only used for the data part. > >> > > > > I think all drivers in tree don't subtract 1 for > > if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > > simpler than fixing all other drivers in tree. > > > >> Maybe it can be controlled by some kind of flag, if all the three TSO > >> limits should include the TCP/IP/ethernet headers too. I'm pretty sure > >> we want both versions. > >> > > > > Hmm, I'm afraid it's already complex. Drivers have to tell almost > > the same information to both bus_dma(9) and network stack. > > Don't forget that not all drivers in the tree set the TSO limits before > if_attach(), so possibly the subtraction of one TSO fragment needs to go > into ip_output() .... > I think setting them before a call to ether_ifattach() should be required and any driver that doesn't do that needs to be fixed. 
Also, I notice that "32 * MCLBYTES - (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN)" is getting written as "65536 - (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN)" which obscures the reason it is the default. It probably isn't the correct default for any driver that sets if_hw_tsomaxsegcount, but is close to IP_MAXPACKET, so the breakage is mostly theoretical. rick > --HPS > > _______________________________________________ > freebsd-stable@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-stable > To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Wed Aug 19 12:26:22 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 92BD49BC52D; Wed, 19 Aug 2015 12:26:22 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 113DC162B; Wed, 19 Aug 2015 12:26:21 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:62rayR/Tb+yobf9uRHKM819IXTAuvvDOBiVQ1KB91OIcTK2v8tzYMVDF4r011RmSDd6dt6wP17WempujcFJDyK7JiGoFfp1IWk1NouQttCtkPvS4D1bmJuXhdS0wEZcKflZk+3amLRodQ56mNBXsq3G/pQQfBg/4fVIsYL+lQciO0Y/riKibwN76XUZhvHKFe7R8LRG7/036l/I9ps9cEJs30QbDuXBSeu5blitCLFOXmAvgtI/rpMYwuwwZgf8q9tZBXKPmZOx4COUAVHV1e1wyse3iswKLdQaT+nYGGl4blhNTABmNuBHiRb/qvy/zrelsni6AMpulY6ozXGGY7qxoADrhgyQDOjtxpHvSg8dziK9eiA+mqAFyx5bUJoqcYqktNpjBdM8XEDISFv1aUDZMV8blN9MC X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2AnAgCWc9RV/61jaINdg29pBoMfuiQBCYFtCoUxSgKBehQBAQEBAQEBAYEJgh2CBwEBBAEBASArIAsQAgEIGAICDQQVAgInAQkmAgQIBwQBHASIDQ25VZYbAQEBAQEBAQEBAQEBAQEBAQEXBIEiijGEMgYBARw0BwqCX4FDBZUjhQSFB4QslH+DZgImhBkiMwd/CBcjgQQBAQE X-IronPort-AV: E=Sophos;i="5.15,709,1432612800"; d="scan'208";a="231625582" Received: from nipigon.cs.uoguelph.ca (HELO 
zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 19 Aug 2015 08:26:20 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id AB0E615F574; Wed, 19 Aug 2015 08:26:20 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id zmNABxJOE1Hh; Wed, 19 Aug 2015 08:26:20 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id EF4AC15F578; Wed, 19 Aug 2015 08:26:19 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id A6CIpa0jZlVv; Wed, 19 Aug 2015 08:26:19 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id D0CD715F574; Wed, 19 Aug 2015 08:26:19 -0400 (EDT) Date: Wed, 19 Aug 2015 08:26:19 -0400 (EDT) From: Rick Macklem To: Hans Petter Selasky Cc: pyunyh@gmail.com, FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron , Daniel Braniss Message-ID: <901585223.25686295.1439987179835.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <55D43615.1030401@selasky.org> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.10] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: 
ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: b7KE9Og530PFxSAdUMOGyYtExjLo2w== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 12:26:22 -0000 Hans Petter Selasky wrote: > On 08/19/15 09:42, Yonghyeon PYUN wrote: > > On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > >> On 08/18/15 23:54, Rick Macklem wrote: > >>> Ouch! Yes, I now see that the code that counts the # of mbufs is before > >>> the > >>> code that adds the tcp/ip header mbuf. > >>> > >>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to > >>> whatever > >>> the driver provides - 1. It is not the driver's responsibility to know if > >>> a tcp/ip > >>> header mbuf will be added and is a lot less confusing that expecting the > >>> driver > >>> author to know to subtract one. (I had mistakenly thought that > >>> tcp_output() had > >>> added the tc/ip header mbuf before the loop that counts mbufs in the > >>> list. > >>> Btw, > >>> this tcp/ip header mbuf also has leading space for the MAC layer header.) > >>> > >> > >> Hi Rick, > >> > >> Your question is good. With the Mellanox hardware we have separate > >> so-called inline data space for the TCP/IP headers, so if the TCP stack > >> subtracts something, then we would need to add something to the limit, > >> because then the scatter gather list is only used for the data part. > >> > > > > I think all drivers in tree don't subtract 1 for > > if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > > simpler than fixing all other drivers in tree. > > > >> Maybe it can be controlled by some kind of flag, if all the three TSO > >> limits should include the TCP/IP/ethernet headers too. I'm pretty sure > >> we want both versions. > >> > > > > Hmm, I'm afraid it's already complex. 
Drivers have to tell almost > > the same information to both bus_dma(9) and network stack. > > Don't forget that not all drivers in the tree set the TSO limits before > if_attach(), so possibly the subtraction of one TSO fragment needs to go > into ip_output() .... > I don't really care where it gets subtracted, so long as it is subtracted at least by default, so all the drivers that don't subtract it get fixed. However, I might argue that tcp_output() is the correct place, since tcp_output() is where the tcp/ip header mbuf is prepended to the list. The subtraction is just taking into account the mbuf that tcp_output() will be adding to the head of the list and it should count that in the "while()" loop. rick > --HPS > > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Wed Aug 19 13:00:39 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 493949BCC46; Wed, 19 Aug 2015 13:00:39 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id B555480E; Wed, 19 Aug 2015 13:00:38 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:DMCj8hzucrlBwOjXCy+O+j09IxM/srCxBDY+r6Qd0e0eIJqq85mqBkHD//Il1AaPBtWAra4awLeO+4nbGkU+or+5+EgYd5JNUxJXwe43pCcHRPC/NEvgMfTxZDY7FskRHHVs/nW8LFQHUJ2mPw6anHS+4HYoFwnlMkItf6KuStWU05r8irj60qaQSjsLrQL1Wal1IhSyoFeZnegtqqwmFJwMzADUqGBDYeVcyDAgD1uSmxHh+pX4p8Y7oGx48sgs/M9YUKj8Y79wDfkBVGxnYCgJ45jLvB/YBTOC+mcRSC0tnx5BGAvUpEX6RozZqSb+v/F+yW+dJ8KgHp4uXjH31aZgS1fNgSwEMzM8uDXNj8V7j6ZWpTq8oBNizorMYMeePawtLevmYdoGSD8ZDY5qXCtbD9b5NtNXAg== X-IronPort-Anti-Spam-Filtered: true 
X-IronPort-Anti-Spam-Result: A2AnAgAsfdRV/61jaINdg29pBoMfuiQBCYFtCoUxSgKBehQBAQEBAQEBAYEJgh2CBwEBBAEBASAEJyALEAIBCBgCAg0ZAgInAQkmAgQIBwQBGgIEiA0NuX6WHgEBAQEBAQEBAQEBAQEBAQEBFwSBIooxhDIGAQEcNAeCaYFDBZUjhQSFB4Qsh0aIcYRIg2YCJoIOHIFvIjMHfwgXI4EEAQEB X-IronPort-AV: E=Sophos;i="5.15,709,1432612800"; d="scan'208";a="231630411" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 19 Aug 2015 09:00:37 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id EC41A15F563; Wed, 19 Aug 2015 09:00:36 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id XSfv8pzv90CU; Wed, 19 Aug 2015 09:00:36 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 0C67A15F56D; Wed, 19 Aug 2015 09:00:36 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id HfBl68YG1Ywd; Wed, 19 Aug 2015 09:00:35 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id CC6C415F563; Wed, 19 Aug 2015 09:00:35 -0400 (EDT) Date: Wed, 19 Aug 2015 09:00:35 -0400 (EDT) From: Rick Macklem To: Hans Petter Selasky Cc: pyunyh@gmail.com, FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron , Daniel Braniss Message-ID: <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <55D43615.1030401@selasky.org> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> 
<55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: W+G5Djot61DfsWhTAw5fqJmL10IusQ== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 13:00:39 -0000 Hans Petter Selasky wrote: > On 08/19/15 09:42, Yonghyeon PYUN wrote: > > On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > >> On 08/18/15 23:54, Rick Macklem wrote: > >>> Ouch! Yes, I now see that the code that counts the # of mbufs is before > >>> the > >>> code that adds the tcp/ip header mbuf. > >>> > >>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to > >>> whatever > >>> the driver provides - 1. It is not the driver's responsibility to know if > >>> a tcp/ip > >>> header mbuf will be added and is a lot less confusing that expecting the > >>> driver > >>> author to know to subtract one. (I had mistakenly thought that > >>> tcp_output() had > >>> added the tc/ip header mbuf before the loop that counts mbufs in the > >>> list. > >>> Btw, > >>> this tcp/ip header mbuf also has leading space for the MAC layer header.) > >>> > >> > >> Hi Rick, > >> > >> Your question is good. With the Mellanox hardware we have separate > >> so-called inline data space for the TCP/IP headers, so if the TCP stack > >> subtracts something, then we would need to add something to the limit, > >> because then the scatter gather list is only used for the data part. 
> >> > > > > I think all drivers in tree don't subtract 1 for > > if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > > simpler than fixing all other drivers in tree. > > > >> Maybe it can be controlled by some kind of flag, if all the three TSO > >> limits should include the TCP/IP/ethernet headers too. I'm pretty sure > >> we want both versions. > >> > > > > Hmm, I'm afraid it's already complex. Drivers have to tell almost > > the same information to both bus_dma(9) and network stack. > > Don't forget that not all drivers in the tree set the TSO limits before > if_attach(), so possibly the subtraction of one TSO fragment needs to go > into ip_output() .... > Ok, I realized that some drivers may not know the answers before ether_ifattach(), due to the way they are configured/written (I saw the use of if_hw_tsomax_update() in the patch). If it is subtracted as a part of the assignment to if_hw_tsomaxsegcount in tcp_output() at line#791 in tcp_output() like the following, I don't think it should matter if the values are set before ether_ifattach()? /* * Subtract 1 for the tcp/ip header mbuf that * will be prepended to the mbuf chain in this * function in the code below this block. */ if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1; I don't have a good solution for the case where a driver doesn't plan on using the tcp/ip header provided by tcp_output() except to say the driver can add one to the setting to compensate for that (and if they fail to do so, it still works, although somewhat suboptimally). When I now read the comment in sys/net/if_var.h it is clear what it means, but for some reason I didn't read it that way before? (I think it was the part that said the driver didn't have to subtract for the headers that confused me?) In any case, we need to try and come up with a clear definition of what they need to be set to. 
I can now think of two ways to deal with this: 1 - Leave tcp_output() as is, but provide a macro for the device driver authors to use that sets if_hw_tsomaxsegcount with a flag for "driver uses tcp/ip header mbuf", documenting that this flag should normally be true. OR 2 - Change tcp_output() as above, noting that this is a workaround for confusion w.r.t. whether or not if_hw_tsomaxsegcount should include the tcp/ip header mbuf and update the comment in if_var.h to reflect this. Then drivers that don't use the tcp/ip header mbuf can increase their value for if_hw_tsomaxsegcount by 1. (The comment should also mention that a value of 35 or greater is much preferred to 32 if the hardware will support that.) Also, I'd like to apologize for some of my emails getting a little "blunt". I just find it frustrating that this problem is still showing up and is even in 10.2. This is partly my fault for not making it clearer to driver authors what if_hw_tsomaxsegcount should be
Hopefully we can come up with a solution that everyone is comfortable with, rick > --HPS > > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Wed Aug 19 13:20:26 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 5C75C9BE12B; Wed, 19 Aug 2015 13:20:26 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from kabab.cs.huji.ac.il (kabab.cs.huji.ac.il [132.65.116.210]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id A1BB3127F; Wed, 19 Aug 2015 13:20:25 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from chamsa.cs.huji.ac.il ([132.65.80.19]) by kabab.cs.huji.ac.il with esmtp id 1ZS3Hs-000F95-7t; Wed, 19 Aug 2015 16:20:16 +0300 Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2104\)) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Daniel Braniss In-Reply-To: <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> Date: Wed, 19 Aug 2015 16:20:15 +0300 Cc: Hans Petter Selasky , pyunyh@gmail.com, FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron Message-Id: <2BF7FA92-2DDD-452C-822C-534C0DC0B49F@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> To: Rick Macklem X-Mailer: 
Apple Mail (2.2104) Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 13:20:26 -0000 > On 19 Aug 2015, at 16:00, Rick Macklem wrote: > > Hans Petter Selasky wrote: >> On 08/19/15 09:42, Yonghyeon PYUN wrote: >>> On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: >>>> On 08/18/15 23:54, Rick Macklem wrote: >>>>> Ouch! Yes, I now see that the code that counts the # of mbufs is before >>>>> the code that adds the tcp/ip header mbuf. >>>>> >>>>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to >>>>> whatever the driver provides - 1. It is not the driver's responsibility to know if >>>>> a tcp/ip header mbuf will be added and is a lot less confusing than expecting the >>>>> driver author to know to subtract one. (I had mistakenly thought that >>>>> tcp_output() had added the tcp/ip header mbuf before the loop that counts mbufs in the >>>>> list. Btw, this tcp/ip header mbuf also has leading space for the MAC layer header.) >>>>> >>>> >>>> Hi Rick, >>>> >>>> Your question is good. With the Mellanox hardware we have separate >>>> so-called inline data space for the TCP/IP headers, so if the TCP stack >>>> subtracts something, then we would need to add something to the limit, >>>> because then the scatter gather list is only used for the data part. >>>> >>> >>> I think all drivers in tree don't subtract 1 for >>> if_hw_tsomaxsegcount. Probably touching Mellanox driver would be >>> simpler than fixing all other drivers in tree.
>>> >>>> Maybe it can be controlled by some kind of flag, if all the three TSO >>>> limits should include the TCP/IP/ethernet headers too. I'm pretty sure >>>> we want both versions. >>>> >>> >>> Hmm, I'm afraid it's already complex. Drivers have to tell almost >>> the same information to both bus_dma(9) and network stack. >> >> Don't forget that not all drivers in the tree set the TSO limits before >> if_attach(), so possibly the subtraction of one TSO fragment needs to go >> into ip_output() .... >> > Ok, I realized that some drivers may not know the answers before ether_ifattach(), > due to the way they are configured/written (I saw the use of if_hw_tsomax_update() > in the patch). > > If it is subtracted as a part of the assignment to if_hw_tsomaxsegcount in tcp_output() > at line#791 in tcp_output() like the following, I don't think it should matter if the > values are set before ether_ifattach()? > /* > * Subtract 1 for the tcp/ip header mbuf that > * will be prepended to the mbuf chain in this > * function in the code below this block. > */ > if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1; > > I don't have a good solution for the case where a driver doesn't plan on using the > tcp/ip header provided by tcp_output() except to say the driver can add one to the > setting to compensate for that (and if they fail to do so, it still works, although > somewhat suboptimally). When I now read the comment in sys/net/if_var.h it is clear > what it means, but for some reason I didn't read it that way before? (I think it was > the part that said the driver didn't have to subtract for the headers that confused me?) > In any case, we need to try and come up with a clear definition of what they need to > be set to.
> > I can now think of two ways to deal with this: > 1 - Leave tcp_output() as is, but provide a macro for the device driver authors to use > that sets if_hw_tsomaxsegcount with a flag for "driver uses tcp/ip header mbuf", > documenting that this flag should normally be true. > OR > 2 - Change tcp_output() as above, noting that this is a workaround for confusion w.r.t. > whether or not if_hw_tsomaxsegcount should include the tcp/ip header mbuf and > update the comment in if_var.h to reflect this. Then drivers that don't use the > tcp/ip header mbuf can increase their value for if_hw_tsomaxsegcount by 1. > (The comment should also mention that a value of 35 or greater is much preferred to > 32 if the hardware will support that.) > > Also, I'd like to apologize for some of my emails getting a little "blunt". I just find > it frustrating that this problem is still showing up and is even in 10.2. This is partly > my fault for not making it clearer to driver authors what if_hw_tsomaxsegcount should be > set to, because I had it incorrect. > > Hopefully we can come up with a solution that everyone is comfortable with, rick ok guys, when you have some code for me to try just let me know.
danny From owner-freebsd-net@freebsd.org Wed Aug 19 15:09:06 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 876779BC874 for ; Wed, 19 Aug 2015 15:09:06 +0000 (UTC) (envelope-from john@maxnet.ru) Received: from basic.maxnet.ru (mx.maxnet.ru [195.112.97.17]) by mx1.freebsd.org (Postfix) with ESMTP id B0B051927 for ; Wed, 19 Aug 2015 15:09:03 +0000 (UTC) (envelope-from john@maxnet.ru) Received: from [217.15.204.72] (John.Office.Obninsk.MAXnet.ru [217.15.204.72] (may be forged)) by basic.maxnet.ru (8.14.6/8.14.6) with ESMTP id t7JEhO2t071325 for ; Wed, 19 Aug 2015 17:43:25 +0300 (MSK) (envelope-from john@maxnet.ru) Message-ID: <55D49611.40603@maxnet.ru> Date: Wed, 19 Aug 2015 17:43:29 +0300 From: Evgeny Khorokhorin User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0 MIME-Version: 1.0 To: freebsd-net@freebsd.org Subject: FreeBSD 10.2-STABLE + Intel XL710 - free queues Content-Type: text/plain; charset=utf-8; format=flowed Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 15:09:06 -0000 Hi All, FreeBSD 10.2-STABLE 2*CPU Intel E5-2643v3 with HyperThreading enabled Intel XL710 network adapter I updated the ixl driver to version 1.4.0 from download.intel.com Every ixl interface creates 24 queues (6 cores * 2 HT * 2 CPUs) but utilizes only 16-17 of them. What is the reason for this behavior, or is it a driver bug?
irq284: ixl0:q0 177563088 2054 irq285: ixl0:q1 402668179 4659 irq286: ixl0:q2 408885088 4731 irq287: ixl0:q3 397744300 4602 irq288: ixl0:q4 403040766 4663 irq289: ixl0:q5 402499314 4657 irq290: ixl0:q6 392693663 4543 irq291: ixl0:q7 389364966 4505 irq292: ixl0:q8 243244346 2814 irq293: ixl0:q9 216834450 2509 irq294: ixl0:q10 229460056 2655 irq295: ixl0:q11 219591953 2540 irq296: ixl0:q12 228944960 2649 irq297: ixl0:q13 226385454 2619 irq298: ixl0:q14 219174953 2536 irq299: ixl0:q15 222151378 2570 irq300: ixl0:q16 82799713 958 irq301: ixl0:q17 6131 0 irq302: ixl0:q18 5586 0 irq303: ixl0:q19 6975 0 irq304: ixl0:q20 6243 0 irq305: ixl0:q21 6729 0 irq306: ixl0:q22 6623 0 irq307: ixl0:q23 7306 0 irq309: ixl1:q0 174074462 2014 irq310: ixl1:q1 435716449 5041 irq311: ixl1:q2 431030443 4987 irq312: ixl1:q3 424156413 4907 irq313: ixl1:q4 414791657 4799 irq314: ixl1:q5 420260382 4862 irq315: ixl1:q6 415645708 4809 irq316: ixl1:q7 422783859 4892 irq317: ixl1:q8 252737383 2924 irq318: ixl1:q9 269655708 3120 irq319: ixl1:q10 252397826 2920 irq320: ixl1:q11 255649144 2958 irq321: ixl1:q12 246025621 2846 irq322: ixl1:q13 240176554 2779 irq323: ixl1:q14 254882418 2949 irq324: ixl1:q15 236846536 2740 irq325: ixl1:q16 86794467 1004 irq326: ixl1:q17 83 0 irq327: ixl1:q18 74 0 irq328: ixl1:q19 202 0 irq329: ixl1:q20 99 0 irq330: ixl1:q21 96 0 irq331: ixl1:q22 91 0 irq332: ixl1:q23 89 0 last pid: 28710; load averages: 7.16, 6.76, 6.49 up 1+00:00:41 17:40:46 391 processes: 32 running, 215 sleeping, 144 waiting CPU 0: 0.0% user, 0.0% nice, 0.0% system, 49.2% interrupt, 50.8% idle CPU 1: 0.0% user, 0.0% nice, 0.4% system, 41.3% interrupt, 58.3% idle CPU 2: 0.0% user, 0.0% nice, 0.0% system, 39.0% interrupt, 61.0% idle CPU 3: 0.0% user, 0.0% nice, 0.0% system, 46.5% interrupt, 53.5% idle CPU 4: 0.0% user, 0.0% nice, 0.0% system, 37.4% interrupt, 62.6% idle CPU 5: 0.0% user, 0.0% nice, 0.0% system, 40.9% interrupt, 59.1% idle CPU 6: 0.0% user, 0.0% nice, 0.0% system, 40.2% interrupt, 59.8% 
idle CPU 7: 0.0% user, 0.0% nice, 0.0% system, 45.3% interrupt, 54.7% idle CPU 8: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, 79.5% idle CPU 9: 0.0% user, 0.0% nice, 0.0% system, 25.2% interrupt, 74.8% idle CPU 10: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, 76.8% idle CPU 11: 0.0% user, 0.0% nice, 0.0% system, 19.3% interrupt, 80.7% idle CPU 12: 0.0% user, 0.0% nice, 0.0% system, 28.7% interrupt, 71.3% idle CPU 13: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, 79.5% idle CPU 14: 0.0% user, 0.0% nice, 0.0% system, 35.0% interrupt, 65.0% idle CPU 15: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, 76.8% idle CPU 16: 0.0% user, 0.0% nice, 0.4% system, 1.2% interrupt, 98.4% idle CPU 17: 0.0% user, 0.0% nice, 2.0% system, 0.0% interrupt, 98.0% idle CPU 18: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, 97.6% idle CPU 19: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, 97.2% idle CPU 20: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, 97.6% idle CPU 21: 0.0% user, 0.0% nice, 1.6% system, 0.0% interrupt, 98.4% idle CPU 22: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, 97.2% idle CPU 23: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, 99.6% idle # netstat -I ixl0 -w1 -h input ixl0 output packets errs idrops bytes packets errs bytes colls 253K 0 0 136M 311K 0 325M 0 251K 0 0 129M 314K 0 334M 0 250K 0 0 135M 313K 0 333M 0 hw.ixl.tx_itr: 122 hw.ixl.rx_itr: 62 hw.ixl.dynamic_tx_itr: 0 hw.ixl.dynamic_rx_itr: 0 hw.ixl.max_queues: 0 hw.ixl.ring_size: 4096 hw.ixl.enable_msix: 1 dev.ixl.3.mac.xoff_recvd: 0 dev.ixl.3.mac.xoff_txd: 0 dev.ixl.3.mac.xon_recvd: 0 dev.ixl.3.mac.xon_txd: 0 dev.ixl.3.mac.tx_frames_big: 0 dev.ixl.3.mac.tx_frames_1024_1522: 0 dev.ixl.3.mac.tx_frames_512_1023: 0 dev.ixl.3.mac.tx_frames_256_511: 0 dev.ixl.3.mac.tx_frames_128_255: 0 dev.ixl.3.mac.tx_frames_65_127: 0 dev.ixl.3.mac.tx_frames_64: 0 dev.ixl.3.mac.checksum_errors: 0 dev.ixl.3.mac.rx_jabber: 0 dev.ixl.3.mac.rx_oversized: 0 dev.ixl.3.mac.rx_fragmented: 0 
dev.ixl.3.mac.rx_undersize: 0 dev.ixl.3.mac.rx_frames_big: 0 dev.ixl.3.mac.rx_frames_1024_1522: 0 dev.ixl.3.mac.rx_frames_512_1023: 0 dev.ixl.3.mac.rx_frames_256_511: 0 dev.ixl.3.mac.rx_frames_128_255: 0 dev.ixl.3.mac.rx_frames_65_127: 0 dev.ixl.3.mac.rx_frames_64: 0 dev.ixl.3.mac.rx_length_errors: 0 dev.ixl.3.mac.remote_faults: 0 dev.ixl.3.mac.local_faults: 0 dev.ixl.3.mac.illegal_bytes: 0 dev.ixl.3.mac.crc_errors: 0 dev.ixl.3.mac.bcast_pkts_txd: 0 dev.ixl.3.mac.mcast_pkts_txd: 0 dev.ixl.3.mac.ucast_pkts_txd: 0 dev.ixl.3.mac.good_octets_txd: 0 dev.ixl.3.mac.rx_discards: 0 dev.ixl.3.mac.bcast_pkts_rcvd: 0 dev.ixl.3.mac.mcast_pkts_rcvd: 0 dev.ixl.3.mac.ucast_pkts_rcvd: 0 dev.ixl.3.mac.good_octets_rcvd: 0 dev.ixl.3.pf.que23.rx_bytes: 0 dev.ixl.3.pf.que23.rx_packets: 0 dev.ixl.3.pf.que23.tx_bytes: 0 dev.ixl.3.pf.que23.tx_packets: 0 dev.ixl.3.pf.que23.no_desc_avail: 0 dev.ixl.3.pf.que23.tx_dma_setup: 0 dev.ixl.3.pf.que23.tso_tx: 0 dev.ixl.3.pf.que23.irqs: 0 dev.ixl.3.pf.que23.dropped: 0 dev.ixl.3.pf.que23.mbuf_defrag_failed: 0 dev.ixl.3.pf.que22.rx_bytes: 0 dev.ixl.3.pf.que22.rx_packets: 0 dev.ixl.3.pf.que22.tx_bytes: 0 dev.ixl.3.pf.que22.tx_packets: 0 dev.ixl.3.pf.que22.no_desc_avail: 0 dev.ixl.3.pf.que22.tx_dma_setup: 0 dev.ixl.3.pf.que22.tso_tx: 0 dev.ixl.3.pf.que22.irqs: 0 dev.ixl.3.pf.que22.dropped: 0 dev.ixl.3.pf.que22.mbuf_defrag_failed: 0 dev.ixl.3.pf.que21.rx_bytes: 0 dev.ixl.3.pf.que21.rx_packets: 0 dev.ixl.3.pf.que21.tx_bytes: 0 dev.ixl.3.pf.que21.tx_packets: 0 dev.ixl.3.pf.que21.no_desc_avail: 0 dev.ixl.3.pf.que21.tx_dma_setup: 0 dev.ixl.3.pf.que21.tso_tx: 0 dev.ixl.3.pf.que21.irqs: 0 dev.ixl.3.pf.que21.dropped: 0 dev.ixl.3.pf.que21.mbuf_defrag_failed: 0 dev.ixl.3.pf.que20.rx_bytes: 0 dev.ixl.3.pf.que20.rx_packets: 0 dev.ixl.3.pf.que20.tx_bytes: 0 dev.ixl.3.pf.que20.tx_packets: 0 dev.ixl.3.pf.que20.no_desc_avail: 0 dev.ixl.3.pf.que20.tx_dma_setup: 0 dev.ixl.3.pf.que20.tso_tx: 0 dev.ixl.3.pf.que20.irqs: 0 dev.ixl.3.pf.que20.dropped: 0 
dev.ixl.3.pf.que20.mbuf_defrag_failed: 0 dev.ixl.3.pf.que19.rx_bytes: 0 dev.ixl.3.pf.que19.rx_packets: 0 dev.ixl.3.pf.que19.tx_bytes: 0 dev.ixl.3.pf.que19.tx_packets: 0 dev.ixl.3.pf.que19.no_desc_avail: 0 dev.ixl.3.pf.que19.tx_dma_setup: 0 dev.ixl.3.pf.que19.tso_tx: 0 dev.ixl.3.pf.que19.irqs: 0 dev.ixl.3.pf.que19.dropped: 0 dev.ixl.3.pf.que19.mbuf_defrag_failed: 0 dev.ixl.3.pf.que18.rx_bytes: 0 dev.ixl.3.pf.que18.rx_packets: 0 dev.ixl.3.pf.que18.tx_bytes: 0 dev.ixl.3.pf.que18.tx_packets: 0 dev.ixl.3.pf.que18.no_desc_avail: 0 dev.ixl.3.pf.que18.tx_dma_setup: 0 dev.ixl.3.pf.que18.tso_tx: 0 dev.ixl.3.pf.que18.irqs: 0 dev.ixl.3.pf.que18.dropped: 0 dev.ixl.3.pf.que18.mbuf_defrag_failed: 0 dev.ixl.3.pf.que17.rx_bytes: 0 dev.ixl.3.pf.que17.rx_packets: 0 dev.ixl.3.pf.que17.tx_bytes: 0 dev.ixl.3.pf.que17.tx_packets: 0 dev.ixl.3.pf.que17.no_desc_avail: 0 dev.ixl.3.pf.que17.tx_dma_setup: 0 dev.ixl.3.pf.que17.tso_tx: 0 dev.ixl.3.pf.que17.irqs: 0 dev.ixl.3.pf.que17.dropped: 0 dev.ixl.3.pf.que17.mbuf_defrag_failed: 0 dev.ixl.3.pf.que16.rx_bytes: 0 dev.ixl.3.pf.que16.rx_packets: 0 dev.ixl.3.pf.que16.tx_bytes: 0 dev.ixl.3.pf.que16.tx_packets: 0 dev.ixl.3.pf.que16.no_desc_avail: 0 dev.ixl.3.pf.que16.tx_dma_setup: 0 dev.ixl.3.pf.que16.tso_tx: 0 dev.ixl.3.pf.que16.irqs: 0 dev.ixl.3.pf.que16.dropped: 0 dev.ixl.3.pf.que16.mbuf_defrag_failed: 0 dev.ixl.3.pf.que15.rx_bytes: 0 dev.ixl.3.pf.que15.rx_packets: 0 dev.ixl.3.pf.que15.tx_bytes: 0 dev.ixl.3.pf.que15.tx_packets: 0 dev.ixl.3.pf.que15.no_desc_avail: 0 dev.ixl.3.pf.que15.tx_dma_setup: 0 dev.ixl.3.pf.que15.tso_tx: 0 dev.ixl.3.pf.que15.irqs: 0 dev.ixl.3.pf.que15.dropped: 0 dev.ixl.3.pf.que15.mbuf_defrag_failed: 0 dev.ixl.3.pf.que14.rx_bytes: 0 dev.ixl.3.pf.que14.rx_packets: 0 dev.ixl.3.pf.que14.tx_bytes: 0 dev.ixl.3.pf.que14.tx_packets: 0 dev.ixl.3.pf.que14.no_desc_avail: 0 dev.ixl.3.pf.que14.tx_dma_setup: 0 dev.ixl.3.pf.que14.tso_tx: 0 dev.ixl.3.pf.que14.irqs: 0 dev.ixl.3.pf.que14.dropped: 0 dev.ixl.3.pf.que14.mbuf_defrag_failed: 0 
dev.ixl.3.pf.que13.rx_bytes: 0 dev.ixl.3.pf.que13.rx_packets: 0 dev.ixl.3.pf.que13.tx_bytes: 0 dev.ixl.3.pf.que13.tx_packets: 0 dev.ixl.3.pf.que13.no_desc_avail: 0 dev.ixl.3.pf.que13.tx_dma_setup: 0 dev.ixl.3.pf.que13.tso_tx: 0 dev.ixl.3.pf.que13.irqs: 0 dev.ixl.3.pf.que13.dropped: 0 dev.ixl.3.pf.que13.mbuf_defrag_failed: 0 dev.ixl.3.pf.que12.rx_bytes: 0 dev.ixl.3.pf.que12.rx_packets: 0 dev.ixl.3.pf.que12.tx_bytes: 0 dev.ixl.3.pf.que12.tx_packets: 0 dev.ixl.3.pf.que12.no_desc_avail: 0 dev.ixl.3.pf.que12.tx_dma_setup: 0 dev.ixl.3.pf.que12.tso_tx: 0 dev.ixl.3.pf.que12.irqs: 0 dev.ixl.3.pf.que12.dropped: 0 dev.ixl.3.pf.que12.mbuf_defrag_failed: 0 dev.ixl.3.pf.que11.rx_bytes: 0 dev.ixl.3.pf.que11.rx_packets: 0 dev.ixl.3.pf.que11.tx_bytes: 0 dev.ixl.3.pf.que11.tx_packets: 0 dev.ixl.3.pf.que11.no_desc_avail: 0 dev.ixl.3.pf.que11.tx_dma_setup: 0 dev.ixl.3.pf.que11.tso_tx: 0 dev.ixl.3.pf.que11.irqs: 0 dev.ixl.3.pf.que11.dropped: 0 dev.ixl.3.pf.que11.mbuf_defrag_failed: 0 dev.ixl.3.pf.que10.rx_bytes: 0 dev.ixl.3.pf.que10.rx_packets: 0 dev.ixl.3.pf.que10.tx_bytes: 0 dev.ixl.3.pf.que10.tx_packets: 0 dev.ixl.3.pf.que10.no_desc_avail: 0 dev.ixl.3.pf.que10.tx_dma_setup: 0 dev.ixl.3.pf.que10.tso_tx: 0 dev.ixl.3.pf.que10.irqs: 0 dev.ixl.3.pf.que10.dropped: 0 dev.ixl.3.pf.que10.mbuf_defrag_failed: 0 dev.ixl.3.pf.que9.rx_bytes: 0 dev.ixl.3.pf.que9.rx_packets: 0 dev.ixl.3.pf.que9.tx_bytes: 0 dev.ixl.3.pf.que9.tx_packets: 0 dev.ixl.3.pf.que9.no_desc_avail: 0 dev.ixl.3.pf.que9.tx_dma_setup: 0 dev.ixl.3.pf.que9.tso_tx: 0 dev.ixl.3.pf.que9.irqs: 0 dev.ixl.3.pf.que9.dropped: 0 dev.ixl.3.pf.que9.mbuf_defrag_failed: 0 dev.ixl.3.pf.que8.rx_bytes: 0 dev.ixl.3.pf.que8.rx_packets: 0 dev.ixl.3.pf.que8.tx_bytes: 0 dev.ixl.3.pf.que8.tx_packets: 0 dev.ixl.3.pf.que8.no_desc_avail: 0 dev.ixl.3.pf.que8.tx_dma_setup: 0 dev.ixl.3.pf.que8.tso_tx: 0 dev.ixl.3.pf.que8.irqs: 0 dev.ixl.3.pf.que8.dropped: 0 dev.ixl.3.pf.que8.mbuf_defrag_failed: 0 dev.ixl.3.pf.que7.rx_bytes: 0 dev.ixl.3.pf.que7.rx_packets: 0 
dev.ixl.3.pf.que7.tx_bytes: 0 dev.ixl.3.pf.que7.tx_packets: 0 dev.ixl.3.pf.que7.no_desc_avail: 0 dev.ixl.3.pf.que7.tx_dma_setup: 0 dev.ixl.3.pf.que7.tso_tx: 0 dev.ixl.3.pf.que7.irqs: 0 dev.ixl.3.pf.que7.dropped: 0 dev.ixl.3.pf.que7.mbuf_defrag_failed: 0 dev.ixl.3.pf.que6.rx_bytes: 0 dev.ixl.3.pf.que6.rx_packets: 0 dev.ixl.3.pf.que6.tx_bytes: 0 dev.ixl.3.pf.que6.tx_packets: 0 dev.ixl.3.pf.que6.no_desc_avail: 0 dev.ixl.3.pf.que6.tx_dma_setup: 0 dev.ixl.3.pf.que6.tso_tx: 0 dev.ixl.3.pf.que6.irqs: 0 dev.ixl.3.pf.que6.dropped: 0 dev.ixl.3.pf.que6.mbuf_defrag_failed: 0 dev.ixl.3.pf.que5.rx_bytes: 0 dev.ixl.3.pf.que5.rx_packets: 0 dev.ixl.3.pf.que5.tx_bytes: 0 dev.ixl.3.pf.que5.tx_packets: 0 dev.ixl.3.pf.que5.no_desc_avail: 0 dev.ixl.3.pf.que5.tx_dma_setup: 0 dev.ixl.3.pf.que5.tso_tx: 0 dev.ixl.3.pf.que5.irqs: 0 dev.ixl.3.pf.que5.dropped: 0 dev.ixl.3.pf.que5.mbuf_defrag_failed: 0 dev.ixl.3.pf.que4.rx_bytes: 0 dev.ixl.3.pf.que4.rx_packets: 0 dev.ixl.3.pf.que4.tx_bytes: 0 dev.ixl.3.pf.que4.tx_packets: 0 dev.ixl.3.pf.que4.no_desc_avail: 0 dev.ixl.3.pf.que4.tx_dma_setup: 0 dev.ixl.3.pf.que4.tso_tx: 0 dev.ixl.3.pf.que4.irqs: 0 dev.ixl.3.pf.que4.dropped: 0 dev.ixl.3.pf.que4.mbuf_defrag_failed: 0 dev.ixl.3.pf.que3.rx_bytes: 0 dev.ixl.3.pf.que3.rx_packets: 0 dev.ixl.3.pf.que3.tx_bytes: 0 dev.ixl.3.pf.que3.tx_packets: 0 dev.ixl.3.pf.que3.no_desc_avail: 0 dev.ixl.3.pf.que3.tx_dma_setup: 0 dev.ixl.3.pf.que3.tso_tx: 0 dev.ixl.3.pf.que3.irqs: 0 dev.ixl.3.pf.que3.dropped: 0 dev.ixl.3.pf.que3.mbuf_defrag_failed: 0 dev.ixl.3.pf.que2.rx_bytes: 0 dev.ixl.3.pf.que2.rx_packets: 0 dev.ixl.3.pf.que2.tx_bytes: 0 dev.ixl.3.pf.que2.tx_packets: 0 dev.ixl.3.pf.que2.no_desc_avail: 0 dev.ixl.3.pf.que2.tx_dma_setup: 0 dev.ixl.3.pf.que2.tso_tx: 0 dev.ixl.3.pf.que2.irqs: 0 dev.ixl.3.pf.que2.dropped: 0 dev.ixl.3.pf.que2.mbuf_defrag_failed: 0 dev.ixl.3.pf.que1.rx_bytes: 0 dev.ixl.3.pf.que1.rx_packets: 0 dev.ixl.3.pf.que1.tx_bytes: 0 dev.ixl.3.pf.que1.tx_packets: 0 dev.ixl.3.pf.que1.no_desc_avail: 0 
dev.ixl.3.pf.que1.tx_dma_setup: 0 dev.ixl.3.pf.que1.tso_tx: 0 dev.ixl.3.pf.que1.irqs: 0 dev.ixl.3.pf.que1.dropped: 0 dev.ixl.3.pf.que1.mbuf_defrag_failed: 0 dev.ixl.3.pf.que0.rx_bytes: 0 dev.ixl.3.pf.que0.rx_packets: 0 dev.ixl.3.pf.que0.tx_bytes: 0 dev.ixl.3.pf.que0.tx_packets: 0 dev.ixl.3.pf.que0.no_desc_avail: 0 dev.ixl.3.pf.que0.tx_dma_setup: 0 dev.ixl.3.pf.que0.tso_tx: 0 dev.ixl.3.pf.que0.irqs: 0 dev.ixl.3.pf.que0.dropped: 0 dev.ixl.3.pf.que0.mbuf_defrag_failed: 0 dev.ixl.3.pf.bcast_pkts_txd: 0 dev.ixl.3.pf.mcast_pkts_txd: 0 dev.ixl.3.pf.ucast_pkts_txd: 0 dev.ixl.3.pf.good_octets_txd: 0 dev.ixl.3.pf.rx_discards: 0 dev.ixl.3.pf.bcast_pkts_rcvd: 0 dev.ixl.3.pf.mcast_pkts_rcvd: 0 dev.ixl.3.pf.ucast_pkts_rcvd: 0 dev.ixl.3.pf.good_octets_rcvd: 0 dev.ixl.3.vc_debug_level: 1 dev.ixl.3.admin_irq: 0 dev.ixl.3.watchdog_events: 0 dev.ixl.3.debug: 0 dev.ixl.3.dynamic_tx_itr: 0 dev.ixl.3.tx_itr: 122 dev.ixl.3.dynamic_rx_itr: 0 dev.ixl.3.rx_itr: 62 dev.ixl.3.fw_version: f4.33 a1.2 n04.42 e8000191d dev.ixl.3.current_speed: Unknown dev.ixl.3.advertise_speed: 0 dev.ixl.3.fc: 0 dev.ixl.3.%parent: pci129 dev.ixl.3.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000 dev.ixl.3.%location: slot=0 function=3 handle=\_SB_.PCI1.QR3A.H003 dev.ixl.3.%driver: ixl dev.ixl.3.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0 dev.ixl.2.mac.xoff_recvd: 0 dev.ixl.2.mac.xoff_txd: 0 dev.ixl.2.mac.xon_recvd: 0 dev.ixl.2.mac.xon_txd: 0 dev.ixl.2.mac.tx_frames_big: 0 dev.ixl.2.mac.tx_frames_1024_1522: 0 dev.ixl.2.mac.tx_frames_512_1023: 0 dev.ixl.2.mac.tx_frames_256_511: 0 dev.ixl.2.mac.tx_frames_128_255: 0 dev.ixl.2.mac.tx_frames_65_127: 0 dev.ixl.2.mac.tx_frames_64: 0 dev.ixl.2.mac.checksum_errors: 0 dev.ixl.2.mac.rx_jabber: 0 dev.ixl.2.mac.rx_oversized: 0 dev.ixl.2.mac.rx_fragmented: 0 dev.ixl.2.mac.rx_undersize: 0 dev.ixl.2.mac.rx_frames_big: 0 dev.ixl.2.mac.rx_frames_1024_1522: 0 dev.ixl.2.mac.rx_frames_512_1023: 0 
dev.ixl.2.mac.rx_frames_256_511: 0 dev.ixl.2.mac.rx_frames_128_255: 0 dev.ixl.2.mac.rx_frames_65_127: 0 dev.ixl.2.mac.rx_frames_64: 0 dev.ixl.2.mac.rx_length_errors: 0 dev.ixl.2.mac.remote_faults: 0 dev.ixl.2.mac.local_faults: 0 dev.ixl.2.mac.illegal_bytes: 0 dev.ixl.2.mac.crc_errors: 0 dev.ixl.2.mac.bcast_pkts_txd: 0 dev.ixl.2.mac.mcast_pkts_txd: 0 dev.ixl.2.mac.ucast_pkts_txd: 0 dev.ixl.2.mac.good_octets_txd: 0 dev.ixl.2.mac.rx_discards: 0 dev.ixl.2.mac.bcast_pkts_rcvd: 0 dev.ixl.2.mac.mcast_pkts_rcvd: 0 dev.ixl.2.mac.ucast_pkts_rcvd: 0 dev.ixl.2.mac.good_octets_rcvd: 0 dev.ixl.2.pf.que23.rx_bytes: 0 dev.ixl.2.pf.que23.rx_packets: 0 dev.ixl.2.pf.que23.tx_bytes: 0 dev.ixl.2.pf.que23.tx_packets: 0 dev.ixl.2.pf.que23.no_desc_avail: 0 dev.ixl.2.pf.que23.tx_dma_setup: 0 dev.ixl.2.pf.que23.tso_tx: 0 dev.ixl.2.pf.que23.irqs: 0 dev.ixl.2.pf.que23.dropped: 0 dev.ixl.2.pf.que23.mbuf_defrag_failed: 0 dev.ixl.2.pf.que22.rx_bytes: 0 dev.ixl.2.pf.que22.rx_packets: 0 dev.ixl.2.pf.que22.tx_bytes: 0 dev.ixl.2.pf.que22.tx_packets: 0 dev.ixl.2.pf.que22.no_desc_avail: 0 dev.ixl.2.pf.que22.tx_dma_setup: 0 dev.ixl.2.pf.que22.tso_tx: 0 dev.ixl.2.pf.que22.irqs: 0 dev.ixl.2.pf.que22.dropped: 0 dev.ixl.2.pf.que22.mbuf_defrag_failed: 0 dev.ixl.2.pf.que21.rx_bytes: 0 dev.ixl.2.pf.que21.rx_packets: 0 dev.ixl.2.pf.que21.tx_bytes: 0 dev.ixl.2.pf.que21.tx_packets: 0 dev.ixl.2.pf.que21.no_desc_avail: 0 dev.ixl.2.pf.que21.tx_dma_setup: 0 dev.ixl.2.pf.que21.tso_tx: 0 dev.ixl.2.pf.que21.irqs: 0 dev.ixl.2.pf.que21.dropped: 0 dev.ixl.2.pf.que21.mbuf_defrag_failed: 0 dev.ixl.2.pf.que20.rx_bytes: 0 dev.ixl.2.pf.que20.rx_packets: 0 dev.ixl.2.pf.que20.tx_bytes: 0 dev.ixl.2.pf.que20.tx_packets: 0 dev.ixl.2.pf.que20.no_desc_avail: 0 dev.ixl.2.pf.que20.tx_dma_setup: 0 dev.ixl.2.pf.que20.tso_tx: 0 dev.ixl.2.pf.que20.irqs: 0 dev.ixl.2.pf.que20.dropped: 0 dev.ixl.2.pf.que20.mbuf_defrag_failed: 0 dev.ixl.2.pf.que19.rx_bytes: 0 dev.ixl.2.pf.que19.rx_packets: 0 dev.ixl.2.pf.que19.tx_bytes: 0 
dev.ixl.2.pf.que19.tx_packets: 0 dev.ixl.2.pf.que19.no_desc_avail: 0 dev.ixl.2.pf.que19.tx_dma_setup: 0 dev.ixl.2.pf.que19.tso_tx: 0 dev.ixl.2.pf.que19.irqs: 0 dev.ixl.2.pf.que19.dropped: 0 dev.ixl.2.pf.que19.mbuf_defrag_failed: 0 dev.ixl.2.pf.que18.rx_bytes: 0 dev.ixl.2.pf.que18.rx_packets: 0 dev.ixl.2.pf.que18.tx_bytes: 0 dev.ixl.2.pf.que18.tx_packets: 0 dev.ixl.2.pf.que18.no_desc_avail: 0 dev.ixl.2.pf.que18.tx_dma_setup: 0 dev.ixl.2.pf.que18.tso_tx: 0 dev.ixl.2.pf.que18.irqs: 0 dev.ixl.2.pf.que18.dropped: 0 dev.ixl.2.pf.que18.mbuf_defrag_failed: 0 dev.ixl.2.pf.que17.rx_bytes: 0 dev.ixl.2.pf.que17.rx_packets: 0 dev.ixl.2.pf.que17.tx_bytes: 0 dev.ixl.2.pf.que17.tx_packets: 0 dev.ixl.2.pf.que17.no_desc_avail: 0 dev.ixl.2.pf.que17.tx_dma_setup: 0 dev.ixl.2.pf.que17.tso_tx: 0 dev.ixl.2.pf.que17.irqs: 0 dev.ixl.2.pf.que17.dropped: 0 dev.ixl.2.pf.que17.mbuf_defrag_failed: 0 dev.ixl.2.pf.que16.rx_bytes: 0 dev.ixl.2.pf.que16.rx_packets: 0 dev.ixl.2.pf.que16.tx_bytes: 0 dev.ixl.2.pf.que16.tx_packets: 0 dev.ixl.2.pf.que16.no_desc_avail: 0 dev.ixl.2.pf.que16.tx_dma_setup: 0 dev.ixl.2.pf.que16.tso_tx: 0 dev.ixl.2.pf.que16.irqs: 0 dev.ixl.2.pf.que16.dropped: 0 dev.ixl.2.pf.que16.mbuf_defrag_failed: 0 dev.ixl.2.pf.que15.rx_bytes: 0 dev.ixl.2.pf.que15.rx_packets: 0 dev.ixl.2.pf.que15.tx_bytes: 0 dev.ixl.2.pf.que15.tx_packets: 0 dev.ixl.2.pf.que15.no_desc_avail: 0 dev.ixl.2.pf.que15.tx_dma_setup: 0 dev.ixl.2.pf.que15.tso_tx: 0 dev.ixl.2.pf.que15.irqs: 0 dev.ixl.2.pf.que15.dropped: 0 dev.ixl.2.pf.que15.mbuf_defrag_failed: 0 dev.ixl.2.pf.que14.rx_bytes: 0 dev.ixl.2.pf.que14.rx_packets: 0 dev.ixl.2.pf.que14.tx_bytes: 0 dev.ixl.2.pf.que14.tx_packets: 0 dev.ixl.2.pf.que14.no_desc_avail: 0 dev.ixl.2.pf.que14.tx_dma_setup: 0 dev.ixl.2.pf.que14.tso_tx: 0 dev.ixl.2.pf.que14.irqs: 0 dev.ixl.2.pf.que14.dropped: 0 dev.ixl.2.pf.que14.mbuf_defrag_failed: 0 dev.ixl.2.pf.que13.rx_bytes: 0 dev.ixl.2.pf.que13.rx_packets: 0 dev.ixl.2.pf.que13.tx_bytes: 0 dev.ixl.2.pf.que13.tx_packets: 0 
dev.ixl.2.pf.que13.no_desc_avail: 0 dev.ixl.2.pf.que13.tx_dma_setup: 0 dev.ixl.2.pf.que13.tso_tx: 0 dev.ixl.2.pf.que13.irqs: 0 dev.ixl.2.pf.que13.dropped: 0 dev.ixl.2.pf.que13.mbuf_defrag_failed: 0 dev.ixl.2.pf.que12.rx_bytes: 0 dev.ixl.2.pf.que12.rx_packets: 0 dev.ixl.2.pf.que12.tx_bytes: 0 dev.ixl.2.pf.que12.tx_packets: 0 dev.ixl.2.pf.que12.no_desc_avail: 0 dev.ixl.2.pf.que12.tx_dma_setup: 0 dev.ixl.2.pf.que12.tso_tx: 0 dev.ixl.2.pf.que12.irqs: 0 dev.ixl.2.pf.que12.dropped: 0 dev.ixl.2.pf.que12.mbuf_defrag_failed: 0 dev.ixl.2.pf.que11.rx_bytes: 0 dev.ixl.2.pf.que11.rx_packets: 0 dev.ixl.2.pf.que11.tx_bytes: 0 dev.ixl.2.pf.que11.tx_packets: 0 dev.ixl.2.pf.que11.no_desc_avail: 0 dev.ixl.2.pf.que11.tx_dma_setup: 0 dev.ixl.2.pf.que11.tso_tx: 0 dev.ixl.2.pf.que11.irqs: 0 dev.ixl.2.pf.que11.dropped: 0 dev.ixl.2.pf.que11.mbuf_defrag_failed: 0 dev.ixl.2.pf.que10.rx_bytes: 0 dev.ixl.2.pf.que10.rx_packets: 0 dev.ixl.2.pf.que10.tx_bytes: 0 dev.ixl.2.pf.que10.tx_packets: 0 dev.ixl.2.pf.que10.no_desc_avail: 0 dev.ixl.2.pf.que10.tx_dma_setup: 0 dev.ixl.2.pf.que10.tso_tx: 0 dev.ixl.2.pf.que10.irqs: 0 dev.ixl.2.pf.que10.dropped: 0 dev.ixl.2.pf.que10.mbuf_defrag_failed: 0 dev.ixl.2.pf.que9.rx_bytes: 0 dev.ixl.2.pf.que9.rx_packets: 0 dev.ixl.2.pf.que9.tx_bytes: 0 dev.ixl.2.pf.que9.tx_packets: 0 dev.ixl.2.pf.que9.no_desc_avail: 0 dev.ixl.2.pf.que9.tx_dma_setup: 0 dev.ixl.2.pf.que9.tso_tx: 0 dev.ixl.2.pf.que9.irqs: 0 dev.ixl.2.pf.que9.dropped: 0 dev.ixl.2.pf.que9.mbuf_defrag_failed: 0 dev.ixl.2.pf.que8.rx_bytes: 0 dev.ixl.2.pf.que8.rx_packets: 0 dev.ixl.2.pf.que8.tx_bytes: 0 dev.ixl.2.pf.que8.tx_packets: 0 dev.ixl.2.pf.que8.no_desc_avail: 0 dev.ixl.2.pf.que8.tx_dma_setup: 0 dev.ixl.2.pf.que8.tso_tx: 0 dev.ixl.2.pf.que8.irqs: 0 dev.ixl.2.pf.que8.dropped: 0 dev.ixl.2.pf.que8.mbuf_defrag_failed: 0 dev.ixl.2.pf.que7.rx_bytes: 0 dev.ixl.2.pf.que7.rx_packets: 0 dev.ixl.2.pf.que7.tx_bytes: 0 dev.ixl.2.pf.que7.tx_packets: 0 dev.ixl.2.pf.que7.no_desc_avail: 0 dev.ixl.2.pf.que7.tx_dma_setup: 
0 dev.ixl.2.pf.que7.tso_tx: 0 dev.ixl.2.pf.que7.irqs: 0 dev.ixl.2.pf.que7.dropped: 0 dev.ixl.2.pf.que7.mbuf_defrag_failed: 0 dev.ixl.2.pf.que6.rx_bytes: 0 dev.ixl.2.pf.que6.rx_packets: 0 dev.ixl.2.pf.que6.tx_bytes: 0 dev.ixl.2.pf.que6.tx_packets: 0 dev.ixl.2.pf.que6.no_desc_avail: 0 dev.ixl.2.pf.que6.tx_dma_setup: 0 dev.ixl.2.pf.que6.tso_tx: 0 dev.ixl.2.pf.que6.irqs: 0 dev.ixl.2.pf.que6.dropped: 0 dev.ixl.2.pf.que6.mbuf_defrag_failed: 0 dev.ixl.2.pf.que5.rx_bytes: 0 dev.ixl.2.pf.que5.rx_packets: 0 dev.ixl.2.pf.que5.tx_bytes: 0 dev.ixl.2.pf.que5.tx_packets: 0 dev.ixl.2.pf.que5.no_desc_avail: 0 dev.ixl.2.pf.que5.tx_dma_setup: 0 dev.ixl.2.pf.que5.tso_tx: 0 dev.ixl.2.pf.que5.irqs: 0 dev.ixl.2.pf.que5.dropped: 0 dev.ixl.2.pf.que5.mbuf_defrag_failed: 0 dev.ixl.2.pf.que4.rx_bytes: 0 dev.ixl.2.pf.que4.rx_packets: 0 dev.ixl.2.pf.que4.tx_bytes: 0 dev.ixl.2.pf.que4.tx_packets: 0 dev.ixl.2.pf.que4.no_desc_avail: 0 dev.ixl.2.pf.que4.tx_dma_setup: 0 dev.ixl.2.pf.que4.tso_tx: 0 dev.ixl.2.pf.que4.irqs: 0 dev.ixl.2.pf.que4.dropped: 0 dev.ixl.2.pf.que4.mbuf_defrag_failed: 0 dev.ixl.2.pf.que3.rx_bytes: 0 dev.ixl.2.pf.que3.rx_packets: 0 dev.ixl.2.pf.que3.tx_bytes: 0 dev.ixl.2.pf.que3.tx_packets: 0 dev.ixl.2.pf.que3.no_desc_avail: 0 dev.ixl.2.pf.que3.tx_dma_setup: 0 dev.ixl.2.pf.que3.tso_tx: 0 dev.ixl.2.pf.que3.irqs: 0 dev.ixl.2.pf.que3.dropped: 0 dev.ixl.2.pf.que3.mbuf_defrag_failed: 0 dev.ixl.2.pf.que2.rx_bytes: 0 dev.ixl.2.pf.que2.rx_packets: 0 dev.ixl.2.pf.que2.tx_bytes: 0 dev.ixl.2.pf.que2.tx_packets: 0 dev.ixl.2.pf.que2.no_desc_avail: 0 dev.ixl.2.pf.que2.tx_dma_setup: 0 dev.ixl.2.pf.que2.tso_tx: 0 dev.ixl.2.pf.que2.irqs: 0 dev.ixl.2.pf.que2.dropped: 0 dev.ixl.2.pf.que2.mbuf_defrag_failed: 0 dev.ixl.2.pf.que1.rx_bytes: 0 dev.ixl.2.pf.que1.rx_packets: 0 dev.ixl.2.pf.que1.tx_bytes: 0 dev.ixl.2.pf.que1.tx_packets: 0 dev.ixl.2.pf.que1.no_desc_avail: 0 dev.ixl.2.pf.que1.tx_dma_setup: 0 dev.ixl.2.pf.que1.tso_tx: 0 dev.ixl.2.pf.que1.irqs: 0 dev.ixl.2.pf.que1.dropped: 0 
dev.ixl.2.pf.que1.mbuf_defrag_failed: 0 dev.ixl.2.pf.que0.rx_bytes: 0 dev.ixl.2.pf.que0.rx_packets: 0 dev.ixl.2.pf.que0.tx_bytes: 0 dev.ixl.2.pf.que0.tx_packets: 0 dev.ixl.2.pf.que0.no_desc_avail: 0 dev.ixl.2.pf.que0.tx_dma_setup: 0 dev.ixl.2.pf.que0.tso_tx: 0 dev.ixl.2.pf.que0.irqs: 0 dev.ixl.2.pf.que0.dropped: 0 dev.ixl.2.pf.que0.mbuf_defrag_failed: 0 dev.ixl.2.pf.bcast_pkts_txd: 0 dev.ixl.2.pf.mcast_pkts_txd: 0 dev.ixl.2.pf.ucast_pkts_txd: 0 dev.ixl.2.pf.good_octets_txd: 0 dev.ixl.2.pf.rx_discards: 0 dev.ixl.2.pf.bcast_pkts_rcvd: 0 dev.ixl.2.pf.mcast_pkts_rcvd: 0 dev.ixl.2.pf.ucast_pkts_rcvd: 0 dev.ixl.2.pf.good_octets_rcvd: 0 dev.ixl.2.vc_debug_level: 1 dev.ixl.2.admin_irq: 0 dev.ixl.2.watchdog_events: 0 dev.ixl.2.debug: 0 dev.ixl.2.dynamic_tx_itr: 0 dev.ixl.2.tx_itr: 122 dev.ixl.2.dynamic_rx_itr: 0 dev.ixl.2.rx_itr: 62 dev.ixl.2.fw_version: f4.33 a1.2 n04.42 e8000191d dev.ixl.2.current_speed: Unknown dev.ixl.2.advertise_speed: 0 dev.ixl.2.fc: 0 dev.ixl.2.%parent: pci129 dev.ixl.2.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000 dev.ixl.2.%location: slot=0 function=2 handle=\_SB_.PCI1.QR3A.H002 dev.ixl.2.%driver: ixl dev.ixl.2.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0 dev.ixl.1.mac.xoff_recvd: 0 dev.ixl.1.mac.xoff_txd: 0 dev.ixl.1.mac.xon_recvd: 0 dev.ixl.1.mac.xon_txd: 0 dev.ixl.1.mac.tx_frames_big: 0 dev.ixl.1.mac.tx_frames_1024_1522: 1565670684 dev.ixl.1.mac.tx_frames_512_1023: 101286418 dev.ixl.1.mac.tx_frames_256_511: 49713129 dev.ixl.1.mac.tx_frames_128_255: 231617277 dev.ixl.1.mac.tx_frames_65_127: 2052767669 dev.ixl.1.mac.tx_frames_64: 1318689044 dev.ixl.1.mac.checksum_errors: 0 dev.ixl.1.mac.rx_jabber: 0 dev.ixl.1.mac.rx_oversized: 0 dev.ixl.1.mac.rx_fragmented: 0 dev.ixl.1.mac.rx_undersize: 0 dev.ixl.1.mac.rx_frames_big: 0 dev.ixl.1.mac.rx_frames_1024_1522: 4960403414 dev.ixl.1.mac.rx_frames_512_1023: 113675084 dev.ixl.1.mac.rx_frames_256_511: 253904920 dev.ixl.1.mac.rx_frames_128_255: 
196369726 dev.ixl.1.mac.rx_frames_65_127: 1436626211 dev.ixl.1.mac.rx_frames_64: 242768681 dev.ixl.1.mac.rx_length_errors: 0 dev.ixl.1.mac.remote_faults: 0 dev.ixl.1.mac.local_faults: 0 dev.ixl.1.mac.illegal_bytes: 0 dev.ixl.1.mac.crc_errors: 0 dev.ixl.1.mac.bcast_pkts_txd: 277 dev.ixl.1.mac.mcast_pkts_txd: 0 dev.ixl.1.mac.ucast_pkts_txd: 5319743942 dev.ixl.1.mac.good_octets_txd: 2642351885737 dev.ixl.1.mac.rx_discards: 0 dev.ixl.1.mac.bcast_pkts_rcvd: 5 dev.ixl.1.mac.mcast_pkts_rcvd: 144 dev.ixl.1.mac.ucast_pkts_rcvd: 7203747879 dev.ixl.1.mac.good_octets_rcvd: 7770230492434 dev.ixl.1.pf.que23.rx_bytes: 0 dev.ixl.1.pf.que23.rx_packets: 0 dev.ixl.1.pf.que23.tx_bytes: 7111 dev.ixl.1.pf.que23.tx_packets: 88 dev.ixl.1.pf.que23.no_desc_avail: 0 dev.ixl.1.pf.que23.tx_dma_setup: 0 dev.ixl.1.pf.que23.tso_tx: 0 dev.ixl.1.pf.que23.irqs: 88 dev.ixl.1.pf.que23.dropped: 0 dev.ixl.1.pf.que23.mbuf_defrag_failed: 0 dev.ixl.1.pf.que22.rx_bytes: 0 dev.ixl.1.pf.que22.rx_packets: 0 dev.ixl.1.pf.que22.tx_bytes: 6792 dev.ixl.1.pf.que22.tx_packets: 88 dev.ixl.1.pf.que22.no_desc_avail: 0 dev.ixl.1.pf.que22.tx_dma_setup: 0 dev.ixl.1.pf.que22.tso_tx: 0 dev.ixl.1.pf.que22.irqs: 89 dev.ixl.1.pf.que22.dropped: 0 dev.ixl.1.pf.que22.mbuf_defrag_failed: 0 dev.ixl.1.pf.que21.rx_bytes: 0 dev.ixl.1.pf.que21.rx_packets: 0 dev.ixl.1.pf.que21.tx_bytes: 7486 dev.ixl.1.pf.que21.tx_packets: 93 dev.ixl.1.pf.que21.no_desc_avail: 0 dev.ixl.1.pf.que21.tx_dma_setup: 0 dev.ixl.1.pf.que21.tso_tx: 0 dev.ixl.1.pf.que21.irqs: 95 dev.ixl.1.pf.que21.dropped: 0 dev.ixl.1.pf.que21.mbuf_defrag_failed: 0 dev.ixl.1.pf.que20.rx_bytes: 0 dev.ixl.1.pf.que20.rx_packets: 0 dev.ixl.1.pf.que20.tx_bytes: 7850 dev.ixl.1.pf.que20.tx_packets: 98 dev.ixl.1.pf.que20.no_desc_avail: 0 dev.ixl.1.pf.que20.tx_dma_setup: 0 dev.ixl.1.pf.que20.tso_tx: 0 dev.ixl.1.pf.que20.irqs: 99 dev.ixl.1.pf.que20.dropped: 0 dev.ixl.1.pf.que20.mbuf_defrag_failed: 0 dev.ixl.1.pf.que19.rx_bytes: 0 dev.ixl.1.pf.que19.rx_packets: 0 dev.ixl.1.pf.que19.tx_bytes: 
64643 dev.ixl.1.pf.que19.tx_packets: 202 dev.ixl.1.pf.que19.no_desc_avail: 0 dev.ixl.1.pf.que19.tx_dma_setup: 0 dev.ixl.1.pf.que19.tso_tx: 0 dev.ixl.1.pf.que19.irqs: 202 dev.ixl.1.pf.que19.dropped: 0 dev.ixl.1.pf.que19.mbuf_defrag_failed: 0 dev.ixl.1.pf.que18.rx_bytes: 0 dev.ixl.1.pf.que18.rx_packets: 0 dev.ixl.1.pf.que18.tx_bytes: 5940 dev.ixl.1.pf.que18.tx_packets: 74 dev.ixl.1.pf.que18.no_desc_avail: 0 dev.ixl.1.pf.que18.tx_dma_setup: 0 dev.ixl.1.pf.que18.tso_tx: 0 dev.ixl.1.pf.que18.irqs: 74 dev.ixl.1.pf.que18.dropped: 0 dev.ixl.1.pf.que18.mbuf_defrag_failed: 0 dev.ixl.1.pf.que17.rx_bytes: 0 dev.ixl.1.pf.que17.rx_packets: 0 dev.ixl.1.pf.que17.tx_bytes: 11675 dev.ixl.1.pf.que17.tx_packets: 83 dev.ixl.1.pf.que17.no_desc_avail: 0 dev.ixl.1.pf.que17.tx_dma_setup: 0 dev.ixl.1.pf.que17.tso_tx: 0 dev.ixl.1.pf.que17.irqs: 83 dev.ixl.1.pf.que17.dropped: 0 dev.ixl.1.pf.que17.mbuf_defrag_failed: 0 dev.ixl.1.pf.que16.rx_bytes: 0 dev.ixl.1.pf.que16.rx_packets: 0 dev.ixl.1.pf.que16.tx_bytes: 105750457831 dev.ixl.1.pf.que16.tx_packets: 205406766 dev.ixl.1.pf.que16.no_desc_avail: 0 dev.ixl.1.pf.que16.tx_dma_setup: 0 dev.ixl.1.pf.que16.tso_tx: 0 dev.ixl.1.pf.que16.irqs: 87222978 dev.ixl.1.pf.que16.dropped: 0 dev.ixl.1.pf.que16.mbuf_defrag_failed: 0 dev.ixl.1.pf.que15.rx_bytes: 289558174088 dev.ixl.1.pf.que15.rx_packets: 272466190 dev.ixl.1.pf.que15.tx_bytes: 106152524681 dev.ixl.1.pf.que15.tx_packets: 205379247 dev.ixl.1.pf.que15.no_desc_avail: 0 dev.ixl.1.pf.que15.tx_dma_setup: 0 dev.ixl.1.pf.que15.tso_tx: 0 dev.ixl.1.pf.que15.irqs: 238145862 dev.ixl.1.pf.que15.dropped: 0 dev.ixl.1.pf.que15.mbuf_defrag_failed: 0 dev.ixl.1.pf.que14.rx_bytes: 301934533473 dev.ixl.1.pf.que14.rx_packets: 298452930 dev.ixl.1.pf.que14.tx_bytes: 111420393725 dev.ixl.1.pf.que14.tx_packets: 215722532 dev.ixl.1.pf.que14.no_desc_avail: 0 dev.ixl.1.pf.que14.tx_dma_setup: 0 dev.ixl.1.pf.que14.tso_tx: 0 dev.ixl.1.pf.que14.irqs: 256291617 dev.ixl.1.pf.que14.dropped: 0 dev.ixl.1.pf.que14.mbuf_defrag_failed: 0 
dev.ixl.1.pf.que13.rx_bytes: 291380746253 dev.ixl.1.pf.que13.rx_packets: 273037957 dev.ixl.1.pf.que13.tx_bytes: 112417776222 dev.ixl.1.pf.que13.tx_packets: 217500943 dev.ixl.1.pf.que13.no_desc_avail: 0 dev.ixl.1.pf.que13.tx_dma_setup: 0 dev.ixl.1.pf.que13.tso_tx: 0 dev.ixl.1.pf.que13.irqs: 241422331 dev.ixl.1.pf.que13.dropped: 0 dev.ixl.1.pf.que13.mbuf_defrag_failed: 0 dev.ixl.1.pf.que12.rx_bytes: 301105585425 dev.ixl.1.pf.que12.rx_packets: 286137817 dev.ixl.1.pf.que12.tx_bytes: 95851784579 dev.ixl.1.pf.que12.tx_packets: 199715765 dev.ixl.1.pf.que12.no_desc_avail: 0 dev.ixl.1.pf.que12.tx_dma_setup: 0 dev.ixl.1.pf.que12.tso_tx: 0 dev.ixl.1.pf.que12.irqs: 247322880 dev.ixl.1.pf.que12.dropped: 0 dev.ixl.1.pf.que12.mbuf_defrag_failed: 0 dev.ixl.1.pf.que11.rx_bytes: 307105398143 dev.ixl.1.pf.que11.rx_packets: 281046463 dev.ixl.1.pf.que11.tx_bytes: 110710957789 dev.ixl.1.pf.que11.tx_packets: 211784031 dev.ixl.1.pf.que11.no_desc_avail: 0 dev.ixl.1.pf.que11.tx_dma_setup: 0 dev.ixl.1.pf.que11.tso_tx: 0 dev.ixl.1.pf.que11.irqs: 256987179 dev.ixl.1.pf.que11.dropped: 0 dev.ixl.1.pf.que11.mbuf_defrag_failed: 0 dev.ixl.1.pf.que10.rx_bytes: 304288000453 dev.ixl.1.pf.que10.rx_packets: 278987858 dev.ixl.1.pf.que10.tx_bytes: 93022244338 dev.ixl.1.pf.que10.tx_packets: 195869210 dev.ixl.1.pf.que10.no_desc_avail: 0 dev.ixl.1.pf.que10.tx_dma_setup: 0 dev.ixl.1.pf.que10.tso_tx: 0 dev.ixl.1.pf.que10.irqs: 253622192 dev.ixl.1.pf.que10.dropped: 0 dev.ixl.1.pf.que10.mbuf_defrag_failed: 0 dev.ixl.1.pf.que9.rx_bytes: 320340203822 dev.ixl.1.pf.que9.rx_packets: 302309010 dev.ixl.1.pf.que9.tx_bytes: 116604776460 dev.ixl.1.pf.que9.tx_packets: 223949025 dev.ixl.1.pf.que9.no_desc_avail: 0 dev.ixl.1.pf.que9.tx_dma_setup: 0 dev.ixl.1.pf.que9.tso_tx: 0 dev.ixl.1.pf.que9.irqs: 271165440 dev.ixl.1.pf.que9.dropped: 0 dev.ixl.1.pf.que9.mbuf_defrag_failed: 0 dev.ixl.1.pf.que8.rx_bytes: 291403725592 dev.ixl.1.pf.que8.rx_packets: 267859568 dev.ixl.1.pf.que8.tx_bytes: 205745654558 dev.ixl.1.pf.que8.tx_packets: 
443349835 dev.ixl.1.pf.que8.no_desc_avail: 0 dev.ixl.1.pf.que8.tx_dma_setup: 0 dev.ixl.1.pf.que8.tso_tx: 0 dev.ixl.1.pf.que8.irqs: 254116755 dev.ixl.1.pf.que8.dropped: 0 dev.ixl.1.pf.que8.mbuf_defrag_failed: 0 dev.ixl.1.pf.que7.rx_bytes: 673363127346 dev.ixl.1.pf.que7.rx_packets: 617269774 dev.ixl.1.pf.que7.tx_bytes: 203162891886 dev.ixl.1.pf.que7.tx_packets: 443709339 dev.ixl.1.pf.que7.no_desc_avail: 0 dev.ixl.1.pf.que7.tx_dma_setup: 0 dev.ixl.1.pf.que7.tso_tx: 0 dev.ixl.1.pf.que7.irqs: 424706771 dev.ixl.1.pf.que7.dropped: 0 dev.ixl.1.pf.que7.mbuf_defrag_failed: 0 dev.ixl.1.pf.que6.rx_bytes: 644709094218 dev.ixl.1.pf.que6.rx_packets: 601892919 dev.ixl.1.pf.que6.tx_bytes: 221661735032 dev.ixl.1.pf.que6.tx_packets: 460127064 dev.ixl.1.pf.que6.no_desc_avail: 0 dev.ixl.1.pf.que6.tx_dma_setup: 0 dev.ixl.1.pf.que6.tso_tx: 0 dev.ixl.1.pf.que6.irqs: 417748074 dev.ixl.1.pf.que6.dropped: 0 dev.ixl.1.pf.que6.mbuf_defrag_failed: 0 dev.ixl.1.pf.que5.rx_bytes: 661904432231 dev.ixl.1.pf.que5.rx_packets: 622012837 dev.ixl.1.pf.que5.tx_bytes: 230514282876 dev.ixl.1.pf.que5.tx_packets: 458571100 dev.ixl.1.pf.que5.no_desc_avail: 0 dev.ixl.1.pf.que5.tx_dma_setup: 0 dev.ixl.1.pf.que5.tso_tx: 0 dev.ixl.1.pf.que5.irqs: 422305039 dev.ixl.1.pf.que5.dropped: 0 dev.ixl.1.pf.que5.mbuf_defrag_failed: 0 dev.ixl.1.pf.que4.rx_bytes: 653522179234 dev.ixl.1.pf.que4.rx_packets: 603345546 dev.ixl.1.pf.que4.tx_bytes: 216761219483 dev.ixl.1.pf.que4.tx_packets: 450329641 dev.ixl.1.pf.que4.no_desc_avail: 0 dev.ixl.1.pf.que4.tx_dma_setup: 0 dev.ixl.1.pf.que4.tso_tx: 3 dev.ixl.1.pf.que4.irqs: 416920533 dev.ixl.1.pf.que4.dropped: 0 dev.ixl.1.pf.que4.mbuf_defrag_failed: 0 dev.ixl.1.pf.que3.rx_bytes: 676494225882 dev.ixl.1.pf.que3.rx_packets: 620605168 dev.ixl.1.pf.que3.tx_bytes: 233854020454 dev.ixl.1.pf.que3.tx_packets: 464425616 dev.ixl.1.pf.que3.no_desc_avail: 0 dev.ixl.1.pf.que3.tx_dma_setup: 0 dev.ixl.1.pf.que3.tso_tx: 0 dev.ixl.1.pf.que3.irqs: 426349030 dev.ixl.1.pf.que3.dropped: 0 
dev.ixl.1.pf.que3.mbuf_defrag_failed: 0 dev.ixl.1.pf.que2.rx_bytes: 677779337711 dev.ixl.1.pf.que2.rx_packets: 620883699 dev.ixl.1.pf.que2.tx_bytes: 211297141668 dev.ixl.1.pf.que2.tx_packets: 450501525 dev.ixl.1.pf.que2.no_desc_avail: 0 dev.ixl.1.pf.que2.tx_dma_setup: 0 dev.ixl.1.pf.que2.tso_tx: 0 dev.ixl.1.pf.que2.irqs: 433146278 dev.ixl.1.pf.que2.dropped: 0 dev.ixl.1.pf.que2.mbuf_defrag_failed: 0 dev.ixl.1.pf.que1.rx_bytes: 661360798018 dev.ixl.1.pf.que1.rx_packets: 619700636 dev.ixl.1.pf.que1.tx_bytes: 238264220772 dev.ixl.1.pf.que1.tx_packets: 473425354 dev.ixl.1.pf.que1.no_desc_avail: 0 dev.ixl.1.pf.que1.tx_dma_setup: 0 dev.ixl.1.pf.que1.tso_tx: 0 dev.ixl.1.pf.que1.irqs: 437959829 dev.ixl.1.pf.que1.dropped: 0 dev.ixl.1.pf.que1.mbuf_defrag_failed: 0 dev.ixl.1.pf.que0.rx_bytes: 685201226330 dev.ixl.1.pf.que0.rx_packets: 637772348 dev.ixl.1.pf.que0.tx_bytes: 124808 dev.ixl.1.pf.que0.tx_packets: 1782 dev.ixl.1.pf.que0.no_desc_avail: 0 dev.ixl.1.pf.que0.tx_dma_setup: 0 dev.ixl.1.pf.que0.tso_tx: 0 dev.ixl.1.pf.que0.irqs: 174905480 dev.ixl.1.pf.que0.dropped: 0 dev.ixl.1.pf.que0.mbuf_defrag_failed: 0 dev.ixl.1.pf.bcast_pkts_txd: 277 dev.ixl.1.pf.mcast_pkts_txd: 0 dev.ixl.1.pf.ucast_pkts_txd: 5319743945 dev.ixl.1.pf.good_octets_txd: 2613178367282 dev.ixl.1.pf.rx_discards: 0 dev.ixl.1.pf.bcast_pkts_rcvd: 1 dev.ixl.1.pf.mcast_pkts_rcvd: 0 dev.ixl.1.pf.ucast_pkts_rcvd: 7203747890 dev.ixl.1.pf.good_octets_rcvd: 7770230490224 dev.ixl.1.vc_debug_level: 1 dev.ixl.1.admin_irq: 0 dev.ixl.1.watchdog_events: 0 dev.ixl.1.debug: 0 dev.ixl.1.dynamic_tx_itr: 0 dev.ixl.1.tx_itr: 122 dev.ixl.1.dynamic_rx_itr: 0 dev.ixl.1.rx_itr: 62 dev.ixl.1.fw_version: f4.33 a1.2 n04.42 e8000191d dev.ixl.1.current_speed: 10G dev.ixl.1.advertise_speed: 0 dev.ixl.1.fc: 0 dev.ixl.1.%parent: pci129 dev.ixl.1.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000 dev.ixl.1.%location: slot=0 function=1 handle=\_SB_.PCI1.QR3A.H001 dev.ixl.1.%driver: ixl dev.ixl.1.%desc: 
Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0 dev.ixl.0.mac.xoff_recvd: 0 dev.ixl.0.mac.xoff_txd: 0 dev.ixl.0.mac.xon_recvd: 0 dev.ixl.0.mac.xon_txd: 0 dev.ixl.0.mac.tx_frames_big: 0 dev.ixl.0.mac.tx_frames_1024_1522: 4961134019 dev.ixl.0.mac.tx_frames_512_1023: 113082136 dev.ixl.0.mac.tx_frames_256_511: 123538450 dev.ixl.0.mac.tx_frames_128_255: 185051082 dev.ixl.0.mac.tx_frames_65_127: 1332798493 dev.ixl.0.mac.tx_frames_64: 243338964 dev.ixl.0.mac.checksum_errors: 0 dev.ixl.0.mac.rx_jabber: 0 dev.ixl.0.mac.rx_oversized: 0 dev.ixl.0.mac.rx_fragmented: 0 dev.ixl.0.mac.rx_undersize: 0 dev.ixl.0.mac.rx_frames_big: 0 dev.ixl.0.mac.rx_frames_1024_1522: 1566499069 dev.ixl.0.mac.rx_frames_512_1023: 101390143 dev.ixl.0.mac.rx_frames_256_511: 49831970 dev.ixl.0.mac.rx_frames_128_255: 231738168 dev.ixl.0.mac.rx_frames_65_127: 2123185819 dev.ixl.0.mac.rx_frames_64: 1320404300 dev.ixl.0.mac.rx_length_errors: 0 dev.ixl.0.mac.remote_faults: 0 dev.ixl.0.mac.local_faults: 0 dev.ixl.0.mac.illegal_bytes: 0 dev.ixl.0.mac.crc_errors: 0 dev.ixl.0.mac.bcast_pkts_txd: 302 dev.ixl.0.mac.mcast_pkts_txd: 33965 dev.ixl.0.mac.ucast_pkts_txd: 6958908862 dev.ixl.0.mac.good_octets_txd: 7698936138858 dev.ixl.0.mac.rx_discards: 0 dev.ixl.0.mac.bcast_pkts_rcvd: 1 dev.ixl.0.mac.mcast_pkts_rcvd: 49693 dev.ixl.0.mac.ucast_pkts_rcvd: 5392999771 dev.ixl.0.mac.good_octets_rcvd: 2648906893811 dev.ixl.0.pf.que23.rx_bytes: 0 dev.ixl.0.pf.que23.rx_packets: 0 dev.ixl.0.pf.que23.tx_bytes: 2371273 dev.ixl.0.pf.que23.tx_packets: 7313 dev.ixl.0.pf.que23.no_desc_avail: 0 dev.ixl.0.pf.que23.tx_dma_setup: 0 dev.ixl.0.pf.que23.tso_tx: 0 dev.ixl.0.pf.que23.irqs: 7313 dev.ixl.0.pf.que23.dropped: 0 dev.ixl.0.pf.que23.mbuf_defrag_failed: 0 dev.ixl.0.pf.que22.rx_bytes: 0 dev.ixl.0.pf.que22.rx_packets: 0 dev.ixl.0.pf.que22.tx_bytes: 1908468 dev.ixl.0.pf.que22.tx_packets: 6626 dev.ixl.0.pf.que22.no_desc_avail: 0 dev.ixl.0.pf.que22.tx_dma_setup: 0 dev.ixl.0.pf.que22.tso_tx: 0 dev.ixl.0.pf.que22.irqs: 6627 
dev.ixl.0.pf.que22.dropped: 0 dev.ixl.0.pf.que22.mbuf_defrag_failed: 0 dev.ixl.0.pf.que21.rx_bytes: 0 dev.ixl.0.pf.que21.rx_packets: 0 dev.ixl.0.pf.que21.tx_bytes: 2092668 dev.ixl.0.pf.que21.tx_packets: 6739 dev.ixl.0.pf.que21.no_desc_avail: 0 dev.ixl.0.pf.que21.tx_dma_setup: 0 dev.ixl.0.pf.que21.tso_tx: 0 dev.ixl.0.pf.que21.irqs: 6728 dev.ixl.0.pf.que21.dropped: 0 dev.ixl.0.pf.que21.mbuf_defrag_failed: 0 dev.ixl.0.pf.que20.rx_bytes: 0 dev.ixl.0.pf.que20.rx_packets: 0 dev.ixl.0.pf.que20.tx_bytes: 1742176 dev.ixl.0.pf.que20.tx_packets: 6246 dev.ixl.0.pf.que20.no_desc_avail: 0 dev.ixl.0.pf.que20.tx_dma_setup: 0 dev.ixl.0.pf.que20.tso_tx: 0 dev.ixl.0.pf.que20.irqs: 6249 dev.ixl.0.pf.que20.dropped: 0 dev.ixl.0.pf.que20.mbuf_defrag_failed: 0 dev.ixl.0.pf.que19.rx_bytes: 0 dev.ixl.0.pf.que19.rx_packets: 0 dev.ixl.0.pf.que19.tx_bytes: 2102284 dev.ixl.0.pf.que19.tx_packets: 6979 dev.ixl.0.pf.que19.no_desc_avail: 0 dev.ixl.0.pf.que19.tx_dma_setup: 0 dev.ixl.0.pf.que19.tso_tx: 0 dev.ixl.0.pf.que19.irqs: 6979 dev.ixl.0.pf.que19.dropped: 0 dev.ixl.0.pf.que19.mbuf_defrag_failed: 0 dev.ixl.0.pf.que18.rx_bytes: 0 dev.ixl.0.pf.que18.rx_packets: 0 dev.ixl.0.pf.que18.tx_bytes: 1532360 dev.ixl.0.pf.que18.tx_packets: 5588 dev.ixl.0.pf.que18.no_desc_avail: 0 dev.ixl.0.pf.que18.tx_dma_setup: 0 dev.ixl.0.pf.que18.tso_tx: 0 dev.ixl.0.pf.que18.irqs: 5588 dev.ixl.0.pf.que18.dropped: 0 dev.ixl.0.pf.que18.mbuf_defrag_failed: 0 dev.ixl.0.pf.que17.rx_bytes: 0 dev.ixl.0.pf.que17.rx_packets: 0 dev.ixl.0.pf.que17.tx_bytes: 1809684 dev.ixl.0.pf.que17.tx_packets: 6136 dev.ixl.0.pf.que17.no_desc_avail: 0 dev.ixl.0.pf.que17.tx_dma_setup: 0 dev.ixl.0.pf.que17.tso_tx: 0 dev.ixl.0.pf.que17.irqs: 6136 dev.ixl.0.pf.que17.dropped: 0 dev.ixl.0.pf.que17.mbuf_defrag_failed: 0 dev.ixl.0.pf.que16.rx_bytes: 0 dev.ixl.0.pf.que16.rx_packets: 0 dev.ixl.0.pf.que16.tx_bytes: 286836299105 dev.ixl.0.pf.que16.tx_packets: 263532601 dev.ixl.0.pf.que16.no_desc_avail: 0 dev.ixl.0.pf.que16.tx_dma_setup: 0 
dev.ixl.0.pf.que16.tso_tx: 0 dev.ixl.0.pf.que16.irqs: 83232941 dev.ixl.0.pf.que16.dropped: 0 dev.ixl.0.pf.que16.mbuf_defrag_failed: 0 dev.ixl.0.pf.que15.rx_bytes: 106345323488 dev.ixl.0.pf.que15.rx_packets: 208869912 dev.ixl.0.pf.que15.tx_bytes: 298825179301 dev.ixl.0.pf.que15.tx_packets: 288517504 dev.ixl.0.pf.que15.no_desc_avail: 0 dev.ixl.0.pf.que15.tx_dma_setup: 0 dev.ixl.0.pf.que15.tso_tx: 0 dev.ixl.0.pf.que15.irqs: 223322408 dev.ixl.0.pf.que15.dropped: 0 dev.ixl.0.pf.que15.mbuf_defrag_failed: 0 dev.ixl.0.pf.que14.rx_bytes: 106721900547 dev.ixl.0.pf.que14.rx_packets: 208566121 dev.ixl.0.pf.que14.tx_bytes: 288657751920 dev.ixl.0.pf.que14.tx_packets: 263556000 dev.ixl.0.pf.que14.no_desc_avail: 0 dev.ixl.0.pf.que14.tx_dma_setup: 0 dev.ixl.0.pf.que14.tso_tx: 0 dev.ixl.0.pf.que14.irqs: 220377537 dev.ixl.0.pf.que14.dropped: 0 dev.ixl.0.pf.que14.mbuf_defrag_failed: 0 dev.ixl.0.pf.que13.rx_bytes: 111978971378 dev.ixl.0.pf.que13.rx_packets: 218447354 dev.ixl.0.pf.que13.tx_bytes: 298439860675 dev.ixl.0.pf.que13.tx_packets: 276806617 dev.ixl.0.pf.que13.no_desc_avail: 0 dev.ixl.0.pf.que13.tx_dma_setup: 0 dev.ixl.0.pf.que13.tso_tx: 0 dev.ixl.0.pf.que13.irqs: 227474625 dev.ixl.0.pf.que13.dropped: 0 dev.ixl.0.pf.que13.mbuf_defrag_failed: 0 dev.ixl.0.pf.que12.rx_bytes: 112969704706 dev.ixl.0.pf.que12.rx_packets: 220275562 dev.ixl.0.pf.que12.tx_bytes: 304750620079 dev.ixl.0.pf.que12.tx_packets: 272244483 dev.ixl.0.pf.que12.no_desc_avail: 0 dev.ixl.0.pf.que12.tx_dma_setup: 0 dev.ixl.0.pf.que12.tso_tx: 183 dev.ixl.0.pf.que12.irqs: 230111291 dev.ixl.0.pf.que12.dropped: 0 dev.ixl.0.pf.que12.mbuf_defrag_failed: 0 dev.ixl.0.pf.que11.rx_bytes: 96405343036 dev.ixl.0.pf.que11.rx_packets: 202329448 dev.ixl.0.pf.que11.tx_bytes: 302481707696 dev.ixl.0.pf.que11.tx_packets: 271689246 dev.ixl.0.pf.que11.no_desc_avail: 0 dev.ixl.0.pf.que11.tx_dma_setup: 0 dev.ixl.0.pf.que11.tso_tx: 0 dev.ixl.0.pf.que11.irqs: 220717612 dev.ixl.0.pf.que11.dropped: 0 dev.ixl.0.pf.que11.mbuf_defrag_failed: 0 
dev.ixl.0.pf.que10.rx_bytes: 111280008670 dev.ixl.0.pf.que10.rx_packets: 214900261 dev.ixl.0.pf.que10.tx_bytes: 318638566198 dev.ixl.0.pf.que10.tx_packets: 295011389 dev.ixl.0.pf.que10.no_desc_avail: 0 dev.ixl.0.pf.que10.tx_dma_setup: 0 dev.ixl.0.pf.que10.tso_tx: 0 dev.ixl.0.pf.que10.irqs: 230681709 dev.ixl.0.pf.que10.dropped: 0 dev.ixl.0.pf.que10.mbuf_defrag_failed: 0 dev.ixl.0.pf.que9.rx_bytes: 93566025126 dev.ixl.0.pf.que9.rx_packets: 198726483 dev.ixl.0.pf.que9.tx_bytes: 288858818348 dev.ixl.0.pf.que9.tx_packets: 258926864 dev.ixl.0.pf.que9.no_desc_avail: 0 dev.ixl.0.pf.que9.tx_dma_setup: 0 dev.ixl.0.pf.que9.tso_tx: 0 dev.ixl.0.pf.que9.irqs: 217918160 dev.ixl.0.pf.que9.dropped: 0 dev.ixl.0.pf.que9.mbuf_defrag_failed: 0 dev.ixl.0.pf.que8.rx_bytes: 117169019041 dev.ixl.0.pf.que8.rx_packets: 226938172 dev.ixl.0.pf.que8.tx_bytes: 665794492752 dev.ixl.0.pf.que8.tx_packets: 593519436 dev.ixl.0.pf.que8.no_desc_avail: 0 dev.ixl.0.pf.que8.tx_dma_setup: 0 dev.ixl.0.pf.que8.tso_tx: 0 dev.ixl.0.pf.que8.irqs: 244643578 dev.ixl.0.pf.que8.dropped: 0 dev.ixl.0.pf.que8.mbuf_defrag_failed: 0 dev.ixl.0.pf.que7.rx_bytes: 206974266022 dev.ixl.0.pf.que7.rx_packets: 449899895 dev.ixl.0.pf.que7.tx_bytes: 638527685820 dev.ixl.0.pf.que7.tx_packets: 580750916 dev.ixl.0.pf.que7.no_desc_avail: 0 dev.ixl.0.pf.que7.tx_dma_setup: 0 dev.ixl.0.pf.que7.tso_tx: 0 dev.ixl.0.pf.que7.irqs: 391760959 dev.ixl.0.pf.que7.dropped: 0 dev.ixl.0.pf.que7.mbuf_defrag_failed: 0 dev.ixl.0.pf.que6.rx_bytes: 204373984670 dev.ixl.0.pf.que6.rx_packets: 449990985 dev.ixl.0.pf.que6.tx_bytes: 655511068125 dev.ixl.0.pf.que6.tx_packets: 600735086 dev.ixl.0.pf.que6.no_desc_avail: 0 dev.ixl.0.pf.que6.tx_dma_setup: 0 dev.ixl.0.pf.que6.tso_tx: 0 dev.ixl.0.pf.que6.irqs: 394961024 dev.ixl.0.pf.que6.dropped: 0 dev.ixl.0.pf.que6.mbuf_defrag_failed: 0 dev.ixl.0.pf.que5.rx_bytes: 222919535872 dev.ixl.0.pf.que5.rx_packets: 466659705 dev.ixl.0.pf.que5.tx_bytes: 647689764751 dev.ixl.0.pf.que5.tx_packets: 582532691 
dev.ixl.0.pf.que5.no_desc_avail: 0 dev.ixl.0.pf.que5.tx_dma_setup: 0 dev.ixl.0.pf.que5.tso_tx: 5 dev.ixl.0.pf.que5.irqs: 404552229 dev.ixl.0.pf.que5.dropped: 0 dev.ixl.0.pf.que5.mbuf_defrag_failed: 0 dev.ixl.0.pf.que4.rx_bytes: 231706806551 dev.ixl.0.pf.que4.rx_packets: 464397112 dev.ixl.0.pf.que4.tx_bytes: 669945424739 dev.ixl.0.pf.que4.tx_packets: 598527594 dev.ixl.0.pf.que4.no_desc_avail: 0 dev.ixl.0.pf.que4.tx_dma_setup: 0 dev.ixl.0.pf.que4.tso_tx: 452 dev.ixl.0.pf.que4.irqs: 405018727 dev.ixl.0.pf.que4.dropped: 0 dev.ixl.0.pf.que4.mbuf_defrag_failed: 0 dev.ixl.0.pf.que3.rx_bytes: 217942511336 dev.ixl.0.pf.que3.rx_packets: 456454137 dev.ixl.0.pf.que3.tx_bytes: 674027217503 dev.ixl.0.pf.que3.tx_packets: 604815959 dev.ixl.0.pf.que3.no_desc_avail: 0 dev.ixl.0.pf.que3.tx_dma_setup: 0 dev.ixl.0.pf.que3.tso_tx: 0 dev.ixl.0.pf.que3.irqs: 399890434 dev.ixl.0.pf.que3.dropped: 0 dev.ixl.0.pf.que3.mbuf_defrag_failed: 0 dev.ixl.0.pf.que2.rx_bytes: 235057952930 dev.ixl.0.pf.que2.rx_packets: 470668205 dev.ixl.0.pf.que2.tx_bytes: 653598762323 dev.ixl.0.pf.que2.tx_packets: 595468539 dev.ixl.0.pf.que2.no_desc_avail: 0 dev.ixl.0.pf.que2.tx_dma_setup: 0 dev.ixl.0.pf.que2.tso_tx: 0 dev.ixl.0.pf.que2.irqs: 410972406 dev.ixl.0.pf.que2.dropped: 0 dev.ixl.0.pf.que2.mbuf_defrag_failed: 0 dev.ixl.0.pf.que1.rx_bytes: 212570053522 dev.ixl.0.pf.que1.rx_packets: 456981561 dev.ixl.0.pf.que1.tx_bytes: 677227126330 dev.ixl.0.pf.que1.tx_packets: 612428010 dev.ixl.0.pf.que1.no_desc_avail: 0 dev.ixl.0.pf.que1.tx_dma_setup: 0 dev.ixl.0.pf.que1.tso_tx: 0 dev.ixl.0.pf.que1.irqs: 404727745 dev.ixl.0.pf.que1.dropped: 0 dev.ixl.0.pf.que1.mbuf_defrag_failed: 0 dev.ixl.0.pf.que0.rx_bytes: 239424279142 dev.ixl.0.pf.que0.rx_packets: 479078356 dev.ixl.0.pf.que0.tx_bytes: 513283 dev.ixl.0.pf.que0.tx_packets: 3990 dev.ixl.0.pf.que0.no_desc_avail: 0 dev.ixl.0.pf.que0.tx_dma_setup: 0 dev.ixl.0.pf.que0.tso_tx: 0 dev.ixl.0.pf.que0.irqs: 178414974 dev.ixl.0.pf.que0.dropped: 0 dev.ixl.0.pf.que0.mbuf_defrag_failed: 
0 dev.ixl.0.pf.bcast_pkts_txd: 302 dev.ixl.0.pf.mcast_pkts_txd: 33965 dev.ixl.0.pf.ucast_pkts_txd: 6958908879 dev.ixl.0.pf.good_octets_txd: 7669637462330 dev.ixl.0.pf.rx_discards: 0 dev.ixl.0.pf.bcast_pkts_rcvd: 1 dev.ixl.0.pf.mcast_pkts_rcvd: 49549 dev.ixl.0.pf.ucast_pkts_rcvd: 5392999777 dev.ixl.0.pf.good_octets_rcvd: 2648906886817 dev.ixl.0.vc_debug_level: 1 dev.ixl.0.admin_irq: 0 dev.ixl.0.watchdog_events: 0 dev.ixl.0.debug: 0 dev.ixl.0.dynamic_tx_itr: 0 dev.ixl.0.tx_itr: 122 dev.ixl.0.dynamic_rx_itr: 0 dev.ixl.0.rx_itr: 62 dev.ixl.0.fw_version: f4.33 a1.2 n04.42 e8000191d dev.ixl.0.current_speed: 10G dev.ixl.0.advertise_speed: 0 dev.ixl.0.fc: 0 dev.ixl.0.%parent: pci129 dev.ixl.0.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0002 class=0x020000 dev.ixl.0.%location: slot=0 function=0 handle=\_SB_.PCI1.QR3A.H000 dev.ixl.0.%driver: ixl dev.ixl.0.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0 dev.ixl.%parent: -- Best regards, Evgeny Khorokhorin From owner-freebsd-net@freebsd.org Wed Aug 19 15:32:28 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 1505A9BE198 for ; Wed, 19 Aug 2015 15:32:28 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 01CBFFBF for ; Wed, 19 Aug 2015 15:32:28 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7JFWRtv039195 for ; Wed, 19 Aug 2015 15:32:27 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 202484] vtnet drivers didn't 
support being use for multicast routing (MRT_ADD_VIF) Date: Wed, 19 Aug 2015 15:32:27 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.2-STABLE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: linimon@FreeBSD.org X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-net@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: assigned_to Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 15:32:28 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=202484 Mark Linimon changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|freebsd-bugs@FreeBSD.org |freebsd-net@FreeBSD.org -- You are receiving this mail because: You are the assignee for the bug. 
From owner-freebsd-net@freebsd.org Wed Aug 19 15:33:08 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4F1E19BE1D6 for ; Wed, 19 Aug 2015 15:33:08 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 3BE4010B5 for ; Wed, 19 Aug 2015 15:33:08 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7JFX893039921 for ; Wed, 19 Aug 2015 15:33:08 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 202351] [ip6] [panic] Kernel panic in ip6_forward (different from 128247, 131038) Date: Wed, 19 Aug 2015 15:33:08 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 10.2-RELEASE X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: linimon@FreeBSD.org X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-net@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: assigned_to Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 
15:33:08 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=202351 Mark Linimon changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|freebsd-bugs@FreeBSD.org |freebsd-net@FreeBSD.org -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-net@freebsd.org Wed Aug 19 15:35:39 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2B33F9BE2E1 for ; Wed, 19 Aug 2015 15:35:39 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 183B613B8 for ; Wed, 19 Aug 2015 15:35:39 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7JFZc8U042707 for ; Wed, 19 Aug 2015 15:35:38 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 201428] Possible Memory leak in Netmap Date: Wed, 19 Aug 2015 15:35:39 +0000 X-Bugzilla-Reason: AssignedTo X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-CURRENT X-Bugzilla-Keywords: patch X-Bugzilla-Severity: Affects Some People X-Bugzilla-Who: linimon@FreeBSD.org X-Bugzilla-Status: New X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: freebsd-net@FreeBSD.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: keywords assigned_to Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: 
https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 15:35:39 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201428 Mark Linimon changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |patch Assignee|freebsd-bugs@FreeBSD.org |freebsd-net@FreeBSD.org -- You are receiving this mail because: You are the assignee for the bug. From owner-freebsd-net@freebsd.org Wed Aug 19 16:07:24 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 272039BE8ED for ; Wed, 19 Aug 2015 16:07:24 +0000 (UTC) (envelope-from david@catwhisker.org) Received: from mailman.ysv.freebsd.org (mailman.ysv.freebsd.org [IPv6:2001:1900:2254:206a::50:5]) by mx1.freebsd.org (Postfix) with ESMTP id 0CC798F2 for ; Wed, 19 Aug 2015 16:07:24 +0000 (UTC) (envelope-from david@catwhisker.org) Received: by mailman.ysv.freebsd.org (Postfix) id 0A16E9BE8EB; Wed, 19 Aug 2015 16:07:24 +0000 (UTC) Delivered-To: net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 097FF9BE8EA; Wed, 19 Aug 2015 16:07:24 +0000 (UTC) (envelope-from david@catwhisker.org) Received: from albert.catwhisker.org (mx.catwhisker.org [198.144.209.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C2B4B8F1; Wed, 19 Aug 2015 16:07:22 +0000 (UTC) (envelope-from david@catwhisker.org) Received: from albert.catwhisker.org (localhost [127.0.0.1]) by 
albert.catwhisker.org (8.15.2/8.15.2) with ESMTP id t7JG7GGk067948; Wed, 19 Aug 2015 09:07:16 -0700 (PDT) (envelope-from david@albert.catwhisker.org) Received: (from david@localhost) by albert.catwhisker.org (8.15.2/8.15.2/Submit) id t7JG7GU5067947; Wed, 19 Aug 2015 09:07:16 -0700 (PDT) (envelope-from david) Date: Wed, 19 Aug 2015 09:07:16 -0700 From: David Wolfskill To: stable@freebsd.org, net@freebsd.org Subject: Re: Panic [page fault] in _ieee80211_crypto_delkey(): stable/10/amd64 @r286878 Message-ID: <20150819160716.GK63584@albert.catwhisker.org> Mail-Followup-To: David Wolfskill , stable@freebsd.org, net@freebsd.org References: <20150818232007.GN1189@albert.catwhisker.org> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="LHvWgpbS7VDUdu2f" Content-Disposition: inline In-Reply-To: <20150818232007.GN1189@albert.catwhisker.org> User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 16:07:24 -0000 --LHvWgpbS7VDUdu2f Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On Tue, Aug 18, 2015 at 04:20:07PM -0700, David Wolfskill wrote: > I was minding my own business in a staff meeting this afternoon, and my > laptop rebooted; seems it got a panic. I've copied the core.txt.0 file > to , along with a > verbose dmesg.boot from this morning and output of "pciconf -l -v". > > This was running: > FreeBSD localhost 10.2-STABLE FreeBSD 10.2-STABLE #122 r286878M/286880:1002500: Tue Aug 18 04:06:33 PDT 2015 root@g1-252.catwhisker.org:/common/S1/obj/usr/src/sys/CANARY amd64 > .... And this morning (just after I got in to work, and was trying (and trying) to get re-associated with the AP at work), I had another one.
I've copied the resulting core.txt.1 over to http://www.catwhisker.org/~david/FreeBSD/stable_10/ as well; here are excerpts from a unidiff between core.txt.{0,1}: --- core.txt.0 2015-08-18 15:39:05.232251000 -0700 +++ core.txt.1 2015-08-19 08:56:37.686238000 -0700 @@ -1,8 +1,8 @@ -localhost dumped core - see /var/crash/vmcore.0 +localhost dumped core - see /var/crash/vmcore.1 -Tue Aug 18 15:39:02 PDT 2015 +Wed Aug 19 08:56:35 PDT 2015 -FreeBSD localhost 10.2-STABLE FreeBSD 10.2-STABLE #122 r286878M/286880:1002500: Tue Aug 18 04:06:33 PDT 2015 root@g1-252.catwhisker.org:/common/S1/obj/usr/src/sys/CANARY amd64 +FreeBSD localhost 10.2-STABLE FreeBSD 10.2-STABLE #123 r286912M/286918:1002500: Wed Aug 19 04:05:06 PDT 2015 root@g1-252.catwhisker.org:/common/S1/obj/usr/src/sys/CANARY amd64 panic: page fault @@ -16,7 +16,7 @@ Unread portion of the kernel message buffer: panic: page fault -cpuid = 2 +cpuid = 1 KDB: stack backtrace: #0 0xffffffff80946e00 at kdb_backtrace+0x60 #1 0xffffffff8090a9e6 at vpanic+0x126 @@ -34,8 +34,8 @@ #13 0xffffffff8095e9f0 at sys_ioctl+0x140 #14 0xffffffff80c84f97 at amd64_syscall+0x357 #15 0xffffffff80c6a49b at Xfast_syscall+0xfb -Uptime: 9h45m0s -Dumping 625 out of 8095 MB:..3%..11%..21%..31%..41%..52%..62%..72%..82%..93% +Uptime: 3h16m49s +Dumping 584 out of 8095 MB:..3%..11%..22%..31%..42%..53%..61%..72%..83%..91% Reading symbols from /boot/kernel/geom_eli.ko.symbols...done.
Loaded symbols for /boot/kernel/geom_eli.ko.symbols @@ -81,32 +81,32 @@ at /usr/src/sys/kern/kern_shutdown.c:687 #4 0xffffffff80c8467b in trap_fatal (frame=, eva=) at /usr/src/sys/amd64/amd64/trap.c:851 -#5 0xffffffff80c8497d in trap_pfault (frame=0xfffffe060d88b510, +#5 0xffffffff80c8497d in trap_pfault (frame=0xfffffe060d5ea510, usermode=) at /usr/src/sys/amd64/amd64/trap.c:674 -#6 0xffffffff80c8401a in trap (frame=0xfffffe060d88b510) +#6 0xffffffff80c8401a in trap (frame=0xfffffe060d5ea510) at /usr/src/sys/amd64/amd64/trap.c:440 #7 0xffffffff80c6a1b2 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:236 #8 0xffffffff809f003a in _ieee80211_crypto_delkey () at /usr/src/sys/net80211/ieee80211_crypto.c:105 -#9 0xffffffff809eff5e in ieee80211_crypto_delkey (vap=0xfffffe03d9070000, - key=0xfffffe03d9070800) at /usr/src/sys/net80211/ieee80211_crypto.c:461 -#10 0xffffffff80a04d45 in ieee80211_ioctl_delkey (vap=0xfffffe03d9070000, +#9 0xffffffff809eff5e in ieee80211_crypto_delkey (vap=0xfffffe03dd31a000, + key=0xfffffe03dd31a800) at /usr/src/sys/net80211/ieee80211_crypto.c:461 +#10 0xffffffff80a04d45 in ieee80211_ioctl_delkey (vap=0xfffffe03dd31a000, ireq=) at /usr/src/sys/net80211/ieee80211_ioctl.c:1252 #11 0xffffffff80a03bd2 in ieee80211_ioctl_set80211 () at /usr/src/sys/net80211/ieee80211_ioctl.c:2814 #12 0xffffffff80a2c323 in in_control (so=, - cmd=9214790412651315593, data=0xfffffe060d88bb80 "", ifp=0x3, + cmd=9214790412651315593, data=0xfffffe060d5eab80 "", ifp=0x3, td=) at /usr/src/sys/netinet/in.c:308 -#13 0xffffffff809cd57b in ifioctl (so=0xfffffe03d9070800, cmd=2149607914, - data=0xfffffe060d88b8e0 "wlan0", td=0xfffff80170abb940) +#13 0xffffffff809cd57b in ifioctl (so=0xfffffe03dd31a800, cmd=2149607914, + data=0xfffffe060d5ea8e0 "wlan0", td=0xfffff800098b5940) at /usr/src/sys/net/if.c:2770 -#14 0xffffffff8095ecf5 in kern_ioctl
(td=0xfffff80170abb940, - fd=, com=18446741891212314624) at file.h:320 -#15 0xffffffff8095e9f0 in sys_ioctl (td=0xfffff80170abb940, - uap=0xfffffe060d88ba40) at /usr/src/sys/kern/sys_generic.c:718 -#16 0xffffffff80c84f97 in amd64_syscall (td=0xfffff80170abb940, traced=0) +#14 0xffffffff8095ecf5 in kern_ioctl (td=0xfffff800098b5940, + fd=, com=18446741891282216960) at file.h:320 +#15 0xffffffff8095e9f0 in sys_ioctl (td=0xfffff800098b5940, + uap=0xfffffe060d5eaa40) at /usr/src/sys/kern/sys_generic.c:718 +#16 0xffffffff80c84f97 in amd64_syscall (td=0xfffff800098b5940, traced=0) at subr_syscall.c:134 #17 0xffffffff80c6a49b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396 @@ -118,305 +118,301 @@ ------------------------------------------------------------------------ .... So it looks to me to be quite similar to the previous one. I've also copied the kernel config file ("CANARY") to the above-cited Web page. Anything else I can do to help nail this? Peace, david -- David H. Wolfskill david@catwhisker.org Those who would murder in the name of God or prophet are blasphemous cowards. See http://www.catwhisker.org/~david/publickey.gpg for my public key.
--LHvWgpbS7VDUdu2f Content-Type: application/pgp-signature -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQJ8BAEBCgBmBQJV1KmzXxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXQ4RThEMDY4QTIxMjc1MDZFRDIzODYzRTc4 QTY3RjlDOERFRjQxOTNCAAoJEIpn+cje9Bk7NQAP/1o6K5jgq/F77e/0pWv/BC7Z VRCJjOtoFVeUgYidHFJU7yRUTAJSmNbKI4h41/nFf5simA4e1NIt6GOl0PUa+G2F JVyG1BFE7WFdQoRbPlLMwdAW1zGmi83F22XS+F47AnzCNhm0RClY6fMQ3JphU9N5 1LkZq0FNxeWGFJlZL4JJCkQAJCFSJckX+OReLzbW8nZjxxUMsymsz40J8aRGFGMu yembRQIw44r9/BCEaWCJW8mTZkbR9z7G3hTQo1luSI5u2GB0812Pjd4RkFd/hoWM l3Hdog+VBIVoIXXJvIBTa4Y40dPIA24cRvarKxZVKOh8M4ZTdQbkwatZGJvxDoFg 8qCJA/IRbBp9n6TZvD4pW5rKGN4dZ+wOCT6NOGkFYmYZ6JEqy0z7IZnHON2hVmrJ lUAvU4fFRPZcDYi0X8+SYN9R/rO+tJxqDd3HsOPbS/m61HO/Q2lTMTtsaDC9uPvj PAZAuVWZHEACdW9HHqumC45/AM+3PYHub66xRalG7yYdiN3oTqCc2B3uV7Sj0BXI w7af5cuFCs+kXIRIlDzbzfp4Rg0/48twEKrrZr0R+44rjv3ERN/4PXfQNAfz1p8s T6Xwrb6cHdGvr8SPnB8529CpvmNAON8nMUT/B2Wx3v88ZKzDumygX61BKrcSe249 YI71hq9IsSBzoNwk5IQ0 =UghK -----END PGP SIGNATURE----- --LHvWgpbS7VDUdu2f-- From owner-freebsd-net@freebsd.org Wed Aug 19 18:00:52 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id D60359BE0F0 for ; Wed, 19 Aug 2015 18:00:52 +0000 (UTC) (envelope-from hiren@strugglingcoder.info) Received: from mail.strugglingcoder.info (strugglingcoder.info [65.19.130.35]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id C27971723; Wed, 19 Aug 2015 18:00:52 +0000 (UTC) (envelope-from hiren@strugglingcoder.info) Received: from localhost (unknown [10.1.1.3]) (Authenticated sender: hiren@strugglingcoder.info) by mail.strugglingcoder.info (Postfix) with ESMTPSA id D95D1E526; Wed, 19 Aug 2015 11:00:51 -0700 (PDT) Date: Wed, 19 Aug 2015 11:00:51 -0700 From: hiren panchasara To: Evgeny Khorokhorin , 
erj@freebsd.org Cc: freebsd-net@freebsd.org Subject: Re: FreeBSD 10.2-STABLE + Intel XL710 - free queues Message-ID: <20150819180051.GM94440@strugglingcoder.info> References: <55D49611.40603@maxnet.ru> MIME-Version: 1.0 Content-Type: multipart/signed; micalg=pgp-sha512; protocol="application/pgp-signature"; boundary="/JIF1IJL1ITjxcV4" Content-Disposition: inline In-Reply-To: <55D49611.40603@maxnet.ru> User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 18:00:53 -0000 --/JIF1IJL1ITjxcV4 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Content-Transfer-Encoding: quoted-printable On 08/19/15 at 05:43P, Evgeny Khorokhorin wrote: > Hi All, > > FreeBSD 10.2-STABLE > 2*CPU Intel E5-2643v3 with HyperThreading enabled > Intel XL710 network adapter > I updated the ixl driver to version 1.4.0 from download.intel.com > Every ixl interface creates 24 queues (6 cores * 2 HT * 2 CPUs) but > utilizes only 16-17 of them. What is the reason for this behavior, or is it a > driver bug? Not sure what the h/w limit is, but this may be a possible cause: #define IXLV_MAX_QUEUES 16 in sys/dev/ixl/ixlv.h and ixlv_init_msix() doing: if (queues > IXLV_MAX_QUEUES) queues = IXLV_MAX_QUEUES; Adding eric from intel to confirm.
Cheers, Hiren > > irq284: ixl0:q0 177563088 2054 > irq285: ixl0:q1 402668179 4659 > irq286: ixl0:q2 408885088 4731 > irq287: ixl0:q3 397744300 4602 > irq288: ixl0:q4 403040766 4663 > irq289: ixl0:q5 402499314 4657 > irq290: ixl0:q6 392693663 4543 > irq291: ixl0:q7 389364966 4505 > irq292: ixl0:q8 243244346 2814 > irq293: ixl0:q9 216834450 2509 > irq294: ixl0:q10 229460056 2655 > irq295: ixl0:q11 219591953 2540 > irq296: ixl0:q12 228944960 2649 > irq297: ixl0:q13 226385454 2619 > irq298: ixl0:q14 219174953 2536 > irq299: ixl0:q15 222151378 2570 > irq300: ixl0:q16 82799713 958 > irq301: ixl0:q17 6131 0 > irq302: ixl0:q18 5586 0 > irq303: ixl0:q19 6975 0 > irq304: ixl0:q20 6243 0 > irq305: ixl0:q21 6729 0 > irq306: ixl0:q22 6623 0 > irq307: ixl0:q23 7306 0 > irq309: ixl1:q0 174074462 2014 > irq310: ixl1:q1 435716449 5041 > irq311: ixl1:q2 431030443 4987 > irq312: ixl1:q3 424156413 4907 > irq313: ixl1:q4 414791657 4799 > irq314: ixl1:q5 420260382 4862 > irq315: ixl1:q6 415645708 4809 > irq316: ixl1:q7 422783859 4892 > irq317: ixl1:q8 252737383 2924 > irq318: ixl1:q9 269655708 3120 > irq319: ixl1:q10 252397826 2920 > irq320: ixl1:q11 255649144 2958 > irq321: ixl1:q12 246025621 2846 > irq322: ixl1:q13 240176554 2779 > irq323: ixl1:q14 254882418 2949 > irq324: ixl1:q15 236846536 2740 > irq325: ixl1:q16 86794467 1004 > irq326: ixl1:q17 83 0 > irq327: ixl1:q18 74 0 > irq328: ixl1:q19 202 0 > irq329: ixl1:q20 99 0 > irq330: ixl1:q21 96 0 > irq331: ixl1:q22 91 0 > irq332: ixl1:q23 89 0 > > last pid: 28710; load averages: 7.16, 6.76, 6.49 up 1+00:00:41 17:40:46 > 391 processes: 32 running, 215 sleeping, 144 waiting > CPU 0: 0.0% user, 0.0% nice, 0.0% system, 49.2% interrupt, 50.8% idle > CPU 1: 0.0% user, 0.0% nice, 0.4% system, 41.3% interrupt, 58.3% idle > CPU 2: 0.0% user, 0.0% nice, 0.0% system, 39.0% interrupt, 61.0% idle > CPU 3: 0.0% user, 0.0% nice, 0.0% system, 46.5% interrupt, 53.5% idle > CPU 4: 0.0% user, 0.0% nice, 0.0% system, 37.4% interrupt, 62.6% idle
> CPU 5: 0.0% user, 0.0% nice, 0.0% system, 40.9% interrupt, 59.1% idle > CPU 6: 0.0% user, 0.0% nice, 0.0% system, 40.2% interrupt, 59.8% idle > CPU 7: 0.0% user, 0.0% nice, 0.0% system, 45.3% interrupt, 54.7% idle > CPU 8: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, 79.5% idle > CPU 9: 0.0% user, 0.0% nice, 0.0% system, 25.2% interrupt, 74.8% idle > CPU 10: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, 76.8% idle > CPU 11: 0.0% user, 0.0% nice, 0.0% system, 19.3% interrupt, 80.7% idle > CPU 12: 0.0% user, 0.0% nice, 0.0% system, 28.7% interrupt, 71.3% idle > CPU 13: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, 79.5% idle > CPU 14: 0.0% user, 0.0% nice, 0.0% system, 35.0% interrupt, 65.0% idle > CPU 15: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, 76.8% idle > CPU 16: 0.0% user, 0.0% nice, 0.4% system, 1.2% interrupt, 98.4% idle > CPU 17: 0.0% user, 0.0% nice, 2.0% system, 0.0% interrupt, 98.0% idle > CPU 18: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, 97.6% idle > CPU 19: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, 97.2% idle > CPU 20: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, 97.6% idle > CPU 21: 0.0% user, 0.0% nice, 1.6% system, 0.0% interrupt, 98.4% idle > CPU 22: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, 97.2% idle > CPU 23: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, 99.6% idle > > # netstat -I ixl0 -w1 -h > input ixl0 output > packets errs idrops bytes packets errs bytes colls > 253K 0 0 136M 311K 0 325M 0 > 251K 0 0 129M 314K 0 334M 0 > 250K 0 0 135M 313K 0 333M 0 > > hw.ixl.tx_itr: 122 > hw.ixl.rx_itr: 62 > hw.ixl.dynamic_tx_itr: 0 > hw.ixl.dynamic_rx_itr: 0 > hw.ixl.max_queues: 0 > hw.ixl.ring_size: 4096 > hw.ixl.enable_msix: 1 > dev.ixl.3.mac.xoff_recvd: 0 > dev.ixl.3.mac.xoff_txd: 0 > dev.ixl.3.mac.xon_recvd: 0 > dev.ixl.3.mac.xon_txd: 0 > dev.ixl.3.mac.tx_frames_big: 0 > dev.ixl.3.mac.tx_frames_1024_1522: 0 > dev.ixl.3.mac.tx_frames_512_1023: 0 > dev.ixl.3.mac.tx_frames_256_511:
0 > dev.ixl.3.mac.tx_frames_128_255: 0 > dev.ixl.3.mac.tx_frames_65_127: 0 > dev.ixl.3.mac.tx_frames_64: 0 > dev.ixl.3.mac.checksum_errors: 0 > dev.ixl.3.mac.rx_jabber: 0 > dev.ixl.3.mac.rx_oversized: 0 > dev.ixl.3.mac.rx_fragmented: 0 > dev.ixl.3.mac.rx_undersize: 0 > dev.ixl.3.mac.rx_frames_big: 0 > dev.ixl.3.mac.rx_frames_1024_1522: 0 > dev.ixl.3.mac.rx_frames_512_1023: 0 > dev.ixl.3.mac.rx_frames_256_511: 0 > dev.ixl.3.mac.rx_frames_128_255: 0 > dev.ixl.3.mac.rx_frames_65_127: 0 > dev.ixl.3.mac.rx_frames_64: 0 > dev.ixl.3.mac.rx_length_errors: 0 > dev.ixl.3.mac.remote_faults: 0 > dev.ixl.3.mac.local_faults: 0 > dev.ixl.3.mac.illegal_bytes: 0 > dev.ixl.3.mac.crc_errors: 0 > dev.ixl.3.mac.bcast_pkts_txd: 0 > dev.ixl.3.mac.mcast_pkts_txd: 0 > dev.ixl.3.mac.ucast_pkts_txd: 0 > dev.ixl.3.mac.good_octets_txd: 0 > dev.ixl.3.mac.rx_discards: 0 > dev.ixl.3.mac.bcast_pkts_rcvd: 0 > dev.ixl.3.mac.mcast_pkts_rcvd: 0 > dev.ixl.3.mac.ucast_pkts_rcvd: 0 > dev.ixl.3.mac.good_octets_rcvd: 0 > dev.ixl.3.pf.que23.rx_bytes: 0 > dev.ixl.3.pf.que23.rx_packets: 0 > dev.ixl.3.pf.que23.tx_bytes: 0 > dev.ixl.3.pf.que23.tx_packets: 0 > dev.ixl.3.pf.que23.no_desc_avail: 0 > dev.ixl.3.pf.que23.tx_dma_setup: 0 > dev.ixl.3.pf.que23.tso_tx: 0 > dev.ixl.3.pf.que23.irqs: 0 > dev.ixl.3.pf.que23.dropped: 0 > dev.ixl.3.pf.que23.mbuf_defrag_failed: 0 > dev.ixl.3.pf.que22.rx_bytes: 0 > dev.ixl.3.pf.que22.rx_packets: 0 > dev.ixl.3.pf.que22.tx_bytes: 0 > dev.ixl.3.pf.que22.tx_packets: 0 > dev.ixl.3.pf.que22.no_desc_avail: 0 > dev.ixl.3.pf.que22.tx_dma_setup: 0 > dev.ixl.3.pf.que22.tso_tx: 0 > dev.ixl.3.pf.que22.irqs: 0 > dev.ixl.3.pf.que22.dropped: 0 > dev.ixl.3.pf.que22.mbuf_defrag_failed: 0 > dev.ixl.3.pf.que21.rx_bytes: 0 > dev.ixl.3.pf.que21.rx_packets: 0 > dev.ixl.3.pf.que21.tx_bytes: 0 > dev.ixl.3.pf.que21.tx_packets: 0 > dev.ixl.3.pf.que21.no_desc_avail: 0 > dev.ixl.3.pf.que21.tx_dma_setup: 0 > dev.ixl.3.pf.que21.tso_tx: 0 > dev.ixl.3.pf.que21.irqs: 0 > dev.ixl.3.pf.que21.dropped: 0 > 
dev.ixl.3.pf.que21.mbuf_defrag_failed: 0 > dev.ixl.3.pf.que20.rx_bytes: 0 > dev.ixl.3.pf.que20.rx_packets: 0 > dev.ixl.3.pf.que20.tx_bytes: 0 > dev.ixl.3.pf.que20.tx_packets: 0 > dev.ixl.3.pf.que20.no_desc_avail: 0 > dev.ixl.3.pf.que20.tx_dma_setup: 0 > dev.ixl.3.pf.que20.tso_tx: 0 > dev.ixl.3.pf.que20.irqs: 0 > dev.ixl.3.pf.que20.dropped: 0 > dev.ixl.3.pf.que20.mbuf_defrag_failed: 0 > dev.ixl.3.pf.que19.rx_bytes: 0 > dev.ixl.3.pf.que19.rx_packets: 0 > dev.ixl.3.pf.que19.tx_bytes: 0 > dev.ixl.3.pf.que19.tx_packets: 0 > dev.ixl.3.pf.que19.no_desc_avail: 0 > dev.ixl.3.pf.que19.tx_dma_setup: 0 > dev.ixl.3.pf.que19.tso_tx: 0 > dev.ixl.3.pf.que19.irqs: 0 > dev.ixl.3.pf.que19.dropped: 0 > dev.ixl.3.pf.que19.mbuf_defrag_failed: 0 > dev.ixl.3.pf.que18.rx_bytes: 0 > dev.ixl.3.pf.que18.rx_packets: 0 > dev.ixl.3.pf.que18.tx_bytes: 0 > dev.ixl.3.pf.que18.tx_packets: 0 > dev.ixl.3.pf.que18.no_desc_avail: 0 > dev.ixl.3.pf.que18.tx_dma_setup: 0 > dev.ixl.3.pf.que18.tso_tx: 0 > dev.ixl.3.pf.que18.irqs: 0 > dev.ixl.3.pf.que18.dropped: 0 > dev.ixl.3.pf.que18.mbuf_defrag_failed: 0 > dev.ixl.3.pf.que17.rx_bytes: 0 > dev.ixl.3.pf.que17.rx_packets: 0 > dev.ixl.3.pf.que17.tx_bytes: 0 > dev.ixl.3.pf.que17.tx_packets: 0 > dev.ixl.3.pf.que17.no_desc_avail: 0 > dev.ixl.3.pf.que17.tx_dma_setup: 0 > dev.ixl.3.pf.que17.tso_tx: 0 > dev.ixl.3.pf.que17.irqs: 0 > dev.ixl.3.pf.que17.dropped: 0 > dev.ixl.3.pf.que17.mbuf_defrag_failed: 0 > dev.ixl.3.pf.que16.rx_bytes: 0 > dev.ixl.3.pf.que16.rx_packets: 0 > dev.ixl.3.pf.que16.tx_bytes: 0 > dev.ixl.3.pf.que16.tx_packets: 0 > dev.ixl.3.pf.que16.no_desc_avail: 0 > dev.ixl.3.pf.que16.tx_dma_setup: 0 > dev.ixl.3.pf.que16.tso_tx: 0 > dev.ixl.3.pf.que16.irqs: 0 > dev.ixl.3.pf.que16.dropped: 0 > dev.ixl.3.pf.que16.mbuf_defrag_failed: 0 > dev.ixl.3.pf.que15.rx_bytes: 0 > dev.ixl.3.pf.que15.rx_packets: 0 > dev.ixl.3.pf.que15.tx_bytes: 0 > dev.ixl.3.pf.que15.tx_packets: 0 > dev.ixl.3.pf.que15.no_desc_avail: 0 > dev.ixl.3.pf.que15.tx_dma_setup: 0 > 
> dev.ixl.3.pf.que15.tso_tx: 0
> dev.ixl.3.pf.que15.irqs: 0
> dev.ixl.3.pf.que15.dropped: 0
> dev.ixl.3.pf.que15.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que14.rx_bytes: 0
> dev.ixl.3.pf.que14.rx_packets: 0
> dev.ixl.3.pf.que14.tx_bytes: 0
> dev.ixl.3.pf.que14.tx_packets: 0
> dev.ixl.3.pf.que14.no_desc_avail: 0
> dev.ixl.3.pf.que14.tx_dma_setup: 0
> dev.ixl.3.pf.que14.tso_tx: 0
> dev.ixl.3.pf.que14.irqs: 0
> dev.ixl.3.pf.que14.dropped: 0
> dev.ixl.3.pf.que14.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que13.rx_bytes: 0
> dev.ixl.3.pf.que13.rx_packets: 0
> dev.ixl.3.pf.que13.tx_bytes: 0
> dev.ixl.3.pf.que13.tx_packets: 0
> dev.ixl.3.pf.que13.no_desc_avail: 0
> dev.ixl.3.pf.que13.tx_dma_setup: 0
> dev.ixl.3.pf.que13.tso_tx: 0
> dev.ixl.3.pf.que13.irqs: 0
> dev.ixl.3.pf.que13.dropped: 0
> dev.ixl.3.pf.que13.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que12.rx_bytes: 0
> dev.ixl.3.pf.que12.rx_packets: 0
> dev.ixl.3.pf.que12.tx_bytes: 0
> dev.ixl.3.pf.que12.tx_packets: 0
> dev.ixl.3.pf.que12.no_desc_avail: 0
> dev.ixl.3.pf.que12.tx_dma_setup: 0
> dev.ixl.3.pf.que12.tso_tx: 0
> dev.ixl.3.pf.que12.irqs: 0
> dev.ixl.3.pf.que12.dropped: 0
> dev.ixl.3.pf.que12.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que11.rx_bytes: 0
> dev.ixl.3.pf.que11.rx_packets: 0
> dev.ixl.3.pf.que11.tx_bytes: 0
> dev.ixl.3.pf.que11.tx_packets: 0
> dev.ixl.3.pf.que11.no_desc_avail: 0
> dev.ixl.3.pf.que11.tx_dma_setup: 0
> dev.ixl.3.pf.que11.tso_tx: 0
> dev.ixl.3.pf.que11.irqs: 0
> dev.ixl.3.pf.que11.dropped: 0
> dev.ixl.3.pf.que11.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que10.rx_bytes: 0
> dev.ixl.3.pf.que10.rx_packets: 0
> dev.ixl.3.pf.que10.tx_bytes: 0
> dev.ixl.3.pf.que10.tx_packets: 0
> dev.ixl.3.pf.que10.no_desc_avail: 0
> dev.ixl.3.pf.que10.tx_dma_setup: 0
> dev.ixl.3.pf.que10.tso_tx: 0
> dev.ixl.3.pf.que10.irqs: 0
> dev.ixl.3.pf.que10.dropped: 0
> dev.ixl.3.pf.que10.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que9.rx_bytes: 0
> dev.ixl.3.pf.que9.rx_packets: 0
> dev.ixl.3.pf.que9.tx_bytes: 0
> dev.ixl.3.pf.que9.tx_packets: 0
> dev.ixl.3.pf.que9.no_desc_avail: 0
> dev.ixl.3.pf.que9.tx_dma_setup: 0
> dev.ixl.3.pf.que9.tso_tx: 0
> dev.ixl.3.pf.que9.irqs: 0
> dev.ixl.3.pf.que9.dropped: 0
> dev.ixl.3.pf.que9.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que8.rx_bytes: 0
> dev.ixl.3.pf.que8.rx_packets: 0
> dev.ixl.3.pf.que8.tx_bytes: 0
> dev.ixl.3.pf.que8.tx_packets: 0
> dev.ixl.3.pf.que8.no_desc_avail: 0
> dev.ixl.3.pf.que8.tx_dma_setup: 0
> dev.ixl.3.pf.que8.tso_tx: 0
> dev.ixl.3.pf.que8.irqs: 0
> dev.ixl.3.pf.que8.dropped: 0
> dev.ixl.3.pf.que8.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que7.rx_bytes: 0
> dev.ixl.3.pf.que7.rx_packets: 0
> dev.ixl.3.pf.que7.tx_bytes: 0
> dev.ixl.3.pf.que7.tx_packets: 0
> dev.ixl.3.pf.que7.no_desc_avail: 0
> dev.ixl.3.pf.que7.tx_dma_setup: 0
> dev.ixl.3.pf.que7.tso_tx: 0
> dev.ixl.3.pf.que7.irqs: 0
> dev.ixl.3.pf.que7.dropped: 0
> dev.ixl.3.pf.que7.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que6.rx_bytes: 0
> dev.ixl.3.pf.que6.rx_packets: 0
> dev.ixl.3.pf.que6.tx_bytes: 0
> dev.ixl.3.pf.que6.tx_packets: 0
> dev.ixl.3.pf.que6.no_desc_avail: 0
> dev.ixl.3.pf.que6.tx_dma_setup: 0
> dev.ixl.3.pf.que6.tso_tx: 0
> dev.ixl.3.pf.que6.irqs: 0
> dev.ixl.3.pf.que6.dropped: 0
> dev.ixl.3.pf.que6.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que5.rx_bytes: 0
> dev.ixl.3.pf.que5.rx_packets: 0
> dev.ixl.3.pf.que5.tx_bytes: 0
> dev.ixl.3.pf.que5.tx_packets: 0
> dev.ixl.3.pf.que5.no_desc_avail: 0
> dev.ixl.3.pf.que5.tx_dma_setup: 0
> dev.ixl.3.pf.que5.tso_tx: 0
> dev.ixl.3.pf.que5.irqs: 0
> dev.ixl.3.pf.que5.dropped: 0
> dev.ixl.3.pf.que5.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que4.rx_bytes: 0
> dev.ixl.3.pf.que4.rx_packets: 0
> dev.ixl.3.pf.que4.tx_bytes: 0
> dev.ixl.3.pf.que4.tx_packets: 0
> dev.ixl.3.pf.que4.no_desc_avail: 0
> dev.ixl.3.pf.que4.tx_dma_setup: 0
> dev.ixl.3.pf.que4.tso_tx: 0
> dev.ixl.3.pf.que4.irqs: 0
> dev.ixl.3.pf.que4.dropped: 0
> dev.ixl.3.pf.que4.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que3.rx_bytes: 0
> dev.ixl.3.pf.que3.rx_packets: 0
> dev.ixl.3.pf.que3.tx_bytes: 0
> dev.ixl.3.pf.que3.tx_packets: 0
> dev.ixl.3.pf.que3.no_desc_avail: 0
> dev.ixl.3.pf.que3.tx_dma_setup: 0
> dev.ixl.3.pf.que3.tso_tx: 0
> dev.ixl.3.pf.que3.irqs: 0
> dev.ixl.3.pf.que3.dropped: 0
> dev.ixl.3.pf.que3.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que2.rx_bytes: 0
> dev.ixl.3.pf.que2.rx_packets: 0
> dev.ixl.3.pf.que2.tx_bytes: 0
> dev.ixl.3.pf.que2.tx_packets: 0
> dev.ixl.3.pf.que2.no_desc_avail: 0
> dev.ixl.3.pf.que2.tx_dma_setup: 0
> dev.ixl.3.pf.que2.tso_tx: 0
> dev.ixl.3.pf.que2.irqs: 0
> dev.ixl.3.pf.que2.dropped: 0
> dev.ixl.3.pf.que2.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que1.rx_bytes: 0
> dev.ixl.3.pf.que1.rx_packets: 0
> dev.ixl.3.pf.que1.tx_bytes: 0
> dev.ixl.3.pf.que1.tx_packets: 0
> dev.ixl.3.pf.que1.no_desc_avail: 0
> dev.ixl.3.pf.que1.tx_dma_setup: 0
> dev.ixl.3.pf.que1.tso_tx: 0
> dev.ixl.3.pf.que1.irqs: 0
> dev.ixl.3.pf.que1.dropped: 0
> dev.ixl.3.pf.que1.mbuf_defrag_failed: 0
> dev.ixl.3.pf.que0.rx_bytes: 0
> dev.ixl.3.pf.que0.rx_packets: 0
> dev.ixl.3.pf.que0.tx_bytes: 0
> dev.ixl.3.pf.que0.tx_packets: 0
> dev.ixl.3.pf.que0.no_desc_avail: 0
> dev.ixl.3.pf.que0.tx_dma_setup: 0
> dev.ixl.3.pf.que0.tso_tx: 0
> dev.ixl.3.pf.que0.irqs: 0
> dev.ixl.3.pf.que0.dropped: 0
> dev.ixl.3.pf.que0.mbuf_defrag_failed: 0
> dev.ixl.3.pf.bcast_pkts_txd: 0
> dev.ixl.3.pf.mcast_pkts_txd: 0
> dev.ixl.3.pf.ucast_pkts_txd: 0
> dev.ixl.3.pf.good_octets_txd: 0
> dev.ixl.3.pf.rx_discards: 0
> dev.ixl.3.pf.bcast_pkts_rcvd: 0
> dev.ixl.3.pf.mcast_pkts_rcvd: 0
> dev.ixl.3.pf.ucast_pkts_rcvd: 0
> dev.ixl.3.pf.good_octets_rcvd: 0
> dev.ixl.3.vc_debug_level: 1
> dev.ixl.3.admin_irq: 0
> dev.ixl.3.watchdog_events: 0
> dev.ixl.3.debug: 0
> dev.ixl.3.dynamic_tx_itr: 0
> dev.ixl.3.tx_itr: 122
> dev.ixl.3.dynamic_rx_itr: 0
> dev.ixl.3.rx_itr: 62
> dev.ixl.3.fw_version: f4.33 a1.2 n04.42 e8000191d
> dev.ixl.3.current_speed: Unknown
> dev.ixl.3.advertise_speed: 0
> dev.ixl.3.fc: 0
> dev.ixl.3.%parent: pci129
> dev.ixl.3.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000
> dev.ixl.3.%location: slot=0 function=3 handle=\_SB_.PCI1.QR3A.H003
> dev.ixl.3.%driver: ixl
> dev.ixl.3.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0
> dev.ixl.2.mac.xoff_recvd: 0
> dev.ixl.2.mac.xoff_txd: 0
> dev.ixl.2.mac.xon_recvd: 0
> dev.ixl.2.mac.xon_txd: 0
> dev.ixl.2.mac.tx_frames_big: 0
> dev.ixl.2.mac.tx_frames_1024_1522: 0
> dev.ixl.2.mac.tx_frames_512_1023: 0
> dev.ixl.2.mac.tx_frames_256_511: 0
> dev.ixl.2.mac.tx_frames_128_255: 0
> dev.ixl.2.mac.tx_frames_65_127: 0
> dev.ixl.2.mac.tx_frames_64: 0
> dev.ixl.2.mac.checksum_errors: 0
> dev.ixl.2.mac.rx_jabber: 0
> dev.ixl.2.mac.rx_oversized: 0
> dev.ixl.2.mac.rx_fragmented: 0
> dev.ixl.2.mac.rx_undersize: 0
> dev.ixl.2.mac.rx_frames_big: 0
> dev.ixl.2.mac.rx_frames_1024_1522: 0
> dev.ixl.2.mac.rx_frames_512_1023: 0
> dev.ixl.2.mac.rx_frames_256_511: 0
> dev.ixl.2.mac.rx_frames_128_255: 0
> dev.ixl.2.mac.rx_frames_65_127: 0
> dev.ixl.2.mac.rx_frames_64: 0
> dev.ixl.2.mac.rx_length_errors: 0
> dev.ixl.2.mac.remote_faults: 0
> dev.ixl.2.mac.local_faults: 0
> dev.ixl.2.mac.illegal_bytes: 0
> dev.ixl.2.mac.crc_errors: 0
> dev.ixl.2.mac.bcast_pkts_txd: 0
> dev.ixl.2.mac.mcast_pkts_txd: 0
> dev.ixl.2.mac.ucast_pkts_txd: 0
> dev.ixl.2.mac.good_octets_txd: 0
> dev.ixl.2.mac.rx_discards: 0
> dev.ixl.2.mac.bcast_pkts_rcvd: 0
> dev.ixl.2.mac.mcast_pkts_rcvd: 0
> dev.ixl.2.mac.ucast_pkts_rcvd: 0
> dev.ixl.2.mac.good_octets_rcvd: 0
> dev.ixl.2.pf.que23.rx_bytes: 0
> dev.ixl.2.pf.que23.rx_packets: 0
> dev.ixl.2.pf.que23.tx_bytes: 0
> dev.ixl.2.pf.que23.tx_packets: 0
> dev.ixl.2.pf.que23.no_desc_avail: 0
> dev.ixl.2.pf.que23.tx_dma_setup: 0
> dev.ixl.2.pf.que23.tso_tx: 0
> dev.ixl.2.pf.que23.irqs: 0
> dev.ixl.2.pf.que23.dropped: 0
> dev.ixl.2.pf.que23.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que22.rx_bytes: 0
> dev.ixl.2.pf.que22.rx_packets: 0
> dev.ixl.2.pf.que22.tx_bytes: 0
> dev.ixl.2.pf.que22.tx_packets: 0
> dev.ixl.2.pf.que22.no_desc_avail: 0
> dev.ixl.2.pf.que22.tx_dma_setup: 0
> dev.ixl.2.pf.que22.tso_tx: 0
> dev.ixl.2.pf.que22.irqs: 0
> dev.ixl.2.pf.que22.dropped: 0
> dev.ixl.2.pf.que22.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que21.rx_bytes: 0
> dev.ixl.2.pf.que21.rx_packets: 0
> dev.ixl.2.pf.que21.tx_bytes: 0
> dev.ixl.2.pf.que21.tx_packets: 0
> dev.ixl.2.pf.que21.no_desc_avail: 0
> dev.ixl.2.pf.que21.tx_dma_setup: 0
> dev.ixl.2.pf.que21.tso_tx: 0
> dev.ixl.2.pf.que21.irqs: 0
> dev.ixl.2.pf.que21.dropped: 0
> dev.ixl.2.pf.que21.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que20.rx_bytes: 0
> dev.ixl.2.pf.que20.rx_packets: 0
> dev.ixl.2.pf.que20.tx_bytes: 0
> dev.ixl.2.pf.que20.tx_packets: 0
> dev.ixl.2.pf.que20.no_desc_avail: 0
> dev.ixl.2.pf.que20.tx_dma_setup: 0
> dev.ixl.2.pf.que20.tso_tx: 0
> dev.ixl.2.pf.que20.irqs: 0
> dev.ixl.2.pf.que20.dropped: 0
> dev.ixl.2.pf.que20.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que19.rx_bytes: 0
> dev.ixl.2.pf.que19.rx_packets: 0
> dev.ixl.2.pf.que19.tx_bytes: 0
> dev.ixl.2.pf.que19.tx_packets: 0
> dev.ixl.2.pf.que19.no_desc_avail: 0
> dev.ixl.2.pf.que19.tx_dma_setup: 0
> dev.ixl.2.pf.que19.tso_tx: 0
> dev.ixl.2.pf.que19.irqs: 0
> dev.ixl.2.pf.que19.dropped: 0
> dev.ixl.2.pf.que19.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que18.rx_bytes: 0
> dev.ixl.2.pf.que18.rx_packets: 0
> dev.ixl.2.pf.que18.tx_bytes: 0
> dev.ixl.2.pf.que18.tx_packets: 0
> dev.ixl.2.pf.que18.no_desc_avail: 0
> dev.ixl.2.pf.que18.tx_dma_setup: 0
> dev.ixl.2.pf.que18.tso_tx: 0
> dev.ixl.2.pf.que18.irqs: 0
> dev.ixl.2.pf.que18.dropped: 0
> dev.ixl.2.pf.que18.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que17.rx_bytes: 0
> dev.ixl.2.pf.que17.rx_packets: 0
> dev.ixl.2.pf.que17.tx_bytes: 0
> dev.ixl.2.pf.que17.tx_packets: 0
> dev.ixl.2.pf.que17.no_desc_avail: 0
> dev.ixl.2.pf.que17.tx_dma_setup: 0
> dev.ixl.2.pf.que17.tso_tx: 0
> dev.ixl.2.pf.que17.irqs: 0
> dev.ixl.2.pf.que17.dropped: 0
> dev.ixl.2.pf.que17.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que16.rx_bytes: 0
> dev.ixl.2.pf.que16.rx_packets: 0
> dev.ixl.2.pf.que16.tx_bytes: 0
> dev.ixl.2.pf.que16.tx_packets: 0
> dev.ixl.2.pf.que16.no_desc_avail: 0
> dev.ixl.2.pf.que16.tx_dma_setup: 0
> dev.ixl.2.pf.que16.tso_tx: 0
> dev.ixl.2.pf.que16.irqs: 0
> dev.ixl.2.pf.que16.dropped: 0
> dev.ixl.2.pf.que16.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que15.rx_bytes: 0
> dev.ixl.2.pf.que15.rx_packets: 0
> dev.ixl.2.pf.que15.tx_bytes: 0
> dev.ixl.2.pf.que15.tx_packets: 0
> dev.ixl.2.pf.que15.no_desc_avail: 0
> dev.ixl.2.pf.que15.tx_dma_setup: 0
> dev.ixl.2.pf.que15.tso_tx: 0
> dev.ixl.2.pf.que15.irqs: 0
> dev.ixl.2.pf.que15.dropped: 0
> dev.ixl.2.pf.que15.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que14.rx_bytes: 0
> dev.ixl.2.pf.que14.rx_packets: 0
> dev.ixl.2.pf.que14.tx_bytes: 0
> dev.ixl.2.pf.que14.tx_packets: 0
> dev.ixl.2.pf.que14.no_desc_avail: 0
> dev.ixl.2.pf.que14.tx_dma_setup: 0
> dev.ixl.2.pf.que14.tso_tx: 0
> dev.ixl.2.pf.que14.irqs: 0
> dev.ixl.2.pf.que14.dropped: 0
> dev.ixl.2.pf.que14.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que13.rx_bytes: 0
> dev.ixl.2.pf.que13.rx_packets: 0
> dev.ixl.2.pf.que13.tx_bytes: 0
> dev.ixl.2.pf.que13.tx_packets: 0
> dev.ixl.2.pf.que13.no_desc_avail: 0
> dev.ixl.2.pf.que13.tx_dma_setup: 0
> dev.ixl.2.pf.que13.tso_tx: 0
> dev.ixl.2.pf.que13.irqs: 0
> dev.ixl.2.pf.que13.dropped: 0
> dev.ixl.2.pf.que13.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que12.rx_bytes: 0
> dev.ixl.2.pf.que12.rx_packets: 0
> dev.ixl.2.pf.que12.tx_bytes: 0
> dev.ixl.2.pf.que12.tx_packets: 0
> dev.ixl.2.pf.que12.no_desc_avail: 0
> dev.ixl.2.pf.que12.tx_dma_setup: 0
> dev.ixl.2.pf.que12.tso_tx: 0
> dev.ixl.2.pf.que12.irqs: 0
> dev.ixl.2.pf.que12.dropped: 0
> dev.ixl.2.pf.que12.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que11.rx_bytes: 0
> dev.ixl.2.pf.que11.rx_packets: 0
> dev.ixl.2.pf.que11.tx_bytes: 0
> dev.ixl.2.pf.que11.tx_packets: 0
> dev.ixl.2.pf.que11.no_desc_avail: 0
> dev.ixl.2.pf.que11.tx_dma_setup: 0
> dev.ixl.2.pf.que11.tso_tx: 0
> dev.ixl.2.pf.que11.irqs: 0
> dev.ixl.2.pf.que11.dropped: 0
> dev.ixl.2.pf.que11.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que10.rx_bytes: 0
> dev.ixl.2.pf.que10.rx_packets: 0
> dev.ixl.2.pf.que10.tx_bytes: 0
> dev.ixl.2.pf.que10.tx_packets: 0
> dev.ixl.2.pf.que10.no_desc_avail: 0
> dev.ixl.2.pf.que10.tx_dma_setup: 0
> dev.ixl.2.pf.que10.tso_tx: 0
> dev.ixl.2.pf.que10.irqs: 0
> dev.ixl.2.pf.que10.dropped: 0
> dev.ixl.2.pf.que10.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que9.rx_bytes: 0
> dev.ixl.2.pf.que9.rx_packets: 0
> dev.ixl.2.pf.que9.tx_bytes: 0
> dev.ixl.2.pf.que9.tx_packets: 0
> dev.ixl.2.pf.que9.no_desc_avail: 0
> dev.ixl.2.pf.que9.tx_dma_setup: 0
> dev.ixl.2.pf.que9.tso_tx: 0
> dev.ixl.2.pf.que9.irqs: 0
> dev.ixl.2.pf.que9.dropped: 0
> dev.ixl.2.pf.que9.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que8.rx_bytes: 0
> dev.ixl.2.pf.que8.rx_packets: 0
> dev.ixl.2.pf.que8.tx_bytes: 0
> dev.ixl.2.pf.que8.tx_packets: 0
> dev.ixl.2.pf.que8.no_desc_avail: 0
> dev.ixl.2.pf.que8.tx_dma_setup: 0
> dev.ixl.2.pf.que8.tso_tx: 0
> dev.ixl.2.pf.que8.irqs: 0
> dev.ixl.2.pf.que8.dropped: 0
> dev.ixl.2.pf.que8.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que7.rx_bytes: 0
> dev.ixl.2.pf.que7.rx_packets: 0
> dev.ixl.2.pf.que7.tx_bytes: 0
> dev.ixl.2.pf.que7.tx_packets: 0
> dev.ixl.2.pf.que7.no_desc_avail: 0
> dev.ixl.2.pf.que7.tx_dma_setup: 0
> dev.ixl.2.pf.que7.tso_tx: 0
> dev.ixl.2.pf.que7.irqs: 0
> dev.ixl.2.pf.que7.dropped: 0
> dev.ixl.2.pf.que7.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que6.rx_bytes: 0
> dev.ixl.2.pf.que6.rx_packets: 0
> dev.ixl.2.pf.que6.tx_bytes: 0
> dev.ixl.2.pf.que6.tx_packets: 0
> dev.ixl.2.pf.que6.no_desc_avail: 0
> dev.ixl.2.pf.que6.tx_dma_setup: 0
> dev.ixl.2.pf.que6.tso_tx: 0
> dev.ixl.2.pf.que6.irqs: 0
> dev.ixl.2.pf.que6.dropped: 0
> dev.ixl.2.pf.que6.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que5.rx_bytes: 0
> dev.ixl.2.pf.que5.rx_packets: 0
> dev.ixl.2.pf.que5.tx_bytes: 0
> dev.ixl.2.pf.que5.tx_packets: 0
> dev.ixl.2.pf.que5.no_desc_avail: 0
> dev.ixl.2.pf.que5.tx_dma_setup: 0
> dev.ixl.2.pf.que5.tso_tx: 0
> dev.ixl.2.pf.que5.irqs: 0
> dev.ixl.2.pf.que5.dropped: 0
> dev.ixl.2.pf.que5.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que4.rx_bytes: 0
> dev.ixl.2.pf.que4.rx_packets: 0
> dev.ixl.2.pf.que4.tx_bytes: 0
> dev.ixl.2.pf.que4.tx_packets: 0
> dev.ixl.2.pf.que4.no_desc_avail: 0
> dev.ixl.2.pf.que4.tx_dma_setup: 0
> dev.ixl.2.pf.que4.tso_tx: 0
> dev.ixl.2.pf.que4.irqs: 0
> dev.ixl.2.pf.que4.dropped: 0
> dev.ixl.2.pf.que4.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que3.rx_bytes: 0
> dev.ixl.2.pf.que3.rx_packets: 0
> dev.ixl.2.pf.que3.tx_bytes: 0
> dev.ixl.2.pf.que3.tx_packets: 0
> dev.ixl.2.pf.que3.no_desc_avail: 0
> dev.ixl.2.pf.que3.tx_dma_setup: 0
> dev.ixl.2.pf.que3.tso_tx: 0
> dev.ixl.2.pf.que3.irqs: 0
> dev.ixl.2.pf.que3.dropped: 0
> dev.ixl.2.pf.que3.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que2.rx_bytes: 0
> dev.ixl.2.pf.que2.rx_packets: 0
> dev.ixl.2.pf.que2.tx_bytes: 0
> dev.ixl.2.pf.que2.tx_packets: 0
> dev.ixl.2.pf.que2.no_desc_avail: 0
> dev.ixl.2.pf.que2.tx_dma_setup: 0
> dev.ixl.2.pf.que2.tso_tx: 0
> dev.ixl.2.pf.que2.irqs: 0
> dev.ixl.2.pf.que2.dropped: 0
> dev.ixl.2.pf.que2.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que1.rx_bytes: 0
> dev.ixl.2.pf.que1.rx_packets: 0
> dev.ixl.2.pf.que1.tx_bytes: 0
> dev.ixl.2.pf.que1.tx_packets: 0
> dev.ixl.2.pf.que1.no_desc_avail: 0
> dev.ixl.2.pf.que1.tx_dma_setup: 0
> dev.ixl.2.pf.que1.tso_tx: 0
> dev.ixl.2.pf.que1.irqs: 0
> dev.ixl.2.pf.que1.dropped: 0
> dev.ixl.2.pf.que1.mbuf_defrag_failed: 0
> dev.ixl.2.pf.que0.rx_bytes: 0
> dev.ixl.2.pf.que0.rx_packets: 0
> dev.ixl.2.pf.que0.tx_bytes: 0
> dev.ixl.2.pf.que0.tx_packets: 0
> dev.ixl.2.pf.que0.no_desc_avail: 0
> dev.ixl.2.pf.que0.tx_dma_setup: 0
> dev.ixl.2.pf.que0.tso_tx: 0
> dev.ixl.2.pf.que0.irqs: 0
> dev.ixl.2.pf.que0.dropped: 0
> dev.ixl.2.pf.que0.mbuf_defrag_failed: 0
> dev.ixl.2.pf.bcast_pkts_txd: 0
> dev.ixl.2.pf.mcast_pkts_txd: 0
> dev.ixl.2.pf.ucast_pkts_txd: 0
> dev.ixl.2.pf.good_octets_txd: 0
> dev.ixl.2.pf.rx_discards: 0
> dev.ixl.2.pf.bcast_pkts_rcvd: 0
> dev.ixl.2.pf.mcast_pkts_rcvd: 0
> dev.ixl.2.pf.ucast_pkts_rcvd: 0
> dev.ixl.2.pf.good_octets_rcvd: 0
> dev.ixl.2.vc_debug_level: 1
> dev.ixl.2.admin_irq: 0
> dev.ixl.2.watchdog_events: 0
> dev.ixl.2.debug: 0
> dev.ixl.2.dynamic_tx_itr: 0
> dev.ixl.2.tx_itr: 122
> dev.ixl.2.dynamic_rx_itr: 0
> dev.ixl.2.rx_itr: 62
> dev.ixl.2.fw_version: f4.33 a1.2 n04.42 e8000191d
> dev.ixl.2.current_speed: Unknown
> dev.ixl.2.advertise_speed: 0
> dev.ixl.2.fc: 0
> dev.ixl.2.%parent: pci129
> dev.ixl.2.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000
> dev.ixl.2.%location: slot=0 function=2 handle=\_SB_.PCI1.QR3A.H002
> dev.ixl.2.%driver: ixl
> dev.ixl.2.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0
> dev.ixl.1.mac.xoff_recvd: 0
> dev.ixl.1.mac.xoff_txd: 0
> dev.ixl.1.mac.xon_recvd: 0
> dev.ixl.1.mac.xon_txd: 0
> dev.ixl.1.mac.tx_frames_big: 0
> dev.ixl.1.mac.tx_frames_1024_1522: 1565670684
> dev.ixl.1.mac.tx_frames_512_1023: 101286418
> dev.ixl.1.mac.tx_frames_256_511: 49713129
> dev.ixl.1.mac.tx_frames_128_255: 231617277
> dev.ixl.1.mac.tx_frames_65_127: 2052767669
> dev.ixl.1.mac.tx_frames_64: 1318689044
> dev.ixl.1.mac.checksum_errors: 0
> dev.ixl.1.mac.rx_jabber: 0
> dev.ixl.1.mac.rx_oversized: 0
> dev.ixl.1.mac.rx_fragmented: 0
> dev.ixl.1.mac.rx_undersize: 0
> dev.ixl.1.mac.rx_frames_big: 0
> dev.ixl.1.mac.rx_frames_1024_1522: 4960403414
> dev.ixl.1.mac.rx_frames_512_1023: 113675084
> dev.ixl.1.mac.rx_frames_256_511: 253904920
> dev.ixl.1.mac.rx_frames_128_255: 196369726
> dev.ixl.1.mac.rx_frames_65_127: 1436626211
> dev.ixl.1.mac.rx_frames_64: 242768681
> dev.ixl.1.mac.rx_length_errors: 0
> dev.ixl.1.mac.remote_faults: 0
> dev.ixl.1.mac.local_faults: 0
> dev.ixl.1.mac.illegal_bytes: 0
> dev.ixl.1.mac.crc_errors: 0
> dev.ixl.1.mac.bcast_pkts_txd: 277
> dev.ixl.1.mac.mcast_pkts_txd: 0
> dev.ixl.1.mac.ucast_pkts_txd: 5319743942
> dev.ixl.1.mac.good_octets_txd: 2642351885737
> dev.ixl.1.mac.rx_discards: 0
> dev.ixl.1.mac.bcast_pkts_rcvd: 5
> dev.ixl.1.mac.mcast_pkts_rcvd: 144
> dev.ixl.1.mac.ucast_pkts_rcvd: 7203747879
> dev.ixl.1.mac.good_octets_rcvd: 7770230492434
> dev.ixl.1.pf.que23.rx_bytes: 0
> dev.ixl.1.pf.que23.rx_packets: 0
> dev.ixl.1.pf.que23.tx_bytes: 7111
> dev.ixl.1.pf.que23.tx_packets: 88
> dev.ixl.1.pf.que23.no_desc_avail: 0
> dev.ixl.1.pf.que23.tx_dma_setup: 0
> dev.ixl.1.pf.que23.tso_tx: 0
> dev.ixl.1.pf.que23.irqs: 88
> dev.ixl.1.pf.que23.dropped: 0
> dev.ixl.1.pf.que23.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que22.rx_bytes: 0
> dev.ixl.1.pf.que22.rx_packets: 0
> dev.ixl.1.pf.que22.tx_bytes: 6792
> dev.ixl.1.pf.que22.tx_packets: 88
> dev.ixl.1.pf.que22.no_desc_avail: 0
> dev.ixl.1.pf.que22.tx_dma_setup: 0
> dev.ixl.1.pf.que22.tso_tx: 0
> dev.ixl.1.pf.que22.irqs: 89
> dev.ixl.1.pf.que22.dropped: 0
> dev.ixl.1.pf.que22.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que21.rx_bytes: 0
> dev.ixl.1.pf.que21.rx_packets: 0
> dev.ixl.1.pf.que21.tx_bytes: 7486
> dev.ixl.1.pf.que21.tx_packets: 93
> dev.ixl.1.pf.que21.no_desc_avail: 0
> dev.ixl.1.pf.que21.tx_dma_setup: 0
> dev.ixl.1.pf.que21.tso_tx: 0
> dev.ixl.1.pf.que21.irqs: 95
> dev.ixl.1.pf.que21.dropped: 0
> dev.ixl.1.pf.que21.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que20.rx_bytes: 0
> dev.ixl.1.pf.que20.rx_packets: 0
> dev.ixl.1.pf.que20.tx_bytes: 7850
> dev.ixl.1.pf.que20.tx_packets: 98
> dev.ixl.1.pf.que20.no_desc_avail: 0
> dev.ixl.1.pf.que20.tx_dma_setup: 0
> dev.ixl.1.pf.que20.tso_tx: 0
> dev.ixl.1.pf.que20.irqs: 99
> dev.ixl.1.pf.que20.dropped: 0
> dev.ixl.1.pf.que20.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que19.rx_bytes: 0
> dev.ixl.1.pf.que19.rx_packets: 0
> dev.ixl.1.pf.que19.tx_bytes: 64643
> dev.ixl.1.pf.que19.tx_packets: 202
> dev.ixl.1.pf.que19.no_desc_avail: 0
> dev.ixl.1.pf.que19.tx_dma_setup: 0
> dev.ixl.1.pf.que19.tso_tx: 0
> dev.ixl.1.pf.que19.irqs: 202
> dev.ixl.1.pf.que19.dropped: 0
> dev.ixl.1.pf.que19.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que18.rx_bytes: 0
> dev.ixl.1.pf.que18.rx_packets: 0
> dev.ixl.1.pf.que18.tx_bytes: 5940
> dev.ixl.1.pf.que18.tx_packets: 74
> dev.ixl.1.pf.que18.no_desc_avail: 0
> dev.ixl.1.pf.que18.tx_dma_setup: 0
> dev.ixl.1.pf.que18.tso_tx: 0
> dev.ixl.1.pf.que18.irqs: 74
> dev.ixl.1.pf.que18.dropped: 0
> dev.ixl.1.pf.que18.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que17.rx_bytes: 0
> dev.ixl.1.pf.que17.rx_packets: 0
> dev.ixl.1.pf.que17.tx_bytes: 11675
> dev.ixl.1.pf.que17.tx_packets: 83
> dev.ixl.1.pf.que17.no_desc_avail: 0
> dev.ixl.1.pf.que17.tx_dma_setup: 0
> dev.ixl.1.pf.que17.tso_tx: 0
> dev.ixl.1.pf.que17.irqs: 83
> dev.ixl.1.pf.que17.dropped: 0
> dev.ixl.1.pf.que17.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que16.rx_bytes: 0
> dev.ixl.1.pf.que16.rx_packets: 0
> dev.ixl.1.pf.que16.tx_bytes: 105750457831
> dev.ixl.1.pf.que16.tx_packets: 205406766
> dev.ixl.1.pf.que16.no_desc_avail: 0
> dev.ixl.1.pf.que16.tx_dma_setup: 0
> dev.ixl.1.pf.que16.tso_tx: 0
> dev.ixl.1.pf.que16.irqs: 87222978
> dev.ixl.1.pf.que16.dropped: 0
> dev.ixl.1.pf.que16.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que15.rx_bytes: 289558174088
> dev.ixl.1.pf.que15.rx_packets: 272466190
> dev.ixl.1.pf.que15.tx_bytes: 106152524681
> dev.ixl.1.pf.que15.tx_packets: 205379247
> dev.ixl.1.pf.que15.no_desc_avail: 0
> dev.ixl.1.pf.que15.tx_dma_setup: 0
> dev.ixl.1.pf.que15.tso_tx: 0
> dev.ixl.1.pf.que15.irqs: 238145862
> dev.ixl.1.pf.que15.dropped: 0
> dev.ixl.1.pf.que15.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que14.rx_bytes: 301934533473
> dev.ixl.1.pf.que14.rx_packets: 298452930
> dev.ixl.1.pf.que14.tx_bytes: 111420393725
> dev.ixl.1.pf.que14.tx_packets: 215722532
> dev.ixl.1.pf.que14.no_desc_avail: 0
> dev.ixl.1.pf.que14.tx_dma_setup: 0
> dev.ixl.1.pf.que14.tso_tx: 0
> dev.ixl.1.pf.que14.irqs: 256291617
> dev.ixl.1.pf.que14.dropped: 0
> dev.ixl.1.pf.que14.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que13.rx_bytes: 291380746253
> dev.ixl.1.pf.que13.rx_packets: 273037957
> dev.ixl.1.pf.que13.tx_bytes: 112417776222
> dev.ixl.1.pf.que13.tx_packets: 217500943
> dev.ixl.1.pf.que13.no_desc_avail: 0
> dev.ixl.1.pf.que13.tx_dma_setup: 0
> dev.ixl.1.pf.que13.tso_tx: 0
> dev.ixl.1.pf.que13.irqs: 241422331
> dev.ixl.1.pf.que13.dropped: 0
> dev.ixl.1.pf.que13.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que12.rx_bytes: 301105585425
> dev.ixl.1.pf.que12.rx_packets: 286137817
> dev.ixl.1.pf.que12.tx_bytes: 95851784579
> dev.ixl.1.pf.que12.tx_packets: 199715765
> dev.ixl.1.pf.que12.no_desc_avail: 0
> dev.ixl.1.pf.que12.tx_dma_setup: 0
> dev.ixl.1.pf.que12.tso_tx: 0
> dev.ixl.1.pf.que12.irqs: 247322880
> dev.ixl.1.pf.que12.dropped: 0
> dev.ixl.1.pf.que12.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que11.rx_bytes: 307105398143
> dev.ixl.1.pf.que11.rx_packets: 281046463
> dev.ixl.1.pf.que11.tx_bytes: 110710957789
> dev.ixl.1.pf.que11.tx_packets: 211784031
> dev.ixl.1.pf.que11.no_desc_avail: 0
> dev.ixl.1.pf.que11.tx_dma_setup: 0
> dev.ixl.1.pf.que11.tso_tx: 0
> dev.ixl.1.pf.que11.irqs: 256987179
> dev.ixl.1.pf.que11.dropped: 0
> dev.ixl.1.pf.que11.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que10.rx_bytes: 304288000453
> dev.ixl.1.pf.que10.rx_packets: 278987858
> dev.ixl.1.pf.que10.tx_bytes: 93022244338
> dev.ixl.1.pf.que10.tx_packets: 195869210
> dev.ixl.1.pf.que10.no_desc_avail: 0
> dev.ixl.1.pf.que10.tx_dma_setup: 0
> dev.ixl.1.pf.que10.tso_tx: 0
> dev.ixl.1.pf.que10.irqs: 253622192
> dev.ixl.1.pf.que10.dropped: 0
> dev.ixl.1.pf.que10.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que9.rx_bytes: 320340203822
> dev.ixl.1.pf.que9.rx_packets: 302309010
> dev.ixl.1.pf.que9.tx_bytes: 116604776460
> dev.ixl.1.pf.que9.tx_packets: 223949025
> dev.ixl.1.pf.que9.no_desc_avail: 0
> dev.ixl.1.pf.que9.tx_dma_setup: 0
> dev.ixl.1.pf.que9.tso_tx: 0
> dev.ixl.1.pf.que9.irqs: 271165440
> dev.ixl.1.pf.que9.dropped: 0
> dev.ixl.1.pf.que9.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que8.rx_bytes: 291403725592
> dev.ixl.1.pf.que8.rx_packets: 267859568
> dev.ixl.1.pf.que8.tx_bytes: 205745654558
> dev.ixl.1.pf.que8.tx_packets: 443349835
> dev.ixl.1.pf.que8.no_desc_avail: 0
> dev.ixl.1.pf.que8.tx_dma_setup: 0
> dev.ixl.1.pf.que8.tso_tx: 0
> dev.ixl.1.pf.que8.irqs: 254116755
> dev.ixl.1.pf.que8.dropped: 0
> dev.ixl.1.pf.que8.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que7.rx_bytes: 673363127346
> dev.ixl.1.pf.que7.rx_packets: 617269774
> dev.ixl.1.pf.que7.tx_bytes: 203162891886
> dev.ixl.1.pf.que7.tx_packets: 443709339
> dev.ixl.1.pf.que7.no_desc_avail: 0
> dev.ixl.1.pf.que7.tx_dma_setup: 0
> dev.ixl.1.pf.que7.tso_tx: 0
> dev.ixl.1.pf.que7.irqs: 424706771
> dev.ixl.1.pf.que7.dropped: 0
> dev.ixl.1.pf.que7.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que6.rx_bytes: 644709094218
> dev.ixl.1.pf.que6.rx_packets: 601892919
> dev.ixl.1.pf.que6.tx_bytes: 221661735032
> dev.ixl.1.pf.que6.tx_packets: 460127064
> dev.ixl.1.pf.que6.no_desc_avail: 0
> dev.ixl.1.pf.que6.tx_dma_setup: 0
> dev.ixl.1.pf.que6.tso_tx: 0
> dev.ixl.1.pf.que6.irqs: 417748074
> dev.ixl.1.pf.que6.dropped: 0
> dev.ixl.1.pf.que6.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que5.rx_bytes: 661904432231
> dev.ixl.1.pf.que5.rx_packets: 622012837
> dev.ixl.1.pf.que5.tx_bytes: 230514282876
> dev.ixl.1.pf.que5.tx_packets: 458571100
> dev.ixl.1.pf.que5.no_desc_avail: 0
> dev.ixl.1.pf.que5.tx_dma_setup: 0
> dev.ixl.1.pf.que5.tso_tx: 0
> dev.ixl.1.pf.que5.irqs: 422305039
> dev.ixl.1.pf.que5.dropped: 0
> dev.ixl.1.pf.que5.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que4.rx_bytes: 653522179234
> dev.ixl.1.pf.que4.rx_packets: 603345546
> dev.ixl.1.pf.que4.tx_bytes: 216761219483
> dev.ixl.1.pf.que4.tx_packets: 450329641
> dev.ixl.1.pf.que4.no_desc_avail: 0
> dev.ixl.1.pf.que4.tx_dma_setup: 0
> dev.ixl.1.pf.que4.tso_tx: 3
> dev.ixl.1.pf.que4.irqs: 416920533
> dev.ixl.1.pf.que4.dropped: 0
> dev.ixl.1.pf.que4.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que3.rx_bytes: 676494225882
> dev.ixl.1.pf.que3.rx_packets: 620605168
> dev.ixl.1.pf.que3.tx_bytes: 233854020454
> dev.ixl.1.pf.que3.tx_packets: 464425616
> dev.ixl.1.pf.que3.no_desc_avail: 0
> dev.ixl.1.pf.que3.tx_dma_setup: 0
> dev.ixl.1.pf.que3.tso_tx: 0
> dev.ixl.1.pf.que3.irqs: 426349030
> dev.ixl.1.pf.que3.dropped: 0
> dev.ixl.1.pf.que3.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que2.rx_bytes: 677779337711
> dev.ixl.1.pf.que2.rx_packets: 620883699
> dev.ixl.1.pf.que2.tx_bytes: 211297141668
> dev.ixl.1.pf.que2.tx_packets: 450501525
> dev.ixl.1.pf.que2.no_desc_avail: 0
> dev.ixl.1.pf.que2.tx_dma_setup: 0
> dev.ixl.1.pf.que2.tso_tx: 0
> dev.ixl.1.pf.que2.irqs: 433146278
> dev.ixl.1.pf.que2.dropped: 0
> dev.ixl.1.pf.que2.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que1.rx_bytes: 661360798018
> dev.ixl.1.pf.que1.rx_packets: 619700636
> dev.ixl.1.pf.que1.tx_bytes: 238264220772
> dev.ixl.1.pf.que1.tx_packets: 473425354
> dev.ixl.1.pf.que1.no_desc_avail: 0
> dev.ixl.1.pf.que1.tx_dma_setup: 0
> dev.ixl.1.pf.que1.tso_tx: 0
> dev.ixl.1.pf.que1.irqs: 437959829
> dev.ixl.1.pf.que1.dropped: 0
> dev.ixl.1.pf.que1.mbuf_defrag_failed: 0
> dev.ixl.1.pf.que0.rx_bytes: 685201226330
> dev.ixl.1.pf.que0.rx_packets: 637772348
> dev.ixl.1.pf.que0.tx_bytes: 124808
> dev.ixl.1.pf.que0.tx_packets: 1782
> dev.ixl.1.pf.que0.no_desc_avail: 0
> dev.ixl.1.pf.que0.tx_dma_setup: 0
> dev.ixl.1.pf.que0.tso_tx: 0
> dev.ixl.1.pf.que0.irqs: 174905480
> dev.ixl.1.pf.que0.dropped: 0
> dev.ixl.1.pf.que0.mbuf_defrag_failed: 0
> dev.ixl.1.pf.bcast_pkts_txd: 277
> dev.ixl.1.pf.mcast_pkts_txd: 0
> dev.ixl.1.pf.ucast_pkts_txd: 5319743945
> dev.ixl.1.pf.good_octets_txd: 2613178367282
> dev.ixl.1.pf.rx_discards: 0
> dev.ixl.1.pf.bcast_pkts_rcvd: 1
> dev.ixl.1.pf.mcast_pkts_rcvd: 0
> dev.ixl.1.pf.ucast_pkts_rcvd: 7203747890
> dev.ixl.1.pf.good_octets_rcvd: 7770230490224
> dev.ixl.1.vc_debug_level: 1
> dev.ixl.1.admin_irq: 0
> dev.ixl.1.watchdog_events: 0
> dev.ixl.1.debug: 0
> dev.ixl.1.dynamic_tx_itr: 0
> dev.ixl.1.tx_itr: 122
> dev.ixl.1.dynamic_rx_itr: 0
> dev.ixl.1.rx_itr: 62
> dev.ixl.1.fw_version: f4.33 a1.2 n04.42 e8000191d
> dev.ixl.1.current_speed: 10G
> dev.ixl.1.advertise_speed: 0
> dev.ixl.1.fc: 0
> dev.ixl.1.%parent: pci129
> dev.ixl.1.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000
> dev.ixl.1.%location: slot=0 function=1 handle=\_SB_.PCI1.QR3A.H001
> dev.ixl.1.%driver: ixl
> dev.ixl.1.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0
> dev.ixl.0.mac.xoff_recvd: 0
> dev.ixl.0.mac.xoff_txd: 0
> dev.ixl.0.mac.xon_recvd: 0
> dev.ixl.0.mac.xon_txd: 0
> dev.ixl.0.mac.tx_frames_big: 0
> dev.ixl.0.mac.tx_frames_1024_1522: 4961134019
> dev.ixl.0.mac.tx_frames_512_1023: 113082136
> dev.ixl.0.mac.tx_frames_256_511: 123538450
> dev.ixl.0.mac.tx_frames_128_255: 185051082
> dev.ixl.0.mac.tx_frames_65_127: 1332798493
> dev.ixl.0.mac.tx_frames_64: 243338964
> dev.ixl.0.mac.checksum_errors: 0
> dev.ixl.0.mac.rx_jabber: 0
> dev.ixl.0.mac.rx_oversized: 0
> dev.ixl.0.mac.rx_fragmented: 0
> dev.ixl.0.mac.rx_undersize: 0
> dev.ixl.0.mac.rx_frames_big: 0
> dev.ixl.0.mac.rx_frames_1024_1522: 1566499069
> dev.ixl.0.mac.rx_frames_512_1023: 101390143
> dev.ixl.0.mac.rx_frames_256_511: 49831970
> dev.ixl.0.mac.rx_frames_128_255: 231738168
> dev.ixl.0.mac.rx_frames_65_127: 2123185819
> dev.ixl.0.mac.rx_frames_64: 1320404300
> dev.ixl.0.mac.rx_length_errors: 0
> dev.ixl.0.mac.remote_faults: 0
> dev.ixl.0.mac.local_faults: 0
> dev.ixl.0.mac.illegal_bytes: 0
> dev.ixl.0.mac.crc_errors: 0
> dev.ixl.0.mac.bcast_pkts_txd: 302
> dev.ixl.0.mac.mcast_pkts_txd: 33965
> dev.ixl.0.mac.ucast_pkts_txd: 6958908862
> dev.ixl.0.mac.good_octets_txd: 7698936138858
> dev.ixl.0.mac.rx_discards: 0
> dev.ixl.0.mac.bcast_pkts_rcvd: 1
> dev.ixl.0.mac.mcast_pkts_rcvd: 49693
> dev.ixl.0.mac.ucast_pkts_rcvd: 5392999771
> dev.ixl.0.mac.good_octets_rcvd: 2648906893811
> dev.ixl.0.pf.que23.rx_bytes: 0
> dev.ixl.0.pf.que23.rx_packets: 0
> dev.ixl.0.pf.que23.tx_bytes: 2371273
> dev.ixl.0.pf.que23.tx_packets: 7313
> dev.ixl.0.pf.que23.no_desc_avail: 0
> dev.ixl.0.pf.que23.tx_dma_setup: 0
> dev.ixl.0.pf.que23.tso_tx: 0
> dev.ixl.0.pf.que23.irqs: 7313
> dev.ixl.0.pf.que23.dropped: 0
> dev.ixl.0.pf.que23.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que22.rx_bytes: 0
> dev.ixl.0.pf.que22.rx_packets: 0
> dev.ixl.0.pf.que22.tx_bytes: 1908468
> dev.ixl.0.pf.que22.tx_packets: 6626
> dev.ixl.0.pf.que22.no_desc_avail: 0
> dev.ixl.0.pf.que22.tx_dma_setup: 0
> dev.ixl.0.pf.que22.tso_tx: 0
> dev.ixl.0.pf.que22.irqs: 6627
> dev.ixl.0.pf.que22.dropped: 0
> dev.ixl.0.pf.que22.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que21.rx_bytes: 0
> dev.ixl.0.pf.que21.rx_packets: 0
> dev.ixl.0.pf.que21.tx_bytes: 2092668
> dev.ixl.0.pf.que21.tx_packets: 6739
> dev.ixl.0.pf.que21.no_desc_avail: 0
> dev.ixl.0.pf.que21.tx_dma_setup: 0
> dev.ixl.0.pf.que21.tso_tx: 0
> dev.ixl.0.pf.que21.irqs: 6728
> dev.ixl.0.pf.que21.dropped: 0
> dev.ixl.0.pf.que21.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que20.rx_bytes: 0
> dev.ixl.0.pf.que20.rx_packets: 0
> dev.ixl.0.pf.que20.tx_bytes: 1742176
> dev.ixl.0.pf.que20.tx_packets: 6246
> dev.ixl.0.pf.que20.no_desc_avail: 0
> dev.ixl.0.pf.que20.tx_dma_setup: 0
> dev.ixl.0.pf.que20.tso_tx: 0
> dev.ixl.0.pf.que20.irqs: 6249
> dev.ixl.0.pf.que20.dropped: 0
> dev.ixl.0.pf.que20.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que19.rx_bytes: 0
> dev.ixl.0.pf.que19.rx_packets: 0
> dev.ixl.0.pf.que19.tx_bytes: 2102284
> dev.ixl.0.pf.que19.tx_packets: 6979
> dev.ixl.0.pf.que19.no_desc_avail: 0
> dev.ixl.0.pf.que19.tx_dma_setup: 0
> dev.ixl.0.pf.que19.tso_tx: 0
> dev.ixl.0.pf.que19.irqs: 6979
> dev.ixl.0.pf.que19.dropped: 0
> dev.ixl.0.pf.que19.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que18.rx_bytes: 0
> dev.ixl.0.pf.que18.rx_packets: 0
> dev.ixl.0.pf.que18.tx_bytes: 1532360
> dev.ixl.0.pf.que18.tx_packets: 5588
> dev.ixl.0.pf.que18.no_desc_avail: 0
> dev.ixl.0.pf.que18.tx_dma_setup: 0
> dev.ixl.0.pf.que18.tso_tx: 0
> dev.ixl.0.pf.que18.irqs: 5588
> dev.ixl.0.pf.que18.dropped: 0
> dev.ixl.0.pf.que18.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que17.rx_bytes: 0
> dev.ixl.0.pf.que17.rx_packets: 0
> dev.ixl.0.pf.que17.tx_bytes: 1809684
> dev.ixl.0.pf.que17.tx_packets: 6136
> dev.ixl.0.pf.que17.no_desc_avail: 0
> dev.ixl.0.pf.que17.tx_dma_setup: 0
> dev.ixl.0.pf.que17.tso_tx: 0
> dev.ixl.0.pf.que17.irqs: 6136
> dev.ixl.0.pf.que17.dropped: 0
> dev.ixl.0.pf.que17.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que16.rx_bytes: 0
> dev.ixl.0.pf.que16.rx_packets: 0
> dev.ixl.0.pf.que16.tx_bytes: 286836299105
> dev.ixl.0.pf.que16.tx_packets: 263532601
> dev.ixl.0.pf.que16.no_desc_avail: 0
> dev.ixl.0.pf.que16.tx_dma_setup: 0
> dev.ixl.0.pf.que16.tso_tx: 0
> dev.ixl.0.pf.que16.irqs: 83232941
> dev.ixl.0.pf.que16.dropped: 0
> dev.ixl.0.pf.que16.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que15.rx_bytes: 106345323488
> dev.ixl.0.pf.que15.rx_packets: 208869912
> dev.ixl.0.pf.que15.tx_bytes: 298825179301
> dev.ixl.0.pf.que15.tx_packets: 288517504
> dev.ixl.0.pf.que15.no_desc_avail: 0
> dev.ixl.0.pf.que15.tx_dma_setup: 0
> dev.ixl.0.pf.que15.tso_tx: 0
> dev.ixl.0.pf.que15.irqs: 223322408
> dev.ixl.0.pf.que15.dropped: 0
> dev.ixl.0.pf.que15.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que14.rx_bytes: 106721900547
> dev.ixl.0.pf.que14.rx_packets: 208566121
> dev.ixl.0.pf.que14.tx_bytes: 288657751920
> dev.ixl.0.pf.que14.tx_packets: 263556000
> dev.ixl.0.pf.que14.no_desc_avail: 0
> dev.ixl.0.pf.que14.tx_dma_setup: 0
> dev.ixl.0.pf.que14.tso_tx: 0
> dev.ixl.0.pf.que14.irqs: 220377537
> dev.ixl.0.pf.que14.dropped: 0
> dev.ixl.0.pf.que14.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que13.rx_bytes: 111978971378
> dev.ixl.0.pf.que13.rx_packets: 218447354
> dev.ixl.0.pf.que13.tx_bytes: 298439860675
> dev.ixl.0.pf.que13.tx_packets: 276806617
> dev.ixl.0.pf.que13.no_desc_avail: 0
> dev.ixl.0.pf.que13.tx_dma_setup: 0
> dev.ixl.0.pf.que13.tso_tx: 0
> dev.ixl.0.pf.que13.irqs: 227474625
> dev.ixl.0.pf.que13.dropped: 0
> dev.ixl.0.pf.que13.mbuf_defrag_failed: 0
> dev.ixl.0.pf.que12.rx_bytes: 112969704706
> dev.ixl.0.pf.que12.rx_packets: 220275562
> dev.ixl.0.pf.que12.tx_bytes: 304750620079
> dev.ixl.0.pf.que12.tx_packets: 272244483
> dev.ixl.0.pf.que12.no_desc_avail: 0
> dev.ixl.0.pf.que12.tx_dma_setup: 0
> dev.ixl.0.pf.que12.tso_tx: 183
> dev.ixl.0.pf.que12.irqs: 230111291
> dev.ixl.0.pf.que12.dropped: 0
dev.ixl.0.pf.que12.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que11.rx_bytes: 96405343036 > dev.ixl.0.pf.que11.rx_packets: 202329448 > dev.ixl.0.pf.que11.tx_bytes: 302481707696 > dev.ixl.0.pf.que11.tx_packets: 271689246 > dev.ixl.0.pf.que11.no_desc_avail: 0 > dev.ixl.0.pf.que11.tx_dma_setup: 0 > dev.ixl.0.pf.que11.tso_tx: 0 > dev.ixl.0.pf.que11.irqs: 220717612 > dev.ixl.0.pf.que11.dropped: 0 > dev.ixl.0.pf.que11.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que10.rx_bytes: 111280008670 > dev.ixl.0.pf.que10.rx_packets: 214900261 > dev.ixl.0.pf.que10.tx_bytes: 318638566198 > dev.ixl.0.pf.que10.tx_packets: 295011389 > dev.ixl.0.pf.que10.no_desc_avail: 0 > dev.ixl.0.pf.que10.tx_dma_setup: 0 > dev.ixl.0.pf.que10.tso_tx: 0 > dev.ixl.0.pf.que10.irqs: 230681709 > dev.ixl.0.pf.que10.dropped: 0 > dev.ixl.0.pf.que10.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que9.rx_bytes: 93566025126 > dev.ixl.0.pf.que9.rx_packets: 198726483 > dev.ixl.0.pf.que9.tx_bytes: 288858818348 > dev.ixl.0.pf.que9.tx_packets: 258926864 > dev.ixl.0.pf.que9.no_desc_avail: 0 > dev.ixl.0.pf.que9.tx_dma_setup: 0 > dev.ixl.0.pf.que9.tso_tx: 0 > dev.ixl.0.pf.que9.irqs: 217918160 > dev.ixl.0.pf.que9.dropped: 0 > dev.ixl.0.pf.que9.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que8.rx_bytes: 117169019041 > dev.ixl.0.pf.que8.rx_packets: 226938172 > dev.ixl.0.pf.que8.tx_bytes: 665794492752 > dev.ixl.0.pf.que8.tx_packets: 593519436 > dev.ixl.0.pf.que8.no_desc_avail: 0 > dev.ixl.0.pf.que8.tx_dma_setup: 0 > dev.ixl.0.pf.que8.tso_tx: 0 > dev.ixl.0.pf.que8.irqs: 244643578 > dev.ixl.0.pf.que8.dropped: 0 > dev.ixl.0.pf.que8.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que7.rx_bytes: 206974266022 > dev.ixl.0.pf.que7.rx_packets: 449899895 > dev.ixl.0.pf.que7.tx_bytes: 638527685820 > dev.ixl.0.pf.que7.tx_packets: 580750916 > dev.ixl.0.pf.que7.no_desc_avail: 0 > dev.ixl.0.pf.que7.tx_dma_setup: 0 > dev.ixl.0.pf.que7.tso_tx: 0 > dev.ixl.0.pf.que7.irqs: 391760959 > dev.ixl.0.pf.que7.dropped: 0 > dev.ixl.0.pf.que7.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que6.rx_bytes: 
204373984670 > dev.ixl.0.pf.que6.rx_packets: 449990985 > dev.ixl.0.pf.que6.tx_bytes: 655511068125 > dev.ixl.0.pf.que6.tx_packets: 600735086 > dev.ixl.0.pf.que6.no_desc_avail: 0 > dev.ixl.0.pf.que6.tx_dma_setup: 0 > dev.ixl.0.pf.que6.tso_tx: 0 > dev.ixl.0.pf.que6.irqs: 394961024 > dev.ixl.0.pf.que6.dropped: 0 > dev.ixl.0.pf.que6.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que5.rx_bytes: 222919535872 > dev.ixl.0.pf.que5.rx_packets: 466659705 > dev.ixl.0.pf.que5.tx_bytes: 647689764751 > dev.ixl.0.pf.que5.tx_packets: 582532691 > dev.ixl.0.pf.que5.no_desc_avail: 0 > dev.ixl.0.pf.que5.tx_dma_setup: 0 > dev.ixl.0.pf.que5.tso_tx: 5 > dev.ixl.0.pf.que5.irqs: 404552229 > dev.ixl.0.pf.que5.dropped: 0 > dev.ixl.0.pf.que5.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que4.rx_bytes: 231706806551 > dev.ixl.0.pf.que4.rx_packets: 464397112 > dev.ixl.0.pf.que4.tx_bytes: 669945424739 > dev.ixl.0.pf.que4.tx_packets: 598527594 > dev.ixl.0.pf.que4.no_desc_avail: 0 > dev.ixl.0.pf.que4.tx_dma_setup: 0 > dev.ixl.0.pf.que4.tso_tx: 452 > dev.ixl.0.pf.que4.irqs: 405018727 > dev.ixl.0.pf.que4.dropped: 0 > dev.ixl.0.pf.que4.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que3.rx_bytes: 217942511336 > dev.ixl.0.pf.que3.rx_packets: 456454137 > dev.ixl.0.pf.que3.tx_bytes: 674027217503 > dev.ixl.0.pf.que3.tx_packets: 604815959 > dev.ixl.0.pf.que3.no_desc_avail: 0 > dev.ixl.0.pf.que3.tx_dma_setup: 0 > dev.ixl.0.pf.que3.tso_tx: 0 > dev.ixl.0.pf.que3.irqs: 399890434 > dev.ixl.0.pf.que3.dropped: 0 > dev.ixl.0.pf.que3.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que2.rx_bytes: 235057952930 > dev.ixl.0.pf.que2.rx_packets: 470668205 > dev.ixl.0.pf.que2.tx_bytes: 653598762323 > dev.ixl.0.pf.que2.tx_packets: 595468539 > dev.ixl.0.pf.que2.no_desc_avail: 0 > dev.ixl.0.pf.que2.tx_dma_setup: 0 > dev.ixl.0.pf.que2.tso_tx: 0 > dev.ixl.0.pf.que2.irqs: 410972406 > dev.ixl.0.pf.que2.dropped: 0 > dev.ixl.0.pf.que2.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que1.rx_bytes: 212570053522 > dev.ixl.0.pf.que1.rx_packets: 456981561 > dev.ixl.0.pf.que1.tx_bytes: 
677227126330 > dev.ixl.0.pf.que1.tx_packets: 612428010 > dev.ixl.0.pf.que1.no_desc_avail: 0 > dev.ixl.0.pf.que1.tx_dma_setup: 0 > dev.ixl.0.pf.que1.tso_tx: 0 > dev.ixl.0.pf.que1.irqs: 404727745 > dev.ixl.0.pf.que1.dropped: 0 > dev.ixl.0.pf.que1.mbuf_defrag_failed: 0 > dev.ixl.0.pf.que0.rx_bytes: 239424279142 > dev.ixl.0.pf.que0.rx_packets: 479078356 > dev.ixl.0.pf.que0.tx_bytes: 513283 > dev.ixl.0.pf.que0.tx_packets: 3990 > dev.ixl.0.pf.que0.no_desc_avail: 0 > dev.ixl.0.pf.que0.tx_dma_setup: 0 > dev.ixl.0.pf.que0.tso_tx: 0 > dev.ixl.0.pf.que0.irqs: 178414974 > dev.ixl.0.pf.que0.dropped: 0 > dev.ixl.0.pf.que0.mbuf_defrag_failed: 0 > dev.ixl.0.pf.bcast_pkts_txd: 302 > dev.ixl.0.pf.mcast_pkts_txd: 33965 > dev.ixl.0.pf.ucast_pkts_txd: 6958908879 > dev.ixl.0.pf.good_octets_txd: 7669637462330 > dev.ixl.0.pf.rx_discards: 0 > dev.ixl.0.pf.bcast_pkts_rcvd: 1 > dev.ixl.0.pf.mcast_pkts_rcvd: 49549 > dev.ixl.0.pf.ucast_pkts_rcvd: 5392999777 > dev.ixl.0.pf.good_octets_rcvd: 2648906886817 > dev.ixl.0.vc_debug_level: 1 > dev.ixl.0.admin_irq: 0 > dev.ixl.0.watchdog_events: 0 > dev.ixl.0.debug: 0 > dev.ixl.0.dynamic_tx_itr: 0 > dev.ixl.0.tx_itr: 122 > dev.ixl.0.dynamic_rx_itr: 0 > dev.ixl.0.rx_itr: 62 > dev.ixl.0.fw_version: f4.33 a1.2 n04.42 e8000191d > dev.ixl.0.current_speed: 10G > dev.ixl.0.advertise_speed: 0 > dev.ixl.0.fc: 0 > dev.ixl.0.%parent: pci129 > dev.ixl.0.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 > subdevice=0x0002 class=0x020000 > dev.ixl.0.%location: slot=0 function=0 handle=\_SB_.PCI1.QR3A.H000 > dev.ixl.0.%driver: ixl > dev.ixl.0.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0 > dev.ixl.%parent:

From owner-freebsd-net@freebsd.org Wed Aug 19 18:17:37 2015
From: Eric Joyner
Date: Wed, 19 Aug 2015 18:17:20 +0000
Subject: Re: FreeBSD 10.2-STABLE + Intel XL710 - free queues
To: hiren panchasara, Evgeny Khorokhorin
Cc: freebsd-net@freebsd.org
References: <55D49611.40603@maxnet.ru> <20150819180051.GM94440@strugglingcoder.info>
In-Reply-To: <20150819180051.GM94440@strugglingcoder.info>

The IXLV_MAX_QUEUES value is for the VF driver; the standard driver should
be able to allocate and properly use up to 64 queues. That said, you're
only getting rx traffic on the first 16 queues, so that looks like a bug
in the driver. I'll take a look at it.

- Eric

On Wed, Aug 19, 2015 at 11:00 AM hiren panchasara
<hiren@strugglingcoder.info> wrote:

> On 08/19/15 at 05:43P, Evgeny Khorokhorin wrote:
> > Hi All,
> >
> > FreeBSD 10.2-STABLE
> > 2*CPU Intel E5-2643v3 with HyperThreading enabled
> > Intel XL710 network adapter
> > I updated the ixl driver to version 1.4.0 from download.intel.com
> > Every ixl interface creates 24 queues (6 cores * 2 HT * 2 CPUs) but
> > utilizes only 16-17 of them. What is the reason for this behavior,
> > or is it a driver bug?
> >
> Not sure what the h/w limit is, but this may be a possible cause:
>
>     #define IXLV_MAX_QUEUES 16
>
> in sys/dev/ixl/ixlv.h
>
> and ixlv_init_msix() doing:
>
>     if (queues > IXLV_MAX_QUEUES)
>             queues = IXLV_MAX_QUEUES;
>
> Adding eric from intel to confirm.
>
> Cheers,
> Hiren
>
> > irq284: ixl0:q0 177563088 2054
> > irq285: ixl0:q1 402668179 4659
> > irq286: ixl0:q2 408885088 4731
> > irq287: ixl0:q3 397744300 4602
> > irq288: ixl0:q4 403040766 4663
> > irq289: ixl0:q5 402499314 4657
> > irq290: ixl0:q6 392693663 4543
> > irq291: ixl0:q7 389364966 4505
> > irq292: ixl0:q8 243244346 2814
> > irq293: ixl0:q9 216834450 2509
> > irq294: ixl0:q10 229460056 2655
> > irq295: ixl0:q11 219591953 2540
> > irq296: ixl0:q12 228944960 2649
> > irq297: ixl0:q13 226385454 2619
> > irq298: ixl0:q14 219174953 2536
> > irq299: ixl0:q15 222151378 2570
> > irq300: ixl0:q16 82799713 958
> > irq301: ixl0:q17 6131 0
> > irq302: ixl0:q18 5586 0
> > irq303: ixl0:q19 6975 0
> > irq304: ixl0:q20 6243 0
> > irq305: ixl0:q21 6729 0
> > irq306: ixl0:q22 6623 0
> > irq307: ixl0:q23 7306 0
> > irq309: ixl1:q0 174074462 2014
> > irq310: ixl1:q1 435716449 5041
> > irq311: ixl1:q2 431030443 4987
> > irq312: ixl1:q3 424156413 4907
> > irq313: ixl1:q4 414791657 4799
> > irq314: ixl1:q5 420260382 4862
> > irq315: ixl1:q6 415645708 4809
> > irq316: ixl1:q7 422783859 4892
> > irq317: ixl1:q8 252737383 2924
> > irq318: ixl1:q9 269655708 3120
> > irq319: ixl1:q10 252397826 2920
> > irq320: ixl1:q11 255649144 2958
> > irq321: ixl1:q12 246025621 2846
> > irq322: ixl1:q13 240176554 2779
> > irq323: ixl1:q14 254882418 2949
> > irq324: ixl1:q15 236846536 2740
> > irq325: ixl1:q16 86794467 1004
> > irq326: ixl1:q17 83 0
> > irq327: ixl1:q18 74 0
> > irq328: ixl1:q19 202 0
> > irq329: ixl1:q20 99 0
> > irq330: ixl1:q21 96 0
> > irq331: ixl1:q22 91 0
> > irq332: ixl1:q23 89 0
> >
> > last pid: 28710;  load averages: 7.16, 6.76, 6.49  up 1+00:00:41  17:40:46
> > 391 processes: 32 running, 215 sleeping, 144 waiting
> >
> > CPU 0:  0.0% user, 0.0% nice, 0.0% system, 49.2% interrupt, 50.8% idle
> > CPU 1:  0.0% user, 0.0% nice, 0.4% system, 41.3% interrupt, 58.3% idle
> > CPU 2:  0.0% user, 0.0% nice, 0.0% system, 39.0% interrupt, 61.0% idle
> > CPU 3:  0.0% user, 0.0% nice, 0.0% system, 46.5% interrupt, 53.5% idle
> > CPU 4:  0.0% user, 0.0% nice, 0.0% system, 37.4% interrupt, 62.6% idle
> > CPU 5:  0.0% user, 0.0% nice, 0.0% system, 40.9% interrupt, 59.1% idle
> > CPU 6:  0.0% user, 0.0% nice, 0.0% system, 40.2% interrupt, 59.8% idle
> > CPU 7:  0.0% user, 0.0% nice, 0.0% system, 45.3% interrupt, 54.7% idle
> > CPU 8:  0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, 79.5% idle
> > CPU 9:  0.0% user, 0.0% nice, 0.0% system, 25.2% interrupt, 74.8% idle
> > CPU 10: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, 76.8% idle
> > CPU 11: 0.0% user, 0.0% nice, 0.0% system, 19.3% interrupt, 80.7% idle
> > CPU 12: 0.0% user, 0.0% nice, 0.0% system, 28.7% interrupt, 71.3% idle
> > CPU 13: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, 79.5% idle
> > CPU 14: 0.0% user, 0.0% nice, 0.0% system, 35.0% interrupt, 65.0% idle
> > CPU 15: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, 76.8% idle
> > CPU 16: 0.0% user, 0.0% nice, 0.4% system, 1.2% interrupt, 98.4% idle
> > CPU 17: 0.0% user, 0.0% nice, 2.0% system, 0.0% interrupt, 98.0% idle
> > CPU 18: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, 97.6% idle
> > CPU 19: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, 97.2% idle
> > CPU 20: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, 97.6% idle
> > CPU 21: 0.0% user, 0.0% nice, 1.6% system, 0.0% interrupt, 98.4% idle
> > CPU 22: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, 97.2% idle
> > CPU 23: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, 99.6% idle
> >
> > # netstat -I ixl0 -w1 -h
> >             input          ixl0           output
> >    packets  errs idrops      bytes    packets  errs      bytes colls
> >       253K     0     0       136M       311K     0       325M     0
> >       251K     0     0       129M       314K     0       334M     0
> >       250K     0     0       135M       313K     0       333M     0
> >
hw.ixl.tx_itr: 122 > > hw.ixl.rx_itr: 62 > > hw.ixl.dynamic_tx_itr: 0 > > hw.ixl.dynamic_rx_itr: 0 > > hw.ixl.max_queues: 0 > > hw.ixl.ring_size: 4096 > > hw.ixl.enable_msix: 1 > > dev.ixl.3.mac.xoff_recvd: 0 > > dev.ixl.3.mac.xoff_txd: 0 > > dev.ixl.3.mac.xon_recvd: 0 > > dev.ixl.3.mac.xon_txd: 0 > > dev.ixl.3.mac.tx_frames_big: 0 > > dev.ixl.3.mac.tx_frames_1024_1522: 0 > > dev.ixl.3.mac.tx_frames_512_1023: 0 > > dev.ixl.3.mac.tx_frames_256_511: 0 > > dev.ixl.3.mac.tx_frames_128_255: 0 > > dev.ixl.3.mac.tx_frames_65_127: 0 > > dev.ixl.3.mac.tx_frames_64: 0 > > dev.ixl.3.mac.checksum_errors: 0 > > dev.ixl.3.mac.rx_jabber: 0 > > dev.ixl.3.mac.rx_oversized: 0 > > dev.ixl.3.mac.rx_fragmented: 0 > > dev.ixl.3.mac.rx_undersize: 0 > > dev.ixl.3.mac.rx_frames_big: 0 > > dev.ixl.3.mac.rx_frames_1024_1522: 0 > > dev.ixl.3.mac.rx_frames_512_1023: 0 > > dev.ixl.3.mac.rx_frames_256_511: 0 > > dev.ixl.3.mac.rx_frames_128_255: 0 > > dev.ixl.3.mac.rx_frames_65_127: 0 > > dev.ixl.3.mac.rx_frames_64: 0 > > dev.ixl.3.mac.rx_length_errors: 0 > > dev.ixl.3.mac.remote_faults: 0 > > dev.ixl.3.mac.local_faults: 0 > > dev.ixl.3.mac.illegal_bytes: 0 > > dev.ixl.3.mac.crc_errors: 0 > > dev.ixl.3.mac.bcast_pkts_txd: 0 > > dev.ixl.3.mac.mcast_pkts_txd: 0 > > dev.ixl.3.mac.ucast_pkts_txd: 0 > > dev.ixl.3.mac.good_octets_txd: 0 > > dev.ixl.3.mac.rx_discards: 0 > > dev.ixl.3.mac.bcast_pkts_rcvd: 0 > > dev.ixl.3.mac.mcast_pkts_rcvd: 0 > > dev.ixl.3.mac.ucast_pkts_rcvd: 0 > > dev.ixl.3.mac.good_octets_rcvd: 0 > > dev.ixl.3.pf.que23.rx_bytes: 0 > > dev.ixl.3.pf.que23.rx_packets: 0 > > dev.ixl.3.pf.que23.tx_bytes: 0 > > dev.ixl.3.pf.que23.tx_packets: 0 > > dev.ixl.3.pf.que23.no_desc_avail: 0 > > dev.ixl.3.pf.que23.tx_dma_setup: 0 > > dev.ixl.3.pf.que23.tso_tx: 0 > > dev.ixl.3.pf.que23.irqs: 0 > > dev.ixl.3.pf.que23.dropped: 0 > > dev.ixl.3.pf.que23.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que22.rx_bytes: 0 > > dev.ixl.3.pf.que22.rx_packets: 0 > > dev.ixl.3.pf.que22.tx_bytes: 0 > > 
dev.ixl.3.pf.que22.tx_packets: 0 > > dev.ixl.3.pf.que22.no_desc_avail: 0 > > dev.ixl.3.pf.que22.tx_dma_setup: 0 > > dev.ixl.3.pf.que22.tso_tx: 0 > > dev.ixl.3.pf.que22.irqs: 0 > > dev.ixl.3.pf.que22.dropped: 0 > > dev.ixl.3.pf.que22.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que21.rx_bytes: 0 > > dev.ixl.3.pf.que21.rx_packets: 0 > > dev.ixl.3.pf.que21.tx_bytes: 0 > > dev.ixl.3.pf.que21.tx_packets: 0 > > dev.ixl.3.pf.que21.no_desc_avail: 0 > > dev.ixl.3.pf.que21.tx_dma_setup: 0 > > dev.ixl.3.pf.que21.tso_tx: 0 > > dev.ixl.3.pf.que21.irqs: 0 > > dev.ixl.3.pf.que21.dropped: 0 > > dev.ixl.3.pf.que21.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que20.rx_bytes: 0 > > dev.ixl.3.pf.que20.rx_packets: 0 > > dev.ixl.3.pf.que20.tx_bytes: 0 > > dev.ixl.3.pf.que20.tx_packets: 0 > > dev.ixl.3.pf.que20.no_desc_avail: 0 > > dev.ixl.3.pf.que20.tx_dma_setup: 0 > > dev.ixl.3.pf.que20.tso_tx: 0 > > dev.ixl.3.pf.que20.irqs: 0 > > dev.ixl.3.pf.que20.dropped: 0 > > dev.ixl.3.pf.que20.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que19.rx_bytes: 0 > > dev.ixl.3.pf.que19.rx_packets: 0 > > dev.ixl.3.pf.que19.tx_bytes: 0 > > dev.ixl.3.pf.que19.tx_packets: 0 > > dev.ixl.3.pf.que19.no_desc_avail: 0 > > dev.ixl.3.pf.que19.tx_dma_setup: 0 > > dev.ixl.3.pf.que19.tso_tx: 0 > > dev.ixl.3.pf.que19.irqs: 0 > > dev.ixl.3.pf.que19.dropped: 0 > > dev.ixl.3.pf.que19.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que18.rx_bytes: 0 > > dev.ixl.3.pf.que18.rx_packets: 0 > > dev.ixl.3.pf.que18.tx_bytes: 0 > > dev.ixl.3.pf.que18.tx_packets: 0 > > dev.ixl.3.pf.que18.no_desc_avail: 0 > > dev.ixl.3.pf.que18.tx_dma_setup: 0 > > dev.ixl.3.pf.que18.tso_tx: 0 > > dev.ixl.3.pf.que18.irqs: 0 > > dev.ixl.3.pf.que18.dropped: 0 > > dev.ixl.3.pf.que18.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que17.rx_bytes: 0 > > dev.ixl.3.pf.que17.rx_packets: 0 > > dev.ixl.3.pf.que17.tx_bytes: 0 > > dev.ixl.3.pf.que17.tx_packets: 0 > > dev.ixl.3.pf.que17.no_desc_avail: 0 > > dev.ixl.3.pf.que17.tx_dma_setup: 0 > > dev.ixl.3.pf.que17.tso_tx: 0 > > 
dev.ixl.3.pf.que17.irqs: 0 > > dev.ixl.3.pf.que17.dropped: 0 > > dev.ixl.3.pf.que17.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que16.rx_bytes: 0 > > dev.ixl.3.pf.que16.rx_packets: 0 > > dev.ixl.3.pf.que16.tx_bytes: 0 > > dev.ixl.3.pf.que16.tx_packets: 0 > > dev.ixl.3.pf.que16.no_desc_avail: 0 > > dev.ixl.3.pf.que16.tx_dma_setup: 0 > > dev.ixl.3.pf.que16.tso_tx: 0 > > dev.ixl.3.pf.que16.irqs: 0 > > dev.ixl.3.pf.que16.dropped: 0 > > dev.ixl.3.pf.que16.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que15.rx_bytes: 0 > > dev.ixl.3.pf.que15.rx_packets: 0 > > dev.ixl.3.pf.que15.tx_bytes: 0 > > dev.ixl.3.pf.que15.tx_packets: 0 > > dev.ixl.3.pf.que15.no_desc_avail: 0 > > dev.ixl.3.pf.que15.tx_dma_setup: 0 > > dev.ixl.3.pf.que15.tso_tx: 0 > > dev.ixl.3.pf.que15.irqs: 0 > > dev.ixl.3.pf.que15.dropped: 0 > > dev.ixl.3.pf.que15.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que14.rx_bytes: 0 > > dev.ixl.3.pf.que14.rx_packets: 0 > > dev.ixl.3.pf.que14.tx_bytes: 0 > > dev.ixl.3.pf.que14.tx_packets: 0 > > dev.ixl.3.pf.que14.no_desc_avail: 0 > > dev.ixl.3.pf.que14.tx_dma_setup: 0 > > dev.ixl.3.pf.que14.tso_tx: 0 > > dev.ixl.3.pf.que14.irqs: 0 > > dev.ixl.3.pf.que14.dropped: 0 > > dev.ixl.3.pf.que14.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que13.rx_bytes: 0 > > dev.ixl.3.pf.que13.rx_packets: 0 > > dev.ixl.3.pf.que13.tx_bytes: 0 > > dev.ixl.3.pf.que13.tx_packets: 0 > > dev.ixl.3.pf.que13.no_desc_avail: 0 > > dev.ixl.3.pf.que13.tx_dma_setup: 0 > > dev.ixl.3.pf.que13.tso_tx: 0 > > dev.ixl.3.pf.que13.irqs: 0 > > dev.ixl.3.pf.que13.dropped: 0 > > dev.ixl.3.pf.que13.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que12.rx_bytes: 0 > > dev.ixl.3.pf.que12.rx_packets: 0 > > dev.ixl.3.pf.que12.tx_bytes: 0 > > dev.ixl.3.pf.que12.tx_packets: 0 > > dev.ixl.3.pf.que12.no_desc_avail: 0 > > dev.ixl.3.pf.que12.tx_dma_setup: 0 > > dev.ixl.3.pf.que12.tso_tx: 0 > > dev.ixl.3.pf.que12.irqs: 0 > > dev.ixl.3.pf.que12.dropped: 0 > > dev.ixl.3.pf.que12.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que11.rx_bytes: 0 > > 
dev.ixl.3.pf.que11.rx_packets: 0 > > dev.ixl.3.pf.que11.tx_bytes: 0 > > dev.ixl.3.pf.que11.tx_packets: 0 > > dev.ixl.3.pf.que11.no_desc_avail: 0 > > dev.ixl.3.pf.que11.tx_dma_setup: 0 > > dev.ixl.3.pf.que11.tso_tx: 0 > > dev.ixl.3.pf.que11.irqs: 0 > > dev.ixl.3.pf.que11.dropped: 0 > > dev.ixl.3.pf.que11.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que10.rx_bytes: 0 > > dev.ixl.3.pf.que10.rx_packets: 0 > > dev.ixl.3.pf.que10.tx_bytes: 0 > > dev.ixl.3.pf.que10.tx_packets: 0 > > dev.ixl.3.pf.que10.no_desc_avail: 0 > > dev.ixl.3.pf.que10.tx_dma_setup: 0 > > dev.ixl.3.pf.que10.tso_tx: 0 > > dev.ixl.3.pf.que10.irqs: 0 > > dev.ixl.3.pf.que10.dropped: 0 > > dev.ixl.3.pf.que10.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que9.rx_bytes: 0 > > dev.ixl.3.pf.que9.rx_packets: 0 > > dev.ixl.3.pf.que9.tx_bytes: 0 > > dev.ixl.3.pf.que9.tx_packets: 0 > > dev.ixl.3.pf.que9.no_desc_avail: 0 > > dev.ixl.3.pf.que9.tx_dma_setup: 0 > > dev.ixl.3.pf.que9.tso_tx: 0 > > dev.ixl.3.pf.que9.irqs: 0 > > dev.ixl.3.pf.que9.dropped: 0 > > dev.ixl.3.pf.que9.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que8.rx_bytes: 0 > > dev.ixl.3.pf.que8.rx_packets: 0 > > dev.ixl.3.pf.que8.tx_bytes: 0 > > dev.ixl.3.pf.que8.tx_packets: 0 > > dev.ixl.3.pf.que8.no_desc_avail: 0 > > dev.ixl.3.pf.que8.tx_dma_setup: 0 > > dev.ixl.3.pf.que8.tso_tx: 0 > > dev.ixl.3.pf.que8.irqs: 0 > > dev.ixl.3.pf.que8.dropped: 0 > > dev.ixl.3.pf.que8.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que7.rx_bytes: 0 > > dev.ixl.3.pf.que7.rx_packets: 0 > > dev.ixl.3.pf.que7.tx_bytes: 0 > > dev.ixl.3.pf.que7.tx_packets: 0 > > dev.ixl.3.pf.que7.no_desc_avail: 0 > > dev.ixl.3.pf.que7.tx_dma_setup: 0 > > dev.ixl.3.pf.que7.tso_tx: 0 > > dev.ixl.3.pf.que7.irqs: 0 > > dev.ixl.3.pf.que7.dropped: 0 > > dev.ixl.3.pf.que7.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que6.rx_bytes: 0 > > dev.ixl.3.pf.que6.rx_packets: 0 > > dev.ixl.3.pf.que6.tx_bytes: 0 > > dev.ixl.3.pf.que6.tx_packets: 0 > > dev.ixl.3.pf.que6.no_desc_avail: 0 > > dev.ixl.3.pf.que6.tx_dma_setup: 0 > > 
dev.ixl.3.pf.que6.tso_tx: 0 > > dev.ixl.3.pf.que6.irqs: 0 > > dev.ixl.3.pf.que6.dropped: 0 > > dev.ixl.3.pf.que6.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que5.rx_bytes: 0 > > dev.ixl.3.pf.que5.rx_packets: 0 > > dev.ixl.3.pf.que5.tx_bytes: 0 > > dev.ixl.3.pf.que5.tx_packets: 0 > > dev.ixl.3.pf.que5.no_desc_avail: 0 > > dev.ixl.3.pf.que5.tx_dma_setup: 0 > > dev.ixl.3.pf.que5.tso_tx: 0 > > dev.ixl.3.pf.que5.irqs: 0 > > dev.ixl.3.pf.que5.dropped: 0 > > dev.ixl.3.pf.que5.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que4.rx_bytes: 0 > > dev.ixl.3.pf.que4.rx_packets: 0 > > dev.ixl.3.pf.que4.tx_bytes: 0 > > dev.ixl.3.pf.que4.tx_packets: 0 > > dev.ixl.3.pf.que4.no_desc_avail: 0 > > dev.ixl.3.pf.que4.tx_dma_setup: 0 > > dev.ixl.3.pf.que4.tso_tx: 0 > > dev.ixl.3.pf.que4.irqs: 0 > > dev.ixl.3.pf.que4.dropped: 0 > > dev.ixl.3.pf.que4.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que3.rx_bytes: 0 > > dev.ixl.3.pf.que3.rx_packets: 0 > > dev.ixl.3.pf.que3.tx_bytes: 0 > > dev.ixl.3.pf.que3.tx_packets: 0 > > dev.ixl.3.pf.que3.no_desc_avail: 0 > > dev.ixl.3.pf.que3.tx_dma_setup: 0 > > dev.ixl.3.pf.que3.tso_tx: 0 > > dev.ixl.3.pf.que3.irqs: 0 > > dev.ixl.3.pf.que3.dropped: 0 > > dev.ixl.3.pf.que3.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que2.rx_bytes: 0 > > dev.ixl.3.pf.que2.rx_packets: 0 > > dev.ixl.3.pf.que2.tx_bytes: 0 > > dev.ixl.3.pf.que2.tx_packets: 0 > > dev.ixl.3.pf.que2.no_desc_avail: 0 > > dev.ixl.3.pf.que2.tx_dma_setup: 0 > > dev.ixl.3.pf.que2.tso_tx: 0 > > dev.ixl.3.pf.que2.irqs: 0 > > dev.ixl.3.pf.que2.dropped: 0 > > dev.ixl.3.pf.que2.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que1.rx_bytes: 0 > > dev.ixl.3.pf.que1.rx_packets: 0 > > dev.ixl.3.pf.que1.tx_bytes: 0 > > dev.ixl.3.pf.que1.tx_packets: 0 > > dev.ixl.3.pf.que1.no_desc_avail: 0 > > dev.ixl.3.pf.que1.tx_dma_setup: 0 > > dev.ixl.3.pf.que1.tso_tx: 0 > > dev.ixl.3.pf.que1.irqs: 0 > > dev.ixl.3.pf.que1.dropped: 0 > > dev.ixl.3.pf.que1.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.que0.rx_bytes: 0 > > dev.ixl.3.pf.que0.rx_packets: 0 > > 
dev.ixl.3.pf.que0.tx_bytes: 0 > > dev.ixl.3.pf.que0.tx_packets: 0 > > dev.ixl.3.pf.que0.no_desc_avail: 0 > > dev.ixl.3.pf.que0.tx_dma_setup: 0 > > dev.ixl.3.pf.que0.tso_tx: 0 > > dev.ixl.3.pf.que0.irqs: 0 > > dev.ixl.3.pf.que0.dropped: 0 > > dev.ixl.3.pf.que0.mbuf_defrag_failed: 0 > > dev.ixl.3.pf.bcast_pkts_txd: 0 > > dev.ixl.3.pf.mcast_pkts_txd: 0 > > dev.ixl.3.pf.ucast_pkts_txd: 0 > > dev.ixl.3.pf.good_octets_txd: 0 > > dev.ixl.3.pf.rx_discards: 0 > > dev.ixl.3.pf.bcast_pkts_rcvd: 0 > > dev.ixl.3.pf.mcast_pkts_rcvd: 0 > > dev.ixl.3.pf.ucast_pkts_rcvd: 0 > > dev.ixl.3.pf.good_octets_rcvd: 0 > > dev.ixl.3.vc_debug_level: 1 > > dev.ixl.3.admin_irq: 0 > > dev.ixl.3.watchdog_events: 0 > > dev.ixl.3.debug: 0 > > dev.ixl.3.dynamic_tx_itr: 0 > > dev.ixl.3.tx_itr: 122 > > dev.ixl.3.dynamic_rx_itr: 0 > > dev.ixl.3.rx_itr: 62 > > dev.ixl.3.fw_version: f4.33 a1.2 n04.42 e8000191d > > dev.ixl.3.current_speed: Unknown > > dev.ixl.3.advertise_speed: 0 > > dev.ixl.3.fc: 0 > > dev.ixl.3.%parent: pci129 > > dev.ixl.3.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 > > subdevice=0x0000 class=0x020000 > > dev.ixl.3.%location: slot=0 function=3 handle=\_SB_.PCI1.QR3A.H003 > > dev.ixl.3.%driver: ixl > > dev.ixl.3.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - > 1.4.0 > > dev.ixl.2.mac.xoff_recvd: 0 > > dev.ixl.2.mac.xoff_txd: 0 > > dev.ixl.2.mac.xon_recvd: 0 > > dev.ixl.2.mac.xon_txd: 0 > > dev.ixl.2.mac.tx_frames_big: 0 > > dev.ixl.2.mac.tx_frames_1024_1522: 0 > > dev.ixl.2.mac.tx_frames_512_1023: 0 > > dev.ixl.2.mac.tx_frames_256_511: 0 > > dev.ixl.2.mac.tx_frames_128_255: 0 > > dev.ixl.2.mac.tx_frames_65_127: 0 > > dev.ixl.2.mac.tx_frames_64: 0 > > dev.ixl.2.mac.checksum_errors: 0 > > dev.ixl.2.mac.rx_jabber: 0 > > dev.ixl.2.mac.rx_oversized: 0 > > dev.ixl.2.mac.rx_fragmented: 0 > > dev.ixl.2.mac.rx_undersize: 0 > > dev.ixl.2.mac.rx_frames_big: 0 > > dev.ixl.2.mac.rx_frames_1024_1522: 0 > > dev.ixl.2.mac.rx_frames_512_1023: 0 > > 
dev.ixl.2.mac.rx_frames_256_511: 0 > > dev.ixl.2.mac.rx_frames_128_255: 0 > > dev.ixl.2.mac.rx_frames_65_127: 0 > > dev.ixl.2.mac.rx_frames_64: 0 > > dev.ixl.2.mac.rx_length_errors: 0 > > dev.ixl.2.mac.remote_faults: 0 > > dev.ixl.2.mac.local_faults: 0 > > dev.ixl.2.mac.illegal_bytes: 0 > > dev.ixl.2.mac.crc_errors: 0 > > dev.ixl.2.mac.bcast_pkts_txd: 0 > > dev.ixl.2.mac.mcast_pkts_txd: 0 > > dev.ixl.2.mac.ucast_pkts_txd: 0 > > dev.ixl.2.mac.good_octets_txd: 0 > > dev.ixl.2.mac.rx_discards: 0 > > dev.ixl.2.mac.bcast_pkts_rcvd: 0 > > dev.ixl.2.mac.mcast_pkts_rcvd: 0 > > dev.ixl.2.mac.ucast_pkts_rcvd: 0 > > dev.ixl.2.mac.good_octets_rcvd: 0 > > dev.ixl.2.pf.que23.rx_bytes: 0 > > dev.ixl.2.pf.que23.rx_packets: 0 > > dev.ixl.2.pf.que23.tx_bytes: 0 > > dev.ixl.2.pf.que23.tx_packets: 0 > > dev.ixl.2.pf.que23.no_desc_avail: 0 > > dev.ixl.2.pf.que23.tx_dma_setup: 0 > > dev.ixl.2.pf.que23.tso_tx: 0 > > dev.ixl.2.pf.que23.irqs: 0 > > dev.ixl.2.pf.que23.dropped: 0 > > dev.ixl.2.pf.que23.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que22.rx_bytes: 0 > > dev.ixl.2.pf.que22.rx_packets: 0 > > dev.ixl.2.pf.que22.tx_bytes: 0 > > dev.ixl.2.pf.que22.tx_packets: 0 > > dev.ixl.2.pf.que22.no_desc_avail: 0 > > dev.ixl.2.pf.que22.tx_dma_setup: 0 > > dev.ixl.2.pf.que22.tso_tx: 0 > > dev.ixl.2.pf.que22.irqs: 0 > > dev.ixl.2.pf.que22.dropped: 0 > > dev.ixl.2.pf.que22.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que21.rx_bytes: 0 > > dev.ixl.2.pf.que21.rx_packets: 0 > > dev.ixl.2.pf.que21.tx_bytes: 0 > > dev.ixl.2.pf.que21.tx_packets: 0 > > dev.ixl.2.pf.que21.no_desc_avail: 0 > > dev.ixl.2.pf.que21.tx_dma_setup: 0 > > dev.ixl.2.pf.que21.tso_tx: 0 > > dev.ixl.2.pf.que21.irqs: 0 > > dev.ixl.2.pf.que21.dropped: 0 > > dev.ixl.2.pf.que21.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que20.rx_bytes: 0 > > dev.ixl.2.pf.que20.rx_packets: 0 > > dev.ixl.2.pf.que20.tx_bytes: 0 > > dev.ixl.2.pf.que20.tx_packets: 0 > > dev.ixl.2.pf.que20.no_desc_avail: 0 > > dev.ixl.2.pf.que20.tx_dma_setup: 0 > > dev.ixl.2.pf.que20.tso_tx: 
0 > > dev.ixl.2.pf.que20.irqs: 0 > > dev.ixl.2.pf.que20.dropped: 0 > > dev.ixl.2.pf.que20.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que19.rx_bytes: 0 > > dev.ixl.2.pf.que19.rx_packets: 0 > > dev.ixl.2.pf.que19.tx_bytes: 0 > > dev.ixl.2.pf.que19.tx_packets: 0 > > dev.ixl.2.pf.que19.no_desc_avail: 0 > > dev.ixl.2.pf.que19.tx_dma_setup: 0 > > dev.ixl.2.pf.que19.tso_tx: 0 > > dev.ixl.2.pf.que19.irqs: 0 > > dev.ixl.2.pf.que19.dropped: 0 > > dev.ixl.2.pf.que19.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que18.rx_bytes: 0 > > dev.ixl.2.pf.que18.rx_packets: 0 > > dev.ixl.2.pf.que18.tx_bytes: 0 > > dev.ixl.2.pf.que18.tx_packets: 0 > > dev.ixl.2.pf.que18.no_desc_avail: 0 > > dev.ixl.2.pf.que18.tx_dma_setup: 0 > > dev.ixl.2.pf.que18.tso_tx: 0 > > dev.ixl.2.pf.que18.irqs: 0 > > dev.ixl.2.pf.que18.dropped: 0 > > dev.ixl.2.pf.que18.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que17.rx_bytes: 0 > > dev.ixl.2.pf.que17.rx_packets: 0 > > dev.ixl.2.pf.que17.tx_bytes: 0 > > dev.ixl.2.pf.que17.tx_packets: 0 > > dev.ixl.2.pf.que17.no_desc_avail: 0 > > dev.ixl.2.pf.que17.tx_dma_setup: 0 > > dev.ixl.2.pf.que17.tso_tx: 0 > > dev.ixl.2.pf.que17.irqs: 0 > > dev.ixl.2.pf.que17.dropped: 0 > > dev.ixl.2.pf.que17.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que16.rx_bytes: 0 > > dev.ixl.2.pf.que16.rx_packets: 0 > > dev.ixl.2.pf.que16.tx_bytes: 0 > > dev.ixl.2.pf.que16.tx_packets: 0 > > dev.ixl.2.pf.que16.no_desc_avail: 0 > > dev.ixl.2.pf.que16.tx_dma_setup: 0 > > dev.ixl.2.pf.que16.tso_tx: 0 > > dev.ixl.2.pf.que16.irqs: 0 > > dev.ixl.2.pf.que16.dropped: 0 > > dev.ixl.2.pf.que16.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que15.rx_bytes: 0 > > dev.ixl.2.pf.que15.rx_packets: 0 > > dev.ixl.2.pf.que15.tx_bytes: 0 > > dev.ixl.2.pf.que15.tx_packets: 0 > > dev.ixl.2.pf.que15.no_desc_avail: 0 > > dev.ixl.2.pf.que15.tx_dma_setup: 0 > > dev.ixl.2.pf.que15.tso_tx: 0 > > dev.ixl.2.pf.que15.irqs: 0 > > dev.ixl.2.pf.que15.dropped: 0 > > dev.ixl.2.pf.que15.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que14.rx_bytes: 0 > > 
dev.ixl.2.pf.que14.rx_packets: 0 > > dev.ixl.2.pf.que14.tx_bytes: 0 > > dev.ixl.2.pf.que14.tx_packets: 0 > > dev.ixl.2.pf.que14.no_desc_avail: 0 > > dev.ixl.2.pf.que14.tx_dma_setup: 0 > > dev.ixl.2.pf.que14.tso_tx: 0 > > dev.ixl.2.pf.que14.irqs: 0 > > dev.ixl.2.pf.que14.dropped: 0 > > dev.ixl.2.pf.que14.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que13.rx_bytes: 0 > > dev.ixl.2.pf.que13.rx_packets: 0 > > dev.ixl.2.pf.que13.tx_bytes: 0 > > dev.ixl.2.pf.que13.tx_packets: 0 > > dev.ixl.2.pf.que13.no_desc_avail: 0 > > dev.ixl.2.pf.que13.tx_dma_setup: 0 > > dev.ixl.2.pf.que13.tso_tx: 0 > > dev.ixl.2.pf.que13.irqs: 0 > > dev.ixl.2.pf.que13.dropped: 0 > > dev.ixl.2.pf.que13.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que12.rx_bytes: 0 > > dev.ixl.2.pf.que12.rx_packets: 0 > > dev.ixl.2.pf.que12.tx_bytes: 0 > > dev.ixl.2.pf.que12.tx_packets: 0 > > dev.ixl.2.pf.que12.no_desc_avail: 0 > > dev.ixl.2.pf.que12.tx_dma_setup: 0 > > dev.ixl.2.pf.que12.tso_tx: 0 > > dev.ixl.2.pf.que12.irqs: 0 > > dev.ixl.2.pf.que12.dropped: 0 > > dev.ixl.2.pf.que12.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que11.rx_bytes: 0 > > dev.ixl.2.pf.que11.rx_packets: 0 > > dev.ixl.2.pf.que11.tx_bytes: 0 > > dev.ixl.2.pf.que11.tx_packets: 0 > > dev.ixl.2.pf.que11.no_desc_avail: 0 > > dev.ixl.2.pf.que11.tx_dma_setup: 0 > > dev.ixl.2.pf.que11.tso_tx: 0 > > dev.ixl.2.pf.que11.irqs: 0 > > dev.ixl.2.pf.que11.dropped: 0 > > dev.ixl.2.pf.que11.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que10.rx_bytes: 0 > > dev.ixl.2.pf.que10.rx_packets: 0 > > dev.ixl.2.pf.que10.tx_bytes: 0 > > dev.ixl.2.pf.que10.tx_packets: 0 > > dev.ixl.2.pf.que10.no_desc_avail: 0 > > dev.ixl.2.pf.que10.tx_dma_setup: 0 > > dev.ixl.2.pf.que10.tso_tx: 0 > > dev.ixl.2.pf.que10.irqs: 0 > > dev.ixl.2.pf.que10.dropped: 0 > > dev.ixl.2.pf.que10.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que9.rx_bytes: 0 > > dev.ixl.2.pf.que9.rx_packets: 0 > > dev.ixl.2.pf.que9.tx_bytes: 0 > > dev.ixl.2.pf.que9.tx_packets: 0 > > dev.ixl.2.pf.que9.no_desc_avail: 0 > > 
dev.ixl.2.pf.que9.tx_dma_setup: 0 > > dev.ixl.2.pf.que9.tso_tx: 0 > > dev.ixl.2.pf.que9.irqs: 0 > > dev.ixl.2.pf.que9.dropped: 0 > > dev.ixl.2.pf.que9.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que8.rx_bytes: 0 > > dev.ixl.2.pf.que8.rx_packets: 0 > > dev.ixl.2.pf.que8.tx_bytes: 0 > > dev.ixl.2.pf.que8.tx_packets: 0 > > dev.ixl.2.pf.que8.no_desc_avail: 0 > > dev.ixl.2.pf.que8.tx_dma_setup: 0 > > dev.ixl.2.pf.que8.tso_tx: 0 > > dev.ixl.2.pf.que8.irqs: 0 > > dev.ixl.2.pf.que8.dropped: 0 > > dev.ixl.2.pf.que8.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que7.rx_bytes: 0 > > dev.ixl.2.pf.que7.rx_packets: 0 > > dev.ixl.2.pf.que7.tx_bytes: 0 > > dev.ixl.2.pf.que7.tx_packets: 0 > > dev.ixl.2.pf.que7.no_desc_avail: 0 > > dev.ixl.2.pf.que7.tx_dma_setup: 0 > > dev.ixl.2.pf.que7.tso_tx: 0 > > dev.ixl.2.pf.que7.irqs: 0 > > dev.ixl.2.pf.que7.dropped: 0 > > dev.ixl.2.pf.que7.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que6.rx_bytes: 0 > > dev.ixl.2.pf.que6.rx_packets: 0 > > dev.ixl.2.pf.que6.tx_bytes: 0 > > dev.ixl.2.pf.que6.tx_packets: 0 > > dev.ixl.2.pf.que6.no_desc_avail: 0 > > dev.ixl.2.pf.que6.tx_dma_setup: 0 > > dev.ixl.2.pf.que6.tso_tx: 0 > > dev.ixl.2.pf.que6.irqs: 0 > > dev.ixl.2.pf.que6.dropped: 0 > > dev.ixl.2.pf.que6.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que5.rx_bytes: 0 > > dev.ixl.2.pf.que5.rx_packets: 0 > > dev.ixl.2.pf.que5.tx_bytes: 0 > > dev.ixl.2.pf.que5.tx_packets: 0 > > dev.ixl.2.pf.que5.no_desc_avail: 0 > > dev.ixl.2.pf.que5.tx_dma_setup: 0 > > dev.ixl.2.pf.que5.tso_tx: 0 > > dev.ixl.2.pf.que5.irqs: 0 > > dev.ixl.2.pf.que5.dropped: 0 > > dev.ixl.2.pf.que5.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que4.rx_bytes: 0 > > dev.ixl.2.pf.que4.rx_packets: 0 > > dev.ixl.2.pf.que4.tx_bytes: 0 > > dev.ixl.2.pf.que4.tx_packets: 0 > > dev.ixl.2.pf.que4.no_desc_avail: 0 > > dev.ixl.2.pf.que4.tx_dma_setup: 0 > > dev.ixl.2.pf.que4.tso_tx: 0 > > dev.ixl.2.pf.que4.irqs: 0 > > dev.ixl.2.pf.que4.dropped: 0 > > dev.ixl.2.pf.que4.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que3.rx_bytes: 0 > > 
dev.ixl.2.pf.que3.rx_packets: 0 > > dev.ixl.2.pf.que3.tx_bytes: 0 > > dev.ixl.2.pf.que3.tx_packets: 0 > > dev.ixl.2.pf.que3.no_desc_avail: 0 > > dev.ixl.2.pf.que3.tx_dma_setup: 0 > > dev.ixl.2.pf.que3.tso_tx: 0 > > dev.ixl.2.pf.que3.irqs: 0 > > dev.ixl.2.pf.que3.dropped: 0 > > dev.ixl.2.pf.que3.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que2.rx_bytes: 0 > > dev.ixl.2.pf.que2.rx_packets: 0 > > dev.ixl.2.pf.que2.tx_bytes: 0 > > dev.ixl.2.pf.que2.tx_packets: 0 > > dev.ixl.2.pf.que2.no_desc_avail: 0 > > dev.ixl.2.pf.que2.tx_dma_setup: 0 > > dev.ixl.2.pf.que2.tso_tx: 0 > > dev.ixl.2.pf.que2.irqs: 0 > > dev.ixl.2.pf.que2.dropped: 0 > > dev.ixl.2.pf.que2.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que1.rx_bytes: 0 > > dev.ixl.2.pf.que1.rx_packets: 0 > > dev.ixl.2.pf.que1.tx_bytes: 0 > > dev.ixl.2.pf.que1.tx_packets: 0 > > dev.ixl.2.pf.que1.no_desc_avail: 0 > > dev.ixl.2.pf.que1.tx_dma_setup: 0 > > dev.ixl.2.pf.que1.tso_tx: 0 > > dev.ixl.2.pf.que1.irqs: 0 > > dev.ixl.2.pf.que1.dropped: 0 > > dev.ixl.2.pf.que1.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.que0.rx_bytes: 0 > > dev.ixl.2.pf.que0.rx_packets: 0 > > dev.ixl.2.pf.que0.tx_bytes: 0 > > dev.ixl.2.pf.que0.tx_packets: 0 > > dev.ixl.2.pf.que0.no_desc_avail: 0 > > dev.ixl.2.pf.que0.tx_dma_setup: 0 > > dev.ixl.2.pf.que0.tso_tx: 0 > > dev.ixl.2.pf.que0.irqs: 0 > > dev.ixl.2.pf.que0.dropped: 0 > > dev.ixl.2.pf.que0.mbuf_defrag_failed: 0 > > dev.ixl.2.pf.bcast_pkts_txd: 0 > > dev.ixl.2.pf.mcast_pkts_txd: 0 > > dev.ixl.2.pf.ucast_pkts_txd: 0 > > dev.ixl.2.pf.good_octets_txd: 0 > > dev.ixl.2.pf.rx_discards: 0 > > dev.ixl.2.pf.bcast_pkts_rcvd: 0 > > dev.ixl.2.pf.mcast_pkts_rcvd: 0 > > dev.ixl.2.pf.ucast_pkts_rcvd: 0 > > dev.ixl.2.pf.good_octets_rcvd: 0 > > dev.ixl.2.vc_debug_level: 1 > > dev.ixl.2.admin_irq: 0 > > dev.ixl.2.watchdog_events: 0 > > dev.ixl.2.debug: 0 > > dev.ixl.2.dynamic_tx_itr: 0 > > dev.ixl.2.tx_itr: 122 > > dev.ixl.2.dynamic_rx_itr: 0 > > dev.ixl.2.rx_itr: 62 > > dev.ixl.2.fw_version: f4.33 a1.2 n04.42 e8000191d > > 
dev.ixl.2.current_speed: Unknown > > dev.ixl.2.advertise_speed: 0 > > dev.ixl.2.fc: 0 > > dev.ixl.2.%parent: pci129 > > dev.ixl.2.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 > > subdevice=0x0000 class=0x020000 > > dev.ixl.2.%location: slot=0 function=2 handle=\_SB_.PCI1.QR3A.H002 > > dev.ixl.2.%driver: ixl > > dev.ixl.2.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - > 1.4.0 > > dev.ixl.1.mac.xoff_recvd: 0 > > dev.ixl.1.mac.xoff_txd: 0 > > dev.ixl.1.mac.xon_recvd: 0 > > dev.ixl.1.mac.xon_txd: 0 > > dev.ixl.1.mac.tx_frames_big: 0 > > dev.ixl.1.mac.tx_frames_1024_1522: 1565670684 > > dev.ixl.1.mac.tx_frames_512_1023: 101286418 > > dev.ixl.1.mac.tx_frames_256_511: 49713129 > > dev.ixl.1.mac.tx_frames_128_255: 231617277 > > dev.ixl.1.mac.tx_frames_65_127: 2052767669 > > dev.ixl.1.mac.tx_frames_64: 1318689044 > > dev.ixl.1.mac.checksum_errors: 0 > > dev.ixl.1.mac.rx_jabber: 0 > > dev.ixl.1.mac.rx_oversized: 0 > > dev.ixl.1.mac.rx_fragmented: 0 > > dev.ixl.1.mac.rx_undersize: 0 > > dev.ixl.1.mac.rx_frames_big: 0 > > dev.ixl.1.mac.rx_frames_1024_1522: 4960403414 > > dev.ixl.1.mac.rx_frames_512_1023: 113675084 > > dev.ixl.1.mac.rx_frames_256_511: 253904920 > > dev.ixl.1.mac.rx_frames_128_255: 196369726 > > dev.ixl.1.mac.rx_frames_65_127: 1436626211 > > dev.ixl.1.mac.rx_frames_64: 242768681 > > dev.ixl.1.mac.rx_length_errors: 0 > > dev.ixl.1.mac.remote_faults: 0 > > dev.ixl.1.mac.local_faults: 0 > > dev.ixl.1.mac.illegal_bytes: 0 > > dev.ixl.1.mac.crc_errors: 0 > > dev.ixl.1.mac.bcast_pkts_txd: 277 > > dev.ixl.1.mac.mcast_pkts_txd: 0 > > dev.ixl.1.mac.ucast_pkts_txd: 5319743942 > > dev.ixl.1.mac.good_octets_txd: 2642351885737 > > dev.ixl.1.mac.rx_discards: 0 > > dev.ixl.1.mac.bcast_pkts_rcvd: 5 > > dev.ixl.1.mac.mcast_pkts_rcvd: 144 > > dev.ixl.1.mac.ucast_pkts_rcvd: 7203747879 > > dev.ixl.1.mac.good_octets_rcvd: 7770230492434 > > dev.ixl.1.pf.que23.rx_bytes: 0 > > dev.ixl.1.pf.que23.rx_packets: 0 > > dev.ixl.1.pf.que23.tx_bytes: 7111 > > 
dev.ixl.1.pf.que23.tx_packets: 88 > > dev.ixl.1.pf.que23.no_desc_avail: 0 > > dev.ixl.1.pf.que23.tx_dma_setup: 0 > > dev.ixl.1.pf.que23.tso_tx: 0 > > dev.ixl.1.pf.que23.irqs: 88 > > dev.ixl.1.pf.que23.dropped: 0 > > dev.ixl.1.pf.que23.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que22.rx_bytes: 0 > > dev.ixl.1.pf.que22.rx_packets: 0 > > dev.ixl.1.pf.que22.tx_bytes: 6792 > > dev.ixl.1.pf.que22.tx_packets: 88 > > dev.ixl.1.pf.que22.no_desc_avail: 0 > > dev.ixl.1.pf.que22.tx_dma_setup: 0 > > dev.ixl.1.pf.que22.tso_tx: 0 > > dev.ixl.1.pf.que22.irqs: 89 > > dev.ixl.1.pf.que22.dropped: 0 > > dev.ixl.1.pf.que22.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que21.rx_bytes: 0 > > dev.ixl.1.pf.que21.rx_packets: 0 > > dev.ixl.1.pf.que21.tx_bytes: 7486 > > dev.ixl.1.pf.que21.tx_packets: 93 > > dev.ixl.1.pf.que21.no_desc_avail: 0 > > dev.ixl.1.pf.que21.tx_dma_setup: 0 > > dev.ixl.1.pf.que21.tso_tx: 0 > > dev.ixl.1.pf.que21.irqs: 95 > > dev.ixl.1.pf.que21.dropped: 0 > > dev.ixl.1.pf.que21.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que20.rx_bytes: 0 > > dev.ixl.1.pf.que20.rx_packets: 0 > > dev.ixl.1.pf.que20.tx_bytes: 7850 > > dev.ixl.1.pf.que20.tx_packets: 98 > > dev.ixl.1.pf.que20.no_desc_avail: 0 > > dev.ixl.1.pf.que20.tx_dma_setup: 0 > > dev.ixl.1.pf.que20.tso_tx: 0 > > dev.ixl.1.pf.que20.irqs: 99 > > dev.ixl.1.pf.que20.dropped: 0 > > dev.ixl.1.pf.que20.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que19.rx_bytes: 0 > > dev.ixl.1.pf.que19.rx_packets: 0 > > dev.ixl.1.pf.que19.tx_bytes: 64643 > > dev.ixl.1.pf.que19.tx_packets: 202 > > dev.ixl.1.pf.que19.no_desc_avail: 0 > > dev.ixl.1.pf.que19.tx_dma_setup: 0 > > dev.ixl.1.pf.que19.tso_tx: 0 > > dev.ixl.1.pf.que19.irqs: 202 > > dev.ixl.1.pf.que19.dropped: 0 > > dev.ixl.1.pf.que19.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que18.rx_bytes: 0 > > dev.ixl.1.pf.que18.rx_packets: 0 > > dev.ixl.1.pf.que18.tx_bytes: 5940 > > dev.ixl.1.pf.que18.tx_packets: 74 > > dev.ixl.1.pf.que18.no_desc_avail: 0 > > dev.ixl.1.pf.que18.tx_dma_setup: 0 > > 
dev.ixl.1.pf.que18.tso_tx: 0 > > dev.ixl.1.pf.que18.irqs: 74 > > dev.ixl.1.pf.que18.dropped: 0 > > dev.ixl.1.pf.que18.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que17.rx_bytes: 0 > > dev.ixl.1.pf.que17.rx_packets: 0 > > dev.ixl.1.pf.que17.tx_bytes: 11675 > > dev.ixl.1.pf.que17.tx_packets: 83 > > dev.ixl.1.pf.que17.no_desc_avail: 0 > > dev.ixl.1.pf.que17.tx_dma_setup: 0 > > dev.ixl.1.pf.que17.tso_tx: 0 > > dev.ixl.1.pf.que17.irqs: 83 > > dev.ixl.1.pf.que17.dropped: 0 > > dev.ixl.1.pf.que17.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que16.rx_bytes: 0 > > dev.ixl.1.pf.que16.rx_packets: 0 > > dev.ixl.1.pf.que16.tx_bytes: 105750457831 > > dev.ixl.1.pf.que16.tx_packets: 205406766 > > dev.ixl.1.pf.que16.no_desc_avail: 0 > > dev.ixl.1.pf.que16.tx_dma_setup: 0 > > dev.ixl.1.pf.que16.tso_tx: 0 > > dev.ixl.1.pf.que16.irqs: 87222978 > > dev.ixl.1.pf.que16.dropped: 0 > > dev.ixl.1.pf.que16.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que15.rx_bytes: 289558174088 > > dev.ixl.1.pf.que15.rx_packets: 272466190 > > dev.ixl.1.pf.que15.tx_bytes: 106152524681 > > dev.ixl.1.pf.que15.tx_packets: 205379247 > > dev.ixl.1.pf.que15.no_desc_avail: 0 > > dev.ixl.1.pf.que15.tx_dma_setup: 0 > > dev.ixl.1.pf.que15.tso_tx: 0 > > dev.ixl.1.pf.que15.irqs: 238145862 > > dev.ixl.1.pf.que15.dropped: 0 > > dev.ixl.1.pf.que15.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que14.rx_bytes: 301934533473 > > dev.ixl.1.pf.que14.rx_packets: 298452930 > > dev.ixl.1.pf.que14.tx_bytes: 111420393725 > > dev.ixl.1.pf.que14.tx_packets: 215722532 > > dev.ixl.1.pf.que14.no_desc_avail: 0 > > dev.ixl.1.pf.que14.tx_dma_setup: 0 > > dev.ixl.1.pf.que14.tso_tx: 0 > > dev.ixl.1.pf.que14.irqs: 256291617 > > dev.ixl.1.pf.que14.dropped: 0 > > dev.ixl.1.pf.que14.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que13.rx_bytes: 291380746253 > > dev.ixl.1.pf.que13.rx_packets: 273037957 > > dev.ixl.1.pf.que13.tx_bytes: 112417776222 > > dev.ixl.1.pf.que13.tx_packets: 217500943 > > dev.ixl.1.pf.que13.no_desc_avail: 0 > > dev.ixl.1.pf.que13.tx_dma_setup: 0 > > 
dev.ixl.1.pf.que13.tso_tx: 0 > > dev.ixl.1.pf.que13.irqs: 241422331 > > dev.ixl.1.pf.que13.dropped: 0 > > dev.ixl.1.pf.que13.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que12.rx_bytes: 301105585425 > > dev.ixl.1.pf.que12.rx_packets: 286137817 > > dev.ixl.1.pf.que12.tx_bytes: 95851784579 > > dev.ixl.1.pf.que12.tx_packets: 199715765 > > dev.ixl.1.pf.que12.no_desc_avail: 0 > > dev.ixl.1.pf.que12.tx_dma_setup: 0 > > dev.ixl.1.pf.que12.tso_tx: 0 > > dev.ixl.1.pf.que12.irqs: 247322880 > > dev.ixl.1.pf.que12.dropped: 0 > > dev.ixl.1.pf.que12.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que11.rx_bytes: 307105398143 > > dev.ixl.1.pf.que11.rx_packets: 281046463 > > dev.ixl.1.pf.que11.tx_bytes: 110710957789 > > dev.ixl.1.pf.que11.tx_packets: 211784031 > > dev.ixl.1.pf.que11.no_desc_avail: 0 > > dev.ixl.1.pf.que11.tx_dma_setup: 0 > > dev.ixl.1.pf.que11.tso_tx: 0 > > dev.ixl.1.pf.que11.irqs: 256987179 > > dev.ixl.1.pf.que11.dropped: 0 > > dev.ixl.1.pf.que11.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que10.rx_bytes: 304288000453 > > dev.ixl.1.pf.que10.rx_packets: 278987858 > > dev.ixl.1.pf.que10.tx_bytes: 93022244338 > > dev.ixl.1.pf.que10.tx_packets: 195869210 > > dev.ixl.1.pf.que10.no_desc_avail: 0 > > dev.ixl.1.pf.que10.tx_dma_setup: 0 > > dev.ixl.1.pf.que10.tso_tx: 0 > > dev.ixl.1.pf.que10.irqs: 253622192 > > dev.ixl.1.pf.que10.dropped: 0 > > dev.ixl.1.pf.que10.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que9.rx_bytes: 320340203822 > > dev.ixl.1.pf.que9.rx_packets: 302309010 > > dev.ixl.1.pf.que9.tx_bytes: 116604776460 > > dev.ixl.1.pf.que9.tx_packets: 223949025 > > dev.ixl.1.pf.que9.no_desc_avail: 0 > > dev.ixl.1.pf.que9.tx_dma_setup: 0 > > dev.ixl.1.pf.que9.tso_tx: 0 > > dev.ixl.1.pf.que9.irqs: 271165440 > > dev.ixl.1.pf.que9.dropped: 0 > > dev.ixl.1.pf.que9.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que8.rx_bytes: 291403725592 > > dev.ixl.1.pf.que8.rx_packets: 267859568 > > dev.ixl.1.pf.que8.tx_bytes: 205745654558 > > dev.ixl.1.pf.que8.tx_packets: 443349835 > > dev.ixl.1.pf.que8.no_desc_avail: 0 
> > dev.ixl.1.pf.que8.tx_dma_setup: 0 > > dev.ixl.1.pf.que8.tso_tx: 0 > > dev.ixl.1.pf.que8.irqs: 254116755 > > dev.ixl.1.pf.que8.dropped: 0 > > dev.ixl.1.pf.que8.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que7.rx_bytes: 673363127346 > > dev.ixl.1.pf.que7.rx_packets: 617269774 > > dev.ixl.1.pf.que7.tx_bytes: 203162891886 > > dev.ixl.1.pf.que7.tx_packets: 443709339 > > dev.ixl.1.pf.que7.no_desc_avail: 0 > > dev.ixl.1.pf.que7.tx_dma_setup: 0 > > dev.ixl.1.pf.que7.tso_tx: 0 > > dev.ixl.1.pf.que7.irqs: 424706771 > > dev.ixl.1.pf.que7.dropped: 0 > > dev.ixl.1.pf.que7.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que6.rx_bytes: 644709094218 > > dev.ixl.1.pf.que6.rx_packets: 601892919 > > dev.ixl.1.pf.que6.tx_bytes: 221661735032 > > dev.ixl.1.pf.que6.tx_packets: 460127064 > > dev.ixl.1.pf.que6.no_desc_avail: 0 > > dev.ixl.1.pf.que6.tx_dma_setup: 0 > > dev.ixl.1.pf.que6.tso_tx: 0 > > dev.ixl.1.pf.que6.irqs: 417748074 > > dev.ixl.1.pf.que6.dropped: 0 > > dev.ixl.1.pf.que6.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que5.rx_bytes: 661904432231 > > dev.ixl.1.pf.que5.rx_packets: 622012837 > > dev.ixl.1.pf.que5.tx_bytes: 230514282876 > > dev.ixl.1.pf.que5.tx_packets: 458571100 > > dev.ixl.1.pf.que5.no_desc_avail: 0 > > dev.ixl.1.pf.que5.tx_dma_setup: 0 > > dev.ixl.1.pf.que5.tso_tx: 0 > > dev.ixl.1.pf.que5.irqs: 422305039 > > dev.ixl.1.pf.que5.dropped: 0 > > dev.ixl.1.pf.que5.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que4.rx_bytes: 653522179234 > > dev.ixl.1.pf.que4.rx_packets: 603345546 > > dev.ixl.1.pf.que4.tx_bytes: 216761219483 > > dev.ixl.1.pf.que4.tx_packets: 450329641 > > dev.ixl.1.pf.que4.no_desc_avail: 0 > > dev.ixl.1.pf.que4.tx_dma_setup: 0 > > dev.ixl.1.pf.que4.tso_tx: 3 > > dev.ixl.1.pf.que4.irqs: 416920533 > > dev.ixl.1.pf.que4.dropped: 0 > > dev.ixl.1.pf.que4.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que3.rx_bytes: 676494225882 > > dev.ixl.1.pf.que3.rx_packets: 620605168 > > dev.ixl.1.pf.que3.tx_bytes: 233854020454 > > dev.ixl.1.pf.que3.tx_packets: 464425616 > > 
dev.ixl.1.pf.que3.no_desc_avail: 0 > > dev.ixl.1.pf.que3.tx_dma_setup: 0 > > dev.ixl.1.pf.que3.tso_tx: 0 > > dev.ixl.1.pf.que3.irqs: 426349030 > > dev.ixl.1.pf.que3.dropped: 0 > > dev.ixl.1.pf.que3.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que2.rx_bytes: 677779337711 > > dev.ixl.1.pf.que2.rx_packets: 620883699 > > dev.ixl.1.pf.que2.tx_bytes: 211297141668 > > dev.ixl.1.pf.que2.tx_packets: 450501525 > > dev.ixl.1.pf.que2.no_desc_avail: 0 > > dev.ixl.1.pf.que2.tx_dma_setup: 0 > > dev.ixl.1.pf.que2.tso_tx: 0 > > dev.ixl.1.pf.que2.irqs: 433146278 > > dev.ixl.1.pf.que2.dropped: 0 > > dev.ixl.1.pf.que2.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que1.rx_bytes: 661360798018 > > dev.ixl.1.pf.que1.rx_packets: 619700636 > > dev.ixl.1.pf.que1.tx_bytes: 238264220772 > > dev.ixl.1.pf.que1.tx_packets: 473425354 > > dev.ixl.1.pf.que1.no_desc_avail: 0 > > dev.ixl.1.pf.que1.tx_dma_setup: 0 > > dev.ixl.1.pf.que1.tso_tx: 0 > > dev.ixl.1.pf.que1.irqs: 437959829 > > dev.ixl.1.pf.que1.dropped: 0 > > dev.ixl.1.pf.que1.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.que0.rx_bytes: 685201226330 > > dev.ixl.1.pf.que0.rx_packets: 637772348 > > dev.ixl.1.pf.que0.tx_bytes: 124808 > > dev.ixl.1.pf.que0.tx_packets: 1782 > > dev.ixl.1.pf.que0.no_desc_avail: 0 > > dev.ixl.1.pf.que0.tx_dma_setup: 0 > > dev.ixl.1.pf.que0.tso_tx: 0 > > dev.ixl.1.pf.que0.irqs: 174905480 > > dev.ixl.1.pf.que0.dropped: 0 > > dev.ixl.1.pf.que0.mbuf_defrag_failed: 0 > > dev.ixl.1.pf.bcast_pkts_txd: 277 > > dev.ixl.1.pf.mcast_pkts_txd: 0 > > dev.ixl.1.pf.ucast_pkts_txd: 5319743945 > > dev.ixl.1.pf.good_octets_txd: 2613178367282 > > dev.ixl.1.pf.rx_discards: 0 > > dev.ixl.1.pf.bcast_pkts_rcvd: 1 > > dev.ixl.1.pf.mcast_pkts_rcvd: 0 > > dev.ixl.1.pf.ucast_pkts_rcvd: 7203747890 > > dev.ixl.1.pf.good_octets_rcvd: 7770230490224 > > dev.ixl.1.vc_debug_level: 1 > > dev.ixl.1.admin_irq: 0 > > dev.ixl.1.watchdog_events: 0 > > dev.ixl.1.debug: 0 > > dev.ixl.1.dynamic_tx_itr: 0 > > dev.ixl.1.tx_itr: 122 > > dev.ixl.1.dynamic_rx_itr: 0 > > 
dev.ixl.1.rx_itr: 62 > > dev.ixl.1.fw_version: f4.33 a1.2 n04.42 e8000191d > > dev.ixl.1.current_speed: 10G > > dev.ixl.1.advertise_speed: 0 > > dev.ixl.1.fc: 0 > > dev.ixl.1.%parent: pci129 > > dev.ixl.1.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 > > subdevice=0x0000 class=0x020000 > > dev.ixl.1.%location: slot=0 function=1 handle=\_SB_.PCI1.QR3A.H001 > > dev.ixl.1.%driver: ixl > > dev.ixl.1.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - > 1.4.0 > > dev.ixl.0.mac.xoff_recvd: 0 > > dev.ixl.0.mac.xoff_txd: 0 > > dev.ixl.0.mac.xon_recvd: 0 > > dev.ixl.0.mac.xon_txd: 0 > > dev.ixl.0.mac.tx_frames_big: 0 > > dev.ixl.0.mac.tx_frames_1024_1522: 4961134019 > > dev.ixl.0.mac.tx_frames_512_1023: 113082136 > > dev.ixl.0.mac.tx_frames_256_511: 123538450 > > dev.ixl.0.mac.tx_frames_128_255: 185051082 > > dev.ixl.0.mac.tx_frames_65_127: 1332798493 > > dev.ixl.0.mac.tx_frames_64: 243338964 > > dev.ixl.0.mac.checksum_errors: 0 > > dev.ixl.0.mac.rx_jabber: 0 > > dev.ixl.0.mac.rx_oversized: 0 > > dev.ixl.0.mac.rx_fragmented: 0 > > dev.ixl.0.mac.rx_undersize: 0 > > dev.ixl.0.mac.rx_frames_big: 0 > > dev.ixl.0.mac.rx_frames_1024_1522: 1566499069 > > dev.ixl.0.mac.rx_frames_512_1023: 101390143 > > dev.ixl.0.mac.rx_frames_256_511: 49831970 > > dev.ixl.0.mac.rx_frames_128_255: 231738168 > > dev.ixl.0.mac.rx_frames_65_127: 2123185819 > > dev.ixl.0.mac.rx_frames_64: 1320404300 > > dev.ixl.0.mac.rx_length_errors: 0 > > dev.ixl.0.mac.remote_faults: 0 > > dev.ixl.0.mac.local_faults: 0 > > dev.ixl.0.mac.illegal_bytes: 0 > > dev.ixl.0.mac.crc_errors: 0 > > dev.ixl.0.mac.bcast_pkts_txd: 302 > > dev.ixl.0.mac.mcast_pkts_txd: 33965 > > dev.ixl.0.mac.ucast_pkts_txd: 6958908862 > > dev.ixl.0.mac.good_octets_txd: 7698936138858 > > dev.ixl.0.mac.rx_discards: 0 > > dev.ixl.0.mac.bcast_pkts_rcvd: 1 > > dev.ixl.0.mac.mcast_pkts_rcvd: 49693 > > dev.ixl.0.mac.ucast_pkts_rcvd: 5392999771 > > dev.ixl.0.mac.good_octets_rcvd: 2648906893811 > > dev.ixl.0.pf.que23.rx_bytes: 0 > > 
dev.ixl.0.pf.que23.rx_packets: 0 > > dev.ixl.0.pf.que23.tx_bytes: 2371273 > > dev.ixl.0.pf.que23.tx_packets: 7313 > > dev.ixl.0.pf.que23.no_desc_avail: 0 > > dev.ixl.0.pf.que23.tx_dma_setup: 0 > > dev.ixl.0.pf.que23.tso_tx: 0 > > dev.ixl.0.pf.que23.irqs: 7313 > > dev.ixl.0.pf.que23.dropped: 0 > > dev.ixl.0.pf.que23.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que22.rx_bytes: 0 > > dev.ixl.0.pf.que22.rx_packets: 0 > > dev.ixl.0.pf.que22.tx_bytes: 1908468 > > dev.ixl.0.pf.que22.tx_packets: 6626 > > dev.ixl.0.pf.que22.no_desc_avail: 0 > > dev.ixl.0.pf.que22.tx_dma_setup: 0 > > dev.ixl.0.pf.que22.tso_tx: 0 > > dev.ixl.0.pf.que22.irqs: 6627 > > dev.ixl.0.pf.que22.dropped: 0 > > dev.ixl.0.pf.que22.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que21.rx_bytes: 0 > > dev.ixl.0.pf.que21.rx_packets: 0 > > dev.ixl.0.pf.que21.tx_bytes: 2092668 > > dev.ixl.0.pf.que21.tx_packets: 6739 > > dev.ixl.0.pf.que21.no_desc_avail: 0 > > dev.ixl.0.pf.que21.tx_dma_setup: 0 > > dev.ixl.0.pf.que21.tso_tx: 0 > > dev.ixl.0.pf.que21.irqs: 6728 > > dev.ixl.0.pf.que21.dropped: 0 > > dev.ixl.0.pf.que21.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que20.rx_bytes: 0 > > dev.ixl.0.pf.que20.rx_packets: 0 > > dev.ixl.0.pf.que20.tx_bytes: 1742176 > > dev.ixl.0.pf.que20.tx_packets: 6246 > > dev.ixl.0.pf.que20.no_desc_avail: 0 > > dev.ixl.0.pf.que20.tx_dma_setup: 0 > > dev.ixl.0.pf.que20.tso_tx: 0 > > dev.ixl.0.pf.que20.irqs: 6249 > > dev.ixl.0.pf.que20.dropped: 0 > > dev.ixl.0.pf.que20.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que19.rx_bytes: 0 > > dev.ixl.0.pf.que19.rx_packets: 0 > > dev.ixl.0.pf.que19.tx_bytes: 2102284 > > dev.ixl.0.pf.que19.tx_packets: 6979 > > dev.ixl.0.pf.que19.no_desc_avail: 0 > > dev.ixl.0.pf.que19.tx_dma_setup: 0 > > dev.ixl.0.pf.que19.tso_tx: 0 > > dev.ixl.0.pf.que19.irqs: 6979 > > dev.ixl.0.pf.que19.dropped: 0 > > dev.ixl.0.pf.que19.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que18.rx_bytes: 0 > > dev.ixl.0.pf.que18.rx_packets: 0 > > dev.ixl.0.pf.que18.tx_bytes: 1532360 > > dev.ixl.0.pf.que18.tx_packets: 
5588 > > dev.ixl.0.pf.que18.no_desc_avail: 0 > > dev.ixl.0.pf.que18.tx_dma_setup: 0 > > dev.ixl.0.pf.que18.tso_tx: 0 > > dev.ixl.0.pf.que18.irqs: 5588 > > dev.ixl.0.pf.que18.dropped: 0 > > dev.ixl.0.pf.que18.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que17.rx_bytes: 0 > > dev.ixl.0.pf.que17.rx_packets: 0 > > dev.ixl.0.pf.que17.tx_bytes: 1809684 > > dev.ixl.0.pf.que17.tx_packets: 6136 > > dev.ixl.0.pf.que17.no_desc_avail: 0 > > dev.ixl.0.pf.que17.tx_dma_setup: 0 > > dev.ixl.0.pf.que17.tso_tx: 0 > > dev.ixl.0.pf.que17.irqs: 6136 > > dev.ixl.0.pf.que17.dropped: 0 > > dev.ixl.0.pf.que17.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que16.rx_bytes: 0 > > dev.ixl.0.pf.que16.rx_packets: 0 > > dev.ixl.0.pf.que16.tx_bytes: 286836299105 > > dev.ixl.0.pf.que16.tx_packets: 263532601 > > dev.ixl.0.pf.que16.no_desc_avail: 0 > > dev.ixl.0.pf.que16.tx_dma_setup: 0 > > dev.ixl.0.pf.que16.tso_tx: 0 > > dev.ixl.0.pf.que16.irqs: 83232941 > > dev.ixl.0.pf.que16.dropped: 0 > > dev.ixl.0.pf.que16.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que15.rx_bytes: 106345323488 > > dev.ixl.0.pf.que15.rx_packets: 208869912 > > dev.ixl.0.pf.que15.tx_bytes: 298825179301 > > dev.ixl.0.pf.que15.tx_packets: 288517504 > > dev.ixl.0.pf.que15.no_desc_avail: 0 > > dev.ixl.0.pf.que15.tx_dma_setup: 0 > > dev.ixl.0.pf.que15.tso_tx: 0 > > dev.ixl.0.pf.que15.irqs: 223322408 > > dev.ixl.0.pf.que15.dropped: 0 > > dev.ixl.0.pf.que15.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que14.rx_bytes: 106721900547 > > dev.ixl.0.pf.que14.rx_packets: 208566121 > > dev.ixl.0.pf.que14.tx_bytes: 288657751920 > > dev.ixl.0.pf.que14.tx_packets: 263556000 > > dev.ixl.0.pf.que14.no_desc_avail: 0 > > dev.ixl.0.pf.que14.tx_dma_setup: 0 > > dev.ixl.0.pf.que14.tso_tx: 0 > > dev.ixl.0.pf.que14.irqs: 220377537 > > dev.ixl.0.pf.que14.dropped: 0 > > dev.ixl.0.pf.que14.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que13.rx_bytes: 111978971378 > > dev.ixl.0.pf.que13.rx_packets: 218447354 > > dev.ixl.0.pf.que13.tx_bytes: 298439860675 > > dev.ixl.0.pf.que13.tx_packets: 
276806617 > > dev.ixl.0.pf.que13.no_desc_avail: 0 > > dev.ixl.0.pf.que13.tx_dma_setup: 0 > > dev.ixl.0.pf.que13.tso_tx: 0 > > dev.ixl.0.pf.que13.irqs: 227474625 > > dev.ixl.0.pf.que13.dropped: 0 > > dev.ixl.0.pf.que13.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que12.rx_bytes: 112969704706 > > dev.ixl.0.pf.que12.rx_packets: 220275562 > > dev.ixl.0.pf.que12.tx_bytes: 304750620079 > > dev.ixl.0.pf.que12.tx_packets: 272244483 > > dev.ixl.0.pf.que12.no_desc_avail: 0 > > dev.ixl.0.pf.que12.tx_dma_setup: 0 > > dev.ixl.0.pf.que12.tso_tx: 183 > > dev.ixl.0.pf.que12.irqs: 230111291 > > dev.ixl.0.pf.que12.dropped: 0 > > dev.ixl.0.pf.que12.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que11.rx_bytes: 96405343036 > > dev.ixl.0.pf.que11.rx_packets: 202329448 > > dev.ixl.0.pf.que11.tx_bytes: 302481707696 > > dev.ixl.0.pf.que11.tx_packets: 271689246 > > dev.ixl.0.pf.que11.no_desc_avail: 0 > > dev.ixl.0.pf.que11.tx_dma_setup: 0 > > dev.ixl.0.pf.que11.tso_tx: 0 > > dev.ixl.0.pf.que11.irqs: 220717612 > > dev.ixl.0.pf.que11.dropped: 0 > > dev.ixl.0.pf.que11.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que10.rx_bytes: 111280008670 > > dev.ixl.0.pf.que10.rx_packets: 214900261 > > dev.ixl.0.pf.que10.tx_bytes: 318638566198 > > dev.ixl.0.pf.que10.tx_packets: 295011389 > > dev.ixl.0.pf.que10.no_desc_avail: 0 > > dev.ixl.0.pf.que10.tx_dma_setup: 0 > > dev.ixl.0.pf.que10.tso_tx: 0 > > dev.ixl.0.pf.que10.irqs: 230681709 > > dev.ixl.0.pf.que10.dropped: 0 > > dev.ixl.0.pf.que10.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que9.rx_bytes: 93566025126 > > dev.ixl.0.pf.que9.rx_packets: 198726483 > > dev.ixl.0.pf.que9.tx_bytes: 288858818348 > > dev.ixl.0.pf.que9.tx_packets: 258926864 > > dev.ixl.0.pf.que9.no_desc_avail: 0 > > dev.ixl.0.pf.que9.tx_dma_setup: 0 > > dev.ixl.0.pf.que9.tso_tx: 0 > > dev.ixl.0.pf.que9.irqs: 217918160 > > dev.ixl.0.pf.que9.dropped: 0 > > dev.ixl.0.pf.que9.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que8.rx_bytes: 117169019041 > > dev.ixl.0.pf.que8.rx_packets: 226938172 > > dev.ixl.0.pf.que8.tx_bytes: 
665794492752 > > dev.ixl.0.pf.que8.tx_packets: 593519436 > > dev.ixl.0.pf.que8.no_desc_avail: 0 > > dev.ixl.0.pf.que8.tx_dma_setup: 0 > > dev.ixl.0.pf.que8.tso_tx: 0 > > dev.ixl.0.pf.que8.irqs: 244643578 > > dev.ixl.0.pf.que8.dropped: 0 > > dev.ixl.0.pf.que8.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que7.rx_bytes: 206974266022 > > dev.ixl.0.pf.que7.rx_packets: 449899895 > > dev.ixl.0.pf.que7.tx_bytes: 638527685820 > > dev.ixl.0.pf.que7.tx_packets: 580750916 > > dev.ixl.0.pf.que7.no_desc_avail: 0 > > dev.ixl.0.pf.que7.tx_dma_setup: 0 > > dev.ixl.0.pf.que7.tso_tx: 0 > > dev.ixl.0.pf.que7.irqs: 391760959 > > dev.ixl.0.pf.que7.dropped: 0 > > dev.ixl.0.pf.que7.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que6.rx_bytes: 204373984670 > > dev.ixl.0.pf.que6.rx_packets: 449990985 > > dev.ixl.0.pf.que6.tx_bytes: 655511068125 > > dev.ixl.0.pf.que6.tx_packets: 600735086 > > dev.ixl.0.pf.que6.no_desc_avail: 0 > > dev.ixl.0.pf.que6.tx_dma_setup: 0 > > dev.ixl.0.pf.que6.tso_tx: 0 > > dev.ixl.0.pf.que6.irqs: 394961024 > > dev.ixl.0.pf.que6.dropped: 0 > > dev.ixl.0.pf.que6.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que5.rx_bytes: 222919535872 > > dev.ixl.0.pf.que5.rx_packets: 466659705 > > dev.ixl.0.pf.que5.tx_bytes: 647689764751 > > dev.ixl.0.pf.que5.tx_packets: 582532691 > > dev.ixl.0.pf.que5.no_desc_avail: 0 > > dev.ixl.0.pf.que5.tx_dma_setup: 0 > > dev.ixl.0.pf.que5.tso_tx: 5 > > dev.ixl.0.pf.que5.irqs: 404552229 > > dev.ixl.0.pf.que5.dropped: 0 > > dev.ixl.0.pf.que5.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que4.rx_bytes: 231706806551 > > dev.ixl.0.pf.que4.rx_packets: 464397112 > > dev.ixl.0.pf.que4.tx_bytes: 669945424739 > > dev.ixl.0.pf.que4.tx_packets: 598527594 > > dev.ixl.0.pf.que4.no_desc_avail: 0 > > dev.ixl.0.pf.que4.tx_dma_setup: 0 > > dev.ixl.0.pf.que4.tso_tx: 452 > > dev.ixl.0.pf.que4.irqs: 405018727 > > dev.ixl.0.pf.que4.dropped: 0 > > dev.ixl.0.pf.que4.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que3.rx_bytes: 217942511336 > > dev.ixl.0.pf.que3.rx_packets: 456454137 > > 
dev.ixl.0.pf.que3.tx_bytes: 674027217503 > > dev.ixl.0.pf.que3.tx_packets: 604815959 > > dev.ixl.0.pf.que3.no_desc_avail: 0 > > dev.ixl.0.pf.que3.tx_dma_setup: 0 > > dev.ixl.0.pf.que3.tso_tx: 0 > > dev.ixl.0.pf.que3.irqs: 399890434 > > dev.ixl.0.pf.que3.dropped: 0 > > dev.ixl.0.pf.que3.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que2.rx_bytes: 235057952930 > > dev.ixl.0.pf.que2.rx_packets: 470668205 > > dev.ixl.0.pf.que2.tx_bytes: 653598762323 > > dev.ixl.0.pf.que2.tx_packets: 595468539 > > dev.ixl.0.pf.que2.no_desc_avail: 0 > > dev.ixl.0.pf.que2.tx_dma_setup: 0 > > dev.ixl.0.pf.que2.tso_tx: 0 > > dev.ixl.0.pf.que2.irqs: 410972406 > > dev.ixl.0.pf.que2.dropped: 0 > > dev.ixl.0.pf.que2.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que1.rx_bytes: 212570053522 > > dev.ixl.0.pf.que1.rx_packets: 456981561 > > dev.ixl.0.pf.que1.tx_bytes: 677227126330 > > dev.ixl.0.pf.que1.tx_packets: 612428010 > > dev.ixl.0.pf.que1.no_desc_avail: 0 > > dev.ixl.0.pf.que1.tx_dma_setup: 0 > > dev.ixl.0.pf.que1.tso_tx: 0 > > dev.ixl.0.pf.que1.irqs: 404727745 > > dev.ixl.0.pf.que1.dropped: 0 > > dev.ixl.0.pf.que1.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que0.rx_bytes: 239424279142 > > dev.ixl.0.pf.que0.rx_packets: 479078356 > > dev.ixl.0.pf.que0.tx_bytes: 513283 > > dev.ixl.0.pf.que0.tx_packets: 3990 > > dev.ixl.0.pf.que0.no_desc_avail: 0 > > dev.ixl.0.pf.que0.tx_dma_setup: 0 > > dev.ixl.0.pf.que0.tso_tx: 0 > > dev.ixl.0.pf.que0.irqs: 178414974 > > dev.ixl.0.pf.que0.dropped: 0 > > dev.ixl.0.pf.que0.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.bcast_pkts_txd: 302 > > dev.ixl.0.pf.mcast_pkts_txd: 33965 > > dev.ixl.0.pf.ucast_pkts_txd: 6958908879 > > dev.ixl.0.pf.good_octets_txd: 7669637462330 > > dev.ixl.0.pf.rx_discards: 0 > > dev.ixl.0.pf.bcast_pkts_rcvd: 1 > > dev.ixl.0.pf.mcast_pkts_rcvd: 49549 > > dev.ixl.0.pf.ucast_pkts_rcvd: 5392999777 > > dev.ixl.0.pf.good_octets_rcvd: 2648906886817 > > dev.ixl.0.vc_debug_level: 1 > > dev.ixl.0.admin_irq: 0 > > dev.ixl.0.watchdog_events: 0 > > dev.ixl.0.debug: 0 > > 
dev.ixl.0.dynamic_tx_itr: 0 > > dev.ixl.0.tx_itr: 122 > > dev.ixl.0.dynamic_rx_itr: 0 > > dev.ixl.0.rx_itr: 62 > > dev.ixl.0.fw_version: f4.33 a1.2 n04.42 e8000191d > > dev.ixl.0.current_speed: 10G > > dev.ixl.0.advertise_speed: 0 > > dev.ixl.0.fc: 0 > > dev.ixl.0.%parent: pci129 > > dev.ixl.0.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 > > subdevice=0x0002 class=0x020000 > > dev.ixl.0.%location: slot=0 function=0 handle=\_SB_.PCI1.QR3A.H000 > > dev.ixl.0.%driver: ixl > > dev.ixl.0.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - > 1.4.0 > > dev.ixl.%parent: > >

From owner-freebsd-net@freebsd.org Wed Aug 19 18:58:57 2015
From: Adrian Chadd <adrian.chadd@gmail.com>
To: David Wolfskill, "stable@freebsd.org", "net@freebsd.org"
Date: Wed, 19 Aug 2015 11:58:56 -0700
Subject: Re: Panic [page fault] in _ieee80211_crypto_delkey(): stable/10/amd64 @r286878
In-Reply-To: <20150819160716.GK63584@albert.catwhisker.org>
References: <20150818232007.GN1189@albert.catwhisker.org> <20150819160716.GK63584@albert.catwhisker.org>

hi,

you'll have to do some debugging. it looks like it's some kind of odd
race - line 461 is _ieee80211_crypto_delkey(); line 105 is
cipher_detach() and it blows up there.

Try "wlandebug +crypto" during your next boot and let's see what it
logs for the key. If you can 'print *key' in kgdb on the core at some
frame then we should get some useful information.
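[Editorial sketch] The debugging workflow suggested above can be written out as a short session. The kernel path, vmcore index, and frame number below are illustrative assumptions (frame 9 matches the ieee80211_crypto_delkey() frame in the backtrace quoted later in this thread), not part of the original message:

```shell
# Print a sketch of the suggested session rather than executing it, since it
# requires a crashed FreeBSD kernel and its crash dump. Paths are assumptions.
cat <<'EOF'
# 1. Turn on verbose net80211 crypto-key logging before the next panic:
wlandebug +crypto

# 2. After the panic, open the crash dump in kgdb:
kgdb /boot/kernel/kernel /var/crash/vmcore.1

# 3. Select the ieee80211_crypto_delkey() frame and dump the key:
(kgdb) frame 9
(kgdb) print *key
EOF
```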
-a


On 19 August 2015 at 09:07, David Wolfskill wrote:
> On Tue, Aug 18, 2015 at 04:20:07PM -0700, David Wolfskill wrote:
>> I was minding my own business in a staff meeting this afternoon, and my
>> laptop rebooted; seems it got a panic. I've copied the core.txt.0 file
>> to , along with a
>> verbose dmesg.boot from this morning and output of "pciconf -l -v".
>>
>> This was running:
>> FreeBSD localhost 10.2-STABLE FreeBSD 10.2-STABLE #122 r286878M/286880:1002500: Tue Aug 18 04:06:33 PDT 2015 root@g1-252.catwhisker.org:/common/S1/obj/usr/src/sys/CANARY amd64
>> ....
>
> And this morning (just after I got in to work, and was trying (and
> trying) to get re-associated with the AP at work), I had another one.
>
> I've copied the resulting core.txt.1 over to
> http://www.catwhisker.org/~david/FreeBSD/stable_10/ as well; here are
> excerpts from a unidiff between core.txt.{0,1}:
>
> --- core.txt.0 2015-08-18 15:39:05.232251000 -0700
> +++ core.txt.1 2015-08-19 08:56:37.686238000 -0700
> @@ -1,8 +1,8 @@
> -localhost dumped core - see /var/crash/vmcore.0
> +localhost dumped core - see /var/crash/vmcore.1
>
> -Tue Aug 18 15:39:02 PDT 2015
> +Wed Aug 19 08:56:35 PDT 2015
>
> -FreeBSD localhost 10.2-STABLE FreeBSD 10.2-STABLE #122 r286878M/286880:1002500: Tue Aug 18 04:06:33 PDT 2015 root@g1-252.catwhisker.org:/common/S1/obj/usr/src/sys/CANARY amd64
> +FreeBSD localhost 10.2-STABLE FreeBSD 10.2-STABLE #123 r286912M/286918:1002500: Wed Aug 19 04:05:06 PDT 2015 root@g1-252.catwhisker.org:/common/S1/obj/usr/src/sys/CANARY amd64
>
> panic: page fault
>
> @@ -16,7 +16,7 @@
>
> Unread portion of the kernel message buffer:
> panic: page fault
> -cpuid = 2
> +cpuid = 1
> KDB: stack backtrace:
> #0 0xffffffff80946e00 at kdb_backtrace+0x60
> #1 0xffffffff8090a9e6 at vpanic+0x126
> @@ -34,8 +34,8 @@
> #13 0xffffffff8095e9f0 at sys_ioctl+0x140
> #14 0xffffffff80c84f97 at amd64_syscall+0x357
> #15 0xffffffff80c6a49b at Xfast_syscall+0xfb
> -Uptime: 9h45m0s
> -Dumping 625 out of 8095 MB:..3%..11%..21%..31%..41%..52%..62%..72%..82%..93%
> +Uptime: 3h16m49s
> +Dumping 584 out of 8095 MB:..3%..11%..22%..31%..42%..53%..61%..72%..83%..91%
>
> Reading symbols from /boot/kernel/geom_eli.ko.symbols...done.
> Loaded symbols for /boot/kernel/geom_eli.ko.symbols
> @@ -81,32 +81,32 @@
> at /usr/src/sys/kern/kern_shutdown.c:687
> #4 0xffffffff80c8467b in trap_fatal (frame=,
> eva=) at /usr/src/sys/amd64/amd64/trap.c:851
> -#5 0xffffffff80c8497d in trap_pfault (frame=0xfffffe060d88b510,
> +#5 0xffffffff80c8497d in trap_pfault (frame=0xfffffe060d5ea510,
> usermode=) at /usr/src/sys/amd64/amd64/trap.c:674
> -#6 0xffffffff80c8401a in trap (frame=0xfffffe060d88b510)
> +#6 0xffffffff80c8401a in trap (frame=0xfffffe060d5ea510)
> at /usr/src/sys/amd64/amd64/trap.c:440
> #7 0xffffffff80c6a1b2 in calltrap ()
> at /usr/src/sys/amd64/amd64/exception.S:236
> #8 0xffffffff809f003a in _ieee80211_crypto_delkey ()
> at /usr/src/sys/net80211/ieee80211_crypto.c:105
> -#9 0xffffffff809eff5e in ieee80211_crypto_delkey (vap=0xfffffe03d9070000,
> -    key=0xfffffe03d9070800) at /usr/src/sys/net80211/ieee80211_crypto.c:461
> -#10 0xffffffff80a04d45 in ieee80211_ioctl_delkey (vap=0xfffffe03d9070000,
> +#9 0xffffffff809eff5e in ieee80211_crypto_delkey (vap=0xfffffe03dd31a000,
> +    key=0xfffffe03dd31a800) at /usr/src/sys/net80211/ieee80211_crypto.c:461
> +#10 0xffffffff80a04d45 in ieee80211_ioctl_delkey (vap=0xfffffe03dd31a000,
> ireq=)
> at /usr/src/sys/net80211/ieee80211_ioctl.c:1252
> #11 0xffffffff80a03bd2 in ieee80211_ioctl_set80211 ()
> at /usr/src/sys/net80211/ieee80211_ioctl.c:2814
> #12 0xffffffff80a2c323 in in_control (so=,
> -    cmd=9214790412651315593, data=0xfffffe060d88bb80 "", ifp=0x3,
> +    cmd=9214790412651315593, data=0xfffffe060d5eab80 "", ifp=0x3,
> td=) at /usr/src/sys/netinet/in.c:308
> -#13 0xffffffff809cd57b in ifioctl (so=0xfffffe03d9070800, cmd=2149607914,
> -    data=0xfffffe060d88b8e0 "wlan0", td=0xfffff80170abb940)
> +#13 0xffffffff809cd57b in ifioctl (so=0xfffffe03dd31a800, cmd=2149607914,
> +    data=0xfffffe060d5ea8e0 "wlan0", td=0xfffff800098b5940)
> at /usr/src/sys/net/if.c:2770
> -#14 0xffffffff8095ecf5 in kern_ioctl (td=0xfffff80170abb940,
> -    fd=, com=18446741891212314624) at file.h:320
> -#15 0xffffffff8095e9f0 in sys_ioctl (td=0xfffff80170abb940,
> -    uap=0xfffffe060d88ba40) at /usr/src/sys/kern/sys_generic.c:718
> -#16 0xffffffff80c84f97 in amd64_syscall (td=0xfffff80170abb940, traced=0)
> +#14 0xffffffff8095ecf5 in kern_ioctl (td=0xfffff800098b5940,
> +    fd=, com=18446741891282216960) at file.h:320
> +#15 0xffffffff8095e9f0 in sys_ioctl (td=0xfffff800098b5940,
> +    uap=0xfffffe060d5eaa40) at /usr/src/sys/kern/sys_generic.c:718
> +#16 0xffffffff80c84f97 in amd64_syscall (td=0xfffff800098b5940, traced=0)
> at subr_syscall.c:134
> #17 0xffffffff80c6a49b in Xfast_syscall ()
> at /usr/src/sys/amd64/amd64/exception.S:396
> @@ -118,305 +118,301 @@
> ------------------------------------------------------------------------
> ....
>
>
> So it looks to me to be quite similar to the previous one.
>
> I've also copied the kernel config file ("CANARY") to the above-cited
> Web page.
>
> Anything else I can do to help nail this?
>
> Peace,
> david
> --
> David H. Wolfskill david@catwhisker.org
> Those who would murder in the name of God or prophet are blasphemous cowards.
>
> See http://www.catwhisker.org/~david/publickey.gpg for my public key.
From owner-freebsd-net@freebsd.org Wed Aug 19 18:59:34 2015 From: Adrian Chadd To: Eric Joyner Cc: hiren panchasara , Evgeny Khorokhorin , FreeBSD Net Date: Wed, 19 Aug 2015 11:59:32 -0700 Subject: Re: FreeBSD 10.2-STABLE + Intel XL710 - free queues References: <55D49611.40603@maxnet.ru> <20150819180051.GM94440@strugglingcoder.info> Content-Type: text/plain; charset=UTF-8 List-Id: Networking and TCP/IP with FreeBSD
Does it do RSS distribution into > 16 queues? -a On 19 August 2015 at 11:17, Eric Joyner wrote: > The IXLV_MAX_QUEUES value is for the VF driver; the standard driver should > be able to allocate and properly use up to 64 queues. > > That said, you're only getting rx traffic on the first 16 queues, so that > looks like a bug in the driver. I'll take a look at it. > > - Eric > > On Wed, Aug 19, 2015 at 11:00 AM hiren panchasara < > hiren@strugglingcoder.info> wrote: > >> On 08/19/15 at 05:43P, Evgeny Khorokhorin wrote: >> > Hi All, >> > >> > FreeBSD 10.2-STABLE >> > 2*CPU Intel E5-2643v3 with HyperThreading enabled >> > Intel XL710 network adapter >> > I updated the ixl driver to version 1.4.0 from download.intel.com >> > Every ixl interface creates 24 queues (6 cores * 2 HT * 2 CPUs) but >> > utilizes only 16-17 of them. What is the reason for this behavior, or is >> > it a driver bug? >> >> Not sure what the h/w limit is, but this may be a possible cause: >> #define IXLV_MAX_QUEUES 16 >> in sys/dev/ixl/ixlv.h >> >> and ixlv_init_msix() doing: >> if (queues > IXLV_MAX_QUEUES) >> queues = IXLV_MAX_QUEUES; >> >> Adding eric from intel to confirm.
>> >> Cheers, >> Hiren >> > >> > irq284: ixl0:q0 177563088 2054 >> > irq285: ixl0:q1 402668179 4659 >> > irq286: ixl0:q2 408885088 4731 >> > irq287: ixl0:q3 397744300 4602 >> > irq288: ixl0:q4 403040766 4663 >> > irq289: ixl0:q5 402499314 4657 >> > irq290: ixl0:q6 392693663 4543 >> > irq291: ixl0:q7 389364966 4505 >> > irq292: ixl0:q8 243244346 2814 >> > irq293: ixl0:q9 216834450 2509 >> > irq294: ixl0:q10 229460056 2655 >> > irq295: ixl0:q11 219591953 2540 >> > irq296: ixl0:q12 228944960 2649 >> > irq297: ixl0:q13 226385454 2619 >> > irq298: ixl0:q14 219174953 2536 >> > irq299: ixl0:q15 222151378 2570 >> > irq300: ixl0:q16 82799713 958 >> > irq301: ixl0:q17 6131 0 >> > irq302: ixl0:q18 5586 0 >> > irq303: ixl0:q19 6975 0 >> > irq304: ixl0:q20 6243 0 >> > irq305: ixl0:q21 6729 0 >> > irq306: ixl0:q22 6623 0 >> > irq307: ixl0:q23 7306 0 >> > irq309: ixl1:q0 174074462 2014 >> > irq310: ixl1:q1 435716449 5041 >> > irq311: ixl1:q2 431030443 4987 >> > irq312: ixl1:q3 424156413 4907 >> > irq313: ixl1:q4 414791657 4799 >> > irq314: ixl1:q5 420260382 4862 >> > irq315: ixl1:q6 415645708 4809 >> > irq316: ixl1:q7 422783859 4892 >> > irq317: ixl1:q8 252737383 2924 >> > irq318: ixl1:q9 269655708 3120 >> > irq319: ixl1:q10 252397826 2920 >> > irq320: ixl1:q11 255649144 2958 >> > irq321: ixl1:q12 246025621 2846 >> > irq322: ixl1:q13 240176554 2779 >> > irq323: ixl1:q14 254882418 2949 >> > irq324: ixl1:q15 236846536 2740 >> > irq325: ixl1:q16 86794467 1004 >> > irq326: ixl1:q17 83 0 >> > irq327: ixl1:q18 74 0 >> > irq328: ixl1:q19 202 0 >> > irq329: ixl1:q20 99 0 >> > irq330: ixl1:q21 96 0 >> > irq331: ixl1:q22 91 0 >> > irq332: ixl1:q23 89 0 >> > >> > last pid: 28710; load averages: 7.16, 6.76, 6.49 up 1+00:00:41 >> 17:40:46 >> > 391 processes: 32 running, 215 sleeping, 144 waiting >> > CPU 0: 0.0% user, 0.0% nice, 0.0% system, 49.2% interrupt, 50.8% idle >> > CPU 1: 0.0% user, 0.0% nice, 0.4% system, 41.3% interrupt, 58.3% idle >> > CPU 2: 0.0% user, 0.0% nice, 0.0% system, 
39.0% interrupt, 61.0% idle >> > CPU 3: 0.0% user, 0.0% nice, 0.0% system, 46.5% interrupt, 53.5% idle >> > CPU 4: 0.0% user, 0.0% nice, 0.0% system, 37.4% interrupt, 62.6% idle >> > CPU 5: 0.0% user, 0.0% nice, 0.0% system, 40.9% interrupt, 59.1% idle >> > CPU 6: 0.0% user, 0.0% nice, 0.0% system, 40.2% interrupt, 59.8% idle >> > CPU 7: 0.0% user, 0.0% nice, 0.0% system, 45.3% interrupt, 54.7% idle >> > CPU 8: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, 79.5% idle >> > CPU 9: 0.0% user, 0.0% nice, 0.0% system, 25.2% interrupt, 74.8% idle >> > CPU 10: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, 76.8% idle >> > CPU 11: 0.0% user, 0.0% nice, 0.0% system, 19.3% interrupt, 80.7% idle >> > CPU 12: 0.0% user, 0.0% nice, 0.0% system, 28.7% interrupt, 71.3% idle >> > CPU 13: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, 79.5% idle >> > CPU 14: 0.0% user, 0.0% nice, 0.0% system, 35.0% interrupt, 65.0% idle >> > CPU 15: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, 76.8% idle >> > CPU 16: 0.0% user, 0.0% nice, 0.4% system, 1.2% interrupt, 98.4% idle >> > CPU 17: 0.0% user, 0.0% nice, 2.0% system, 0.0% interrupt, 98.0% idle >> > CPU 18: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, 97.6% idle >> > CPU 19: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, 97.2% idle >> > CPU 20: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, 97.6% idle >> > CPU 21: 0.0% user, 0.0% nice, 1.6% system, 0.0% interrupt, 98.4% idle >> > CPU 22: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, 97.2% idle >> > CPU 23: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, 99.6% idle >> > >> > # netstat -I ixl0 -w1 -h >> > input ixl0 output >> > packets errs idrops bytes packets errs bytes colls >> > 253K 0 0 136M 311K 0 325M 0 >> > 251K 0 0 129M 314K 0 334M 0 >> > 250K 0 0 135M 313K 0 333M 0 >> > >> > hw.ixl.tx_itr: 122 >> > hw.ixl.rx_itr: 62 >> > hw.ixl.dynamic_tx_itr: 0 >> > hw.ixl.dynamic_rx_itr: 0 >> > hw.ixl.max_queues: 0 >> > hw.ixl.ring_size: 4096 >> > 
hw.ixl.enable_msix: 1 >> > dev.ixl.3.mac.xoff_recvd: 0 >> > dev.ixl.3.mac.xoff_txd: 0 >> > dev.ixl.3.mac.xon_recvd: 0 >> > dev.ixl.3.mac.xon_txd: 0 >> > dev.ixl.3.mac.tx_frames_big: 0 >> > dev.ixl.3.mac.tx_frames_1024_1522: 0 >> > dev.ixl.3.mac.tx_frames_512_1023: 0 >> > dev.ixl.3.mac.tx_frames_256_511: 0 >> > dev.ixl.3.mac.tx_frames_128_255: 0 >> > dev.ixl.3.mac.tx_frames_65_127: 0 >> > dev.ixl.3.mac.tx_frames_64: 0 >> > dev.ixl.3.mac.checksum_errors: 0 >> > dev.ixl.3.mac.rx_jabber: 0 >> > dev.ixl.3.mac.rx_oversized: 0 >> > dev.ixl.3.mac.rx_fragmented: 0 >> > dev.ixl.3.mac.rx_undersize: 0 >> > dev.ixl.3.mac.rx_frames_big: 0 >> > dev.ixl.3.mac.rx_frames_1024_1522: 0 >> > dev.ixl.3.mac.rx_frames_512_1023: 0 >> > dev.ixl.3.mac.rx_frames_256_511: 0 >> > dev.ixl.3.mac.rx_frames_128_255: 0 >> > dev.ixl.3.mac.rx_frames_65_127: 0 >> > dev.ixl.3.mac.rx_frames_64: 0 >> > dev.ixl.3.mac.rx_length_errors: 0 >> > dev.ixl.3.mac.remote_faults: 0 >> > dev.ixl.3.mac.local_faults: 0 >> > dev.ixl.3.mac.illegal_bytes: 0 >> > dev.ixl.3.mac.crc_errors: 0 >> > dev.ixl.3.mac.bcast_pkts_txd: 0 >> > dev.ixl.3.mac.mcast_pkts_txd: 0 >> > dev.ixl.3.mac.ucast_pkts_txd: 0 >> > dev.ixl.3.mac.good_octets_txd: 0 >> > dev.ixl.3.mac.rx_discards: 0 >> > dev.ixl.3.mac.bcast_pkts_rcvd: 0 >> > dev.ixl.3.mac.mcast_pkts_rcvd: 0 >> > dev.ixl.3.mac.ucast_pkts_rcvd: 0 >> > dev.ixl.3.mac.good_octets_rcvd: 0 >> > dev.ixl.3.pf.que23.rx_bytes: 0 >> > dev.ixl.3.pf.que23.rx_packets: 0 >> > dev.ixl.3.pf.que23.tx_bytes: 0 >> > dev.ixl.3.pf.que23.tx_packets: 0 >> > dev.ixl.3.pf.que23.no_desc_avail: 0 >> > dev.ixl.3.pf.que23.tx_dma_setup: 0 >> > dev.ixl.3.pf.que23.tso_tx: 0 >> > dev.ixl.3.pf.que23.irqs: 0 >> > dev.ixl.3.pf.que23.dropped: 0 >> > dev.ixl.3.pf.que23.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que22.rx_bytes: 0 >> > dev.ixl.3.pf.que22.rx_packets: 0 >> > dev.ixl.3.pf.que22.tx_bytes: 0 >> > dev.ixl.3.pf.que22.tx_packets: 0 >> > dev.ixl.3.pf.que22.no_desc_avail: 0 >> > dev.ixl.3.pf.que22.tx_dma_setup: 0 >> > 
dev.ixl.3.pf.que22.tso_tx: 0 >> > dev.ixl.3.pf.que22.irqs: 0 >> > dev.ixl.3.pf.que22.dropped: 0 >> > dev.ixl.3.pf.que22.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que21.rx_bytes: 0 >> > dev.ixl.3.pf.que21.rx_packets: 0 >> > dev.ixl.3.pf.que21.tx_bytes: 0 >> > dev.ixl.3.pf.que21.tx_packets: 0 >> > dev.ixl.3.pf.que21.no_desc_avail: 0 >> > dev.ixl.3.pf.que21.tx_dma_setup: 0 >> > dev.ixl.3.pf.que21.tso_tx: 0 >> > dev.ixl.3.pf.que21.irqs: 0 >> > dev.ixl.3.pf.que21.dropped: 0 >> > dev.ixl.3.pf.que21.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que20.rx_bytes: 0 >> > dev.ixl.3.pf.que20.rx_packets: 0 >> > dev.ixl.3.pf.que20.tx_bytes: 0 >> > dev.ixl.3.pf.que20.tx_packets: 0 >> > dev.ixl.3.pf.que20.no_desc_avail: 0 >> > dev.ixl.3.pf.que20.tx_dma_setup: 0 >> > dev.ixl.3.pf.que20.tso_tx: 0 >> > dev.ixl.3.pf.que20.irqs: 0 >> > dev.ixl.3.pf.que20.dropped: 0 >> > dev.ixl.3.pf.que20.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que19.rx_bytes: 0 >> > dev.ixl.3.pf.que19.rx_packets: 0 >> > dev.ixl.3.pf.que19.tx_bytes: 0 >> > dev.ixl.3.pf.que19.tx_packets: 0 >> > dev.ixl.3.pf.que19.no_desc_avail: 0 >> > dev.ixl.3.pf.que19.tx_dma_setup: 0 >> > dev.ixl.3.pf.que19.tso_tx: 0 >> > dev.ixl.3.pf.que19.irqs: 0 >> > dev.ixl.3.pf.que19.dropped: 0 >> > dev.ixl.3.pf.que19.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que18.rx_bytes: 0 >> > dev.ixl.3.pf.que18.rx_packets: 0 >> > dev.ixl.3.pf.que18.tx_bytes: 0 >> > dev.ixl.3.pf.que18.tx_packets: 0 >> > dev.ixl.3.pf.que18.no_desc_avail: 0 >> > dev.ixl.3.pf.que18.tx_dma_setup: 0 >> > dev.ixl.3.pf.que18.tso_tx: 0 >> > dev.ixl.3.pf.que18.irqs: 0 >> > dev.ixl.3.pf.que18.dropped: 0 >> > dev.ixl.3.pf.que18.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que17.rx_bytes: 0 >> > dev.ixl.3.pf.que17.rx_packets: 0 >> > dev.ixl.3.pf.que17.tx_bytes: 0 >> > dev.ixl.3.pf.que17.tx_packets: 0 >> > dev.ixl.3.pf.que17.no_desc_avail: 0 >> > dev.ixl.3.pf.que17.tx_dma_setup: 0 >> > dev.ixl.3.pf.que17.tso_tx: 0 >> > dev.ixl.3.pf.que17.irqs: 0 >> > dev.ixl.3.pf.que17.dropped: 0 >> > 
dev.ixl.3.pf.que17.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que16.rx_bytes: 0 >> > dev.ixl.3.pf.que16.rx_packets: 0 >> > dev.ixl.3.pf.que16.tx_bytes: 0 >> > dev.ixl.3.pf.que16.tx_packets: 0 >> > dev.ixl.3.pf.que16.no_desc_avail: 0 >> > dev.ixl.3.pf.que16.tx_dma_setup: 0 >> > dev.ixl.3.pf.que16.tso_tx: 0 >> > dev.ixl.3.pf.que16.irqs: 0 >> > dev.ixl.3.pf.que16.dropped: 0 >> > dev.ixl.3.pf.que16.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que15.rx_bytes: 0 >> > dev.ixl.3.pf.que15.rx_packets: 0 >> > dev.ixl.3.pf.que15.tx_bytes: 0 >> > dev.ixl.3.pf.que15.tx_packets: 0 >> > dev.ixl.3.pf.que15.no_desc_avail: 0 >> > dev.ixl.3.pf.que15.tx_dma_setup: 0 >> > dev.ixl.3.pf.que15.tso_tx: 0 >> > dev.ixl.3.pf.que15.irqs: 0 >> > dev.ixl.3.pf.que15.dropped: 0 >> > dev.ixl.3.pf.que15.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que14.rx_bytes: 0 >> > dev.ixl.3.pf.que14.rx_packets: 0 >> > dev.ixl.3.pf.que14.tx_bytes: 0 >> > dev.ixl.3.pf.que14.tx_packets: 0 >> > dev.ixl.3.pf.que14.no_desc_avail: 0 >> > dev.ixl.3.pf.que14.tx_dma_setup: 0 >> > dev.ixl.3.pf.que14.tso_tx: 0 >> > dev.ixl.3.pf.que14.irqs: 0 >> > dev.ixl.3.pf.que14.dropped: 0 >> > dev.ixl.3.pf.que14.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que13.rx_bytes: 0 >> > dev.ixl.3.pf.que13.rx_packets: 0 >> > dev.ixl.3.pf.que13.tx_bytes: 0 >> > dev.ixl.3.pf.que13.tx_packets: 0 >> > dev.ixl.3.pf.que13.no_desc_avail: 0 >> > dev.ixl.3.pf.que13.tx_dma_setup: 0 >> > dev.ixl.3.pf.que13.tso_tx: 0 >> > dev.ixl.3.pf.que13.irqs: 0 >> > dev.ixl.3.pf.que13.dropped: 0 >> > dev.ixl.3.pf.que13.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que12.rx_bytes: 0 >> > dev.ixl.3.pf.que12.rx_packets: 0 >> > dev.ixl.3.pf.que12.tx_bytes: 0 >> > dev.ixl.3.pf.que12.tx_packets: 0 >> > dev.ixl.3.pf.que12.no_desc_avail: 0 >> > dev.ixl.3.pf.que12.tx_dma_setup: 0 >> > dev.ixl.3.pf.que12.tso_tx: 0 >> > dev.ixl.3.pf.que12.irqs: 0 >> > dev.ixl.3.pf.que12.dropped: 0 >> > dev.ixl.3.pf.que12.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que11.rx_bytes: 0 >> > dev.ixl.3.pf.que11.rx_packets: 0 >> > 
dev.ixl.3.pf.que11.tx_bytes: 0 >> > dev.ixl.3.pf.que11.tx_packets: 0 >> > dev.ixl.3.pf.que11.no_desc_avail: 0 >> > dev.ixl.3.pf.que11.tx_dma_setup: 0 >> > dev.ixl.3.pf.que11.tso_tx: 0 >> > dev.ixl.3.pf.que11.irqs: 0 >> > dev.ixl.3.pf.que11.dropped: 0 >> > dev.ixl.3.pf.que11.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que10.rx_bytes: 0 >> > dev.ixl.3.pf.que10.rx_packets: 0 >> > dev.ixl.3.pf.que10.tx_bytes: 0 >> > dev.ixl.3.pf.que10.tx_packets: 0 >> > dev.ixl.3.pf.que10.no_desc_avail: 0 >> > dev.ixl.3.pf.que10.tx_dma_setup: 0 >> > dev.ixl.3.pf.que10.tso_tx: 0 >> > dev.ixl.3.pf.que10.irqs: 0 >> > dev.ixl.3.pf.que10.dropped: 0 >> > dev.ixl.3.pf.que10.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que9.rx_bytes: 0 >> > dev.ixl.3.pf.que9.rx_packets: 0 >> > dev.ixl.3.pf.que9.tx_bytes: 0 >> > dev.ixl.3.pf.que9.tx_packets: 0 >> > dev.ixl.3.pf.que9.no_desc_avail: 0 >> > dev.ixl.3.pf.que9.tx_dma_setup: 0 >> > dev.ixl.3.pf.que9.tso_tx: 0 >> > dev.ixl.3.pf.que9.irqs: 0 >> > dev.ixl.3.pf.que9.dropped: 0 >> > dev.ixl.3.pf.que9.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que8.rx_bytes: 0 >> > dev.ixl.3.pf.que8.rx_packets: 0 >> > dev.ixl.3.pf.que8.tx_bytes: 0 >> > dev.ixl.3.pf.que8.tx_packets: 0 >> > dev.ixl.3.pf.que8.no_desc_avail: 0 >> > dev.ixl.3.pf.que8.tx_dma_setup: 0 >> > dev.ixl.3.pf.que8.tso_tx: 0 >> > dev.ixl.3.pf.que8.irqs: 0 >> > dev.ixl.3.pf.que8.dropped: 0 >> > dev.ixl.3.pf.que8.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que7.rx_bytes: 0 >> > dev.ixl.3.pf.que7.rx_packets: 0 >> > dev.ixl.3.pf.que7.tx_bytes: 0 >> > dev.ixl.3.pf.que7.tx_packets: 0 >> > dev.ixl.3.pf.que7.no_desc_avail: 0 >> > dev.ixl.3.pf.que7.tx_dma_setup: 0 >> > dev.ixl.3.pf.que7.tso_tx: 0 >> > dev.ixl.3.pf.que7.irqs: 0 >> > dev.ixl.3.pf.que7.dropped: 0 >> > dev.ixl.3.pf.que7.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que6.rx_bytes: 0 >> > dev.ixl.3.pf.que6.rx_packets: 0 >> > dev.ixl.3.pf.que6.tx_bytes: 0 >> > dev.ixl.3.pf.que6.tx_packets: 0 >> > dev.ixl.3.pf.que6.no_desc_avail: 0 >> > dev.ixl.3.pf.que6.tx_dma_setup: 0 >> > 
dev.ixl.3.pf.que6.tso_tx: 0 >> > dev.ixl.3.pf.que6.irqs: 0 >> > dev.ixl.3.pf.que6.dropped: 0 >> > dev.ixl.3.pf.que6.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que5.rx_bytes: 0 >> > dev.ixl.3.pf.que5.rx_packets: 0 >> > dev.ixl.3.pf.que5.tx_bytes: 0 >> > dev.ixl.3.pf.que5.tx_packets: 0 >> > dev.ixl.3.pf.que5.no_desc_avail: 0 >> > dev.ixl.3.pf.que5.tx_dma_setup: 0 >> > dev.ixl.3.pf.que5.tso_tx: 0 >> > dev.ixl.3.pf.que5.irqs: 0 >> > dev.ixl.3.pf.que5.dropped: 0 >> > dev.ixl.3.pf.que5.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que4.rx_bytes: 0 >> > dev.ixl.3.pf.que4.rx_packets: 0 >> > dev.ixl.3.pf.que4.tx_bytes: 0 >> > dev.ixl.3.pf.que4.tx_packets: 0 >> > dev.ixl.3.pf.que4.no_desc_avail: 0 >> > dev.ixl.3.pf.que4.tx_dma_setup: 0 >> > dev.ixl.3.pf.que4.tso_tx: 0 >> > dev.ixl.3.pf.que4.irqs: 0 >> > dev.ixl.3.pf.que4.dropped: 0 >> > dev.ixl.3.pf.que4.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que3.rx_bytes: 0 >> > dev.ixl.3.pf.que3.rx_packets: 0 >> > dev.ixl.3.pf.que3.tx_bytes: 0 >> > dev.ixl.3.pf.que3.tx_packets: 0 >> > dev.ixl.3.pf.que3.no_desc_avail: 0 >> > dev.ixl.3.pf.que3.tx_dma_setup: 0 >> > dev.ixl.3.pf.que3.tso_tx: 0 >> > dev.ixl.3.pf.que3.irqs: 0 >> > dev.ixl.3.pf.que3.dropped: 0 >> > dev.ixl.3.pf.que3.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que2.rx_bytes: 0 >> > dev.ixl.3.pf.que2.rx_packets: 0 >> > dev.ixl.3.pf.que2.tx_bytes: 0 >> > dev.ixl.3.pf.que2.tx_packets: 0 >> > dev.ixl.3.pf.que2.no_desc_avail: 0 >> > dev.ixl.3.pf.que2.tx_dma_setup: 0 >> > dev.ixl.3.pf.que2.tso_tx: 0 >> > dev.ixl.3.pf.que2.irqs: 0 >> > dev.ixl.3.pf.que2.dropped: 0 >> > dev.ixl.3.pf.que2.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que1.rx_bytes: 0 >> > dev.ixl.3.pf.que1.rx_packets: 0 >> > dev.ixl.3.pf.que1.tx_bytes: 0 >> > dev.ixl.3.pf.que1.tx_packets: 0 >> > dev.ixl.3.pf.que1.no_desc_avail: 0 >> > dev.ixl.3.pf.que1.tx_dma_setup: 0 >> > dev.ixl.3.pf.que1.tso_tx: 0 >> > dev.ixl.3.pf.que1.irqs: 0 >> > dev.ixl.3.pf.que1.dropped: 0 >> > dev.ixl.3.pf.que1.mbuf_defrag_failed: 0 >> > 
dev.ixl.3.pf.que0.rx_bytes: 0 >> > dev.ixl.3.pf.que0.rx_packets: 0 >> > dev.ixl.3.pf.que0.tx_bytes: 0 >> > dev.ixl.3.pf.que0.tx_packets: 0 >> > dev.ixl.3.pf.que0.no_desc_avail: 0 >> > dev.ixl.3.pf.que0.tx_dma_setup: 0 >> > dev.ixl.3.pf.que0.tso_tx: 0 >> > dev.ixl.3.pf.que0.irqs: 0 >> > dev.ixl.3.pf.que0.dropped: 0 >> > dev.ixl.3.pf.que0.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.bcast_pkts_txd: 0 >> > dev.ixl.3.pf.mcast_pkts_txd: 0 >> > dev.ixl.3.pf.ucast_pkts_txd: 0 >> > dev.ixl.3.pf.good_octets_txd: 0 >> > dev.ixl.3.pf.rx_discards: 0 >> > dev.ixl.3.pf.bcast_pkts_rcvd: 0 >> > dev.ixl.3.pf.mcast_pkts_rcvd: 0 >> > dev.ixl.3.pf.ucast_pkts_rcvd: 0 >> > dev.ixl.3.pf.good_octets_rcvd: 0 >> > dev.ixl.3.vc_debug_level: 1 >> > dev.ixl.3.admin_irq: 0 >> > dev.ixl.3.watchdog_events: 0 >> > dev.ixl.3.debug: 0 >> > dev.ixl.3.dynamic_tx_itr: 0 >> > dev.ixl.3.tx_itr: 122 >> > dev.ixl.3.dynamic_rx_itr: 0 >> > dev.ixl.3.rx_itr: 62 >> > dev.ixl.3.fw_version: f4.33 a1.2 n04.42 e8000191d >> > dev.ixl.3.current_speed: Unknown >> > dev.ixl.3.advertise_speed: 0 >> > dev.ixl.3.fc: 0 >> > dev.ixl.3.%parent: pci129 >> > dev.ixl.3.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 >> > subdevice=0x0000 class=0x020000 >> > dev.ixl.3.%location: slot=0 function=3 handle=\_SB_.PCI1.QR3A.H003 >> > dev.ixl.3.%driver: ixl >> > dev.ixl.3.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - >> 1.4.0 >> > dev.ixl.2.mac.xoff_recvd: 0 >> > dev.ixl.2.mac.xoff_txd: 0 >> > dev.ixl.2.mac.xon_recvd: 0 >> > dev.ixl.2.mac.xon_txd: 0 >> > dev.ixl.2.mac.tx_frames_big: 0 >> > dev.ixl.2.mac.tx_frames_1024_1522: 0 >> > dev.ixl.2.mac.tx_frames_512_1023: 0 >> > dev.ixl.2.mac.tx_frames_256_511: 0 >> > dev.ixl.2.mac.tx_frames_128_255: 0 >> > dev.ixl.2.mac.tx_frames_65_127: 0 >> > dev.ixl.2.mac.tx_frames_64: 0 >> > dev.ixl.2.mac.checksum_errors: 0 >> > dev.ixl.2.mac.rx_jabber: 0 >> > dev.ixl.2.mac.rx_oversized: 0 >> > dev.ixl.2.mac.rx_fragmented: 0 >> > dev.ixl.2.mac.rx_undersize: 0 >> > 
dev.ixl.2.mac.rx_frames_big: 0 >> > dev.ixl.2.mac.rx_frames_1024_1522: 0 >> > dev.ixl.2.mac.rx_frames_512_1023: 0 >> > dev.ixl.2.mac.rx_frames_256_511: 0 >> > dev.ixl.2.mac.rx_frames_128_255: 0 >> > dev.ixl.2.mac.rx_frames_65_127: 0 >> > dev.ixl.2.mac.rx_frames_64: 0 >> > dev.ixl.2.mac.rx_length_errors: 0 >> > dev.ixl.2.mac.remote_faults: 0 >> > dev.ixl.2.mac.local_faults: 0 >> > dev.ixl.2.mac.illegal_bytes: 0 >> > dev.ixl.2.mac.crc_errors: 0 >> > dev.ixl.2.mac.bcast_pkts_txd: 0 >> > dev.ixl.2.mac.mcast_pkts_txd: 0 >> > dev.ixl.2.mac.ucast_pkts_txd: 0 >> > dev.ixl.2.mac.good_octets_txd: 0 >> > dev.ixl.2.mac.rx_discards: 0 >> > dev.ixl.2.mac.bcast_pkts_rcvd: 0 >> > dev.ixl.2.mac.mcast_pkts_rcvd: 0 >> > dev.ixl.2.mac.ucast_pkts_rcvd: 0 >> > dev.ixl.2.mac.good_octets_rcvd: 0 >> > dev.ixl.2.pf.que23.rx_bytes: 0 >> > dev.ixl.2.pf.que23.rx_packets: 0 >> > dev.ixl.2.pf.que23.tx_bytes: 0 >> > dev.ixl.2.pf.que23.tx_packets: 0 >> > dev.ixl.2.pf.que23.no_desc_avail: 0 >> > dev.ixl.2.pf.que23.tx_dma_setup: 0 >> > dev.ixl.2.pf.que23.tso_tx: 0 >> > dev.ixl.2.pf.que23.irqs: 0 >> > dev.ixl.2.pf.que23.dropped: 0 >> > dev.ixl.2.pf.que23.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que22.rx_bytes: 0 >> > dev.ixl.2.pf.que22.rx_packets: 0 >> > dev.ixl.2.pf.que22.tx_bytes: 0 >> > dev.ixl.2.pf.que22.tx_packets: 0 >> > dev.ixl.2.pf.que22.no_desc_avail: 0 >> > dev.ixl.2.pf.que22.tx_dma_setup: 0 >> > dev.ixl.2.pf.que22.tso_tx: 0 >> > dev.ixl.2.pf.que22.irqs: 0 >> > dev.ixl.2.pf.que22.dropped: 0 >> > dev.ixl.2.pf.que22.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que21.rx_bytes: 0 >> > dev.ixl.2.pf.que21.rx_packets: 0 >> > dev.ixl.2.pf.que21.tx_bytes: 0 >> > dev.ixl.2.pf.que21.tx_packets: 0 >> > dev.ixl.2.pf.que21.no_desc_avail: 0 >> > dev.ixl.2.pf.que21.tx_dma_setup: 0 >> > dev.ixl.2.pf.que21.tso_tx: 0 >> > dev.ixl.2.pf.que21.irqs: 0 >> > dev.ixl.2.pf.que21.dropped: 0 >> > dev.ixl.2.pf.que21.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que20.rx_bytes: 0 >> > dev.ixl.2.pf.que20.rx_packets: 0 >> > 
dev.ixl.2.pf.que20.tx_bytes: 0 >> > dev.ixl.2.pf.que20.tx_packets: 0 >> > dev.ixl.2.pf.que20.no_desc_avail: 0 >> > dev.ixl.2.pf.que20.tx_dma_setup: 0 >> > dev.ixl.2.pf.que20.tso_tx: 0 >> > dev.ixl.2.pf.que20.irqs: 0 >> > dev.ixl.2.pf.que20.dropped: 0 >> > dev.ixl.2.pf.que20.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que19.rx_bytes: 0 >> > dev.ixl.2.pf.que19.rx_packets: 0 >> > dev.ixl.2.pf.que19.tx_bytes: 0 >> > dev.ixl.2.pf.que19.tx_packets: 0 >> > dev.ixl.2.pf.que19.no_desc_avail: 0 >> > dev.ixl.2.pf.que19.tx_dma_setup: 0 >> > dev.ixl.2.pf.que19.tso_tx: 0 >> > dev.ixl.2.pf.que19.irqs: 0 >> > dev.ixl.2.pf.que19.dropped: 0 >> > dev.ixl.2.pf.que19.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que18.rx_bytes: 0 >> > dev.ixl.2.pf.que18.rx_packets: 0 >> > dev.ixl.2.pf.que18.tx_bytes: 0 >> > dev.ixl.2.pf.que18.tx_packets: 0 >> > dev.ixl.2.pf.que18.no_desc_avail: 0 >> > dev.ixl.2.pf.que18.tx_dma_setup: 0 >> > dev.ixl.2.pf.que18.tso_tx: 0 >> > dev.ixl.2.pf.que18.irqs: 0 >> > dev.ixl.2.pf.que18.dropped: 0 >> > dev.ixl.2.pf.que18.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que17.rx_bytes: 0 >> > dev.ixl.2.pf.que17.rx_packets: 0 >> > dev.ixl.2.pf.que17.tx_bytes: 0 >> > dev.ixl.2.pf.que17.tx_packets: 0 >> > dev.ixl.2.pf.que17.no_desc_avail: 0 >> > dev.ixl.2.pf.que17.tx_dma_setup: 0 >> > dev.ixl.2.pf.que17.tso_tx: 0 >> > dev.ixl.2.pf.que17.irqs: 0 >> > dev.ixl.2.pf.que17.dropped: 0 >> > dev.ixl.2.pf.que17.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que16.rx_bytes: 0 >> > dev.ixl.2.pf.que16.rx_packets: 0 >> > dev.ixl.2.pf.que16.tx_bytes: 0 >> > dev.ixl.2.pf.que16.tx_packets: 0 >> > dev.ixl.2.pf.que16.no_desc_avail: 0 >> > dev.ixl.2.pf.que16.tx_dma_setup: 0 >> > dev.ixl.2.pf.que16.tso_tx: 0 >> > dev.ixl.2.pf.que16.irqs: 0 >> > dev.ixl.2.pf.que16.dropped: 0 >> > dev.ixl.2.pf.que16.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que15.rx_bytes: 0 >> > dev.ixl.2.pf.que15.rx_packets: 0 >> > dev.ixl.2.pf.que15.tx_bytes: 0 >> > dev.ixl.2.pf.que15.tx_packets: 0 >> > dev.ixl.2.pf.que15.no_desc_avail: 0 >> > 
dev.ixl.2.pf.que15.tx_dma_setup: 0 >> > dev.ixl.2.pf.que15.tso_tx: 0 >> > dev.ixl.2.pf.que15.irqs: 0 >> > dev.ixl.2.pf.que15.dropped: 0 >> > dev.ixl.2.pf.que15.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que14.rx_bytes: 0 >> > dev.ixl.2.pf.que14.rx_packets: 0 >> > dev.ixl.2.pf.que14.tx_bytes: 0 >> > dev.ixl.2.pf.que14.tx_packets: 0 >> > dev.ixl.2.pf.que14.no_desc_avail: 0 >> > dev.ixl.2.pf.que14.tx_dma_setup: 0 >> > dev.ixl.2.pf.que14.tso_tx: 0 >> > dev.ixl.2.pf.que14.irqs: 0 >> > dev.ixl.2.pf.que14.dropped: 0 >> > dev.ixl.2.pf.que14.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que13.rx_bytes: 0 >> > dev.ixl.2.pf.que13.rx_packets: 0 >> > dev.ixl.2.pf.que13.tx_bytes: 0 >> > dev.ixl.2.pf.que13.tx_packets: 0 >> > dev.ixl.2.pf.que13.no_desc_avail: 0 >> > dev.ixl.2.pf.que13.tx_dma_setup: 0 >> > dev.ixl.2.pf.que13.tso_tx: 0 >> > dev.ixl.2.pf.que13.irqs: 0 >> > dev.ixl.2.pf.que13.dropped: 0 >> > dev.ixl.2.pf.que13.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que12.rx_bytes: 0 >> > dev.ixl.2.pf.que12.rx_packets: 0 >> > dev.ixl.2.pf.que12.tx_bytes: 0 >> > dev.ixl.2.pf.que12.tx_packets: 0 >> > dev.ixl.2.pf.que12.no_desc_avail: 0 >> > dev.ixl.2.pf.que12.tx_dma_setup: 0 >> > dev.ixl.2.pf.que12.tso_tx: 0 >> > dev.ixl.2.pf.que12.irqs: 0 >> > dev.ixl.2.pf.que12.dropped: 0 >> > dev.ixl.2.pf.que12.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que11.rx_bytes: 0 >> > dev.ixl.2.pf.que11.rx_packets: 0 >> > dev.ixl.2.pf.que11.tx_bytes: 0 >> > dev.ixl.2.pf.que11.tx_packets: 0 >> > dev.ixl.2.pf.que11.no_desc_avail: 0 >> > dev.ixl.2.pf.que11.tx_dma_setup: 0 >> > dev.ixl.2.pf.que11.tso_tx: 0 >> > dev.ixl.2.pf.que11.irqs: 0 >> > dev.ixl.2.pf.que11.dropped: 0 >> > dev.ixl.2.pf.que11.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que10.rx_bytes: 0 >> > dev.ixl.2.pf.que10.rx_packets: 0 >> > dev.ixl.2.pf.que10.tx_bytes: 0 >> > dev.ixl.2.pf.que10.tx_packets: 0 >> > dev.ixl.2.pf.que10.no_desc_avail: 0 >> > dev.ixl.2.pf.que10.tx_dma_setup: 0 >> > dev.ixl.2.pf.que10.tso_tx: 0 >> > dev.ixl.2.pf.que10.irqs: 0 >> > 
dev.ixl.2.pf.que10.dropped: 0 >> > dev.ixl.2.pf.que10.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que9.rx_bytes: 0 >> > dev.ixl.2.pf.que9.rx_packets: 0 >> > dev.ixl.2.pf.que9.tx_bytes: 0 >> > dev.ixl.2.pf.que9.tx_packets: 0 >> > dev.ixl.2.pf.que9.no_desc_avail: 0 >> > dev.ixl.2.pf.que9.tx_dma_setup: 0 >> > dev.ixl.2.pf.que9.tso_tx: 0 >> > dev.ixl.2.pf.que9.irqs: 0 >> > dev.ixl.2.pf.que9.dropped: 0 >> > dev.ixl.2.pf.que9.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que8.rx_bytes: 0 >> > dev.ixl.2.pf.que8.rx_packets: 0 >> > dev.ixl.2.pf.que8.tx_bytes: 0 >> > dev.ixl.2.pf.que8.tx_packets: 0 >> > dev.ixl.2.pf.que8.no_desc_avail: 0 >> > dev.ixl.2.pf.que8.tx_dma_setup: 0 >> > dev.ixl.2.pf.que8.tso_tx: 0 >> > dev.ixl.2.pf.que8.irqs: 0 >> > dev.ixl.2.pf.que8.dropped: 0 >> > dev.ixl.2.pf.que8.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que7.rx_bytes: 0 >> > dev.ixl.2.pf.que7.rx_packets: 0 >> > dev.ixl.2.pf.que7.tx_bytes: 0 >> > dev.ixl.2.pf.que7.tx_packets: 0 >> > dev.ixl.2.pf.que7.no_desc_avail: 0 >> > dev.ixl.2.pf.que7.tx_dma_setup: 0 >> > dev.ixl.2.pf.que7.tso_tx: 0 >> > dev.ixl.2.pf.que7.irqs: 0 >> > dev.ixl.2.pf.que7.dropped: 0 >> > dev.ixl.2.pf.que7.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que6.rx_bytes: 0 >> > dev.ixl.2.pf.que6.rx_packets: 0 >> > dev.ixl.2.pf.que6.tx_bytes: 0 >> > dev.ixl.2.pf.que6.tx_packets: 0 >> > dev.ixl.2.pf.que6.no_desc_avail: 0 >> > dev.ixl.2.pf.que6.tx_dma_setup: 0 >> > dev.ixl.2.pf.que6.tso_tx: 0 >> > dev.ixl.2.pf.que6.irqs: 0 >> > dev.ixl.2.pf.que6.dropped: 0 >> > dev.ixl.2.pf.que6.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que5.rx_bytes: 0 >> > dev.ixl.2.pf.que5.rx_packets: 0 >> > dev.ixl.2.pf.que5.tx_bytes: 0 >> > dev.ixl.2.pf.que5.tx_packets: 0 >> > dev.ixl.2.pf.que5.no_desc_avail: 0 >> > dev.ixl.2.pf.que5.tx_dma_setup: 0 >> > dev.ixl.2.pf.que5.tso_tx: 0 >> > dev.ixl.2.pf.que5.irqs: 0 >> > dev.ixl.2.pf.que5.dropped: 0 >> > dev.ixl.2.pf.que5.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que4.rx_bytes: 0 >> > dev.ixl.2.pf.que4.rx_packets: 0 >> > 
dev.ixl.2.pf.que4.tx_bytes: 0 >> > dev.ixl.2.pf.que4.tx_packets: 0 >> > dev.ixl.2.pf.que4.no_desc_avail: 0 >> > dev.ixl.2.pf.que4.tx_dma_setup: 0 >> > dev.ixl.2.pf.que4.tso_tx: 0 >> > dev.ixl.2.pf.que4.irqs: 0 >> > dev.ixl.2.pf.que4.dropped: 0 >> > dev.ixl.2.pf.que4.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que3.rx_bytes: 0 >> > dev.ixl.2.pf.que3.rx_packets: 0 >> > dev.ixl.2.pf.que3.tx_bytes: 0 >> > dev.ixl.2.pf.que3.tx_packets: 0 >> > dev.ixl.2.pf.que3.no_desc_avail: 0 >> > dev.ixl.2.pf.que3.tx_dma_setup: 0 >> > dev.ixl.2.pf.que3.tso_tx: 0 >> > dev.ixl.2.pf.que3.irqs: 0 >> > dev.ixl.2.pf.que3.dropped: 0 >> > dev.ixl.2.pf.que3.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que2.rx_bytes: 0 >> > dev.ixl.2.pf.que2.rx_packets: 0 >> > dev.ixl.2.pf.que2.tx_bytes: 0 >> > dev.ixl.2.pf.que2.tx_packets: 0 >> > dev.ixl.2.pf.que2.no_desc_avail: 0 >> > dev.ixl.2.pf.que2.tx_dma_setup: 0 >> > dev.ixl.2.pf.que2.tso_tx: 0 >> > dev.ixl.2.pf.que2.irqs: 0 >> > dev.ixl.2.pf.que2.dropped: 0 >> > dev.ixl.2.pf.que2.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que1.rx_bytes: 0 >> > dev.ixl.2.pf.que1.rx_packets: 0 >> > dev.ixl.2.pf.que1.tx_bytes: 0 >> > dev.ixl.2.pf.que1.tx_packets: 0 >> > dev.ixl.2.pf.que1.no_desc_avail: 0 >> > dev.ixl.2.pf.que1.tx_dma_setup: 0 >> > dev.ixl.2.pf.que1.tso_tx: 0 >> > dev.ixl.2.pf.que1.irqs: 0 >> > dev.ixl.2.pf.que1.dropped: 0 >> > dev.ixl.2.pf.que1.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que0.rx_bytes: 0 >> > dev.ixl.2.pf.que0.rx_packets: 0 >> > dev.ixl.2.pf.que0.tx_bytes: 0 >> > dev.ixl.2.pf.que0.tx_packets: 0 >> > dev.ixl.2.pf.que0.no_desc_avail: 0 >> > dev.ixl.2.pf.que0.tx_dma_setup: 0 >> > dev.ixl.2.pf.que0.tso_tx: 0 >> > dev.ixl.2.pf.que0.irqs: 0 >> > dev.ixl.2.pf.que0.dropped: 0 >> > dev.ixl.2.pf.que0.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.bcast_pkts_txd: 0 >> > dev.ixl.2.pf.mcast_pkts_txd: 0 >> > dev.ixl.2.pf.ucast_pkts_txd: 0 >> > dev.ixl.2.pf.good_octets_txd: 0 >> > dev.ixl.2.pf.rx_discards: 0 >> > dev.ixl.2.pf.bcast_pkts_rcvd: 0 >> > 
dev.ixl.2.pf.mcast_pkts_rcvd: 0 >> > dev.ixl.2.pf.ucast_pkts_rcvd: 0 >> > dev.ixl.2.pf.good_octets_rcvd: 0 >> > dev.ixl.2.vc_debug_level: 1 >> > dev.ixl.2.admin_irq: 0 >> > dev.ixl.2.watchdog_events: 0 >> > dev.ixl.2.debug: 0 >> > dev.ixl.2.dynamic_tx_itr: 0 >> > dev.ixl.2.tx_itr: 122 >> > dev.ixl.2.dynamic_rx_itr: 0 >> > dev.ixl.2.rx_itr: 62 >> > dev.ixl.2.fw_version: f4.33 a1.2 n04.42 e8000191d >> > dev.ixl.2.current_speed: Unknown >> > dev.ixl.2.advertise_speed: 0 >> > dev.ixl.2.fc: 0 >> > dev.ixl.2.%parent: pci129 >> > dev.ixl.2.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 >> > subdevice=0x0000 class=0x020000 >> > dev.ixl.2.%location: slot=0 function=2 handle=\_SB_.PCI1.QR3A.H002 >> > dev.ixl.2.%driver: ixl >> > dev.ixl.2.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - >> 1.4.0 >> > dev.ixl.1.mac.xoff_recvd: 0 >> > dev.ixl.1.mac.xoff_txd: 0 >> > dev.ixl.1.mac.xon_recvd: 0 >> > dev.ixl.1.mac.xon_txd: 0 >> > dev.ixl.1.mac.tx_frames_big: 0 >> > dev.ixl.1.mac.tx_frames_1024_1522: 1565670684 >> > dev.ixl.1.mac.tx_frames_512_1023: 101286418 >> > dev.ixl.1.mac.tx_frames_256_511: 49713129 >> > dev.ixl.1.mac.tx_frames_128_255: 231617277 >> > dev.ixl.1.mac.tx_frames_65_127: 2052767669 >> > dev.ixl.1.mac.tx_frames_64: 1318689044 >> > dev.ixl.1.mac.checksum_errors: 0 >> > dev.ixl.1.mac.rx_jabber: 0 >> > dev.ixl.1.mac.rx_oversized: 0 >> > dev.ixl.1.mac.rx_fragmented: 0 >> > dev.ixl.1.mac.rx_undersize: 0 >> > dev.ixl.1.mac.rx_frames_big: 0 >> > dev.ixl.1.mac.rx_frames_1024_1522: 4960403414 >> > dev.ixl.1.mac.rx_frames_512_1023: 113675084 >> > dev.ixl.1.mac.rx_frames_256_511: 253904920 >> > dev.ixl.1.mac.rx_frames_128_255: 196369726 >> > dev.ixl.1.mac.rx_frames_65_127: 1436626211 >> > dev.ixl.1.mac.rx_frames_64: 242768681 >> > dev.ixl.1.mac.rx_length_errors: 0 >> > dev.ixl.1.mac.remote_faults: 0 >> > dev.ixl.1.mac.local_faults: 0 >> > dev.ixl.1.mac.illegal_bytes: 0 >> > dev.ixl.1.mac.crc_errors: 0 >> > dev.ixl.1.mac.bcast_pkts_txd: 277 >> > 
dev.ixl.1.mac.mcast_pkts_txd: 0 >> > dev.ixl.1.mac.ucast_pkts_txd: 5319743942 >> > dev.ixl.1.mac.good_octets_txd: 2642351885737 >> > dev.ixl.1.mac.rx_discards: 0 >> > dev.ixl.1.mac.bcast_pkts_rcvd: 5 >> > dev.ixl.1.mac.mcast_pkts_rcvd: 144 >> > dev.ixl.1.mac.ucast_pkts_rcvd: 7203747879 >> > dev.ixl.1.mac.good_octets_rcvd: 7770230492434 >> > dev.ixl.1.pf.que23.rx_bytes: 0 >> > dev.ixl.1.pf.que23.rx_packets: 0 >> > dev.ixl.1.pf.que23.tx_bytes: 7111 >> > dev.ixl.1.pf.que23.tx_packets: 88 >> > dev.ixl.1.pf.que23.no_desc_avail: 0 >> > dev.ixl.1.pf.que23.tx_dma_setup: 0 >> > dev.ixl.1.pf.que23.tso_tx: 0 >> > dev.ixl.1.pf.que23.irqs: 88 >> > dev.ixl.1.pf.que23.dropped: 0 >> > dev.ixl.1.pf.que23.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que22.rx_bytes: 0 >> > dev.ixl.1.pf.que22.rx_packets: 0 >> > dev.ixl.1.pf.que22.tx_bytes: 6792 >> > dev.ixl.1.pf.que22.tx_packets: 88 >> > dev.ixl.1.pf.que22.no_desc_avail: 0 >> > dev.ixl.1.pf.que22.tx_dma_setup: 0 >> > dev.ixl.1.pf.que22.tso_tx: 0 >> > dev.ixl.1.pf.que22.irqs: 89 >> > dev.ixl.1.pf.que22.dropped: 0 >> > dev.ixl.1.pf.que22.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que21.rx_bytes: 0 >> > dev.ixl.1.pf.que21.rx_packets: 0 >> > dev.ixl.1.pf.que21.tx_bytes: 7486 >> > dev.ixl.1.pf.que21.tx_packets: 93 >> > dev.ixl.1.pf.que21.no_desc_avail: 0 >> > dev.ixl.1.pf.que21.tx_dma_setup: 0 >> > dev.ixl.1.pf.que21.tso_tx: 0 >> > dev.ixl.1.pf.que21.irqs: 95 >> > dev.ixl.1.pf.que21.dropped: 0 >> > dev.ixl.1.pf.que21.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que20.rx_bytes: 0 >> > dev.ixl.1.pf.que20.rx_packets: 0 >> > dev.ixl.1.pf.que20.tx_bytes: 7850 >> > dev.ixl.1.pf.que20.tx_packets: 98 >> > dev.ixl.1.pf.que20.no_desc_avail: 0 >> > dev.ixl.1.pf.que20.tx_dma_setup: 0 >> > dev.ixl.1.pf.que20.tso_tx: 0 >> > dev.ixl.1.pf.que20.irqs: 99 >> > dev.ixl.1.pf.que20.dropped: 0 >> > dev.ixl.1.pf.que20.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que19.rx_bytes: 0 >> > dev.ixl.1.pf.que19.rx_packets: 0 >> > dev.ixl.1.pf.que19.tx_bytes: 64643 >> > 
dev.ixl.1.pf.que19.tx_packets: 202 >> > dev.ixl.1.pf.que19.no_desc_avail: 0 >> > dev.ixl.1.pf.que19.tx_dma_setup: 0 >> > dev.ixl.1.pf.que19.tso_tx: 0 >> > dev.ixl.1.pf.que19.irqs: 202 >> > dev.ixl.1.pf.que19.dropped: 0 >> > dev.ixl.1.pf.que19.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que18.rx_bytes: 0 >> > dev.ixl.1.pf.que18.rx_packets: 0 >> > dev.ixl.1.pf.que18.tx_bytes: 5940 >> > dev.ixl.1.pf.que18.tx_packets: 74 >> > dev.ixl.1.pf.que18.no_desc_avail: 0 >> > dev.ixl.1.pf.que18.tx_dma_setup: 0 >> > dev.ixl.1.pf.que18.tso_tx: 0 >> > dev.ixl.1.pf.que18.irqs: 74 >> > dev.ixl.1.pf.que18.dropped: 0 >> > dev.ixl.1.pf.que18.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que17.rx_bytes: 0 >> > dev.ixl.1.pf.que17.rx_packets: 0 >> > dev.ixl.1.pf.que17.tx_bytes: 11675 >> > dev.ixl.1.pf.que17.tx_packets: 83 >> > dev.ixl.1.pf.que17.no_desc_avail: 0 >> > dev.ixl.1.pf.que17.tx_dma_setup: 0 >> > dev.ixl.1.pf.que17.tso_tx: 0 >> > dev.ixl.1.pf.que17.irqs: 83 >> > dev.ixl.1.pf.que17.dropped: 0 >> > dev.ixl.1.pf.que17.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que16.rx_bytes: 0 >> > dev.ixl.1.pf.que16.rx_packets: 0 >> > dev.ixl.1.pf.que16.tx_bytes: 105750457831 >> > dev.ixl.1.pf.que16.tx_packets: 205406766 >> > dev.ixl.1.pf.que16.no_desc_avail: 0 >> > dev.ixl.1.pf.que16.tx_dma_setup: 0 >> > dev.ixl.1.pf.que16.tso_tx: 0 >> > dev.ixl.1.pf.que16.irqs: 87222978 >> > dev.ixl.1.pf.que16.dropped: 0 >> > dev.ixl.1.pf.que16.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que15.rx_bytes: 289558174088 >> > dev.ixl.1.pf.que15.rx_packets: 272466190 >> > dev.ixl.1.pf.que15.tx_bytes: 106152524681 >> > dev.ixl.1.pf.que15.tx_packets: 205379247 >> > dev.ixl.1.pf.que15.no_desc_avail: 0 >> > dev.ixl.1.pf.que15.tx_dma_setup: 0 >> > dev.ixl.1.pf.que15.tso_tx: 0 >> > dev.ixl.1.pf.que15.irqs: 238145862 >> > dev.ixl.1.pf.que15.dropped: 0 >> > dev.ixl.1.pf.que15.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que14.rx_bytes: 301934533473 >> > dev.ixl.1.pf.que14.rx_packets: 298452930 >> > dev.ixl.1.pf.que14.tx_bytes: 111420393725 >> > 
dev.ixl.1.pf.que14.tx_packets: 215722532 >> > dev.ixl.1.pf.que14.no_desc_avail: 0 >> > dev.ixl.1.pf.que14.tx_dma_setup: 0 >> > dev.ixl.1.pf.que14.tso_tx: 0 >> > dev.ixl.1.pf.que14.irqs: 256291617 >> > dev.ixl.1.pf.que14.dropped: 0 >> > dev.ixl.1.pf.que14.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que13.rx_bytes: 291380746253 >> > dev.ixl.1.pf.que13.rx_packets: 273037957 >> > dev.ixl.1.pf.que13.tx_bytes: 112417776222 >> > dev.ixl.1.pf.que13.tx_packets: 217500943 >> > dev.ixl.1.pf.que13.no_desc_avail: 0 >> > dev.ixl.1.pf.que13.tx_dma_setup: 0 >> > dev.ixl.1.pf.que13.tso_tx: 0 >> > dev.ixl.1.pf.que13.irqs: 241422331 >> > dev.ixl.1.pf.que13.dropped: 0 >> > dev.ixl.1.pf.que13.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que12.rx_bytes: 301105585425 >> > dev.ixl.1.pf.que12.rx_packets: 286137817 >> > dev.ixl.1.pf.que12.tx_bytes: 95851784579 >> > dev.ixl.1.pf.que12.tx_packets: 199715765 >> > dev.ixl.1.pf.que12.no_desc_avail: 0 >> > dev.ixl.1.pf.que12.tx_dma_setup: 0 >> > dev.ixl.1.pf.que12.tso_tx: 0 >> > dev.ixl.1.pf.que12.irqs: 247322880 >> > dev.ixl.1.pf.que12.dropped: 0 >> > dev.ixl.1.pf.que12.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que11.rx_bytes: 307105398143 >> > dev.ixl.1.pf.que11.rx_packets: 281046463 >> > dev.ixl.1.pf.que11.tx_bytes: 110710957789 >> > dev.ixl.1.pf.que11.tx_packets: 211784031 >> > dev.ixl.1.pf.que11.no_desc_avail: 0 >> > dev.ixl.1.pf.que11.tx_dma_setup: 0 >> > dev.ixl.1.pf.que11.tso_tx: 0 >> > dev.ixl.1.pf.que11.irqs: 256987179 >> > dev.ixl.1.pf.que11.dropped: 0 >> > dev.ixl.1.pf.que11.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que10.rx_bytes: 304288000453 >> > dev.ixl.1.pf.que10.rx_packets: 278987858 >> > dev.ixl.1.pf.que10.tx_bytes: 93022244338 >> > dev.ixl.1.pf.que10.tx_packets: 195869210 >> > dev.ixl.1.pf.que10.no_desc_avail: 0 >> > dev.ixl.1.pf.que10.tx_dma_setup: 0 >> > dev.ixl.1.pf.que10.tso_tx: 0 >> > dev.ixl.1.pf.que10.irqs: 253622192 >> > dev.ixl.1.pf.que10.dropped: 0 >> > dev.ixl.1.pf.que10.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que9.rx_bytes: 
320340203822 >> > dev.ixl.1.pf.que9.rx_packets: 302309010 >> > dev.ixl.1.pf.que9.tx_bytes: 116604776460 >> > dev.ixl.1.pf.que9.tx_packets: 223949025 >> > dev.ixl.1.pf.que9.no_desc_avail: 0 >> > dev.ixl.1.pf.que9.tx_dma_setup: 0 >> > dev.ixl.1.pf.que9.tso_tx: 0 >> > dev.ixl.1.pf.que9.irqs: 271165440 >> > dev.ixl.1.pf.que9.dropped: 0 >> > dev.ixl.1.pf.que9.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que8.rx_bytes: 291403725592 >> > dev.ixl.1.pf.que8.rx_packets: 267859568 >> > dev.ixl.1.pf.que8.tx_bytes: 205745654558 >> > dev.ixl.1.pf.que8.tx_packets: 443349835 >> > dev.ixl.1.pf.que8.no_desc_avail: 0 >> > dev.ixl.1.pf.que8.tx_dma_setup: 0 >> > dev.ixl.1.pf.que8.tso_tx: 0 >> > dev.ixl.1.pf.que8.irqs: 254116755 >> > dev.ixl.1.pf.que8.dropped: 0 >> > dev.ixl.1.pf.que8.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que7.rx_bytes: 673363127346 >> > dev.ixl.1.pf.que7.rx_packets: 617269774 >> > dev.ixl.1.pf.que7.tx_bytes: 203162891886 >> > dev.ixl.1.pf.que7.tx_packets: 443709339 >> > dev.ixl.1.pf.que7.no_desc_avail: 0 >> > dev.ixl.1.pf.que7.tx_dma_setup: 0 >> > dev.ixl.1.pf.que7.tso_tx: 0 >> > dev.ixl.1.pf.que7.irqs: 424706771 >> > dev.ixl.1.pf.que7.dropped: 0 >> > dev.ixl.1.pf.que7.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que6.rx_bytes: 644709094218 >> > dev.ixl.1.pf.que6.rx_packets: 601892919 >> > dev.ixl.1.pf.que6.tx_bytes: 221661735032 >> > dev.ixl.1.pf.que6.tx_packets: 460127064 >> > dev.ixl.1.pf.que6.no_desc_avail: 0 >> > dev.ixl.1.pf.que6.tx_dma_setup: 0 >> > dev.ixl.1.pf.que6.tso_tx: 0 >> > dev.ixl.1.pf.que6.irqs: 417748074 >> > dev.ixl.1.pf.que6.dropped: 0 >> > dev.ixl.1.pf.que6.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que5.rx_bytes: 661904432231 >> > dev.ixl.1.pf.que5.rx_packets: 622012837 >> > dev.ixl.1.pf.que5.tx_bytes: 230514282876 >> > dev.ixl.1.pf.que5.tx_packets: 458571100 >> > dev.ixl.1.pf.que5.no_desc_avail: 0 >> > dev.ixl.1.pf.que5.tx_dma_setup: 0 >> > dev.ixl.1.pf.que5.tso_tx: 0 >> > dev.ixl.1.pf.que5.irqs: 422305039 >> > dev.ixl.1.pf.que5.dropped: 0 >> > 
dev.ixl.1.pf.que5.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que4.rx_bytes: 653522179234 >> > dev.ixl.1.pf.que4.rx_packets: 603345546 >> > dev.ixl.1.pf.que4.tx_bytes: 216761219483 >> > dev.ixl.1.pf.que4.tx_packets: 450329641 >> > dev.ixl.1.pf.que4.no_desc_avail: 0 >> > dev.ixl.1.pf.que4.tx_dma_setup: 0 >> > dev.ixl.1.pf.que4.tso_tx: 3 >> > dev.ixl.1.pf.que4.irqs: 416920533 >> > dev.ixl.1.pf.que4.dropped: 0 >> > dev.ixl.1.pf.que4.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que3.rx_bytes: 676494225882 >> > dev.ixl.1.pf.que3.rx_packets: 620605168 >> > dev.ixl.1.pf.que3.tx_bytes: 233854020454 >> > dev.ixl.1.pf.que3.tx_packets: 464425616 >> > dev.ixl.1.pf.que3.no_desc_avail: 0 >> > dev.ixl.1.pf.que3.tx_dma_setup: 0 >> > dev.ixl.1.pf.que3.tso_tx: 0 >> > dev.ixl.1.pf.que3.irqs: 426349030 >> > dev.ixl.1.pf.que3.dropped: 0 >> > dev.ixl.1.pf.que3.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que2.rx_bytes: 677779337711 >> > dev.ixl.1.pf.que2.rx_packets: 620883699 >> > dev.ixl.1.pf.que2.tx_bytes: 211297141668 >> > dev.ixl.1.pf.que2.tx_packets: 450501525 >> > dev.ixl.1.pf.que2.no_desc_avail: 0 >> > dev.ixl.1.pf.que2.tx_dma_setup: 0 >> > dev.ixl.1.pf.que2.tso_tx: 0 >> > dev.ixl.1.pf.que2.irqs: 433146278 >> > dev.ixl.1.pf.que2.dropped: 0 >> > dev.ixl.1.pf.que2.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que1.rx_bytes: 661360798018 >> > dev.ixl.1.pf.que1.rx_packets: 619700636 >> > dev.ixl.1.pf.que1.tx_bytes: 238264220772 >> > dev.ixl.1.pf.que1.tx_packets: 473425354 >> > dev.ixl.1.pf.que1.no_desc_avail: 0 >> > dev.ixl.1.pf.que1.tx_dma_setup: 0 >> > dev.ixl.1.pf.que1.tso_tx: 0 >> > dev.ixl.1.pf.que1.irqs: 437959829 >> > dev.ixl.1.pf.que1.dropped: 0 >> > dev.ixl.1.pf.que1.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.que0.rx_bytes: 685201226330 >> > dev.ixl.1.pf.que0.rx_packets: 637772348 >> > dev.ixl.1.pf.que0.tx_bytes: 124808 >> > dev.ixl.1.pf.que0.tx_packets: 1782 >> > dev.ixl.1.pf.que0.no_desc_avail: 0 >> > dev.ixl.1.pf.que0.tx_dma_setup: 0 >> > dev.ixl.1.pf.que0.tso_tx: 0 >> > dev.ixl.1.pf.que0.irqs: 
174905480 >> > dev.ixl.1.pf.que0.dropped: 0 >> > dev.ixl.1.pf.que0.mbuf_defrag_failed: 0 >> > dev.ixl.1.pf.bcast_pkts_txd: 277 >> > dev.ixl.1.pf.mcast_pkts_txd: 0 >> > dev.ixl.1.pf.ucast_pkts_txd: 5319743945 >> > dev.ixl.1.pf.good_octets_txd: 2613178367282 >> > dev.ixl.1.pf.rx_discards: 0 >> > dev.ixl.1.pf.bcast_pkts_rcvd: 1 >> > dev.ixl.1.pf.mcast_pkts_rcvd: 0 >> > dev.ixl.1.pf.ucast_pkts_rcvd: 7203747890 >> > dev.ixl.1.pf.good_octets_rcvd: 7770230490224 >> > dev.ixl.1.vc_debug_level: 1 >> > dev.ixl.1.admin_irq: 0 >> > dev.ixl.1.watchdog_events: 0 >> > dev.ixl.1.debug: 0 >> > dev.ixl.1.dynamic_tx_itr: 0 >> > dev.ixl.1.tx_itr: 122 >> > dev.ixl.1.dynamic_rx_itr: 0 >> > dev.ixl.1.rx_itr: 62 >> > dev.ixl.1.fw_version: f4.33 a1.2 n04.42 e8000191d >> > dev.ixl.1.current_speed: 10G >> > dev.ixl.1.advertise_speed: 0 >> > dev.ixl.1.fc: 0 >> > dev.ixl.1.%parent: pci129 >> > dev.ixl.1.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 >> > subdevice=0x0000 class=0x020000 >> > dev.ixl.1.%location: slot=0 function=1 handle=\_SB_.PCI1.QR3A.H001 >> > dev.ixl.1.%driver: ixl >> > dev.ixl.1.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - >> 1.4.0 >> > dev.ixl.0.mac.xoff_recvd: 0 >> > dev.ixl.0.mac.xoff_txd: 0 >> > dev.ixl.0.mac.xon_recvd: 0 >> > dev.ixl.0.mac.xon_txd: 0 >> > dev.ixl.0.mac.tx_frames_big: 0 >> > dev.ixl.0.mac.tx_frames_1024_1522: 4961134019 >> > dev.ixl.0.mac.tx_frames_512_1023: 113082136 >> > dev.ixl.0.mac.tx_frames_256_511: 123538450 >> > dev.ixl.0.mac.tx_frames_128_255: 185051082 >> > dev.ixl.0.mac.tx_frames_65_127: 1332798493 >> > dev.ixl.0.mac.tx_frames_64: 243338964 >> > dev.ixl.0.mac.checksum_errors: 0 >> > dev.ixl.0.mac.rx_jabber: 0 >> > dev.ixl.0.mac.rx_oversized: 0 >> > dev.ixl.0.mac.rx_fragmented: 0 >> > dev.ixl.0.mac.rx_undersize: 0 >> > dev.ixl.0.mac.rx_frames_big: 0 >> > dev.ixl.0.mac.rx_frames_1024_1522: 1566499069 >> > dev.ixl.0.mac.rx_frames_512_1023: 101390143 >> > dev.ixl.0.mac.rx_frames_256_511: 49831970 >> > 
dev.ixl.0.mac.rx_frames_128_255: 231738168 >> > dev.ixl.0.mac.rx_frames_65_127: 2123185819 >> > dev.ixl.0.mac.rx_frames_64: 1320404300 >> > dev.ixl.0.mac.rx_length_errors: 0 >> > dev.ixl.0.mac.remote_faults: 0 >> > dev.ixl.0.mac.local_faults: 0 >> > dev.ixl.0.mac.illegal_bytes: 0 >> > dev.ixl.0.mac.crc_errors: 0 >> > dev.ixl.0.mac.bcast_pkts_txd: 302 >> > dev.ixl.0.mac.mcast_pkts_txd: 33965 >> > dev.ixl.0.mac.ucast_pkts_txd: 6958908862 >> > dev.ixl.0.mac.good_octets_txd: 7698936138858 >> > dev.ixl.0.mac.rx_discards: 0 >> > dev.ixl.0.mac.bcast_pkts_rcvd: 1 >> > dev.ixl.0.mac.mcast_pkts_rcvd: 49693 >> > dev.ixl.0.mac.ucast_pkts_rcvd: 5392999771 >> > dev.ixl.0.mac.good_octets_rcvd: 2648906893811 >> > dev.ixl.0.pf.que23.rx_bytes: 0 >> > dev.ixl.0.pf.que23.rx_packets: 0 >> > dev.ixl.0.pf.que23.tx_bytes: 2371273 >> > dev.ixl.0.pf.que23.tx_packets: 7313 >> > dev.ixl.0.pf.que23.no_desc_avail: 0 >> > dev.ixl.0.pf.que23.tx_dma_setup: 0 >> > dev.ixl.0.pf.que23.tso_tx: 0 >> > dev.ixl.0.pf.que23.irqs: 7313 >> > dev.ixl.0.pf.que23.dropped: 0 >> > dev.ixl.0.pf.que23.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que22.rx_bytes: 0 >> > dev.ixl.0.pf.que22.rx_packets: 0 >> > dev.ixl.0.pf.que22.tx_bytes: 1908468 >> > dev.ixl.0.pf.que22.tx_packets: 6626 >> > dev.ixl.0.pf.que22.no_desc_avail: 0 >> > dev.ixl.0.pf.que22.tx_dma_setup: 0 >> > dev.ixl.0.pf.que22.tso_tx: 0 >> > dev.ixl.0.pf.que22.irqs: 6627 >> > dev.ixl.0.pf.que22.dropped: 0 >> > dev.ixl.0.pf.que22.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que21.rx_bytes: 0 >> > dev.ixl.0.pf.que21.rx_packets: 0 >> > dev.ixl.0.pf.que21.tx_bytes: 2092668 >> > dev.ixl.0.pf.que21.tx_packets: 6739 >> > dev.ixl.0.pf.que21.no_desc_avail: 0 >> > dev.ixl.0.pf.que21.tx_dma_setup: 0 >> > dev.ixl.0.pf.que21.tso_tx: 0 >> > dev.ixl.0.pf.que21.irqs: 6728 >> > dev.ixl.0.pf.que21.dropped: 0 >> > dev.ixl.0.pf.que21.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que20.rx_bytes: 0 >> > dev.ixl.0.pf.que20.rx_packets: 0 >> > dev.ixl.0.pf.que20.tx_bytes: 1742176 >> > 
dev.ixl.0.pf.que20.tx_packets: 6246 >> > dev.ixl.0.pf.que20.no_desc_avail: 0 >> > dev.ixl.0.pf.que20.tx_dma_setup: 0 >> > dev.ixl.0.pf.que20.tso_tx: 0 >> > dev.ixl.0.pf.que20.irqs: 6249 >> > dev.ixl.0.pf.que20.dropped: 0 >> > dev.ixl.0.pf.que20.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que19.rx_bytes: 0 >> > dev.ixl.0.pf.que19.rx_packets: 0 >> > dev.ixl.0.pf.que19.tx_bytes: 2102284 >> > dev.ixl.0.pf.que19.tx_packets: 6979 >> > dev.ixl.0.pf.que19.no_desc_avail: 0 >> > dev.ixl.0.pf.que19.tx_dma_setup: 0 >> > dev.ixl.0.pf.que19.tso_tx: 0 >> > dev.ixl.0.pf.que19.irqs: 6979 >> > dev.ixl.0.pf.que19.dropped: 0 >> > dev.ixl.0.pf.que19.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que18.rx_bytes: 0 >> > dev.ixl.0.pf.que18.rx_packets: 0 >> > dev.ixl.0.pf.que18.tx_bytes: 1532360 >> > dev.ixl.0.pf.que18.tx_packets: 5588 >> > dev.ixl.0.pf.que18.no_desc_avail: 0 >> > dev.ixl.0.pf.que18.tx_dma_setup: 0 >> > dev.ixl.0.pf.que18.tso_tx: 0 >> > dev.ixl.0.pf.que18.irqs: 5588 >> > dev.ixl.0.pf.que18.dropped: 0 >> > dev.ixl.0.pf.que18.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que17.rx_bytes: 0 >> > dev.ixl.0.pf.que17.rx_packets: 0 >> > dev.ixl.0.pf.que17.tx_bytes: 1809684 >> > dev.ixl.0.pf.que17.tx_packets: 6136 >> > dev.ixl.0.pf.que17.no_desc_avail: 0 >> > dev.ixl.0.pf.que17.tx_dma_setup: 0 >> > dev.ixl.0.pf.que17.tso_tx: 0 >> > dev.ixl.0.pf.que17.irqs: 6136 >> > dev.ixl.0.pf.que17.dropped: 0 >> > dev.ixl.0.pf.que17.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que16.rx_bytes: 0 >> > dev.ixl.0.pf.que16.rx_packets: 0 >> > dev.ixl.0.pf.que16.tx_bytes: 286836299105 >> > dev.ixl.0.pf.que16.tx_packets: 263532601 >> > dev.ixl.0.pf.que16.no_desc_avail: 0 >> > dev.ixl.0.pf.que16.tx_dma_setup: 0 >> > dev.ixl.0.pf.que16.tso_tx: 0 >> > dev.ixl.0.pf.que16.irqs: 83232941 >> > dev.ixl.0.pf.que16.dropped: 0 >> > dev.ixl.0.pf.que16.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que15.rx_bytes: 106345323488 >> > dev.ixl.0.pf.que15.rx_packets: 208869912 >> > dev.ixl.0.pf.que15.tx_bytes: 298825179301 >> > 
dev.ixl.0.pf.que15.tx_packets: 288517504 >> > dev.ixl.0.pf.que15.no_desc_avail: 0 >> > dev.ixl.0.pf.que15.tx_dma_setup: 0 >> > dev.ixl.0.pf.que15.tso_tx: 0 >> > dev.ixl.0.pf.que15.irqs: 223322408 >> > dev.ixl.0.pf.que15.dropped: 0 >> > dev.ixl.0.pf.que15.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que14.rx_bytes: 106721900547 >> > dev.ixl.0.pf.que14.rx_packets: 208566121 >> > dev.ixl.0.pf.que14.tx_bytes: 288657751920 >> > dev.ixl.0.pf.que14.tx_packets: 263556000 >> > dev.ixl.0.pf.que14.no_desc_avail: 0 >> > dev.ixl.0.pf.que14.tx_dma_setup: 0 >> > dev.ixl.0.pf.que14.tso_tx: 0 >> > dev.ixl.0.pf.que14.irqs: 220377537 >> > dev.ixl.0.pf.que14.dropped: 0 >> > dev.ixl.0.pf.que14.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que13.rx_bytes: 111978971378 >> > dev.ixl.0.pf.que13.rx_packets: 218447354 >> > dev.ixl.0.pf.que13.tx_bytes: 298439860675 >> > dev.ixl.0.pf.que13.tx_packets: 276806617 >> > dev.ixl.0.pf.que13.no_desc_avail: 0 >> > dev.ixl.0.pf.que13.tx_dma_setup: 0 >> > dev.ixl.0.pf.que13.tso_tx: 0 >> > dev.ixl.0.pf.que13.irqs: 227474625 >> > dev.ixl.0.pf.que13.dropped: 0 >> > dev.ixl.0.pf.que13.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que12.rx_bytes: 112969704706 >> > dev.ixl.0.pf.que12.rx_packets: 220275562 >> > dev.ixl.0.pf.que12.tx_bytes: 304750620079 >> > dev.ixl.0.pf.que12.tx_packets: 272244483 >> > dev.ixl.0.pf.que12.no_desc_avail: 0 >> > dev.ixl.0.pf.que12.tx_dma_setup: 0 >> > dev.ixl.0.pf.que12.tso_tx: 183 >> > dev.ixl.0.pf.que12.irqs: 230111291 >> > dev.ixl.0.pf.que12.dropped: 0 >> > dev.ixl.0.pf.que12.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que11.rx_bytes: 96405343036 >> > dev.ixl.0.pf.que11.rx_packets: 202329448 >> > dev.ixl.0.pf.que11.tx_bytes: 302481707696 >> > dev.ixl.0.pf.que11.tx_packets: 271689246 >> > dev.ixl.0.pf.que11.no_desc_avail: 0 >> > dev.ixl.0.pf.que11.tx_dma_setup: 0 >> > dev.ixl.0.pf.que11.tso_tx: 0 >> > dev.ixl.0.pf.que11.irqs: 220717612 >> > dev.ixl.0.pf.que11.dropped: 0 >> > dev.ixl.0.pf.que11.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que10.rx_bytes: 
111280008670 >> > dev.ixl.0.pf.que10.rx_packets: 214900261 >> > dev.ixl.0.pf.que10.tx_bytes: 318638566198 >> > dev.ixl.0.pf.que10.tx_packets: 295011389 >> > dev.ixl.0.pf.que10.no_desc_avail: 0 >> > dev.ixl.0.pf.que10.tx_dma_setup: 0 >> > dev.ixl.0.pf.que10.tso_tx: 0 >> > dev.ixl.0.pf.que10.irqs: 230681709 >> > dev.ixl.0.pf.que10.dropped: 0 >> > dev.ixl.0.pf.que10.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que9.rx_bytes: 93566025126 >> > dev.ixl.0.pf.que9.rx_packets: 198726483 >> > dev.ixl.0.pf.que9.tx_bytes: 288858818348 >> > dev.ixl.0.pf.que9.tx_packets: 258926864 >> > dev.ixl.0.pf.que9.no_desc_avail: 0 >> > dev.ixl.0.pf.que9.tx_dma_setup: 0 >> > dev.ixl.0.pf.que9.tso_tx: 0 >> > dev.ixl.0.pf.que9.irqs: 217918160 >> > dev.ixl.0.pf.que9.dropped: 0 >> > dev.ixl.0.pf.que9.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que8.rx_bytes: 117169019041 >> > dev.ixl.0.pf.que8.rx_packets: 226938172 >> > dev.ixl.0.pf.que8.tx_bytes: 665794492752 >> > dev.ixl.0.pf.que8.tx_packets: 593519436 >> > dev.ixl.0.pf.que8.no_desc_avail: 0 >> > dev.ixl.0.pf.que8.tx_dma_setup: 0 >> > dev.ixl.0.pf.que8.tso_tx: 0 >> > dev.ixl.0.pf.que8.irqs: 244643578 >> > dev.ixl.0.pf.que8.dropped: 0 >> > dev.ixl.0.pf.que8.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que7.rx_bytes: 206974266022 >> > dev.ixl.0.pf.que7.rx_packets: 449899895 >> > dev.ixl.0.pf.que7.tx_bytes: 638527685820 >> > dev.ixl.0.pf.que7.tx_packets: 580750916 >> > dev.ixl.0.pf.que7.no_desc_avail: 0 >> > dev.ixl.0.pf.que7.tx_dma_setup: 0 >> > dev.ixl.0.pf.que7.tso_tx: 0 >> > dev.ixl.0.pf.que7.irqs: 391760959 >> > dev.ixl.0.pf.que7.dropped: 0 >> > dev.ixl.0.pf.que7.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que6.rx_bytes: 204373984670 >> > dev.ixl.0.pf.que6.rx_packets: 449990985 >> > dev.ixl.0.pf.que6.tx_bytes: 655511068125 >> > dev.ixl.0.pf.que6.tx_packets: 600735086 >> > dev.ixl.0.pf.que6.no_desc_avail: 0 >> > dev.ixl.0.pf.que6.tx_dma_setup: 0 >> > dev.ixl.0.pf.que6.tso_tx: 0 >> > dev.ixl.0.pf.que6.irqs: 394961024 >> > dev.ixl.0.pf.que6.dropped: 0 >> > 
dev.ixl.0.pf.que6.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que5.rx_bytes: 222919535872 >> > dev.ixl.0.pf.que5.rx_packets: 466659705 >> > dev.ixl.0.pf.que5.tx_bytes: 647689764751 >> > dev.ixl.0.pf.que5.tx_packets: 582532691 >> > dev.ixl.0.pf.que5.no_desc_avail: 0 >> > dev.ixl.0.pf.que5.tx_dma_setup: 0 >> > dev.ixl.0.pf.que5.tso_tx: 5 >> > dev.ixl.0.pf.que5.irqs: 404552229 >> > dev.ixl.0.pf.que5.dropped: 0 >> > dev.ixl.0.pf.que5.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que4.rx_bytes: 231706806551 >> > dev.ixl.0.pf.que4.rx_packets: 464397112 >> > dev.ixl.0.pf.que4.tx_bytes: 669945424739 >> > dev.ixl.0.pf.que4.tx_packets: 598527594 >> > dev.ixl.0.pf.que4.no_desc_avail: 0 >> > dev.ixl.0.pf.que4.tx_dma_setup: 0 >> > dev.ixl.0.pf.que4.tso_tx: 452 >> > dev.ixl.0.pf.que4.irqs: 405018727 >> > dev.ixl.0.pf.que4.dropped: 0 >> > dev.ixl.0.pf.que4.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que3.rx_bytes: 217942511336 >> > dev.ixl.0.pf.que3.rx_packets: 456454137 >> > dev.ixl.0.pf.que3.tx_bytes: 674027217503 >> > dev.ixl.0.pf.que3.tx_packets: 604815959 >> > dev.ixl.0.pf.que3.no_desc_avail: 0 >> > dev.ixl.0.pf.que3.tx_dma_setup: 0 >> > dev.ixl.0.pf.que3.tso_tx: 0 >> > dev.ixl.0.pf.que3.irqs: 399890434 >> > dev.ixl.0.pf.que3.dropped: 0 >> > dev.ixl.0.pf.que3.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que2.rx_bytes: 235057952930 >> > dev.ixl.0.pf.que2.rx_packets: 470668205 >> > dev.ixl.0.pf.que2.tx_bytes: 653598762323 >> > dev.ixl.0.pf.que2.tx_packets: 595468539 >> > dev.ixl.0.pf.que2.no_desc_avail: 0 >> > dev.ixl.0.pf.que2.tx_dma_setup: 0 >> > dev.ixl.0.pf.que2.tso_tx: 0 >> > dev.ixl.0.pf.que2.irqs: 410972406 >> > dev.ixl.0.pf.que2.dropped: 0 >> > dev.ixl.0.pf.que2.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que1.rx_bytes: 212570053522 >> > dev.ixl.0.pf.que1.rx_packets: 456981561 >> > dev.ixl.0.pf.que1.tx_bytes: 677227126330 >> > dev.ixl.0.pf.que1.tx_packets: 612428010 >> > dev.ixl.0.pf.que1.no_desc_avail: 0 >> > dev.ixl.0.pf.que1.tx_dma_setup: 0 >> > dev.ixl.0.pf.que1.tso_tx: 0 >> > 
dev.ixl.0.pf.que1.irqs: 404727745 >> > dev.ixl.0.pf.que1.dropped: 0 >> > dev.ixl.0.pf.que1.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.que0.rx_bytes: 239424279142 >> > dev.ixl.0.pf.que0.rx_packets: 479078356 >> > dev.ixl.0.pf.que0.tx_bytes: 513283 >> > dev.ixl.0.pf.que0.tx_packets: 3990 >> > dev.ixl.0.pf.que0.no_desc_avail: 0 >> > dev.ixl.0.pf.que0.tx_dma_setup: 0 >> > dev.ixl.0.pf.que0.tso_tx: 0 >> > dev.ixl.0.pf.que0.irqs: 178414974 >> > dev.ixl.0.pf.que0.dropped: 0 >> > dev.ixl.0.pf.que0.mbuf_defrag_failed: 0 >> > dev.ixl.0.pf.bcast_pkts_txd: 302 >> > dev.ixl.0.pf.mcast_pkts_txd: 33965 >> > dev.ixl.0.pf.ucast_pkts_txd: 6958908879 >> > dev.ixl.0.pf.good_octets_txd: 7669637462330 >> > dev.ixl.0.pf.rx_discards: 0 >> > dev.ixl.0.pf.bcast_pkts_rcvd: 1 >> > dev.ixl.0.pf.mcast_pkts_rcvd: 49549 >> > dev.ixl.0.pf.ucast_pkts_rcvd: 5392999777 >> > dev.ixl.0.pf.good_octets_rcvd: 2648906886817 >> > dev.ixl.0.vc_debug_level: 1 >> > dev.ixl.0.admin_irq: 0 >> > dev.ixl.0.watchdog_events: 0 >> > dev.ixl.0.debug: 0 >> > dev.ixl.0.dynamic_tx_itr: 0 >> > dev.ixl.0.tx_itr: 122 >> > dev.ixl.0.dynamic_rx_itr: 0 >> > dev.ixl.0.rx_itr: 62 >> > dev.ixl.0.fw_version: f4.33 a1.2 n04.42 e8000191d >> > dev.ixl.0.current_speed: 10G >> > dev.ixl.0.advertise_speed: 0 >> > dev.ixl.0.fc: 0 >> > dev.ixl.0.%parent: pci129 >> > dev.ixl.0.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 >> > subdevice=0x0002 class=0x020000 >> > dev.ixl.0.%location: slot=0 function=0 handle=\_SB_.PCI1.QR3A.H000 >> > dev.ixl.0.%driver: ixl >> > dev.ixl.0.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - >> 1.4.0 >> > dev.ixl.%parent: >> >> > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" From owner-freebsd-net@freebsd.org Wed Aug 19 19:36:20 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org 
(mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 79A4B9BE25D for ; Wed, 19 Aug 2015 19:36:20 +0000 (UTC) (envelope-from john@maxnet.ru)
Received: from basic.maxnet.ru (mx.maxnet.ru [195.112.97.17]) by mx1.freebsd.org (Postfix) with ESMTP id DE06ABA7; Wed, 19 Aug 2015 19:36:17 +0000 (UTC) (envelope-from john@maxnet.ru)
Received: from [217.15.204.72] (John.Office.Obninsk.MAXnet.ru [217.15.204.72] (may be forged)) by basic.maxnet.ru (8.14.6/8.14.6) with ESMTP id t7JJaFZb002976; Wed, 19 Aug 2015 22:36:15 +0300 (MSK) (envelope-from john@maxnet.ru)
Message-ID: <55D4DAB3.1020401@maxnet.ru>
Date: Wed, 19 Aug 2015 22:36:19 +0300
From: Evgeny Khorokhorin
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0
MIME-Version: 1.0
To: Eric Joyner , hiren panchasara
CC: freebsd-net@freebsd.org
Subject: Re: FreeBSD 10.2-STABLE + Intel XL710 - free queues
References: <55D49611.40603@maxnet.ru> <20150819180051.GM94440@strugglingcoder.info>
In-Reply-To:
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Content-Filtered-By: Mailman/MimeDel 2.1.20
X-BeenThere: freebsd-net@freebsd.org
X-Mailman-Version: 2.1.20
Precedence: list
List-Id: Networking and TCP/IP with FreeBSD
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 19 Aug 2015 19:36:20 -0000

Eric,

I updated this driver in the kernel, not as a module. And I removed #include "opt_rss.h" from if_ixl.c and ixl_txrx.c:

#ifndef IXL_STANDALONE_BUILD
#include "opt_inet.h"
#include "opt_inet6.h"
#include "opt_rss.h"
#endif

because RSS is only in HEAD. Could I have broken something by doing this?

Best regards,
Evgeny Khorokhorin

19.08.2015 21:17, Eric Joyner wrote:
> The IXLV_MAX_QUEUES value is for the VF driver; the standard driver
> should be able to allocate and properly use up to 64 queues.
>
> That said, you're only getting rx traffic on the first 16 queues, so
> that looks like a bug in the driver. I'll take a look at it.
>
> - Eric
>
> On Wed, Aug 19, 2015 at 11:00 AM hiren panchasara wrote:
>
> On 08/19/15 at 05:43P, Evgeny Khorokhorin wrote:
> > Hi All,
> >
> > FreeBSD 10.2-STABLE
> > 2*CPU Intel E5-2643v3 with HyperThreading enabled
> > Intel XL710 network adapter
> > I updated the ixl driver to version 1.4.0 from download.intel.com
> > Every ixl interface creates 24 queues (6 cores * 2 HT * 2 CPUs) but
> > utilizes only 16-17 of them. What is the reason for this behavior,
> > or is it a driver bug?
>
> Not sure what the h/w limit is, but this may be a possible cause:
> #define IXLV_MAX_QUEUES 16
> in sys/dev/ixl/ixlv.h
>
> and ixlv_init_msix() doing:
> if (queues > IXLV_MAX_QUEUES)
>     queues = IXLV_MAX_QUEUES;
>
> Adding eric from intel to confirm.
>
> Cheers,
> Hiren
>
> > irq284: ixl0:q0 177563088 2054 > > irq285: ixl0:q1 402668179 4659 > > irq286: ixl0:q2 408885088 4731 > > irq287: ixl0:q3 397744300 4602 > > irq288: ixl0:q4 403040766 4663 > > irq289: ixl0:q5 402499314 4657 > > irq290: ixl0:q6 392693663 4543 > > irq291: ixl0:q7 389364966 4505 > > irq292: ixl0:q8 243244346 2814 > > irq293: ixl0:q9 216834450 2509 > > irq294: ixl0:q10 229460056 2655 > > irq295: ixl0:q11 219591953 2540 > > irq296: ixl0:q12 228944960 2649 > > irq297: ixl0:q13 226385454 2619 > > irq298: ixl0:q14 219174953 2536 > > irq299: ixl0:q15 222151378 2570 > > irq300: ixl0:q16 82799713 958 > > irq301: ixl0:q17 6131 0 > > irq302: ixl0:q18 5586 0 > > irq303: ixl0:q19 6975 0 > > irq304: ixl0:q20 6243 0 > > irq305: ixl0:q21 6729 0 > > irq306: ixl0:q22 6623 0 > > irq307: ixl0:q23 7306 0 > > irq309: ixl1:q0 174074462 2014 > > irq310: ixl1:q1 435716449 5041 > > irq311: ixl1:q2 431030443 4987 > > irq312: ixl1:q3 424156413 4907 > > irq313: ixl1:q4 414791657 4799 > > irq314: ixl1:q5 420260382 4862 > > irq315: ixl1:q6 415645708 4809 > > irq316: ixl1:q7 422783859 4892 > > irq317: ixl1:q8
252737383 2924 > > irq318: ixl1:q9 269655708 3120 > > irq319: ixl1:q10 252397826 2920 > > irq320: ixl1:q11 255649144 2958 > > irq321: ixl1:q12 246025621 2846 > > irq322: ixl1:q13 240176554 2779 > > irq323: ixl1:q14 254882418 2949 > > irq324: ixl1:q15 236846536 2740 > > irq325: ixl1:q16 86794467 1004 > > irq326: ixl1:q17 83 0 > > irq327: ixl1:q18 74 0 > > irq328: ixl1:q19 202 0 > > irq329: ixl1:q20 99 0 > > irq330: ixl1:q21 96 0 > > irq331: ixl1:q22 91 0 > > irq332: ixl1:q23 89 0 > > > > last pid: 28710; load averages: 7.16, 6.76, 6.49 up > 1+00:00:41 17:40:46 > > 391 processes: 32 running, 215 sleeping, 144 waiting > > CPU 0: 0.0% user, 0.0% nice, 0.0% system, 49.2% interrupt, > 50.8% idle > > CPU 1: 0.0% user, 0.0% nice, 0.4% system, 41.3% interrupt, > 58.3% idle > > CPU 2: 0.0% user, 0.0% nice, 0.0% system, 39.0% interrupt, > 61.0% idle > > CPU 3: 0.0% user, 0.0% nice, 0.0% system, 46.5% interrupt, > 53.5% idle > > CPU 4: 0.0% user, 0.0% nice, 0.0% system, 37.4% interrupt, > 62.6% idle > > CPU 5: 0.0% user, 0.0% nice, 0.0% system, 40.9% interrupt, > 59.1% idle > > CPU 6: 0.0% user, 0.0% nice, 0.0% system, 40.2% interrupt, > 59.8% idle > > CPU 7: 0.0% user, 0.0% nice, 0.0% system, 45.3% interrupt, > 54.7% idle > > CPU 8: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, > 79.5% idle > > CPU 9: 0.0% user, 0.0% nice, 0.0% system, 25.2% interrupt, > 74.8% idle > > CPU 10: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, > 76.8% idle > > CPU 11: 0.0% user, 0.0% nice, 0.0% system, 19.3% interrupt, > 80.7% idle > > CPU 12: 0.0% user, 0.0% nice, 0.0% system, 28.7% interrupt, > 71.3% idle > > CPU 13: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, > 79.5% idle > > CPU 14: 0.0% user, 0.0% nice, 0.0% system, 35.0% interrupt, > 65.0% idle > > CPU 15: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, > 76.8% idle > > CPU 16: 0.0% user, 0.0% nice, 0.4% system, 1.2% interrupt, > 98.4% idle > > CPU 17: 0.0% user, 0.0% nice, 2.0% system, 0.0% interrupt, > 98.0% idle > 
> > CPU 18:  0.0% user,  0.0% nice,  2.4% system,  0.0% interrupt, 97.6% idle
> > CPU 19:  0.0% user,  0.0% nice,  2.8% system,  0.0% interrupt, 97.2% idle
> > CPU 20:  0.0% user,  0.0% nice,  2.4% system,  0.0% interrupt, 97.6% idle
> > CPU 21:  0.0% user,  0.0% nice,  1.6% system,  0.0% interrupt, 98.4% idle
> > CPU 22:  0.0% user,  0.0% nice,  2.8% system,  0.0% interrupt, 97.2% idle
> > CPU 23:  0.0% user,  0.0% nice,  0.4% system,  0.0% interrupt, 99.6% idle
> >
> > # netstat -I ixl0 -w1 -h
> >             input          ixl0           output
> >    packets  errs idrops      bytes    packets  errs      bytes colls
> >       253K     0     0       136M       311K     0       325M     0
> >       251K     0     0       129M       314K     0       334M     0
> >       250K     0     0       135M       313K     0       333M     0
> >
> > hw.ixl.tx_itr: 122
> > hw.ixl.rx_itr: 62
> > hw.ixl.dynamic_tx_itr: 0
> > hw.ixl.dynamic_rx_itr: 0
> > hw.ixl.max_queues: 0
> > hw.ixl.ring_size: 4096
> > hw.ixl.enable_msix: 1
> > dev.ixl.3.mac.xoff_recvd: 0
> > dev.ixl.3.mac.xoff_txd: 0
> > dev.ixl.3.mac.xon_recvd: 0
> > dev.ixl.3.mac.xon_txd: 0
> > dev.ixl.3.mac.tx_frames_big: 0
> > dev.ixl.3.mac.tx_frames_1024_1522: 0
> > dev.ixl.3.mac.tx_frames_512_1023: 0
> > dev.ixl.3.mac.tx_frames_256_511: 0
> > dev.ixl.3.mac.tx_frames_128_255: 0
> > dev.ixl.3.mac.tx_frames_65_127: 0
> > dev.ixl.3.mac.tx_frames_64: 0
> > dev.ixl.3.mac.checksum_errors: 0
> > dev.ixl.3.mac.rx_jabber: 0
> > dev.ixl.3.mac.rx_oversized: 0
> > dev.ixl.3.mac.rx_fragmented: 0
> > dev.ixl.3.mac.rx_undersize: 0
> > dev.ixl.3.mac.rx_frames_big: 0
> > dev.ixl.3.mac.rx_frames_1024_1522: 0
> > dev.ixl.3.mac.rx_frames_512_1023: 0
> > dev.ixl.3.mac.rx_frames_256_511: 0
> > dev.ixl.3.mac.rx_frames_128_255: 0
> > dev.ixl.3.mac.rx_frames_65_127: 0
> > dev.ixl.3.mac.rx_frames_64: 0
> > dev.ixl.3.mac.rx_length_errors: 0
> > dev.ixl.3.mac.remote_faults: 0
> > dev.ixl.3.mac.local_faults: 0
> > dev.ixl.3.mac.illegal_bytes: 0
> > dev.ixl.3.mac.crc_errors: 0
> > dev.ixl.3.mac.bcast_pkts_txd: 0
> > dev.ixl.3.mac.mcast_pkts_txd: 0
> > dev.ixl.3.mac.ucast_pkts_txd: 0
> > dev.ixl.3.mac.good_octets_txd: 0
> > dev.ixl.3.mac.rx_discards: 0
> > dev.ixl.3.mac.bcast_pkts_rcvd: 0
> > dev.ixl.3.mac.mcast_pkts_rcvd: 0
> > dev.ixl.3.mac.ucast_pkts_rcvd: 0
> > dev.ixl.3.mac.good_octets_rcvd: 0
> > dev.ixl.3.pf.que23.rx_bytes: 0
> > dev.ixl.3.pf.que23.rx_packets: 0
> > dev.ixl.3.pf.que23.tx_bytes: 0
> > dev.ixl.3.pf.que23.tx_packets: 0
> > dev.ixl.3.pf.que23.no_desc_avail: 0
> > dev.ixl.3.pf.que23.tx_dma_setup: 0
> > dev.ixl.3.pf.que23.tso_tx: 0
> > dev.ixl.3.pf.que23.irqs: 0
> > dev.ixl.3.pf.que23.dropped: 0
> > dev.ixl.3.pf.que23.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que22.rx_bytes: 0
> > dev.ixl.3.pf.que22.rx_packets: 0
> > dev.ixl.3.pf.que22.tx_bytes: 0
> > dev.ixl.3.pf.que22.tx_packets: 0
> > dev.ixl.3.pf.que22.no_desc_avail: 0
> > dev.ixl.3.pf.que22.tx_dma_setup: 0
> > dev.ixl.3.pf.que22.tso_tx: 0
> > dev.ixl.3.pf.que22.irqs: 0
> > dev.ixl.3.pf.que22.dropped: 0
> > dev.ixl.3.pf.que22.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que21.rx_bytes: 0
> > dev.ixl.3.pf.que21.rx_packets: 0
> > dev.ixl.3.pf.que21.tx_bytes: 0
> > dev.ixl.3.pf.que21.tx_packets: 0
> > dev.ixl.3.pf.que21.no_desc_avail: 0
> > dev.ixl.3.pf.que21.tx_dma_setup: 0
> > dev.ixl.3.pf.que21.tso_tx: 0
> > dev.ixl.3.pf.que21.irqs: 0
> > dev.ixl.3.pf.que21.dropped: 0
> > dev.ixl.3.pf.que21.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que20.rx_bytes: 0
> > dev.ixl.3.pf.que20.rx_packets: 0
> > dev.ixl.3.pf.que20.tx_bytes: 0
> > dev.ixl.3.pf.que20.tx_packets: 0
> > dev.ixl.3.pf.que20.no_desc_avail: 0
> > dev.ixl.3.pf.que20.tx_dma_setup: 0
> > dev.ixl.3.pf.que20.tso_tx: 0
> > dev.ixl.3.pf.que20.irqs: 0
> > dev.ixl.3.pf.que20.dropped: 0
> > dev.ixl.3.pf.que20.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que19.rx_bytes: 0
> > dev.ixl.3.pf.que19.rx_packets: 0
> > dev.ixl.3.pf.que19.tx_bytes: 0
> > dev.ixl.3.pf.que19.tx_packets: 0
> > dev.ixl.3.pf.que19.no_desc_avail: 0
> > dev.ixl.3.pf.que19.tx_dma_setup: 0
> > dev.ixl.3.pf.que19.tso_tx: 0
> > dev.ixl.3.pf.que19.irqs: 0
> > dev.ixl.3.pf.que19.dropped: 0
> > dev.ixl.3.pf.que19.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que18.rx_bytes: 0
> > dev.ixl.3.pf.que18.rx_packets: 0
> > dev.ixl.3.pf.que18.tx_bytes: 0
> > dev.ixl.3.pf.que18.tx_packets: 0
> > dev.ixl.3.pf.que18.no_desc_avail: 0
> > dev.ixl.3.pf.que18.tx_dma_setup: 0
> > dev.ixl.3.pf.que18.tso_tx: 0
> > dev.ixl.3.pf.que18.irqs: 0
> > dev.ixl.3.pf.que18.dropped: 0
> > dev.ixl.3.pf.que18.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que17.rx_bytes: 0
> > dev.ixl.3.pf.que17.rx_packets: 0
> > dev.ixl.3.pf.que17.tx_bytes: 0
> > dev.ixl.3.pf.que17.tx_packets: 0
> > dev.ixl.3.pf.que17.no_desc_avail: 0
> > dev.ixl.3.pf.que17.tx_dma_setup: 0
> > dev.ixl.3.pf.que17.tso_tx: 0
> > dev.ixl.3.pf.que17.irqs: 0
> > dev.ixl.3.pf.que17.dropped: 0
> > dev.ixl.3.pf.que17.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que16.rx_bytes: 0
> > dev.ixl.3.pf.que16.rx_packets: 0
> > dev.ixl.3.pf.que16.tx_bytes: 0
> > dev.ixl.3.pf.que16.tx_packets: 0
> > dev.ixl.3.pf.que16.no_desc_avail: 0
> > dev.ixl.3.pf.que16.tx_dma_setup: 0
> > dev.ixl.3.pf.que16.tso_tx: 0
> > dev.ixl.3.pf.que16.irqs: 0
> > dev.ixl.3.pf.que16.dropped: 0
> > dev.ixl.3.pf.que16.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que15.rx_bytes: 0
> > dev.ixl.3.pf.que15.rx_packets: 0
> > dev.ixl.3.pf.que15.tx_bytes: 0
> > dev.ixl.3.pf.que15.tx_packets: 0
> > dev.ixl.3.pf.que15.no_desc_avail: 0
> > dev.ixl.3.pf.que15.tx_dma_setup: 0
> > dev.ixl.3.pf.que15.tso_tx: 0
> > dev.ixl.3.pf.que15.irqs: 0
> > dev.ixl.3.pf.que15.dropped: 0
> > dev.ixl.3.pf.que15.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que14.rx_bytes: 0
> > dev.ixl.3.pf.que14.rx_packets: 0
> > dev.ixl.3.pf.que14.tx_bytes: 0
> > dev.ixl.3.pf.que14.tx_packets: 0
> > dev.ixl.3.pf.que14.no_desc_avail: 0
> > dev.ixl.3.pf.que14.tx_dma_setup: 0
> > dev.ixl.3.pf.que14.tso_tx: 0
> > dev.ixl.3.pf.que14.irqs: 0
> > dev.ixl.3.pf.que14.dropped: 0
> > dev.ixl.3.pf.que14.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que13.rx_bytes: 0
> > dev.ixl.3.pf.que13.rx_packets: 0
> > dev.ixl.3.pf.que13.tx_bytes: 0
> > dev.ixl.3.pf.que13.tx_packets: 0
> > dev.ixl.3.pf.que13.no_desc_avail: 0
> > dev.ixl.3.pf.que13.tx_dma_setup: 0
> > dev.ixl.3.pf.que13.tso_tx: 0
> > dev.ixl.3.pf.que13.irqs: 0
> > dev.ixl.3.pf.que13.dropped: 0
> > dev.ixl.3.pf.que13.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que12.rx_bytes: 0
> > dev.ixl.3.pf.que12.rx_packets: 0
> > dev.ixl.3.pf.que12.tx_bytes: 0
> > dev.ixl.3.pf.que12.tx_packets: 0
> > dev.ixl.3.pf.que12.no_desc_avail: 0
> > dev.ixl.3.pf.que12.tx_dma_setup: 0
> > dev.ixl.3.pf.que12.tso_tx: 0
> > dev.ixl.3.pf.que12.irqs: 0
> > dev.ixl.3.pf.que12.dropped: 0
> > dev.ixl.3.pf.que12.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que11.rx_bytes: 0
> > dev.ixl.3.pf.que11.rx_packets: 0
> > dev.ixl.3.pf.que11.tx_bytes: 0
> > dev.ixl.3.pf.que11.tx_packets: 0
> > dev.ixl.3.pf.que11.no_desc_avail: 0
> > dev.ixl.3.pf.que11.tx_dma_setup: 0
> > dev.ixl.3.pf.que11.tso_tx: 0
> > dev.ixl.3.pf.que11.irqs: 0
> > dev.ixl.3.pf.que11.dropped: 0
> > dev.ixl.3.pf.que11.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que10.rx_bytes: 0
> > dev.ixl.3.pf.que10.rx_packets: 0
> > dev.ixl.3.pf.que10.tx_bytes: 0
> > dev.ixl.3.pf.que10.tx_packets: 0
> > dev.ixl.3.pf.que10.no_desc_avail: 0
> > dev.ixl.3.pf.que10.tx_dma_setup: 0
> > dev.ixl.3.pf.que10.tso_tx: 0
> > dev.ixl.3.pf.que10.irqs: 0
> > dev.ixl.3.pf.que10.dropped: 0
> > dev.ixl.3.pf.que10.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que9.rx_bytes: 0
> > dev.ixl.3.pf.que9.rx_packets: 0
> > dev.ixl.3.pf.que9.tx_bytes: 0
> > dev.ixl.3.pf.que9.tx_packets: 0
> > dev.ixl.3.pf.que9.no_desc_avail: 0
> > dev.ixl.3.pf.que9.tx_dma_setup: 0
> > dev.ixl.3.pf.que9.tso_tx: 0
> > dev.ixl.3.pf.que9.irqs: 0
> > dev.ixl.3.pf.que9.dropped: 0
> > dev.ixl.3.pf.que9.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que8.rx_bytes: 0
> > dev.ixl.3.pf.que8.rx_packets: 0
> > dev.ixl.3.pf.que8.tx_bytes: 0
> > dev.ixl.3.pf.que8.tx_packets: 0
> > dev.ixl.3.pf.que8.no_desc_avail: 0
> > dev.ixl.3.pf.que8.tx_dma_setup: 0
> > dev.ixl.3.pf.que8.tso_tx: 0
> > dev.ixl.3.pf.que8.irqs: 0
> > dev.ixl.3.pf.que8.dropped: 0
> > dev.ixl.3.pf.que8.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que7.rx_bytes: 0
> > dev.ixl.3.pf.que7.rx_packets: 0
> > dev.ixl.3.pf.que7.tx_bytes: 0
> > dev.ixl.3.pf.que7.tx_packets: 0
> > dev.ixl.3.pf.que7.no_desc_avail: 0
> > dev.ixl.3.pf.que7.tx_dma_setup: 0
> > dev.ixl.3.pf.que7.tso_tx: 0
> > dev.ixl.3.pf.que7.irqs: 0
> > dev.ixl.3.pf.que7.dropped: 0
> > dev.ixl.3.pf.que7.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que6.rx_bytes: 0
> > dev.ixl.3.pf.que6.rx_packets: 0
> > dev.ixl.3.pf.que6.tx_bytes: 0
> > dev.ixl.3.pf.que6.tx_packets: 0
> > dev.ixl.3.pf.que6.no_desc_avail: 0
> > dev.ixl.3.pf.que6.tx_dma_setup: 0
> > dev.ixl.3.pf.que6.tso_tx: 0
> > dev.ixl.3.pf.que6.irqs: 0
> > dev.ixl.3.pf.que6.dropped: 0
> > dev.ixl.3.pf.que6.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que5.rx_bytes: 0
> > dev.ixl.3.pf.que5.rx_packets: 0
> > dev.ixl.3.pf.que5.tx_bytes: 0
> > dev.ixl.3.pf.que5.tx_packets: 0
> > dev.ixl.3.pf.que5.no_desc_avail: 0
> > dev.ixl.3.pf.que5.tx_dma_setup: 0
> > dev.ixl.3.pf.que5.tso_tx: 0
> > dev.ixl.3.pf.que5.irqs: 0
> > dev.ixl.3.pf.que5.dropped: 0
> > dev.ixl.3.pf.que5.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que4.rx_bytes: 0
> > dev.ixl.3.pf.que4.rx_packets: 0
> > dev.ixl.3.pf.que4.tx_bytes: 0
> > dev.ixl.3.pf.que4.tx_packets: 0
> > dev.ixl.3.pf.que4.no_desc_avail: 0
> > dev.ixl.3.pf.que4.tx_dma_setup: 0
> > dev.ixl.3.pf.que4.tso_tx: 0
> > dev.ixl.3.pf.que4.irqs: 0
> > dev.ixl.3.pf.que4.dropped: 0
> > dev.ixl.3.pf.que4.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que3.rx_bytes: 0
> > dev.ixl.3.pf.que3.rx_packets: 0
> > dev.ixl.3.pf.que3.tx_bytes: 0
> > dev.ixl.3.pf.que3.tx_packets: 0
> > dev.ixl.3.pf.que3.no_desc_avail: 0
> > dev.ixl.3.pf.que3.tx_dma_setup: 0
> > dev.ixl.3.pf.que3.tso_tx: 0
> > dev.ixl.3.pf.que3.irqs: 0
> > dev.ixl.3.pf.que3.dropped: 0
> > dev.ixl.3.pf.que3.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que2.rx_bytes: 0
> > dev.ixl.3.pf.que2.rx_packets: 0
> > dev.ixl.3.pf.que2.tx_bytes: 0
> > dev.ixl.3.pf.que2.tx_packets: 0
> > dev.ixl.3.pf.que2.no_desc_avail: 0
> > dev.ixl.3.pf.que2.tx_dma_setup: 0
> > dev.ixl.3.pf.que2.tso_tx: 0
> > dev.ixl.3.pf.que2.irqs: 0
> > dev.ixl.3.pf.que2.dropped: 0
> > dev.ixl.3.pf.que2.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que1.rx_bytes: 0
> > dev.ixl.3.pf.que1.rx_packets: 0
> > dev.ixl.3.pf.que1.tx_bytes: 0
> > dev.ixl.3.pf.que1.tx_packets: 0
> > dev.ixl.3.pf.que1.no_desc_avail: 0
> > dev.ixl.3.pf.que1.tx_dma_setup: 0
> > dev.ixl.3.pf.que1.tso_tx: 0
> > dev.ixl.3.pf.que1.irqs: 0
> > dev.ixl.3.pf.que1.dropped: 0
> > dev.ixl.3.pf.que1.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.que0.rx_bytes: 0
> > dev.ixl.3.pf.que0.rx_packets: 0
> > dev.ixl.3.pf.que0.tx_bytes: 0
> > dev.ixl.3.pf.que0.tx_packets: 0
> > dev.ixl.3.pf.que0.no_desc_avail: 0
> > dev.ixl.3.pf.que0.tx_dma_setup: 0
> > dev.ixl.3.pf.que0.tso_tx: 0
> > dev.ixl.3.pf.que0.irqs: 0
> > dev.ixl.3.pf.que0.dropped: 0
> > dev.ixl.3.pf.que0.mbuf_defrag_failed: 0
> > dev.ixl.3.pf.bcast_pkts_txd: 0
> > dev.ixl.3.pf.mcast_pkts_txd: 0
> > dev.ixl.3.pf.ucast_pkts_txd: 0
> > dev.ixl.3.pf.good_octets_txd: 0
> > dev.ixl.3.pf.rx_discards: 0
> > dev.ixl.3.pf.bcast_pkts_rcvd: 0
> > dev.ixl.3.pf.mcast_pkts_rcvd: 0
> > dev.ixl.3.pf.ucast_pkts_rcvd: 0
> > dev.ixl.3.pf.good_octets_rcvd: 0
> > dev.ixl.3.vc_debug_level: 1
> > dev.ixl.3.admin_irq: 0
> > dev.ixl.3.watchdog_events: 0
> > dev.ixl.3.debug: 0
> > dev.ixl.3.dynamic_tx_itr: 0
> > dev.ixl.3.tx_itr: 122
> > dev.ixl.3.dynamic_rx_itr: 0
> > dev.ixl.3.rx_itr: 62
> > dev.ixl.3.fw_version: f4.33 a1.2 n04.42 e8000191d
> > dev.ixl.3.current_speed: Unknown
> > dev.ixl.3.advertise_speed: 0
> > dev.ixl.3.fc: 0
> > dev.ixl.3.%parent: pci129
> > dev.ixl.3.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000
> > dev.ixl.3.%location: slot=0 function=3 handle=\_SB_.PCI1.QR3A.H003
> > dev.ixl.3.%driver: ixl
> > dev.ixl.3.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0
> > dev.ixl.2.mac.xoff_recvd: 0
> > dev.ixl.2.mac.xoff_txd: 0
> > dev.ixl.2.mac.xon_recvd: 0
> > dev.ixl.2.mac.xon_txd: 0
> > dev.ixl.2.mac.tx_frames_big: 0
> > dev.ixl.2.mac.tx_frames_1024_1522: 0
> > dev.ixl.2.mac.tx_frames_512_1023: 0
> > dev.ixl.2.mac.tx_frames_256_511: 0
> > dev.ixl.2.mac.tx_frames_128_255: 0
> > dev.ixl.2.mac.tx_frames_65_127: 0
> > dev.ixl.2.mac.tx_frames_64: 0
> > dev.ixl.2.mac.checksum_errors: 0
> > dev.ixl.2.mac.rx_jabber: 0
> > dev.ixl.2.mac.rx_oversized: 0
> > dev.ixl.2.mac.rx_fragmented: 0
> > dev.ixl.2.mac.rx_undersize: 0
> > dev.ixl.2.mac.rx_frames_big: 0
> > dev.ixl.2.mac.rx_frames_1024_1522: 0
> > dev.ixl.2.mac.rx_frames_512_1023: 0
> > dev.ixl.2.mac.rx_frames_256_511: 0
> > dev.ixl.2.mac.rx_frames_128_255: 0
> > dev.ixl.2.mac.rx_frames_65_127: 0
> > dev.ixl.2.mac.rx_frames_64: 0
> > dev.ixl.2.mac.rx_length_errors: 0
> > dev.ixl.2.mac.remote_faults: 0
> > dev.ixl.2.mac.local_faults: 0
> > dev.ixl.2.mac.illegal_bytes: 0
> > dev.ixl.2.mac.crc_errors: 0
> > dev.ixl.2.mac.bcast_pkts_txd: 0
> > dev.ixl.2.mac.mcast_pkts_txd: 0
> > dev.ixl.2.mac.ucast_pkts_txd: 0
> > dev.ixl.2.mac.good_octets_txd: 0
> > dev.ixl.2.mac.rx_discards: 0
> > dev.ixl.2.mac.bcast_pkts_rcvd: 0
> > dev.ixl.2.mac.mcast_pkts_rcvd: 0
> > dev.ixl.2.mac.ucast_pkts_rcvd: 0
> > dev.ixl.2.mac.good_octets_rcvd: 0
> > dev.ixl.2.pf.que23.rx_bytes: 0
> > dev.ixl.2.pf.que23.rx_packets: 0
> > dev.ixl.2.pf.que23.tx_bytes: 0
> > dev.ixl.2.pf.que23.tx_packets: 0
> > dev.ixl.2.pf.que23.no_desc_avail: 0
> > dev.ixl.2.pf.que23.tx_dma_setup: 0
> > dev.ixl.2.pf.que23.tso_tx: 0
> > dev.ixl.2.pf.que23.irqs: 0
> > dev.ixl.2.pf.que23.dropped: 0
> > dev.ixl.2.pf.que23.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que22.rx_bytes: 0
> > dev.ixl.2.pf.que22.rx_packets: 0
> > dev.ixl.2.pf.que22.tx_bytes: 0
> > dev.ixl.2.pf.que22.tx_packets: 0
> > dev.ixl.2.pf.que22.no_desc_avail: 0
> > dev.ixl.2.pf.que22.tx_dma_setup: 0
> > dev.ixl.2.pf.que22.tso_tx: 0
> > dev.ixl.2.pf.que22.irqs: 0
> > dev.ixl.2.pf.que22.dropped: 0
> > dev.ixl.2.pf.que22.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que21.rx_bytes: 0
> > dev.ixl.2.pf.que21.rx_packets: 0
> > dev.ixl.2.pf.que21.tx_bytes: 0
> > dev.ixl.2.pf.que21.tx_packets: 0
> > dev.ixl.2.pf.que21.no_desc_avail: 0
> > dev.ixl.2.pf.que21.tx_dma_setup: 0
> > dev.ixl.2.pf.que21.tso_tx: 0
> > dev.ixl.2.pf.que21.irqs: 0
> > dev.ixl.2.pf.que21.dropped: 0
> > dev.ixl.2.pf.que21.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que20.rx_bytes: 0
> > dev.ixl.2.pf.que20.rx_packets: 0
> > dev.ixl.2.pf.que20.tx_bytes: 0
> > dev.ixl.2.pf.que20.tx_packets: 0
> > dev.ixl.2.pf.que20.no_desc_avail: 0
> > dev.ixl.2.pf.que20.tx_dma_setup: 0
> > dev.ixl.2.pf.que20.tso_tx: 0
> > dev.ixl.2.pf.que20.irqs: 0
> > dev.ixl.2.pf.que20.dropped: 0
> > dev.ixl.2.pf.que20.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que19.rx_bytes: 0
> > dev.ixl.2.pf.que19.rx_packets: 0
> > dev.ixl.2.pf.que19.tx_bytes: 0
> > dev.ixl.2.pf.que19.tx_packets: 0
> > dev.ixl.2.pf.que19.no_desc_avail: 0
> > dev.ixl.2.pf.que19.tx_dma_setup: 0
> > dev.ixl.2.pf.que19.tso_tx: 0
> > dev.ixl.2.pf.que19.irqs: 0
> > dev.ixl.2.pf.que19.dropped: 0
> > dev.ixl.2.pf.que19.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que18.rx_bytes: 0
> > dev.ixl.2.pf.que18.rx_packets: 0
> > dev.ixl.2.pf.que18.tx_bytes: 0
> > dev.ixl.2.pf.que18.tx_packets: 0
> > dev.ixl.2.pf.que18.no_desc_avail: 0
> > dev.ixl.2.pf.que18.tx_dma_setup: 0
> > dev.ixl.2.pf.que18.tso_tx: 0
> > dev.ixl.2.pf.que18.irqs: 0
> > dev.ixl.2.pf.que18.dropped: 0
> > dev.ixl.2.pf.que18.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que17.rx_bytes: 0
> > dev.ixl.2.pf.que17.rx_packets: 0
> > dev.ixl.2.pf.que17.tx_bytes: 0
> > dev.ixl.2.pf.que17.tx_packets: 0
> > dev.ixl.2.pf.que17.no_desc_avail: 0
> > dev.ixl.2.pf.que17.tx_dma_setup: 0
> > dev.ixl.2.pf.que17.tso_tx: 0
> > dev.ixl.2.pf.que17.irqs: 0
> > dev.ixl.2.pf.que17.dropped: 0
> > dev.ixl.2.pf.que17.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que16.rx_bytes: 0
> > dev.ixl.2.pf.que16.rx_packets: 0
> > dev.ixl.2.pf.que16.tx_bytes: 0
> > dev.ixl.2.pf.que16.tx_packets: 0
> > dev.ixl.2.pf.que16.no_desc_avail: 0
> > dev.ixl.2.pf.que16.tx_dma_setup: 0
> > dev.ixl.2.pf.que16.tso_tx: 0
> > dev.ixl.2.pf.que16.irqs: 0
> > dev.ixl.2.pf.que16.dropped: 0
> > dev.ixl.2.pf.que16.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que15.rx_bytes: 0
> > dev.ixl.2.pf.que15.rx_packets: 0
> > dev.ixl.2.pf.que15.tx_bytes: 0
> > dev.ixl.2.pf.que15.tx_packets: 0
> > dev.ixl.2.pf.que15.no_desc_avail: 0
> > dev.ixl.2.pf.que15.tx_dma_setup: 0
> > dev.ixl.2.pf.que15.tso_tx: 0
> > dev.ixl.2.pf.que15.irqs: 0
> > dev.ixl.2.pf.que15.dropped: 0
> > dev.ixl.2.pf.que15.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que14.rx_bytes: 0
> > dev.ixl.2.pf.que14.rx_packets: 0
> > dev.ixl.2.pf.que14.tx_bytes: 0
> > dev.ixl.2.pf.que14.tx_packets: 0
> > dev.ixl.2.pf.que14.no_desc_avail: 0
> > dev.ixl.2.pf.que14.tx_dma_setup: 0
> > dev.ixl.2.pf.que14.tso_tx: 0
> > dev.ixl.2.pf.que14.irqs: 0
> > dev.ixl.2.pf.que14.dropped: 0
> > dev.ixl.2.pf.que14.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que13.rx_bytes: 0
> > dev.ixl.2.pf.que13.rx_packets: 0
> > dev.ixl.2.pf.que13.tx_bytes: 0
> > dev.ixl.2.pf.que13.tx_packets: 0
> > dev.ixl.2.pf.que13.no_desc_avail: 0
> > dev.ixl.2.pf.que13.tx_dma_setup: 0
> > dev.ixl.2.pf.que13.tso_tx: 0
> > dev.ixl.2.pf.que13.irqs: 0
> > dev.ixl.2.pf.que13.dropped: 0
> > dev.ixl.2.pf.que13.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que12.rx_bytes: 0
> > dev.ixl.2.pf.que12.rx_packets: 0
> > dev.ixl.2.pf.que12.tx_bytes: 0
> > dev.ixl.2.pf.que12.tx_packets: 0
> > dev.ixl.2.pf.que12.no_desc_avail: 0
> > dev.ixl.2.pf.que12.tx_dma_setup: 0
> > dev.ixl.2.pf.que12.tso_tx: 0
> > dev.ixl.2.pf.que12.irqs: 0
> > dev.ixl.2.pf.que12.dropped: 0
> > dev.ixl.2.pf.que12.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que11.rx_bytes: 0
> > dev.ixl.2.pf.que11.rx_packets: 0
> > dev.ixl.2.pf.que11.tx_bytes: 0
> > dev.ixl.2.pf.que11.tx_packets: 0
> > dev.ixl.2.pf.que11.no_desc_avail: 0
> > dev.ixl.2.pf.que11.tx_dma_setup: 0
> > dev.ixl.2.pf.que11.tso_tx: 0
> > dev.ixl.2.pf.que11.irqs: 0
> > dev.ixl.2.pf.que11.dropped: 0
> > dev.ixl.2.pf.que11.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que10.rx_bytes: 0
> > dev.ixl.2.pf.que10.rx_packets: 0
> > dev.ixl.2.pf.que10.tx_bytes: 0
> > dev.ixl.2.pf.que10.tx_packets: 0
> > dev.ixl.2.pf.que10.no_desc_avail: 0
> > dev.ixl.2.pf.que10.tx_dma_setup: 0
> > dev.ixl.2.pf.que10.tso_tx: 0
> > dev.ixl.2.pf.que10.irqs: 0
> > dev.ixl.2.pf.que10.dropped: 0
> > dev.ixl.2.pf.que10.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que9.rx_bytes: 0
> > dev.ixl.2.pf.que9.rx_packets: 0
> > dev.ixl.2.pf.que9.tx_bytes: 0
> > dev.ixl.2.pf.que9.tx_packets: 0
> > dev.ixl.2.pf.que9.no_desc_avail: 0
> > dev.ixl.2.pf.que9.tx_dma_setup: 0
> > dev.ixl.2.pf.que9.tso_tx: 0
> > dev.ixl.2.pf.que9.irqs: 0
> > dev.ixl.2.pf.que9.dropped: 0
> > dev.ixl.2.pf.que9.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que8.rx_bytes: 0
> > dev.ixl.2.pf.que8.rx_packets: 0
> > dev.ixl.2.pf.que8.tx_bytes: 0
> > dev.ixl.2.pf.que8.tx_packets: 0
> > dev.ixl.2.pf.que8.no_desc_avail: 0
> > dev.ixl.2.pf.que8.tx_dma_setup: 0
> > dev.ixl.2.pf.que8.tso_tx: 0
> > dev.ixl.2.pf.que8.irqs: 0
> > dev.ixl.2.pf.que8.dropped: 0
> > dev.ixl.2.pf.que8.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que7.rx_bytes: 0
> > dev.ixl.2.pf.que7.rx_packets: 0
> > dev.ixl.2.pf.que7.tx_bytes: 0
> > dev.ixl.2.pf.que7.tx_packets: 0
> > dev.ixl.2.pf.que7.no_desc_avail: 0
> > dev.ixl.2.pf.que7.tx_dma_setup: 0
> > dev.ixl.2.pf.que7.tso_tx: 0
> > dev.ixl.2.pf.que7.irqs: 0
> > dev.ixl.2.pf.que7.dropped: 0
> > dev.ixl.2.pf.que7.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que6.rx_bytes: 0
> > dev.ixl.2.pf.que6.rx_packets: 0
> > dev.ixl.2.pf.que6.tx_bytes: 0
> > dev.ixl.2.pf.que6.tx_packets: 0
> > dev.ixl.2.pf.que6.no_desc_avail: 0
> > dev.ixl.2.pf.que6.tx_dma_setup: 0
> > dev.ixl.2.pf.que6.tso_tx: 0
> > dev.ixl.2.pf.que6.irqs: 0
> > dev.ixl.2.pf.que6.dropped: 0
> > dev.ixl.2.pf.que6.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que5.rx_bytes: 0
> > dev.ixl.2.pf.que5.rx_packets: 0
> > dev.ixl.2.pf.que5.tx_bytes: 0
> > dev.ixl.2.pf.que5.tx_packets: 0
> > dev.ixl.2.pf.que5.no_desc_avail: 0
> > dev.ixl.2.pf.que5.tx_dma_setup: 0
> > dev.ixl.2.pf.que5.tso_tx: 0
> > dev.ixl.2.pf.que5.irqs: 0
> > dev.ixl.2.pf.que5.dropped: 0
> > dev.ixl.2.pf.que5.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que4.rx_bytes: 0
> > dev.ixl.2.pf.que4.rx_packets: 0
> > dev.ixl.2.pf.que4.tx_bytes: 0
> > dev.ixl.2.pf.que4.tx_packets: 0
> > dev.ixl.2.pf.que4.no_desc_avail: 0
> > dev.ixl.2.pf.que4.tx_dma_setup: 0
> > dev.ixl.2.pf.que4.tso_tx: 0
> > dev.ixl.2.pf.que4.irqs: 0
> > dev.ixl.2.pf.que4.dropped: 0
> > dev.ixl.2.pf.que4.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que3.rx_bytes: 0
> > dev.ixl.2.pf.que3.rx_packets: 0
> > dev.ixl.2.pf.que3.tx_bytes: 0
> > dev.ixl.2.pf.que3.tx_packets: 0
> > dev.ixl.2.pf.que3.no_desc_avail: 0
> > dev.ixl.2.pf.que3.tx_dma_setup: 0
> > dev.ixl.2.pf.que3.tso_tx: 0
> > dev.ixl.2.pf.que3.irqs: 0
> > dev.ixl.2.pf.que3.dropped: 0
> > dev.ixl.2.pf.que3.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que2.rx_bytes: 0
> > dev.ixl.2.pf.que2.rx_packets: 0
> > dev.ixl.2.pf.que2.tx_bytes: 0
> > dev.ixl.2.pf.que2.tx_packets: 0
> > dev.ixl.2.pf.que2.no_desc_avail: 0
> > dev.ixl.2.pf.que2.tx_dma_setup: 0
> > dev.ixl.2.pf.que2.tso_tx: 0
> > dev.ixl.2.pf.que2.irqs: 0
> > dev.ixl.2.pf.que2.dropped: 0
> > dev.ixl.2.pf.que2.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que1.rx_bytes: 0
> > dev.ixl.2.pf.que1.rx_packets: 0
> > dev.ixl.2.pf.que1.tx_bytes: 0
> > dev.ixl.2.pf.que1.tx_packets: 0
> > dev.ixl.2.pf.que1.no_desc_avail: 0
> > dev.ixl.2.pf.que1.tx_dma_setup: 0
> > dev.ixl.2.pf.que1.tso_tx: 0
> > dev.ixl.2.pf.que1.irqs: 0
> > dev.ixl.2.pf.que1.dropped: 0
> > dev.ixl.2.pf.que1.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.que0.rx_bytes: 0
> > dev.ixl.2.pf.que0.rx_packets: 0
> > dev.ixl.2.pf.que0.tx_bytes: 0
> > dev.ixl.2.pf.que0.tx_packets: 0
> > dev.ixl.2.pf.que0.no_desc_avail: 0
> > dev.ixl.2.pf.que0.tx_dma_setup: 0
> > dev.ixl.2.pf.que0.tso_tx: 0
> > dev.ixl.2.pf.que0.irqs: 0
> > dev.ixl.2.pf.que0.dropped: 0
> > dev.ixl.2.pf.que0.mbuf_defrag_failed: 0
> > dev.ixl.2.pf.bcast_pkts_txd: 0
> > dev.ixl.2.pf.mcast_pkts_txd: 0
> > dev.ixl.2.pf.ucast_pkts_txd: 0
> > dev.ixl.2.pf.good_octets_txd: 0
> > dev.ixl.2.pf.rx_discards: 0
> > dev.ixl.2.pf.bcast_pkts_rcvd: 0
> > dev.ixl.2.pf.mcast_pkts_rcvd: 0
> > dev.ixl.2.pf.ucast_pkts_rcvd: 0
> > dev.ixl.2.pf.good_octets_rcvd: 0
> > dev.ixl.2.vc_debug_level: 1
> > dev.ixl.2.admin_irq: 0
> > dev.ixl.2.watchdog_events: 0
> > dev.ixl.2.debug: 0
> > dev.ixl.2.dynamic_tx_itr: 0
> > dev.ixl.2.tx_itr: 122
> > dev.ixl.2.dynamic_rx_itr: 0
> > dev.ixl.2.rx_itr: 62
> > dev.ixl.2.fw_version: f4.33 a1.2 n04.42 e8000191d
> > dev.ixl.2.current_speed: Unknown
> > dev.ixl.2.advertise_speed: 0
> > dev.ixl.2.fc: 0
> > dev.ixl.2.%parent: pci129
> > dev.ixl.2.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000
> > dev.ixl.2.%location: slot=0 function=2 handle=\_SB_.PCI1.QR3A.H002
> > dev.ixl.2.%driver: ixl
> > dev.ixl.2.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0
> > dev.ixl.1.mac.xoff_recvd: 0
> > dev.ixl.1.mac.xoff_txd: 0
> > dev.ixl.1.mac.xon_recvd: 0
> > dev.ixl.1.mac.xon_txd: 0
> > dev.ixl.1.mac.tx_frames_big: 0
> > dev.ixl.1.mac.tx_frames_1024_1522: 1565670684
> > dev.ixl.1.mac.tx_frames_512_1023: 101286418
> > dev.ixl.1.mac.tx_frames_256_511: 49713129
> > dev.ixl.1.mac.tx_frames_128_255: 231617277
> > dev.ixl.1.mac.tx_frames_65_127: 2052767669
> > dev.ixl.1.mac.tx_frames_64: 1318689044
> > dev.ixl.1.mac.checksum_errors: 0
> > dev.ixl.1.mac.rx_jabber: 0
> > dev.ixl.1.mac.rx_oversized: 0
> > dev.ixl.1.mac.rx_fragmented: 0
> > dev.ixl.1.mac.rx_undersize: 0
> > dev.ixl.1.mac.rx_frames_big: 0
> > dev.ixl.1.mac.rx_frames_1024_1522: 4960403414
> > dev.ixl.1.mac.rx_frames_512_1023: 113675084
> > dev.ixl.1.mac.rx_frames_256_511: 253904920
> > dev.ixl.1.mac.rx_frames_128_255: 196369726
> > dev.ixl.1.mac.rx_frames_65_127: 1436626211
> > dev.ixl.1.mac.rx_frames_64: 242768681
> > dev.ixl.1.mac.rx_length_errors: 0
> > dev.ixl.1.mac.remote_faults: 0
> > dev.ixl.1.mac.local_faults: 0
> > dev.ixl.1.mac.illegal_bytes: 0
> > dev.ixl.1.mac.crc_errors: 0
> > dev.ixl.1.mac.bcast_pkts_txd: 277
> > dev.ixl.1.mac.mcast_pkts_txd: 0
> > dev.ixl.1.mac.ucast_pkts_txd: 5319743942
> > dev.ixl.1.mac.good_octets_txd: 2642351885737
> > dev.ixl.1.mac.rx_discards: 0
> > dev.ixl.1.mac.bcast_pkts_rcvd: 5
> > dev.ixl.1.mac.mcast_pkts_rcvd: 144
> > dev.ixl.1.mac.ucast_pkts_rcvd: 7203747879
> > dev.ixl.1.mac.good_octets_rcvd: 7770230492434
> > dev.ixl.1.pf.que23.rx_bytes: 0
> > dev.ixl.1.pf.que23.rx_packets: 0
> > dev.ixl.1.pf.que23.tx_bytes: 7111
> > dev.ixl.1.pf.que23.tx_packets: 88
> > dev.ixl.1.pf.que23.no_desc_avail: 0
> > dev.ixl.1.pf.que23.tx_dma_setup: 0
> > dev.ixl.1.pf.que23.tso_tx: 0
> > dev.ixl.1.pf.que23.irqs: 88
> > dev.ixl.1.pf.que23.dropped: 0
> > dev.ixl.1.pf.que23.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que22.rx_bytes: 0
> > dev.ixl.1.pf.que22.rx_packets: 0
> > dev.ixl.1.pf.que22.tx_bytes: 6792
> > dev.ixl.1.pf.que22.tx_packets: 88
> > dev.ixl.1.pf.que22.no_desc_avail: 0
> > dev.ixl.1.pf.que22.tx_dma_setup: 0
> > dev.ixl.1.pf.que22.tso_tx: 0
> > dev.ixl.1.pf.que22.irqs: 89
> > dev.ixl.1.pf.que22.dropped: 0
> > dev.ixl.1.pf.que22.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que21.rx_bytes: 0
> > dev.ixl.1.pf.que21.rx_packets: 0
> > dev.ixl.1.pf.que21.tx_bytes: 7486
> > dev.ixl.1.pf.que21.tx_packets: 93
> > dev.ixl.1.pf.que21.no_desc_avail: 0
> > dev.ixl.1.pf.que21.tx_dma_setup: 0
> > dev.ixl.1.pf.que21.tso_tx: 0
> > dev.ixl.1.pf.que21.irqs: 95
> > dev.ixl.1.pf.que21.dropped: 0
> > dev.ixl.1.pf.que21.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que20.rx_bytes: 0
> > dev.ixl.1.pf.que20.rx_packets: 0
> > dev.ixl.1.pf.que20.tx_bytes: 7850
> > dev.ixl.1.pf.que20.tx_packets: 98
> > dev.ixl.1.pf.que20.no_desc_avail: 0
> > dev.ixl.1.pf.que20.tx_dma_setup: 0
> > dev.ixl.1.pf.que20.tso_tx: 0
> > dev.ixl.1.pf.que20.irqs: 99
> > dev.ixl.1.pf.que20.dropped: 0
> > dev.ixl.1.pf.que20.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que19.rx_bytes: 0
> > dev.ixl.1.pf.que19.rx_packets: 0
> > dev.ixl.1.pf.que19.tx_bytes: 64643
> > dev.ixl.1.pf.que19.tx_packets: 202
> > dev.ixl.1.pf.que19.no_desc_avail: 0
> > dev.ixl.1.pf.que19.tx_dma_setup: 0
> > dev.ixl.1.pf.que19.tso_tx: 0
> > dev.ixl.1.pf.que19.irqs: 202
> > dev.ixl.1.pf.que19.dropped: 0
> > dev.ixl.1.pf.que19.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que18.rx_bytes: 0
> > dev.ixl.1.pf.que18.rx_packets: 0
> > dev.ixl.1.pf.que18.tx_bytes: 5940
> > dev.ixl.1.pf.que18.tx_packets: 74
> > dev.ixl.1.pf.que18.no_desc_avail: 0
> > dev.ixl.1.pf.que18.tx_dma_setup: 0
> > dev.ixl.1.pf.que18.tso_tx: 0
> > dev.ixl.1.pf.que18.irqs: 74
> > dev.ixl.1.pf.que18.dropped: 0
> > dev.ixl.1.pf.que18.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que17.rx_bytes: 0
> > dev.ixl.1.pf.que17.rx_packets: 0
> > dev.ixl.1.pf.que17.tx_bytes: 11675
> > dev.ixl.1.pf.que17.tx_packets: 83
> > dev.ixl.1.pf.que17.no_desc_avail: 0
> > dev.ixl.1.pf.que17.tx_dma_setup: 0
> > dev.ixl.1.pf.que17.tso_tx: 0
> > dev.ixl.1.pf.que17.irqs: 83
> > dev.ixl.1.pf.que17.dropped: 0
> > dev.ixl.1.pf.que17.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que16.rx_bytes: 0
> > dev.ixl.1.pf.que16.rx_packets: 0
> > dev.ixl.1.pf.que16.tx_bytes: 105750457831
> > dev.ixl.1.pf.que16.tx_packets: 205406766
> > dev.ixl.1.pf.que16.no_desc_avail: 0
> > dev.ixl.1.pf.que16.tx_dma_setup: 0
> > dev.ixl.1.pf.que16.tso_tx: 0
> > dev.ixl.1.pf.que16.irqs: 87222978
> > dev.ixl.1.pf.que16.dropped: 0
> > dev.ixl.1.pf.que16.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que15.rx_bytes: 289558174088
> > dev.ixl.1.pf.que15.rx_packets: 272466190
> > dev.ixl.1.pf.que15.tx_bytes: 106152524681
> > dev.ixl.1.pf.que15.tx_packets: 205379247
> > dev.ixl.1.pf.que15.no_desc_avail: 0
> > dev.ixl.1.pf.que15.tx_dma_setup: 0
> > dev.ixl.1.pf.que15.tso_tx: 0
> > dev.ixl.1.pf.que15.irqs: 238145862
> > dev.ixl.1.pf.que15.dropped: 0
> > dev.ixl.1.pf.que15.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que14.rx_bytes: 301934533473
> > dev.ixl.1.pf.que14.rx_packets: 298452930
> > dev.ixl.1.pf.que14.tx_bytes: 111420393725
> > dev.ixl.1.pf.que14.tx_packets: 215722532
> > dev.ixl.1.pf.que14.no_desc_avail: 0
> > dev.ixl.1.pf.que14.tx_dma_setup: 0
> > dev.ixl.1.pf.que14.tso_tx: 0
> > dev.ixl.1.pf.que14.irqs: 256291617
> > dev.ixl.1.pf.que14.dropped: 0
> > dev.ixl.1.pf.que14.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que13.rx_bytes: 291380746253
> > dev.ixl.1.pf.que13.rx_packets: 273037957
> > dev.ixl.1.pf.que13.tx_bytes: 112417776222
> > dev.ixl.1.pf.que13.tx_packets: 217500943
> > dev.ixl.1.pf.que13.no_desc_avail: 0
> > dev.ixl.1.pf.que13.tx_dma_setup: 0
> > dev.ixl.1.pf.que13.tso_tx: 0
> > dev.ixl.1.pf.que13.irqs: 241422331
> > dev.ixl.1.pf.que13.dropped: 0
> > dev.ixl.1.pf.que13.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que12.rx_bytes: 301105585425
> > dev.ixl.1.pf.que12.rx_packets: 286137817
> > dev.ixl.1.pf.que12.tx_bytes: 95851784579
> > dev.ixl.1.pf.que12.tx_packets: 199715765
> > dev.ixl.1.pf.que12.no_desc_avail: 0
> > dev.ixl.1.pf.que12.tx_dma_setup: 0
> > dev.ixl.1.pf.que12.tso_tx: 0
> > dev.ixl.1.pf.que12.irqs: 247322880
> > dev.ixl.1.pf.que12.dropped: 0
> > dev.ixl.1.pf.que12.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que11.rx_bytes: 307105398143
> > dev.ixl.1.pf.que11.rx_packets: 281046463
> > dev.ixl.1.pf.que11.tx_bytes: 110710957789
> > dev.ixl.1.pf.que11.tx_packets: 211784031
> > dev.ixl.1.pf.que11.no_desc_avail: 0
> > dev.ixl.1.pf.que11.tx_dma_setup: 0
> > dev.ixl.1.pf.que11.tso_tx: 0
> > dev.ixl.1.pf.que11.irqs: 256987179
> > dev.ixl.1.pf.que11.dropped: 0
> > dev.ixl.1.pf.que11.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que10.rx_bytes: 304288000453
> > dev.ixl.1.pf.que10.rx_packets: 278987858
> > dev.ixl.1.pf.que10.tx_bytes: 93022244338
> > dev.ixl.1.pf.que10.tx_packets: 195869210
> > dev.ixl.1.pf.que10.no_desc_avail: 0
> > dev.ixl.1.pf.que10.tx_dma_setup: 0
> > dev.ixl.1.pf.que10.tso_tx: 0
> > dev.ixl.1.pf.que10.irqs: 253622192
> > dev.ixl.1.pf.que10.dropped: 0
> > dev.ixl.1.pf.que10.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que9.rx_bytes: 320340203822
> > dev.ixl.1.pf.que9.rx_packets: 302309010
> > dev.ixl.1.pf.que9.tx_bytes: 116604776460
> > dev.ixl.1.pf.que9.tx_packets: 223949025
> > dev.ixl.1.pf.que9.no_desc_avail: 0
> > dev.ixl.1.pf.que9.tx_dma_setup: 0
> > dev.ixl.1.pf.que9.tso_tx: 0
> > dev.ixl.1.pf.que9.irqs: 271165440
> > dev.ixl.1.pf.que9.dropped: 0
> > dev.ixl.1.pf.que9.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que8.rx_bytes: 291403725592
> > dev.ixl.1.pf.que8.rx_packets: 267859568
> > dev.ixl.1.pf.que8.tx_bytes: 205745654558
> > dev.ixl.1.pf.que8.tx_packets: 443349835
> > dev.ixl.1.pf.que8.no_desc_avail: 0
> > dev.ixl.1.pf.que8.tx_dma_setup: 0
> > dev.ixl.1.pf.que8.tso_tx: 0
> > dev.ixl.1.pf.que8.irqs: 254116755
> > dev.ixl.1.pf.que8.dropped: 0
> > dev.ixl.1.pf.que8.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que7.rx_bytes: 673363127346
> > dev.ixl.1.pf.que7.rx_packets: 617269774
> > dev.ixl.1.pf.que7.tx_bytes: 203162891886
> > dev.ixl.1.pf.que7.tx_packets: 443709339
> > dev.ixl.1.pf.que7.no_desc_avail: 0
> > dev.ixl.1.pf.que7.tx_dma_setup: 0
> > dev.ixl.1.pf.que7.tso_tx: 0
> > dev.ixl.1.pf.que7.irqs: 424706771
> > dev.ixl.1.pf.que7.dropped: 0
> > dev.ixl.1.pf.que7.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que6.rx_bytes: 644709094218
> > dev.ixl.1.pf.que6.rx_packets: 601892919
> > dev.ixl.1.pf.que6.tx_bytes: 221661735032
> > dev.ixl.1.pf.que6.tx_packets: 460127064
> > dev.ixl.1.pf.que6.no_desc_avail: 0
> > dev.ixl.1.pf.que6.tx_dma_setup: 0
> > dev.ixl.1.pf.que6.tso_tx: 0
> > dev.ixl.1.pf.que6.irqs: 417748074
> > dev.ixl.1.pf.que6.dropped: 0
> > dev.ixl.1.pf.que6.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que5.rx_bytes: 661904432231
> > dev.ixl.1.pf.que5.rx_packets: 622012837
> > dev.ixl.1.pf.que5.tx_bytes: 230514282876
> > dev.ixl.1.pf.que5.tx_packets: 458571100
> > dev.ixl.1.pf.que5.no_desc_avail: 0
> > dev.ixl.1.pf.que5.tx_dma_setup: 0
> > dev.ixl.1.pf.que5.tso_tx: 0
> > dev.ixl.1.pf.que5.irqs: 422305039
> > dev.ixl.1.pf.que5.dropped: 0
> > dev.ixl.1.pf.que5.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que4.rx_bytes: 653522179234
> > dev.ixl.1.pf.que4.rx_packets: 603345546
> > dev.ixl.1.pf.que4.tx_bytes: 216761219483
> > dev.ixl.1.pf.que4.tx_packets: 450329641
> > dev.ixl.1.pf.que4.no_desc_avail: 0
> > dev.ixl.1.pf.que4.tx_dma_setup: 0
> > dev.ixl.1.pf.que4.tso_tx: 3
> > dev.ixl.1.pf.que4.irqs: 416920533
> > dev.ixl.1.pf.que4.dropped: 0
> > dev.ixl.1.pf.que4.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que3.rx_bytes: 676494225882
> > dev.ixl.1.pf.que3.rx_packets: 620605168
> > dev.ixl.1.pf.que3.tx_bytes: 233854020454
> > dev.ixl.1.pf.que3.tx_packets: 464425616
> > dev.ixl.1.pf.que3.no_desc_avail: 0
> > dev.ixl.1.pf.que3.tx_dma_setup: 0
> > dev.ixl.1.pf.que3.tso_tx: 0
> > dev.ixl.1.pf.que3.irqs: 426349030
> > dev.ixl.1.pf.que3.dropped: 0
> > dev.ixl.1.pf.que3.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que2.rx_bytes: 677779337711
> > dev.ixl.1.pf.que2.rx_packets: 620883699
> > dev.ixl.1.pf.que2.tx_bytes: 211297141668
> > dev.ixl.1.pf.que2.tx_packets: 450501525
> > dev.ixl.1.pf.que2.no_desc_avail: 0
> > dev.ixl.1.pf.que2.tx_dma_setup: 0
> > dev.ixl.1.pf.que2.tso_tx: 0
> > dev.ixl.1.pf.que2.irqs: 433146278
> > dev.ixl.1.pf.que2.dropped: 0
> > dev.ixl.1.pf.que2.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que1.rx_bytes: 661360798018
> > dev.ixl.1.pf.que1.rx_packets: 619700636
> > dev.ixl.1.pf.que1.tx_bytes: 238264220772
> > dev.ixl.1.pf.que1.tx_packets: 473425354
> > dev.ixl.1.pf.que1.no_desc_avail: 0
> > dev.ixl.1.pf.que1.tx_dma_setup: 0
> > dev.ixl.1.pf.que1.tso_tx: 0
> > dev.ixl.1.pf.que1.irqs: 437959829
> > dev.ixl.1.pf.que1.dropped: 0
> > dev.ixl.1.pf.que1.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.que0.rx_bytes: 685201226330
> > dev.ixl.1.pf.que0.rx_packets: 637772348
> > dev.ixl.1.pf.que0.tx_bytes: 124808
> > dev.ixl.1.pf.que0.tx_packets: 1782
> > dev.ixl.1.pf.que0.no_desc_avail: 0
> > dev.ixl.1.pf.que0.tx_dma_setup: 0
> > dev.ixl.1.pf.que0.tso_tx: 0
> > dev.ixl.1.pf.que0.irqs: 174905480
> > dev.ixl.1.pf.que0.dropped: 0
> > dev.ixl.1.pf.que0.mbuf_defrag_failed: 0
> > dev.ixl.1.pf.bcast_pkts_txd: 277
> > dev.ixl.1.pf.mcast_pkts_txd: 0
> > dev.ixl.1.pf.ucast_pkts_txd: 5319743945
> > dev.ixl.1.pf.good_octets_txd: 2613178367282
> > dev.ixl.1.pf.rx_discards: 0
> > dev.ixl.1.pf.bcast_pkts_rcvd: 1
> > dev.ixl.1.pf.mcast_pkts_rcvd: 0
> > dev.ixl.1.pf.ucast_pkts_rcvd: 7203747890
> > dev.ixl.1.pf.good_octets_rcvd: 7770230490224
> > dev.ixl.1.vc_debug_level: 1
> > dev.ixl.1.admin_irq: 0
> > dev.ixl.1.watchdog_events: 0
> > dev.ixl.1.debug: 0
> > dev.ixl.1.dynamic_tx_itr: 0
> > dev.ixl.1.tx_itr: 122
> > dev.ixl.1.dynamic_rx_itr: 0
> > dev.ixl.1.rx_itr: 62
> > dev.ixl.1.fw_version: f4.33 a1.2 n04.42 e8000191d
> > dev.ixl.1.current_speed: 10G
> > dev.ixl.1.advertise_speed: 0
> > dev.ixl.1.fc: 0
> > dev.ixl.1.%parent: pci129
> > dev.ixl.1.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000
> > dev.ixl.1.%location: slot=0 function=1 handle=\_SB_.PCI1.QR3A.H001
> > dev.ixl.1.%driver: ixl
> > dev.ixl.1.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0
> > dev.ixl.0.mac.xoff_recvd: 0
> > dev.ixl.0.mac.xoff_txd: 0
> > dev.ixl.0.mac.xon_recvd: 0
> > dev.ixl.0.mac.xon_txd: 0
> > dev.ixl.0.mac.tx_frames_big: 0
> > dev.ixl.0.mac.tx_frames_1024_1522: 4961134019
> > dev.ixl.0.mac.tx_frames_512_1023: 113082136
> > dev.ixl.0.mac.tx_frames_256_511: 123538450
> > dev.ixl.0.mac.tx_frames_128_255: 185051082
> > dev.ixl.0.mac.tx_frames_65_127: 1332798493
> > dev.ixl.0.mac.tx_frames_64: 243338964
> > dev.ixl.0.mac.checksum_errors: 0
> > dev.ixl.0.mac.rx_jabber: 0
> > dev.ixl.0.mac.rx_oversized: 0
> > dev.ixl.0.mac.rx_fragmented: 0
> > dev.ixl.0.mac.rx_undersize: 0
> > dev.ixl.0.mac.rx_frames_big: 0
> > dev.ixl.0.mac.rx_frames_1024_1522: 1566499069
> > dev.ixl.0.mac.rx_frames_512_1023: 101390143
> > dev.ixl.0.mac.rx_frames_256_511: 49831970
> > dev.ixl.0.mac.rx_frames_128_255: 231738168
> > dev.ixl.0.mac.rx_frames_65_127: 2123185819
> > dev.ixl.0.mac.rx_frames_64: 1320404300
> > dev.ixl.0.mac.rx_length_errors: 0
> > dev.ixl.0.mac.remote_faults: 0
> > dev.ixl.0.mac.local_faults: 0
> > dev.ixl.0.mac.illegal_bytes: 0
> > dev.ixl.0.mac.crc_errors: 0
> > dev.ixl.0.mac.bcast_pkts_txd: 302
> > dev.ixl.0.mac.mcast_pkts_txd: 33965
> > dev.ixl.0.mac.ucast_pkts_txd: 6958908862
> > dev.ixl.0.mac.good_octets_txd: 7698936138858
> > dev.ixl.0.mac.rx_discards: 0
> > dev.ixl.0.mac.bcast_pkts_rcvd: 1
> > dev.ixl.0.mac.mcast_pkts_rcvd: 49693
> > dev.ixl.0.mac.ucast_pkts_rcvd: 5392999771
> > dev.ixl.0.mac.good_octets_rcvd: 2648906893811
> > dev.ixl.0.pf.que23.rx_bytes: 0
> > dev.ixl.0.pf.que23.rx_packets: 0
> > dev.ixl.0.pf.que23.tx_bytes: 2371273
> > dev.ixl.0.pf.que23.tx_packets: 7313
> > dev.ixl.0.pf.que23.no_desc_avail: 0
> > dev.ixl.0.pf.que23.tx_dma_setup: 0
> > dev.ixl.0.pf.que23.tso_tx: 0
> > dev.ixl.0.pf.que23.irqs: 7313
> > dev.ixl.0.pf.que23.dropped: 0
> > dev.ixl.0.pf.que23.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que22.rx_bytes: 0
> > dev.ixl.0.pf.que22.rx_packets: 0
> > dev.ixl.0.pf.que22.tx_bytes: 1908468
> > dev.ixl.0.pf.que22.tx_packets: 6626
> > dev.ixl.0.pf.que22.no_desc_avail: 0
> > dev.ixl.0.pf.que22.tx_dma_setup: 0
> > dev.ixl.0.pf.que22.tso_tx: 0
> > dev.ixl.0.pf.que22.irqs: 6627
> > dev.ixl.0.pf.que22.dropped: 0
> > dev.ixl.0.pf.que22.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que21.rx_bytes: 0
> > dev.ixl.0.pf.que21.rx_packets: 0
> > dev.ixl.0.pf.que21.tx_bytes: 2092668
> > dev.ixl.0.pf.que21.tx_packets: 6739
> > dev.ixl.0.pf.que21.no_desc_avail: 0
> > dev.ixl.0.pf.que21.tx_dma_setup: 0
> > dev.ixl.0.pf.que21.tso_tx: 0
> > dev.ixl.0.pf.que21.irqs: 6728
> > dev.ixl.0.pf.que21.dropped: 0
> > dev.ixl.0.pf.que21.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que20.rx_bytes: 0
> > dev.ixl.0.pf.que20.rx_packets: 0
> > dev.ixl.0.pf.que20.tx_bytes: 1742176
> > dev.ixl.0.pf.que20.tx_packets: 6246
> > dev.ixl.0.pf.que20.no_desc_avail: 0
> > dev.ixl.0.pf.que20.tx_dma_setup: 0
> > dev.ixl.0.pf.que20.tso_tx: 0
> > dev.ixl.0.pf.que20.irqs: 6249
> > dev.ixl.0.pf.que20.dropped: 0
> > dev.ixl.0.pf.que20.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que19.rx_bytes: 0
> > dev.ixl.0.pf.que19.rx_packets: 0
> > dev.ixl.0.pf.que19.tx_bytes: 2102284
> > dev.ixl.0.pf.que19.tx_packets: 6979
> > dev.ixl.0.pf.que19.no_desc_avail: 0
> > dev.ixl.0.pf.que19.tx_dma_setup: 0
> > dev.ixl.0.pf.que19.tso_tx: 0
> > dev.ixl.0.pf.que19.irqs: 6979
> > dev.ixl.0.pf.que19.dropped: 0
> > dev.ixl.0.pf.que19.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que18.rx_bytes: 0
> > dev.ixl.0.pf.que18.rx_packets: 0
> > dev.ixl.0.pf.que18.tx_bytes: 1532360
> > dev.ixl.0.pf.que18.tx_packets: 5588
> > dev.ixl.0.pf.que18.no_desc_avail: 0
> > dev.ixl.0.pf.que18.tx_dma_setup: 0
> > dev.ixl.0.pf.que18.tso_tx: 0
> > dev.ixl.0.pf.que18.irqs: 5588
> > dev.ixl.0.pf.que18.dropped: 0
> > dev.ixl.0.pf.que18.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que17.rx_bytes: 0
> > dev.ixl.0.pf.que17.rx_packets: 0
> > dev.ixl.0.pf.que17.tx_bytes: 1809684
> > dev.ixl.0.pf.que17.tx_packets: 6136
> > dev.ixl.0.pf.que17.no_desc_avail: 0
> > dev.ixl.0.pf.que17.tx_dma_setup: 0
> > dev.ixl.0.pf.que17.tso_tx: 0
> > dev.ixl.0.pf.que17.irqs: 6136
> > dev.ixl.0.pf.que17.dropped: 0
> > dev.ixl.0.pf.que17.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que16.rx_bytes: 0
> > dev.ixl.0.pf.que16.rx_packets: 0
> > dev.ixl.0.pf.que16.tx_bytes: 286836299105
> > dev.ixl.0.pf.que16.tx_packets: 263532601
> > dev.ixl.0.pf.que16.no_desc_avail: 0
> > dev.ixl.0.pf.que16.tx_dma_setup: 0
> > dev.ixl.0.pf.que16.tso_tx: 0
> > dev.ixl.0.pf.que16.irqs: 83232941
> > dev.ixl.0.pf.que16.dropped: 0
> > dev.ixl.0.pf.que16.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que15.rx_bytes: 106345323488
> > dev.ixl.0.pf.que15.rx_packets: 208869912
> > dev.ixl.0.pf.que15.tx_bytes: 298825179301
> > dev.ixl.0.pf.que15.tx_packets: 288517504
> > dev.ixl.0.pf.que15.no_desc_avail: 0
> > dev.ixl.0.pf.que15.tx_dma_setup: 0
> > dev.ixl.0.pf.que15.tso_tx: 0
> > dev.ixl.0.pf.que15.irqs: 223322408
> > dev.ixl.0.pf.que15.dropped: 0
> > dev.ixl.0.pf.que15.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que14.rx_bytes: 106721900547
> > dev.ixl.0.pf.que14.rx_packets: 208566121
> > dev.ixl.0.pf.que14.tx_bytes: 288657751920
> > dev.ixl.0.pf.que14.tx_packets: 263556000
> > dev.ixl.0.pf.que14.no_desc_avail: 0
> > dev.ixl.0.pf.que14.tx_dma_setup: 0
> > dev.ixl.0.pf.que14.tso_tx: 0
> > dev.ixl.0.pf.que14.irqs: 220377537
> > dev.ixl.0.pf.que14.dropped: 0
> > dev.ixl.0.pf.que14.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que13.rx_bytes: 111978971378
> > dev.ixl.0.pf.que13.rx_packets: 218447354
> > dev.ixl.0.pf.que13.tx_bytes: 298439860675
> > dev.ixl.0.pf.que13.tx_packets: 276806617
> > dev.ixl.0.pf.que13.no_desc_avail: 0
> > dev.ixl.0.pf.que13.tx_dma_setup: 0
> > dev.ixl.0.pf.que13.tso_tx: 0
> > dev.ixl.0.pf.que13.irqs: 227474625
> > dev.ixl.0.pf.que13.dropped: 0
> > dev.ixl.0.pf.que13.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que12.rx_bytes: 112969704706
> > dev.ixl.0.pf.que12.rx_packets: 220275562
> > dev.ixl.0.pf.que12.tx_bytes: 304750620079
> > dev.ixl.0.pf.que12.tx_packets: 272244483
> > dev.ixl.0.pf.que12.no_desc_avail: 0
> > dev.ixl.0.pf.que12.tx_dma_setup: 0
> > dev.ixl.0.pf.que12.tso_tx: 183
> > dev.ixl.0.pf.que12.irqs: 230111291
> > dev.ixl.0.pf.que12.dropped: 0
> > dev.ixl.0.pf.que12.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que11.rx_bytes: 96405343036
> > dev.ixl.0.pf.que11.rx_packets: 202329448
> > dev.ixl.0.pf.que11.tx_bytes: 302481707696
> > dev.ixl.0.pf.que11.tx_packets: 271689246
> > dev.ixl.0.pf.que11.no_desc_avail: 0
> > dev.ixl.0.pf.que11.tx_dma_setup: 0
> > dev.ixl.0.pf.que11.tso_tx: 0
> > dev.ixl.0.pf.que11.irqs: 220717612
> > dev.ixl.0.pf.que11.dropped: 0
> > dev.ixl.0.pf.que11.mbuf_defrag_failed: 0
> > dev.ixl.0.pf.que10.rx_bytes: 111280008670
> > dev.ixl.0.pf.que10.rx_packets: 214900261
> > dev.ixl.0.pf.que10.tx_bytes: 318638566198
> > dev.ixl.0.pf.que10.tx_packets: 295011389
> > dev.ixl.0.pf.que10.no_desc_avail: 0
> > dev.ixl.0.pf.que10.tx_dma_setup: 0
> > dev.ixl.0.pf.que10.tso_tx: 0 > > dev.ixl.0.pf.que10.irqs: 230681709 > > dev.ixl.0.pf.que10.dropped: 0 > > dev.ixl.0.pf.que10.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que9.rx_bytes: 93566025126 > > dev.ixl.0.pf.que9.rx_packets: 198726483 > > dev.ixl.0.pf.que9.tx_bytes: 288858818348 > > dev.ixl.0.pf.que9.tx_packets: 258926864 > > dev.ixl.0.pf.que9.no_desc_avail: 0 > > dev.ixl.0.pf.que9.tx_dma_setup: 0 > > dev.ixl.0.pf.que9.tso_tx: 0 > > dev.ixl.0.pf.que9.irqs: 217918160 > > dev.ixl.0.pf.que9.dropped: 0 > > dev.ixl.0.pf.que9.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que8.rx_bytes: 117169019041 > > dev.ixl.0.pf.que8.rx_packets: 226938172 > > dev.ixl.0.pf.que8.tx_bytes: 665794492752 > > dev.ixl.0.pf.que8.tx_packets: 593519436 > > dev.ixl.0.pf.que8.no_desc_avail: 0 > > dev.ixl.0.pf.que8.tx_dma_setup: 0 > > dev.ixl.0.pf.que8.tso_tx: 0 > > dev.ixl.0.pf.que8.irqs: 244643578 > > dev.ixl.0.pf.que8.dropped: 0 > > dev.ixl.0.pf.que8.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que7.rx_bytes: 206974266022 > > dev.ixl.0.pf.que7.rx_packets: 449899895 > > dev.ixl.0.pf.que7.tx_bytes: 638527685820 > > dev.ixl.0.pf.que7.tx_packets: 580750916 > > dev.ixl.0.pf.que7.no_desc_avail: 0 > > dev.ixl.0.pf.que7.tx_dma_setup: 0 > > dev.ixl.0.pf.que7.tso_tx: 0 > > dev.ixl.0.pf.que7.irqs: 391760959 > > dev.ixl.0.pf.que7.dropped: 0 > > dev.ixl.0.pf.que7.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que6.rx_bytes: 204373984670 > > dev.ixl.0.pf.que6.rx_packets: 449990985 > > dev.ixl.0.pf.que6.tx_bytes: 655511068125 > > dev.ixl.0.pf.que6.tx_packets: 600735086 > > dev.ixl.0.pf.que6.no_desc_avail: 0 > > dev.ixl.0.pf.que6.tx_dma_setup: 0 > > dev.ixl.0.pf.que6.tso_tx: 0 > > dev.ixl.0.pf.que6.irqs: 394961024 > > dev.ixl.0.pf.que6.dropped: 0 > > dev.ixl.0.pf.que6.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que5.rx_bytes: 222919535872 > > dev.ixl.0.pf.que5.rx_packets: 466659705 > > dev.ixl.0.pf.que5.tx_bytes: 647689764751 > > dev.ixl.0.pf.que5.tx_packets: 582532691 > > dev.ixl.0.pf.que5.no_desc_avail: 0 > > 
dev.ixl.0.pf.que5.tx_dma_setup: 0 > > dev.ixl.0.pf.que5.tso_tx: 5 > > dev.ixl.0.pf.que5.irqs: 404552229 > > dev.ixl.0.pf.que5.dropped: 0 > > dev.ixl.0.pf.que5.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que4.rx_bytes: 231706806551 > > dev.ixl.0.pf.que4.rx_packets: 464397112 > > dev.ixl.0.pf.que4.tx_bytes: 669945424739 > > dev.ixl.0.pf.que4.tx_packets: 598527594 > > dev.ixl.0.pf.que4.no_desc_avail: 0 > > dev.ixl.0.pf.que4.tx_dma_setup: 0 > > dev.ixl.0.pf.que4.tso_tx: 452 > > dev.ixl.0.pf.que4.irqs: 405018727 > > dev.ixl.0.pf.que4.dropped: 0 > > dev.ixl.0.pf.que4.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que3.rx_bytes: 217942511336 > > dev.ixl.0.pf.que3.rx_packets: 456454137 > > dev.ixl.0.pf.que3.tx_bytes: 674027217503 > > dev.ixl.0.pf.que3.tx_packets: 604815959 > > dev.ixl.0.pf.que3.no_desc_avail: 0 > > dev.ixl.0.pf.que3.tx_dma_setup: 0 > > dev.ixl.0.pf.que3.tso_tx: 0 > > dev.ixl.0.pf.que3.irqs: 399890434 > > dev.ixl.0.pf.que3.dropped: 0 > > dev.ixl.0.pf.que3.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que2.rx_bytes: 235057952930 > > dev.ixl.0.pf.que2.rx_packets: 470668205 > > dev.ixl.0.pf.que2.tx_bytes: 653598762323 > > dev.ixl.0.pf.que2.tx_packets: 595468539 > > dev.ixl.0.pf.que2.no_desc_avail: 0 > > dev.ixl.0.pf.que2.tx_dma_setup: 0 > > dev.ixl.0.pf.que2.tso_tx: 0 > > dev.ixl.0.pf.que2.irqs: 410972406 > > dev.ixl.0.pf.que2.dropped: 0 > > dev.ixl.0.pf.que2.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que1.rx_bytes: 212570053522 > > dev.ixl.0.pf.que1.rx_packets: 456981561 > > dev.ixl.0.pf.que1.tx_bytes: 677227126330 > > dev.ixl.0.pf.que1.tx_packets: 612428010 > > dev.ixl.0.pf.que1.no_desc_avail: 0 > > dev.ixl.0.pf.que1.tx_dma_setup: 0 > > dev.ixl.0.pf.que1.tso_tx: 0 > > dev.ixl.0.pf.que1.irqs: 404727745 > > dev.ixl.0.pf.que1.dropped: 0 > > dev.ixl.0.pf.que1.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.que0.rx_bytes: 239424279142 > > dev.ixl.0.pf.que0.rx_packets: 479078356 > > dev.ixl.0.pf.que0.tx_bytes: 513283 > > dev.ixl.0.pf.que0.tx_packets: 3990 > > dev.ixl.0.pf.que0.no_desc_avail: 0 > 
> dev.ixl.0.pf.que0.tx_dma_setup: 0 > > dev.ixl.0.pf.que0.tso_tx: 0 > > dev.ixl.0.pf.que0.irqs: 178414974 > > dev.ixl.0.pf.que0.dropped: 0 > > dev.ixl.0.pf.que0.mbuf_defrag_failed: 0 > > dev.ixl.0.pf.bcast_pkts_txd: 302 > > dev.ixl.0.pf.mcast_pkts_txd: 33965 > > dev.ixl.0.pf.ucast_pkts_txd: 6958908879 > > dev.ixl.0.pf.good_octets_txd: 7669637462330 > > dev.ixl.0.pf.rx_discards: 0 > > dev.ixl.0.pf.bcast_pkts_rcvd: 1 > > dev.ixl.0.pf.mcast_pkts_rcvd: 49549 > > dev.ixl.0.pf.ucast_pkts_rcvd: 5392999777 > > dev.ixl.0.pf.good_octets_rcvd: 2648906886817 > > dev.ixl.0.vc_debug_level: 1 > > dev.ixl.0.admin_irq: 0 > > dev.ixl.0.watchdog_events: 0 > > dev.ixl.0.debug: 0 > > dev.ixl.0.dynamic_tx_itr: 0 > > dev.ixl.0.tx_itr: 122 > > dev.ixl.0.dynamic_rx_itr: 0 > > dev.ixl.0.rx_itr: 62 > > dev.ixl.0.fw_version: f4.33 a1.2 n04.42 e8000191d > > dev.ixl.0.current_speed: 10G > > dev.ixl.0.advertise_speed: 0 > > dev.ixl.0.fc: 0 > > dev.ixl.0.%parent: pci129 > > dev.ixl.0.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 > > subdevice=0x0002 class=0x020000 > > dev.ixl.0.%location: slot=0 function=0 handle=\_SB_.PCI1.QR3A.H000 > > dev.ixl.0.%driver: ixl > > dev.ixl.0.%desc: Intel(R) Ethernet Connection XL710 Driver, > Version - 1.4.0 > > dev.ixl.%parent: >

From owner-freebsd-net@freebsd.org Wed Aug 19 19:41:33 2015
From: Adrian Chadd <adrian.chadd@gmail.com>
Date: Wed, 19 Aug 2015 12:41:32 -0700
To: Evgeny Khorokhorin
Cc: Eric Joyner, hiren panchasara, FreeBSD Net
Subject: Re: FreeBSD 10.2-STABLE + Intel XL710 - free queues
In-Reply-To: <55D4DAB3.1020401@maxnet.ru>
References: <55D49611.40603@maxnet.ru> <20150819180051.GM94440@strugglingcoder.info> <55D4DAB3.1020401@maxnet.ru>

No, it's not the RSS option - it's the RSS configuration in the NIC for steering traffic into different queues based on header contents. The RSS kernel option includes the framework that ties it all together into the network stack; if you don't use it (which is the default), the NICs are free to do whatever they want and there's no affinity in the network stack.
Eric - does the intel driver / hardware here support receive traffic distribution into > 16 queues?

-adrian

On 19 August 2015 at 12:36, Evgeny Khorokhorin wrote:
> Eric,
> I updated this driver in the kernel, not as a module. And I removed
> #include "opt_rss.h"
> from if_ixl.c and ixl_txrx.c:
>
> #ifndef IXL_STANDALONE_BUILD
> #include "opt_inet.h"
> #include "opt_inet6.h"
> #include "opt_rss.h"
> #endif
>
> because RSS is only in HEAD.
> Could I break something by doing this?
>
> Best regards,
> Evgeny Khorokhorin
>
> 19.08.2015 21:17, Eric Joyner writes:
>>
>> The IXLV_MAX_QUEUES value is for the VF driver; the standard driver should
>> be able to allocate and properly use up to 64 queues.
>>
>> That said, you're only getting rx traffic on the first 16 queues, so that
>> looks like a bug in the driver. I'll take a look at it.
>>
>> - Eric
>>
>> On Wed, Aug 19, 2015 at 11:00 AM hiren panchasara wrote:
>>
>> On 08/19/15 at 05:43P, Evgeny Khorokhorin wrote:
>> > Hi All,
>> >
>> > FreeBSD 10.2-STABLE
>> > 2*CPU Intel E5-2643v3 with HyperThreading enabled
>> > Intel XL710 network adapter
>> > I updated the ixl driver to version 1.4.0 from download.intel.com
>> >
>> > Every ixl interface creates 24 queues (6 cores * 2 HT * 2 CPUs) but
>> > utilizes only 16-17 of them. What is the reason for this behavior, or
>> > is it a driver bug?
>>
>> Not sure what the h/w limit is, but this may be a possible cause:
>> #define IXLV_MAX_QUEUES 16
>> in sys/dev/ixl/ixlv.h
>>
>> and ixlv_init_msix() doing:
>> if (queues > IXLV_MAX_QUEUES)
>> queues = IXLV_MAX_QUEUES;
>>
>> Adding eric from intel to confirm.
>> >> Cheers, >> Hiren >> > >> > irq284: ixl0:q0 177563088 2054 >> > irq285: ixl0:q1 402668179 4659 >> > irq286: ixl0:q2 408885088 4731 >> > irq287: ixl0:q3 397744300 4602 >> > irq288: ixl0:q4 403040766 4663 >> > irq289: ixl0:q5 402499314 4657 >> > irq290: ixl0:q6 392693663 4543 >> > irq291: ixl0:q7 389364966 4505 >> > irq292: ixl0:q8 243244346 2814 >> > irq293: ixl0:q9 216834450 2509 >> > irq294: ixl0:q10 229460056 2655 >> > irq295: ixl0:q11 219591953 2540 >> > irq296: ixl0:q12 228944960 2649 >> > irq297: ixl0:q13 226385454 2619 >> > irq298: ixl0:q14 219174953 2536 >> > irq299: ixl0:q15 222151378 2570 >> > irq300: ixl0:q16 82799713 958 >> > irq301: ixl0:q17 6131 0 >> > irq302: ixl0:q18 5586 0 >> > irq303: ixl0:q19 6975 0 >> > irq304: ixl0:q20 6243 0 >> > irq305: ixl0:q21 6729 0 >> > irq306: ixl0:q22 6623 0 >> > irq307: ixl0:q23 7306 0 >> > irq309: ixl1:q0 174074462 2014 >> > irq310: ixl1:q1 435716449 5041 >> > irq311: ixl1:q2 431030443 4987 >> > irq312: ixl1:q3 424156413 4907 >> > irq313: ixl1:q4 414791657 4799 >> > irq314: ixl1:q5 420260382 4862 >> > irq315: ixl1:q6 415645708 4809 >> > irq316: ixl1:q7 422783859 4892 >> > irq317: ixl1:q8 252737383 2924 >> > irq318: ixl1:q9 269655708 3120 >> > irq319: ixl1:q10 252397826 2920 >> > irq320: ixl1:q11 255649144 2958 >> > irq321: ixl1:q12 246025621 2846 >> > irq322: ixl1:q13 240176554 2779 >> > irq323: ixl1:q14 254882418 2949 >> > irq324: ixl1:q15 236846536 2740 >> > irq325: ixl1:q16 86794467 1004 >> > irq326: ixl1:q17 83 0 >> > irq327: ixl1:q18 74 0 >> > irq328: ixl1:q19 202 0 >> > irq329: ixl1:q20 99 0 >> > irq330: ixl1:q21 96 0 >> > irq331: ixl1:q22 91 0 >> > irq332: ixl1:q23 89 0 >> > >> > last pid: 28710; load averages: 7.16, 6.76, 6.49 up >> 1+00:00:41 17:40:46 >> > 391 processes: 32 running, 215 sleeping, 144 waiting >> > CPU 0: 0.0% user, 0.0% nice, 0.0% system, 49.2% interrupt, >> 50.8% idle >> > CPU 1: 0.0% user, 0.0% nice, 0.4% system, 41.3% interrupt, >> 58.3% idle >> > CPU 2: 0.0% user, 0.0% nice, 0.0% 
system, 39.0% interrupt, >> 61.0% idle >> > CPU 3: 0.0% user, 0.0% nice, 0.0% system, 46.5% interrupt, >> 53.5% idle >> > CPU 4: 0.0% user, 0.0% nice, 0.0% system, 37.4% interrupt, >> 62.6% idle >> > CPU 5: 0.0% user, 0.0% nice, 0.0% system, 40.9% interrupt, >> 59.1% idle >> > CPU 6: 0.0% user, 0.0% nice, 0.0% system, 40.2% interrupt, >> 59.8% idle >> > CPU 7: 0.0% user, 0.0% nice, 0.0% system, 45.3% interrupt, >> 54.7% idle >> > CPU 8: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, >> 79.5% idle >> > CPU 9: 0.0% user, 0.0% nice, 0.0% system, 25.2% interrupt, >> 74.8% idle >> > CPU 10: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, >> 76.8% idle >> > CPU 11: 0.0% user, 0.0% nice, 0.0% system, 19.3% interrupt, >> 80.7% idle >> > CPU 12: 0.0% user, 0.0% nice, 0.0% system, 28.7% interrupt, >> 71.3% idle >> > CPU 13: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, >> 79.5% idle >> > CPU 14: 0.0% user, 0.0% nice, 0.0% system, 35.0% interrupt, >> 65.0% idle >> > CPU 15: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, >> 76.8% idle >> > CPU 16: 0.0% user, 0.0% nice, 0.4% system, 1.2% interrupt, >> 98.4% idle >> > CPU 17: 0.0% user, 0.0% nice, 2.0% system, 0.0% interrupt, >> 98.0% idle >> > CPU 18: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, >> 97.6% idle >> > CPU 19: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, >> 97.2% idle >> > CPU 20: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, >> 97.6% idle >> > CPU 21: 0.0% user, 0.0% nice, 1.6% system, 0.0% interrupt, >> 98.4% idle >> > CPU 22: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, >> 97.2% idle >> > CPU 23: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, >> 99.6% idle >> > >> > # netstat -I ixl0 -w1 -h >> > input ixl0 output >> > packets errs idrops bytes packets errs bytes colls >> > 253K 0 0 136M 311K 0 325M 0 >> > 251K 0 0 129M 314K 0 334M 0 >> > 250K 0 0 135M 313K 0 333M 0 >> > >> > hw.ixl.tx_itr: 122 >> > hw.ixl.rx_itr: 62 >> > hw.ixl.dynamic_tx_itr: 0 >> > 
hw.ixl.dynamic_rx_itr: 0 >> > hw.ixl.max_queues: 0 >> > hw.ixl.ring_size: 4096 >> > hw.ixl.enable_msix: 1 >> > dev.ixl.3.mac.xoff_recvd: 0 >> > dev.ixl.3.mac.xoff_txd: 0 >> > dev.ixl.3.mac.xon_recvd: 0 >> > dev.ixl.3.mac.xon_txd: 0 >> > dev.ixl.3.mac.tx_frames_big: 0 >> > dev.ixl.3.mac.tx_frames_1024_1522: 0 >> > dev.ixl.3.mac.tx_frames_512_1023: 0 >> > dev.ixl.3.mac.tx_frames_256_511: 0 >> > dev.ixl.3.mac.tx_frames_128_255: 0 >> > dev.ixl.3.mac.tx_frames_65_127: 0 >> > dev.ixl.3.mac.tx_frames_64: 0 >> > dev.ixl.3.mac.checksum_errors: 0 >> > dev.ixl.3.mac.rx_jabber: 0 >> > dev.ixl.3.mac.rx_oversized: 0 >> > dev.ixl.3.mac.rx_fragmented: 0 >> > dev.ixl.3.mac.rx_undersize: 0 >> > dev.ixl.3.mac.rx_frames_big: 0 >> > dev.ixl.3.mac.rx_frames_1024_1522: 0 >> > dev.ixl.3.mac.rx_frames_512_1023: 0 >> > dev.ixl.3.mac.rx_frames_256_511: 0 >> > dev.ixl.3.mac.rx_frames_128_255: 0 >> > dev.ixl.3.mac.rx_frames_65_127: 0 >> > dev.ixl.3.mac.rx_frames_64: 0 >> > dev.ixl.3.mac.rx_length_errors: 0 >> > dev.ixl.3.mac.remote_faults: 0 >> > dev.ixl.3.mac.local_faults: 0 >> > dev.ixl.3.mac.illegal_bytes: 0 >> > dev.ixl.3.mac.crc_errors: 0 >> > dev.ixl.3.mac.bcast_pkts_txd: 0 >> > dev.ixl.3.mac.mcast_pkts_txd: 0 >> > dev.ixl.3.mac.ucast_pkts_txd: 0 >> > dev.ixl.3.mac.good_octets_txd: 0 >> > dev.ixl.3.mac.rx_discards: 0 >> > dev.ixl.3.mac.bcast_pkts_rcvd: 0 >> > dev.ixl.3.mac.mcast_pkts_rcvd: 0 >> > dev.ixl.3.mac.ucast_pkts_rcvd: 0 >> > dev.ixl.3.mac.good_octets_rcvd: 0 >> > dev.ixl.3.pf.que23.rx_bytes: 0 >> > dev.ixl.3.pf.que23.rx_packets: 0 >> > dev.ixl.3.pf.que23.tx_bytes: 0 >> > dev.ixl.3.pf.que23.tx_packets: 0 >> > dev.ixl.3.pf.que23.no_desc_avail: 0 >> > dev.ixl.3.pf.que23.tx_dma_setup: 0 >> > dev.ixl.3.pf.que23.tso_tx: 0 >> > dev.ixl.3.pf.que23.irqs: 0 >> > dev.ixl.3.pf.que23.dropped: 0 >> > dev.ixl.3.pf.que23.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que22.rx_bytes: 0 >> > dev.ixl.3.pf.que22.rx_packets: 0 >> > dev.ixl.3.pf.que22.tx_bytes: 0 >> > dev.ixl.3.pf.que22.tx_packets: 0 >> > 
dev.ixl.3.pf.que22.no_desc_avail: 0 >> > dev.ixl.3.pf.que22.tx_dma_setup: 0 >> > dev.ixl.3.pf.que22.tso_tx: 0 >> > dev.ixl.3.pf.que22.irqs: 0 >> > dev.ixl.3.pf.que22.dropped: 0 >> > dev.ixl.3.pf.que22.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que21.rx_bytes: 0 >> > dev.ixl.3.pf.que21.rx_packets: 0 >> > dev.ixl.3.pf.que21.tx_bytes: 0 >> > dev.ixl.3.pf.que21.tx_packets: 0 >> > dev.ixl.3.pf.que21.no_desc_avail: 0 >> > dev.ixl.3.pf.que21.tx_dma_setup: 0 >> > dev.ixl.3.pf.que21.tso_tx: 0 >> > dev.ixl.3.pf.que21.irqs: 0 >> > dev.ixl.3.pf.que21.dropped: 0 >> > dev.ixl.3.pf.que21.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que20.rx_bytes: 0 >> > dev.ixl.3.pf.que20.rx_packets: 0 >> > dev.ixl.3.pf.que20.tx_bytes: 0 >> > dev.ixl.3.pf.que20.tx_packets: 0 >> > dev.ixl.3.pf.que20.no_desc_avail: 0 >> > dev.ixl.3.pf.que20.tx_dma_setup: 0 >> > dev.ixl.3.pf.que20.tso_tx: 0 >> > dev.ixl.3.pf.que20.irqs: 0 >> > dev.ixl.3.pf.que20.dropped: 0 >> > dev.ixl.3.pf.que20.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que19.rx_bytes: 0 >> > dev.ixl.3.pf.que19.rx_packets: 0 >> > dev.ixl.3.pf.que19.tx_bytes: 0 >> > dev.ixl.3.pf.que19.tx_packets: 0 >> > dev.ixl.3.pf.que19.no_desc_avail: 0 >> > dev.ixl.3.pf.que19.tx_dma_setup: 0 >> > dev.ixl.3.pf.que19.tso_tx: 0 >> > dev.ixl.3.pf.que19.irqs: 0 >> > dev.ixl.3.pf.que19.dropped: 0 >> > dev.ixl.3.pf.que19.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que18.rx_bytes: 0 >> > dev.ixl.3.pf.que18.rx_packets: 0 >> > dev.ixl.3.pf.que18.tx_bytes: 0 >> > dev.ixl.3.pf.que18.tx_packets: 0 >> > dev.ixl.3.pf.que18.no_desc_avail: 0 >> > dev.ixl.3.pf.que18.tx_dma_setup: 0 >> > dev.ixl.3.pf.que18.tso_tx: 0 >> > dev.ixl.3.pf.que18.irqs: 0 >> > dev.ixl.3.pf.que18.dropped: 0 >> > dev.ixl.3.pf.que18.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que17.rx_bytes: 0 >> > dev.ixl.3.pf.que17.rx_packets: 0 >> > dev.ixl.3.pf.que17.tx_bytes: 0 >> > dev.ixl.3.pf.que17.tx_packets: 0 >> > dev.ixl.3.pf.que17.no_desc_avail: 0 >> > dev.ixl.3.pf.que17.tx_dma_setup: 0 >> > dev.ixl.3.pf.que17.tso_tx: 0 >> > 
dev.ixl.3.pf.que17.irqs: 0 >> > dev.ixl.3.pf.que17.dropped: 0 >> > dev.ixl.3.pf.que17.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que16.rx_bytes: 0 >> > dev.ixl.3.pf.que16.rx_packets: 0 >> > dev.ixl.3.pf.que16.tx_bytes: 0 >> > dev.ixl.3.pf.que16.tx_packets: 0 >> > dev.ixl.3.pf.que16.no_desc_avail: 0 >> > dev.ixl.3.pf.que16.tx_dma_setup: 0 >> > dev.ixl.3.pf.que16.tso_tx: 0 >> > dev.ixl.3.pf.que16.irqs: 0 >> > dev.ixl.3.pf.que16.dropped: 0 >> > dev.ixl.3.pf.que16.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que15.rx_bytes: 0 >> > dev.ixl.3.pf.que15.rx_packets: 0 >> > dev.ixl.3.pf.que15.tx_bytes: 0 >> > dev.ixl.3.pf.que15.tx_packets: 0 >> > dev.ixl.3.pf.que15.no_desc_avail: 0 >> > dev.ixl.3.pf.que15.tx_dma_setup: 0 >> > dev.ixl.3.pf.que15.tso_tx: 0 >> > dev.ixl.3.pf.que15.irqs: 0 >> > dev.ixl.3.pf.que15.dropped: 0 >> > dev.ixl.3.pf.que15.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que14.rx_bytes: 0 >> > dev.ixl.3.pf.que14.rx_packets: 0 >> > dev.ixl.3.pf.que14.tx_bytes: 0 >> > dev.ixl.3.pf.que14.tx_packets: 0 >> > dev.ixl.3.pf.que14.no_desc_avail: 0 >> > dev.ixl.3.pf.que14.tx_dma_setup: 0 >> > dev.ixl.3.pf.que14.tso_tx: 0 >> > dev.ixl.3.pf.que14.irqs: 0 >> > dev.ixl.3.pf.que14.dropped: 0 >> > dev.ixl.3.pf.que14.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que13.rx_bytes: 0 >> > dev.ixl.3.pf.que13.rx_packets: 0 >> > dev.ixl.3.pf.que13.tx_bytes: 0 >> > dev.ixl.3.pf.que13.tx_packets: 0 >> > dev.ixl.3.pf.que13.no_desc_avail: 0 >> > dev.ixl.3.pf.que13.tx_dma_setup: 0 >> > dev.ixl.3.pf.que13.tso_tx: 0 >> > dev.ixl.3.pf.que13.irqs: 0 >> > dev.ixl.3.pf.que13.dropped: 0 >> > dev.ixl.3.pf.que13.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que12.rx_bytes: 0 >> > dev.ixl.3.pf.que12.rx_packets: 0 >> > dev.ixl.3.pf.que12.tx_bytes: 0 >> > dev.ixl.3.pf.que12.tx_packets: 0 >> > dev.ixl.3.pf.que12.no_desc_avail: 0 >> > dev.ixl.3.pf.que12.tx_dma_setup: 0 >> > dev.ixl.3.pf.que12.tso_tx: 0 >> > dev.ixl.3.pf.que12.irqs: 0 >> > dev.ixl.3.pf.que12.dropped: 0 >> > dev.ixl.3.pf.que12.mbuf_defrag_failed: 0 >> > 
dev.ixl.3.pf.que11.rx_bytes: 0 >> > dev.ixl.3.pf.que11.rx_packets: 0 >> > dev.ixl.3.pf.que11.tx_bytes: 0 >> > dev.ixl.3.pf.que11.tx_packets: 0 >> > dev.ixl.3.pf.que11.no_desc_avail: 0 >> > dev.ixl.3.pf.que11.tx_dma_setup: 0 >> > dev.ixl.3.pf.que11.tso_tx: 0 >> > dev.ixl.3.pf.que11.irqs: 0 >> > dev.ixl.3.pf.que11.dropped: 0 >> > dev.ixl.3.pf.que11.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que10.rx_bytes: 0 >> > dev.ixl.3.pf.que10.rx_packets: 0 >> > dev.ixl.3.pf.que10.tx_bytes: 0 >> > dev.ixl.3.pf.que10.tx_packets: 0 >> > dev.ixl.3.pf.que10.no_desc_avail: 0 >> > dev.ixl.3.pf.que10.tx_dma_setup: 0 >> > dev.ixl.3.pf.que10.tso_tx: 0 >> > dev.ixl.3.pf.que10.irqs: 0 >> > dev.ixl.3.pf.que10.dropped: 0 >> > dev.ixl.3.pf.que10.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que9.rx_bytes: 0 >> > dev.ixl.3.pf.que9.rx_packets: 0 >> > dev.ixl.3.pf.que9.tx_bytes: 0 >> > dev.ixl.3.pf.que9.tx_packets: 0 >> > dev.ixl.3.pf.que9.no_desc_avail: 0 >> > dev.ixl.3.pf.que9.tx_dma_setup: 0 >> > dev.ixl.3.pf.que9.tso_tx: 0 >> > dev.ixl.3.pf.que9.irqs: 0 >> > dev.ixl.3.pf.que9.dropped: 0 >> > dev.ixl.3.pf.que9.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que8.rx_bytes: 0 >> > dev.ixl.3.pf.que8.rx_packets: 0 >> > dev.ixl.3.pf.que8.tx_bytes: 0 >> > dev.ixl.3.pf.que8.tx_packets: 0 >> > dev.ixl.3.pf.que8.no_desc_avail: 0 >> > dev.ixl.3.pf.que8.tx_dma_setup: 0 >> > dev.ixl.3.pf.que8.tso_tx: 0 >> > dev.ixl.3.pf.que8.irqs: 0 >> > dev.ixl.3.pf.que8.dropped: 0 >> > dev.ixl.3.pf.que8.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que7.rx_bytes: 0 >> > dev.ixl.3.pf.que7.rx_packets: 0 >> > dev.ixl.3.pf.que7.tx_bytes: 0 >> > dev.ixl.3.pf.que7.tx_packets: 0 >> > dev.ixl.3.pf.que7.no_desc_avail: 0 >> > dev.ixl.3.pf.que7.tx_dma_setup: 0 >> > dev.ixl.3.pf.que7.tso_tx: 0 >> > dev.ixl.3.pf.que7.irqs: 0 >> > dev.ixl.3.pf.que7.dropped: 0 >> > dev.ixl.3.pf.que7.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que6.rx_bytes: 0 >> > dev.ixl.3.pf.que6.rx_packets: 0 >> > dev.ixl.3.pf.que6.tx_bytes: 0 >> > dev.ixl.3.pf.que6.tx_packets: 0 >> > 
dev.ixl.3.pf.que6.no_desc_avail: 0 >> > dev.ixl.3.pf.que6.tx_dma_setup: 0 >> > dev.ixl.3.pf.que6.tso_tx: 0 >> > dev.ixl.3.pf.que6.irqs: 0 >> > dev.ixl.3.pf.que6.dropped: 0 >> > dev.ixl.3.pf.que6.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que5.rx_bytes: 0 >> > dev.ixl.3.pf.que5.rx_packets: 0 >> > dev.ixl.3.pf.que5.tx_bytes: 0 >> > dev.ixl.3.pf.que5.tx_packets: 0 >> > dev.ixl.3.pf.que5.no_desc_avail: 0 >> > dev.ixl.3.pf.que5.tx_dma_setup: 0 >> > dev.ixl.3.pf.que5.tso_tx: 0 >> > dev.ixl.3.pf.que5.irqs: 0 >> > dev.ixl.3.pf.que5.dropped: 0 >> > dev.ixl.3.pf.que5.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que4.rx_bytes: 0 >> > dev.ixl.3.pf.que4.rx_packets: 0 >> > dev.ixl.3.pf.que4.tx_bytes: 0 >> > dev.ixl.3.pf.que4.tx_packets: 0 >> > dev.ixl.3.pf.que4.no_desc_avail: 0 >> > dev.ixl.3.pf.que4.tx_dma_setup: 0 >> > dev.ixl.3.pf.que4.tso_tx: 0 >> > dev.ixl.3.pf.que4.irqs: 0 >> > dev.ixl.3.pf.que4.dropped: 0 >> > dev.ixl.3.pf.que4.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que3.rx_bytes: 0 >> > dev.ixl.3.pf.que3.rx_packets: 0 >> > dev.ixl.3.pf.que3.tx_bytes: 0 >> > dev.ixl.3.pf.que3.tx_packets: 0 >> > dev.ixl.3.pf.que3.no_desc_avail: 0 >> > dev.ixl.3.pf.que3.tx_dma_setup: 0 >> > dev.ixl.3.pf.que3.tso_tx: 0 >> > dev.ixl.3.pf.que3.irqs: 0 >> > dev.ixl.3.pf.que3.dropped: 0 >> > dev.ixl.3.pf.que3.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que2.rx_bytes: 0 >> > dev.ixl.3.pf.que2.rx_packets: 0 >> > dev.ixl.3.pf.que2.tx_bytes: 0 >> > dev.ixl.3.pf.que2.tx_packets: 0 >> > dev.ixl.3.pf.que2.no_desc_avail: 0 >> > dev.ixl.3.pf.que2.tx_dma_setup: 0 >> > dev.ixl.3.pf.que2.tso_tx: 0 >> > dev.ixl.3.pf.que2.irqs: 0 >> > dev.ixl.3.pf.que2.dropped: 0 >> > dev.ixl.3.pf.que2.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que1.rx_bytes: 0 >> > dev.ixl.3.pf.que1.rx_packets: 0 >> > dev.ixl.3.pf.que1.tx_bytes: 0 >> > dev.ixl.3.pf.que1.tx_packets: 0 >> > dev.ixl.3.pf.que1.no_desc_avail: 0 >> > dev.ixl.3.pf.que1.tx_dma_setup: 0 >> > dev.ixl.3.pf.que1.tso_tx: 0 >> > dev.ixl.3.pf.que1.irqs: 0 >> > dev.ixl.3.pf.que1.dropped: 
0 >> > dev.ixl.3.pf.que1.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.que0.rx_bytes: 0 >> > dev.ixl.3.pf.que0.rx_packets: 0 >> > dev.ixl.3.pf.que0.tx_bytes: 0 >> > dev.ixl.3.pf.que0.tx_packets: 0 >> > dev.ixl.3.pf.que0.no_desc_avail: 0 >> > dev.ixl.3.pf.que0.tx_dma_setup: 0 >> > dev.ixl.3.pf.que0.tso_tx: 0 >> > dev.ixl.3.pf.que0.irqs: 0 >> > dev.ixl.3.pf.que0.dropped: 0 >> > dev.ixl.3.pf.que0.mbuf_defrag_failed: 0 >> > dev.ixl.3.pf.bcast_pkts_txd: 0 >> > dev.ixl.3.pf.mcast_pkts_txd: 0 >> > dev.ixl.3.pf.ucast_pkts_txd: 0 >> > dev.ixl.3.pf.good_octets_txd: 0 >> > dev.ixl.3.pf.rx_discards: 0 >> > dev.ixl.3.pf.bcast_pkts_rcvd: 0 >> > dev.ixl.3.pf.mcast_pkts_rcvd: 0 >> > dev.ixl.3.pf.ucast_pkts_rcvd: 0 >> > dev.ixl.3.pf.good_octets_rcvd: 0 >> > dev.ixl.3.vc_debug_level: 1 >> > dev.ixl.3.admin_irq: 0 >> > dev.ixl.3.watchdog_events: 0 >> > dev.ixl.3.debug: 0 >> > dev.ixl.3.dynamic_tx_itr: 0 >> > dev.ixl.3.tx_itr: 122 >> > dev.ixl.3.dynamic_rx_itr: 0 >> > dev.ixl.3.rx_itr: 62 >> > dev.ixl.3.fw_version: f4.33 a1.2 n04.42 e8000191d >> > dev.ixl.3.current_speed: Unknown >> > dev.ixl.3.advertise_speed: 0 >> > dev.ixl.3.fc: 0 >> > dev.ixl.3.%parent: pci129 >> > dev.ixl.3.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 >> > subdevice=0x0000 class=0x020000 >> > dev.ixl.3.%location: slot=0 function=3 handle=\_SB_.PCI1.QR3A.H003 >> > dev.ixl.3.%driver: ixl >> > dev.ixl.3.%desc: Intel(R) Ethernet Connection XL710 Driver, >> Version - 1.4.0 >> > dev.ixl.2.mac.xoff_recvd: 0 >> > dev.ixl.2.mac.xoff_txd: 0 >> > dev.ixl.2.mac.xon_recvd: 0 >> > dev.ixl.2.mac.xon_txd: 0 >> > dev.ixl.2.mac.tx_frames_big: 0 >> > dev.ixl.2.mac.tx_frames_1024_1522: 0 >> > dev.ixl.2.mac.tx_frames_512_1023: 0 >> > dev.ixl.2.mac.tx_frames_256_511: 0 >> > dev.ixl.2.mac.tx_frames_128_255: 0 >> > dev.ixl.2.mac.tx_frames_65_127: 0 >> > dev.ixl.2.mac.tx_frames_64: 0 >> > dev.ixl.2.mac.checksum_errors: 0 >> > dev.ixl.2.mac.rx_jabber: 0 >> > dev.ixl.2.mac.rx_oversized: 0 >> >
dev.ixl.2.mac.rx_fragmented: 0 >> > dev.ixl.2.mac.rx_undersize: 0 >> > dev.ixl.2.mac.rx_frames_big: 0 >> > dev.ixl.2.mac.rx_frames_1024_1522: 0 >> > dev.ixl.2.mac.rx_frames_512_1023: 0 >> > dev.ixl.2.mac.rx_frames_256_511: 0 >> > dev.ixl.2.mac.rx_frames_128_255: 0 >> > dev.ixl.2.mac.rx_frames_65_127: 0 >> > dev.ixl.2.mac.rx_frames_64: 0 >> > dev.ixl.2.mac.rx_length_errors: 0 >> > dev.ixl.2.mac.remote_faults: 0 >> > dev.ixl.2.mac.local_faults: 0 >> > dev.ixl.2.mac.illegal_bytes: 0 >> > dev.ixl.2.mac.crc_errors: 0 >> > dev.ixl.2.mac.bcast_pkts_txd: 0 >> > dev.ixl.2.mac.mcast_pkts_txd: 0 >> > dev.ixl.2.mac.ucast_pkts_txd: 0 >> > dev.ixl.2.mac.good_octets_txd: 0 >> > dev.ixl.2.mac.rx_discards: 0 >> > dev.ixl.2.mac.bcast_pkts_rcvd: 0 >> > dev.ixl.2.mac.mcast_pkts_rcvd: 0 >> > dev.ixl.2.mac.ucast_pkts_rcvd: 0 >> > dev.ixl.2.mac.good_octets_rcvd: 0 >> > dev.ixl.2.pf.que23.rx_bytes: 0 >> > dev.ixl.2.pf.que23.rx_packets: 0 >> > dev.ixl.2.pf.que23.tx_bytes: 0 >> > dev.ixl.2.pf.que23.tx_packets: 0 >> > dev.ixl.2.pf.que23.no_desc_avail: 0 >> > dev.ixl.2.pf.que23.tx_dma_setup: 0 >> > dev.ixl.2.pf.que23.tso_tx: 0 >> > dev.ixl.2.pf.que23.irqs: 0 >> > dev.ixl.2.pf.que23.dropped: 0 >> > dev.ixl.2.pf.que23.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que22.rx_bytes: 0 >> > dev.ixl.2.pf.que22.rx_packets: 0 >> > dev.ixl.2.pf.que22.tx_bytes: 0 >> > dev.ixl.2.pf.que22.tx_packets: 0 >> > dev.ixl.2.pf.que22.no_desc_avail: 0 >> > dev.ixl.2.pf.que22.tx_dma_setup: 0 >> > dev.ixl.2.pf.que22.tso_tx: 0 >> > dev.ixl.2.pf.que22.irqs: 0 >> > dev.ixl.2.pf.que22.dropped: 0 >> > dev.ixl.2.pf.que22.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que21.rx_bytes: 0 >> > dev.ixl.2.pf.que21.rx_packets: 0 >> > dev.ixl.2.pf.que21.tx_bytes: 0 >> > dev.ixl.2.pf.que21.tx_packets: 0 >> > dev.ixl.2.pf.que21.no_desc_avail: 0 >> > dev.ixl.2.pf.que21.tx_dma_setup: 0 >> > dev.ixl.2.pf.que21.tso_tx: 0 >> > dev.ixl.2.pf.que21.irqs: 0 >> > dev.ixl.2.pf.que21.dropped: 0 >> > dev.ixl.2.pf.que21.mbuf_defrag_failed: 0 >> > 
dev.ixl.2.pf.que20.rx_bytes: 0 >> > dev.ixl.2.pf.que20.rx_packets: 0 >> > dev.ixl.2.pf.que20.tx_bytes: 0 >> > dev.ixl.2.pf.que20.tx_packets: 0 >> > dev.ixl.2.pf.que20.no_desc_avail: 0 >> > dev.ixl.2.pf.que20.tx_dma_setup: 0 >> > dev.ixl.2.pf.que20.tso_tx: 0 >> > dev.ixl.2.pf.que20.irqs: 0 >> > dev.ixl.2.pf.que20.dropped: 0 >> > dev.ixl.2.pf.que20.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que19.rx_bytes: 0 >> > dev.ixl.2.pf.que19.rx_packets: 0 >> > dev.ixl.2.pf.que19.tx_bytes: 0 >> > dev.ixl.2.pf.que19.tx_packets: 0 >> > dev.ixl.2.pf.que19.no_desc_avail: 0 >> > dev.ixl.2.pf.que19.tx_dma_setup: 0 >> > dev.ixl.2.pf.que19.tso_tx: 0 >> > dev.ixl.2.pf.que19.irqs: 0 >> > dev.ixl.2.pf.que19.dropped: 0 >> > dev.ixl.2.pf.que19.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que18.rx_bytes: 0 >> > dev.ixl.2.pf.que18.rx_packets: 0 >> > dev.ixl.2.pf.que18.tx_bytes: 0 >> > dev.ixl.2.pf.que18.tx_packets: 0 >> > dev.ixl.2.pf.que18.no_desc_avail: 0 >> > dev.ixl.2.pf.que18.tx_dma_setup: 0 >> > dev.ixl.2.pf.que18.tso_tx: 0 >> > dev.ixl.2.pf.que18.irqs: 0 >> > dev.ixl.2.pf.que18.dropped: 0 >> > dev.ixl.2.pf.que18.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que17.rx_bytes: 0 >> > dev.ixl.2.pf.que17.rx_packets: 0 >> > dev.ixl.2.pf.que17.tx_bytes: 0 >> > dev.ixl.2.pf.que17.tx_packets: 0 >> > dev.ixl.2.pf.que17.no_desc_avail: 0 >> > dev.ixl.2.pf.que17.tx_dma_setup: 0 >> > dev.ixl.2.pf.que17.tso_tx: 0 >> > dev.ixl.2.pf.que17.irqs: 0 >> > dev.ixl.2.pf.que17.dropped: 0 >> > dev.ixl.2.pf.que17.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que16.rx_bytes: 0 >> > dev.ixl.2.pf.que16.rx_packets: 0 >> > dev.ixl.2.pf.que16.tx_bytes: 0 >> > dev.ixl.2.pf.que16.tx_packets: 0 >> > dev.ixl.2.pf.que16.no_desc_avail: 0 >> > dev.ixl.2.pf.que16.tx_dma_setup: 0 >> > dev.ixl.2.pf.que16.tso_tx: 0 >> > dev.ixl.2.pf.que16.irqs: 0 >> > dev.ixl.2.pf.que16.dropped: 0 >> > dev.ixl.2.pf.que16.mbuf_defrag_failed: 0 >> > dev.ixl.2.pf.que15.rx_bytes: 0 >> > dev.ixl.2.pf.que15.rx_packets: 0 >> > dev.ixl.2.pf.que15.tx_bytes: 0 >> > 
>> > dev.ixl.2.pf.que15.tx_packets: 0
>> > dev.ixl.2.pf.que15.no_desc_avail: 0
>> > dev.ixl.2.pf.que15.tx_dma_setup: 0
>> > dev.ixl.2.pf.que15.tso_tx: 0
>> > dev.ixl.2.pf.que15.irqs: 0
>> > dev.ixl.2.pf.que15.dropped: 0
>> > dev.ixl.2.pf.que15.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que14.rx_bytes: 0
>> > dev.ixl.2.pf.que14.rx_packets: 0
>> > dev.ixl.2.pf.que14.tx_bytes: 0
>> > dev.ixl.2.pf.que14.tx_packets: 0
>> > dev.ixl.2.pf.que14.no_desc_avail: 0
>> > dev.ixl.2.pf.que14.tx_dma_setup: 0
>> > dev.ixl.2.pf.que14.tso_tx: 0
>> > dev.ixl.2.pf.que14.irqs: 0
>> > dev.ixl.2.pf.que14.dropped: 0
>> > dev.ixl.2.pf.que14.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que13.rx_bytes: 0
>> > dev.ixl.2.pf.que13.rx_packets: 0
>> > dev.ixl.2.pf.que13.tx_bytes: 0
>> > dev.ixl.2.pf.que13.tx_packets: 0
>> > dev.ixl.2.pf.que13.no_desc_avail: 0
>> > dev.ixl.2.pf.que13.tx_dma_setup: 0
>> > dev.ixl.2.pf.que13.tso_tx: 0
>> > dev.ixl.2.pf.que13.irqs: 0
>> > dev.ixl.2.pf.que13.dropped: 0
>> > dev.ixl.2.pf.que13.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que12.rx_bytes: 0
>> > dev.ixl.2.pf.que12.rx_packets: 0
>> > dev.ixl.2.pf.que12.tx_bytes: 0
>> > dev.ixl.2.pf.que12.tx_packets: 0
>> > dev.ixl.2.pf.que12.no_desc_avail: 0
>> > dev.ixl.2.pf.que12.tx_dma_setup: 0
>> > dev.ixl.2.pf.que12.tso_tx: 0
>> > dev.ixl.2.pf.que12.irqs: 0
>> > dev.ixl.2.pf.que12.dropped: 0
>> > dev.ixl.2.pf.que12.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que11.rx_bytes: 0
>> > dev.ixl.2.pf.que11.rx_packets: 0
>> > dev.ixl.2.pf.que11.tx_bytes: 0
>> > dev.ixl.2.pf.que11.tx_packets: 0
>> > dev.ixl.2.pf.que11.no_desc_avail: 0
>> > dev.ixl.2.pf.que11.tx_dma_setup: 0
>> > dev.ixl.2.pf.que11.tso_tx: 0
>> > dev.ixl.2.pf.que11.irqs: 0
>> > dev.ixl.2.pf.que11.dropped: 0
>> > dev.ixl.2.pf.que11.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que10.rx_bytes: 0
>> > dev.ixl.2.pf.que10.rx_packets: 0
>> > dev.ixl.2.pf.que10.tx_bytes: 0
>> > dev.ixl.2.pf.que10.tx_packets: 0
>> > dev.ixl.2.pf.que10.no_desc_avail: 0
>> > dev.ixl.2.pf.que10.tx_dma_setup: 0
>> > dev.ixl.2.pf.que10.tso_tx: 0
>> > dev.ixl.2.pf.que10.irqs: 0
>> > dev.ixl.2.pf.que10.dropped: 0
>> > dev.ixl.2.pf.que10.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que9.rx_bytes: 0
>> > dev.ixl.2.pf.que9.rx_packets: 0
>> > dev.ixl.2.pf.que9.tx_bytes: 0
>> > dev.ixl.2.pf.que9.tx_packets: 0
>> > dev.ixl.2.pf.que9.no_desc_avail: 0
>> > dev.ixl.2.pf.que9.tx_dma_setup: 0
>> > dev.ixl.2.pf.que9.tso_tx: 0
>> > dev.ixl.2.pf.que9.irqs: 0
>> > dev.ixl.2.pf.que9.dropped: 0
>> > dev.ixl.2.pf.que9.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que8.rx_bytes: 0
>> > dev.ixl.2.pf.que8.rx_packets: 0
>> > dev.ixl.2.pf.que8.tx_bytes: 0
>> > dev.ixl.2.pf.que8.tx_packets: 0
>> > dev.ixl.2.pf.que8.no_desc_avail: 0
>> > dev.ixl.2.pf.que8.tx_dma_setup: 0
>> > dev.ixl.2.pf.que8.tso_tx: 0
>> > dev.ixl.2.pf.que8.irqs: 0
>> > dev.ixl.2.pf.que8.dropped: 0
>> > dev.ixl.2.pf.que8.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que7.rx_bytes: 0
>> > dev.ixl.2.pf.que7.rx_packets: 0
>> > dev.ixl.2.pf.que7.tx_bytes: 0
>> > dev.ixl.2.pf.que7.tx_packets: 0
>> > dev.ixl.2.pf.que7.no_desc_avail: 0
>> > dev.ixl.2.pf.que7.tx_dma_setup: 0
>> > dev.ixl.2.pf.que7.tso_tx: 0
>> > dev.ixl.2.pf.que7.irqs: 0
>> > dev.ixl.2.pf.que7.dropped: 0
>> > dev.ixl.2.pf.que7.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que6.rx_bytes: 0
>> > dev.ixl.2.pf.que6.rx_packets: 0
>> > dev.ixl.2.pf.que6.tx_bytes: 0
>> > dev.ixl.2.pf.que6.tx_packets: 0
>> > dev.ixl.2.pf.que6.no_desc_avail: 0
>> > dev.ixl.2.pf.que6.tx_dma_setup: 0
>> > dev.ixl.2.pf.que6.tso_tx: 0
>> > dev.ixl.2.pf.que6.irqs: 0
>> > dev.ixl.2.pf.que6.dropped: 0
>> > dev.ixl.2.pf.que6.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que5.rx_bytes: 0
>> > dev.ixl.2.pf.que5.rx_packets: 0
>> > dev.ixl.2.pf.que5.tx_bytes: 0
>> > dev.ixl.2.pf.que5.tx_packets: 0
>> > dev.ixl.2.pf.que5.no_desc_avail: 0
>> > dev.ixl.2.pf.que5.tx_dma_setup: 0
>> > dev.ixl.2.pf.que5.tso_tx: 0
>> > dev.ixl.2.pf.que5.irqs: 0
>> > dev.ixl.2.pf.que5.dropped: 0
>> > dev.ixl.2.pf.que5.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que4.rx_bytes: 0
>> > dev.ixl.2.pf.que4.rx_packets: 0
>> > dev.ixl.2.pf.que4.tx_bytes: 0
>> > dev.ixl.2.pf.que4.tx_packets: 0
>> > dev.ixl.2.pf.que4.no_desc_avail: 0
>> > dev.ixl.2.pf.que4.tx_dma_setup: 0
>> > dev.ixl.2.pf.que4.tso_tx: 0
>> > dev.ixl.2.pf.que4.irqs: 0
>> > dev.ixl.2.pf.que4.dropped: 0
>> > dev.ixl.2.pf.que4.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que3.rx_bytes: 0
>> > dev.ixl.2.pf.que3.rx_packets: 0
>> > dev.ixl.2.pf.que3.tx_bytes: 0
>> > dev.ixl.2.pf.que3.tx_packets: 0
>> > dev.ixl.2.pf.que3.no_desc_avail: 0
>> > dev.ixl.2.pf.que3.tx_dma_setup: 0
>> > dev.ixl.2.pf.que3.tso_tx: 0
>> > dev.ixl.2.pf.que3.irqs: 0
>> > dev.ixl.2.pf.que3.dropped: 0
>> > dev.ixl.2.pf.que3.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que2.rx_bytes: 0
>> > dev.ixl.2.pf.que2.rx_packets: 0
>> > dev.ixl.2.pf.que2.tx_bytes: 0
>> > dev.ixl.2.pf.que2.tx_packets: 0
>> > dev.ixl.2.pf.que2.no_desc_avail: 0
>> > dev.ixl.2.pf.que2.tx_dma_setup: 0
>> > dev.ixl.2.pf.que2.tso_tx: 0
>> > dev.ixl.2.pf.que2.irqs: 0
>> > dev.ixl.2.pf.que2.dropped: 0
>> > dev.ixl.2.pf.que2.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que1.rx_bytes: 0
>> > dev.ixl.2.pf.que1.rx_packets: 0
>> > dev.ixl.2.pf.que1.tx_bytes: 0
>> > dev.ixl.2.pf.que1.tx_packets: 0
>> > dev.ixl.2.pf.que1.no_desc_avail: 0
>> > dev.ixl.2.pf.que1.tx_dma_setup: 0
>> > dev.ixl.2.pf.que1.tso_tx: 0
>> > dev.ixl.2.pf.que1.irqs: 0
>> > dev.ixl.2.pf.que1.dropped: 0
>> > dev.ixl.2.pf.que1.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.que0.rx_bytes: 0
>> > dev.ixl.2.pf.que0.rx_packets: 0
>> > dev.ixl.2.pf.que0.tx_bytes: 0
>> > dev.ixl.2.pf.que0.tx_packets: 0
>> > dev.ixl.2.pf.que0.no_desc_avail: 0
>> > dev.ixl.2.pf.que0.tx_dma_setup: 0
>> > dev.ixl.2.pf.que0.tso_tx: 0
>> > dev.ixl.2.pf.que0.irqs: 0
>> > dev.ixl.2.pf.que0.dropped: 0
>> > dev.ixl.2.pf.que0.mbuf_defrag_failed: 0
>> > dev.ixl.2.pf.bcast_pkts_txd: 0
>> > dev.ixl.2.pf.mcast_pkts_txd: 0
>> > dev.ixl.2.pf.ucast_pkts_txd: 0
>> > dev.ixl.2.pf.good_octets_txd: 0
>> > dev.ixl.2.pf.rx_discards: 0
>> > dev.ixl.2.pf.bcast_pkts_rcvd: 0
>> > dev.ixl.2.pf.mcast_pkts_rcvd: 0
>> > dev.ixl.2.pf.ucast_pkts_rcvd: 0
>> > dev.ixl.2.pf.good_octets_rcvd: 0
>> > dev.ixl.2.vc_debug_level: 1
>> > dev.ixl.2.admin_irq: 0
>> > dev.ixl.2.watchdog_events: 0
>> > dev.ixl.2.debug: 0
>> > dev.ixl.2.dynamic_tx_itr: 0
>> > dev.ixl.2.tx_itr: 122
>> > dev.ixl.2.dynamic_rx_itr: 0
>> > dev.ixl.2.rx_itr: 62
>> > dev.ixl.2.fw_version: f4.33 a1.2 n04.42 e8000191d
>> > dev.ixl.2.current_speed: Unknown
>> > dev.ixl.2.advertise_speed: 0
>> > dev.ixl.2.fc: 0
>> > dev.ixl.2.%parent: pci129
>> > dev.ixl.2.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000
>> > dev.ixl.2.%location: slot=0 function=2 handle=\_SB_.PCI1.QR3A.H002
>> > dev.ixl.2.%driver: ixl
>> > dev.ixl.2.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0
>> > dev.ixl.1.mac.xoff_recvd: 0
>> > dev.ixl.1.mac.xoff_txd: 0
>> > dev.ixl.1.mac.xon_recvd: 0
>> > dev.ixl.1.mac.xon_txd: 0
>> > dev.ixl.1.mac.tx_frames_big: 0
>> > dev.ixl.1.mac.tx_frames_1024_1522: 1565670684
>> > dev.ixl.1.mac.tx_frames_512_1023: 101286418
>> > dev.ixl.1.mac.tx_frames_256_511: 49713129
>> > dev.ixl.1.mac.tx_frames_128_255: 231617277
>> > dev.ixl.1.mac.tx_frames_65_127: 2052767669
>> > dev.ixl.1.mac.tx_frames_64: 1318689044
>> > dev.ixl.1.mac.checksum_errors: 0
>> > dev.ixl.1.mac.rx_jabber: 0
>> > dev.ixl.1.mac.rx_oversized: 0
>> > dev.ixl.1.mac.rx_fragmented: 0
>> > dev.ixl.1.mac.rx_undersize: 0
>> > dev.ixl.1.mac.rx_frames_big: 0
>> > dev.ixl.1.mac.rx_frames_1024_1522: 4960403414
>> > dev.ixl.1.mac.rx_frames_512_1023: 113675084
>> > dev.ixl.1.mac.rx_frames_256_511: 253904920
>> > dev.ixl.1.mac.rx_frames_128_255: 196369726
>> > dev.ixl.1.mac.rx_frames_65_127: 1436626211
>> > dev.ixl.1.mac.rx_frames_64: 242768681
>> > dev.ixl.1.mac.rx_length_errors: 0
>> > dev.ixl.1.mac.remote_faults: 0
>> > dev.ixl.1.mac.local_faults: 0
>> > dev.ixl.1.mac.illegal_bytes: 0
>> > dev.ixl.1.mac.crc_errors: 0
>> > dev.ixl.1.mac.bcast_pkts_txd: 277
>> > dev.ixl.1.mac.mcast_pkts_txd: 0
>> > dev.ixl.1.mac.ucast_pkts_txd: 5319743942
>> > dev.ixl.1.mac.good_octets_txd: 2642351885737
>> > dev.ixl.1.mac.rx_discards: 0
>> > dev.ixl.1.mac.bcast_pkts_rcvd: 5
>> > dev.ixl.1.mac.mcast_pkts_rcvd: 144
>> > dev.ixl.1.mac.ucast_pkts_rcvd: 7203747879
>> > dev.ixl.1.mac.good_octets_rcvd: 7770230492434
>> > dev.ixl.1.pf.que23.rx_bytes: 0
>> > dev.ixl.1.pf.que23.rx_packets: 0
>> > dev.ixl.1.pf.que23.tx_bytes: 7111
>> > dev.ixl.1.pf.que23.tx_packets: 88
>> > dev.ixl.1.pf.que23.no_desc_avail: 0
>> > dev.ixl.1.pf.que23.tx_dma_setup: 0
>> > dev.ixl.1.pf.que23.tso_tx: 0
>> > dev.ixl.1.pf.que23.irqs: 88
>> > dev.ixl.1.pf.que23.dropped: 0
>> > dev.ixl.1.pf.que23.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que22.rx_bytes: 0
>> > dev.ixl.1.pf.que22.rx_packets: 0
>> > dev.ixl.1.pf.que22.tx_bytes: 6792
>> > dev.ixl.1.pf.que22.tx_packets: 88
>> > dev.ixl.1.pf.que22.no_desc_avail: 0
>> > dev.ixl.1.pf.que22.tx_dma_setup: 0
>> > dev.ixl.1.pf.que22.tso_tx: 0
>> > dev.ixl.1.pf.que22.irqs: 89
>> > dev.ixl.1.pf.que22.dropped: 0
>> > dev.ixl.1.pf.que22.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que21.rx_bytes: 0
>> > dev.ixl.1.pf.que21.rx_packets: 0
>> > dev.ixl.1.pf.que21.tx_bytes: 7486
>> > dev.ixl.1.pf.que21.tx_packets: 93
>> > dev.ixl.1.pf.que21.no_desc_avail: 0
>> > dev.ixl.1.pf.que21.tx_dma_setup: 0
>> > dev.ixl.1.pf.que21.tso_tx: 0
>> > dev.ixl.1.pf.que21.irqs: 95
>> > dev.ixl.1.pf.que21.dropped: 0
>> > dev.ixl.1.pf.que21.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que20.rx_bytes: 0
>> > dev.ixl.1.pf.que20.rx_packets: 0
>> > dev.ixl.1.pf.que20.tx_bytes: 7850
>> > dev.ixl.1.pf.que20.tx_packets: 98
>> > dev.ixl.1.pf.que20.no_desc_avail: 0
>> > dev.ixl.1.pf.que20.tx_dma_setup: 0
>> > dev.ixl.1.pf.que20.tso_tx: 0
>> > dev.ixl.1.pf.que20.irqs: 99
>> > dev.ixl.1.pf.que20.dropped: 0
>> > dev.ixl.1.pf.que20.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que19.rx_bytes: 0
>> > dev.ixl.1.pf.que19.rx_packets: 0
>> > dev.ixl.1.pf.que19.tx_bytes: 64643
>> > dev.ixl.1.pf.que19.tx_packets: 202
>> > dev.ixl.1.pf.que19.no_desc_avail: 0
>> > dev.ixl.1.pf.que19.tx_dma_setup: 0
>> > dev.ixl.1.pf.que19.tso_tx: 0
>> > dev.ixl.1.pf.que19.irqs: 202
>> > dev.ixl.1.pf.que19.dropped: 0
>> > dev.ixl.1.pf.que19.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que18.rx_bytes: 0
>> > dev.ixl.1.pf.que18.rx_packets: 0
>> > dev.ixl.1.pf.que18.tx_bytes: 5940
>> > dev.ixl.1.pf.que18.tx_packets: 74
>> > dev.ixl.1.pf.que18.no_desc_avail: 0
>> > dev.ixl.1.pf.que18.tx_dma_setup: 0
>> > dev.ixl.1.pf.que18.tso_tx: 0
>> > dev.ixl.1.pf.que18.irqs: 74
>> > dev.ixl.1.pf.que18.dropped: 0
>> > dev.ixl.1.pf.que18.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que17.rx_bytes: 0
>> > dev.ixl.1.pf.que17.rx_packets: 0
>> > dev.ixl.1.pf.que17.tx_bytes: 11675
>> > dev.ixl.1.pf.que17.tx_packets: 83
>> > dev.ixl.1.pf.que17.no_desc_avail: 0
>> > dev.ixl.1.pf.que17.tx_dma_setup: 0
>> > dev.ixl.1.pf.que17.tso_tx: 0
>> > dev.ixl.1.pf.que17.irqs: 83
>> > dev.ixl.1.pf.que17.dropped: 0
>> > dev.ixl.1.pf.que17.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que16.rx_bytes: 0
>> > dev.ixl.1.pf.que16.rx_packets: 0
>> > dev.ixl.1.pf.que16.tx_bytes: 105750457831
>> > dev.ixl.1.pf.que16.tx_packets: 205406766
>> > dev.ixl.1.pf.que16.no_desc_avail: 0
>> > dev.ixl.1.pf.que16.tx_dma_setup: 0
>> > dev.ixl.1.pf.que16.tso_tx: 0
>> > dev.ixl.1.pf.que16.irqs: 87222978
>> > dev.ixl.1.pf.que16.dropped: 0
>> > dev.ixl.1.pf.que16.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que15.rx_bytes: 289558174088
>> > dev.ixl.1.pf.que15.rx_packets: 272466190
>> > dev.ixl.1.pf.que15.tx_bytes: 106152524681
>> > dev.ixl.1.pf.que15.tx_packets: 205379247
>> > dev.ixl.1.pf.que15.no_desc_avail: 0
>> > dev.ixl.1.pf.que15.tx_dma_setup: 0
>> > dev.ixl.1.pf.que15.tso_tx: 0
>> > dev.ixl.1.pf.que15.irqs: 238145862
>> > dev.ixl.1.pf.que15.dropped: 0
>> > dev.ixl.1.pf.que15.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que14.rx_bytes: 301934533473
>> > dev.ixl.1.pf.que14.rx_packets: 298452930
>> > dev.ixl.1.pf.que14.tx_bytes: 111420393725
>> > dev.ixl.1.pf.que14.tx_packets: 215722532
>> > dev.ixl.1.pf.que14.no_desc_avail: 0
>> > dev.ixl.1.pf.que14.tx_dma_setup: 0
>> > dev.ixl.1.pf.que14.tso_tx: 0
>> > dev.ixl.1.pf.que14.irqs: 256291617
>> > dev.ixl.1.pf.que14.dropped: 0
>> > dev.ixl.1.pf.que14.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que13.rx_bytes: 291380746253
>> > dev.ixl.1.pf.que13.rx_packets: 273037957
>> > dev.ixl.1.pf.que13.tx_bytes: 112417776222
>> > dev.ixl.1.pf.que13.tx_packets: 217500943
>> > dev.ixl.1.pf.que13.no_desc_avail: 0
>> > dev.ixl.1.pf.que13.tx_dma_setup: 0
>> > dev.ixl.1.pf.que13.tso_tx: 0
>> > dev.ixl.1.pf.que13.irqs: 241422331
>> > dev.ixl.1.pf.que13.dropped: 0
>> > dev.ixl.1.pf.que13.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que12.rx_bytes: 301105585425
>> > dev.ixl.1.pf.que12.rx_packets: 286137817
>> > dev.ixl.1.pf.que12.tx_bytes: 95851784579
>> > dev.ixl.1.pf.que12.tx_packets: 199715765
>> > dev.ixl.1.pf.que12.no_desc_avail: 0
>> > dev.ixl.1.pf.que12.tx_dma_setup: 0
>> > dev.ixl.1.pf.que12.tso_tx: 0
>> > dev.ixl.1.pf.que12.irqs: 247322880
>> > dev.ixl.1.pf.que12.dropped: 0
>> > dev.ixl.1.pf.que12.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que11.rx_bytes: 307105398143
>> > dev.ixl.1.pf.que11.rx_packets: 281046463
>> > dev.ixl.1.pf.que11.tx_bytes: 110710957789
>> > dev.ixl.1.pf.que11.tx_packets: 211784031
>> > dev.ixl.1.pf.que11.no_desc_avail: 0
>> > dev.ixl.1.pf.que11.tx_dma_setup: 0
>> > dev.ixl.1.pf.que11.tso_tx: 0
>> > dev.ixl.1.pf.que11.irqs: 256987179
>> > dev.ixl.1.pf.que11.dropped: 0
>> > dev.ixl.1.pf.que11.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que10.rx_bytes: 304288000453
>> > dev.ixl.1.pf.que10.rx_packets: 278987858
>> > dev.ixl.1.pf.que10.tx_bytes: 93022244338
>> > dev.ixl.1.pf.que10.tx_packets: 195869210
>> > dev.ixl.1.pf.que10.no_desc_avail: 0
>> > dev.ixl.1.pf.que10.tx_dma_setup: 0
>> > dev.ixl.1.pf.que10.tso_tx: 0
>> > dev.ixl.1.pf.que10.irqs: 253622192
>> > dev.ixl.1.pf.que10.dropped: 0
>> > dev.ixl.1.pf.que10.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que9.rx_bytes: 320340203822
>> > dev.ixl.1.pf.que9.rx_packets: 302309010
>> > dev.ixl.1.pf.que9.tx_bytes: 116604776460
>> > dev.ixl.1.pf.que9.tx_packets: 223949025
>> > dev.ixl.1.pf.que9.no_desc_avail: 0
>> > dev.ixl.1.pf.que9.tx_dma_setup: 0
>> > dev.ixl.1.pf.que9.tso_tx: 0
>> > dev.ixl.1.pf.que9.irqs: 271165440
>> > dev.ixl.1.pf.que9.dropped: 0
>> > dev.ixl.1.pf.que9.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que8.rx_bytes: 291403725592
>> > dev.ixl.1.pf.que8.rx_packets: 267859568
>> > dev.ixl.1.pf.que8.tx_bytes: 205745654558
>> > dev.ixl.1.pf.que8.tx_packets: 443349835
>> > dev.ixl.1.pf.que8.no_desc_avail: 0
>> > dev.ixl.1.pf.que8.tx_dma_setup: 0
>> > dev.ixl.1.pf.que8.tso_tx: 0
>> > dev.ixl.1.pf.que8.irqs: 254116755
>> > dev.ixl.1.pf.que8.dropped: 0
>> > dev.ixl.1.pf.que8.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que7.rx_bytes: 673363127346
>> > dev.ixl.1.pf.que7.rx_packets: 617269774
>> > dev.ixl.1.pf.que7.tx_bytes: 203162891886
>> > dev.ixl.1.pf.que7.tx_packets: 443709339
>> > dev.ixl.1.pf.que7.no_desc_avail: 0
>> > dev.ixl.1.pf.que7.tx_dma_setup: 0
>> > dev.ixl.1.pf.que7.tso_tx: 0
>> > dev.ixl.1.pf.que7.irqs: 424706771
>> > dev.ixl.1.pf.que7.dropped: 0
>> > dev.ixl.1.pf.que7.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que6.rx_bytes: 644709094218
>> > dev.ixl.1.pf.que6.rx_packets: 601892919
>> > dev.ixl.1.pf.que6.tx_bytes: 221661735032
>> > dev.ixl.1.pf.que6.tx_packets: 460127064
>> > dev.ixl.1.pf.que6.no_desc_avail: 0
>> > dev.ixl.1.pf.que6.tx_dma_setup: 0
>> > dev.ixl.1.pf.que6.tso_tx: 0
>> > dev.ixl.1.pf.que6.irqs: 417748074
>> > dev.ixl.1.pf.que6.dropped: 0
>> > dev.ixl.1.pf.que6.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que5.rx_bytes: 661904432231
>> > dev.ixl.1.pf.que5.rx_packets: 622012837
>> > dev.ixl.1.pf.que5.tx_bytes: 230514282876
>> > dev.ixl.1.pf.que5.tx_packets: 458571100
>> > dev.ixl.1.pf.que5.no_desc_avail: 0
>> > dev.ixl.1.pf.que5.tx_dma_setup: 0
>> > dev.ixl.1.pf.que5.tso_tx: 0
>> > dev.ixl.1.pf.que5.irqs: 422305039
>> > dev.ixl.1.pf.que5.dropped: 0
>> > dev.ixl.1.pf.que5.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que4.rx_bytes: 653522179234
>> > dev.ixl.1.pf.que4.rx_packets: 603345546
>> > dev.ixl.1.pf.que4.tx_bytes: 216761219483
>> > dev.ixl.1.pf.que4.tx_packets: 450329641
>> > dev.ixl.1.pf.que4.no_desc_avail: 0
>> > dev.ixl.1.pf.que4.tx_dma_setup: 0
>> > dev.ixl.1.pf.que4.tso_tx: 3
>> > dev.ixl.1.pf.que4.irqs: 416920533
>> > dev.ixl.1.pf.que4.dropped: 0
>> > dev.ixl.1.pf.que4.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que3.rx_bytes: 676494225882
>> > dev.ixl.1.pf.que3.rx_packets: 620605168
>> > dev.ixl.1.pf.que3.tx_bytes: 233854020454
>> > dev.ixl.1.pf.que3.tx_packets: 464425616
>> > dev.ixl.1.pf.que3.no_desc_avail: 0
>> > dev.ixl.1.pf.que3.tx_dma_setup: 0
>> > dev.ixl.1.pf.que3.tso_tx: 0
>> > dev.ixl.1.pf.que3.irqs: 426349030
>> > dev.ixl.1.pf.que3.dropped: 0
>> > dev.ixl.1.pf.que3.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que2.rx_bytes: 677779337711
>> > dev.ixl.1.pf.que2.rx_packets: 620883699
>> > dev.ixl.1.pf.que2.tx_bytes: 211297141668
>> > dev.ixl.1.pf.que2.tx_packets: 450501525
>> > dev.ixl.1.pf.que2.no_desc_avail: 0
>> > dev.ixl.1.pf.que2.tx_dma_setup: 0
>> > dev.ixl.1.pf.que2.tso_tx: 0
>> > dev.ixl.1.pf.que2.irqs: 433146278
>> > dev.ixl.1.pf.que2.dropped: 0
>> > dev.ixl.1.pf.que2.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que1.rx_bytes: 661360798018
>> > dev.ixl.1.pf.que1.rx_packets: 619700636
>> > dev.ixl.1.pf.que1.tx_bytes: 238264220772
>> > dev.ixl.1.pf.que1.tx_packets: 473425354
>> > dev.ixl.1.pf.que1.no_desc_avail: 0
>> > dev.ixl.1.pf.que1.tx_dma_setup: 0
>> > dev.ixl.1.pf.que1.tso_tx: 0
>> > dev.ixl.1.pf.que1.irqs: 437959829
>> > dev.ixl.1.pf.que1.dropped: 0
>> > dev.ixl.1.pf.que1.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.que0.rx_bytes: 685201226330
>> > dev.ixl.1.pf.que0.rx_packets: 637772348
>> > dev.ixl.1.pf.que0.tx_bytes: 124808
>> > dev.ixl.1.pf.que0.tx_packets: 1782
>> > dev.ixl.1.pf.que0.no_desc_avail: 0
>> > dev.ixl.1.pf.que0.tx_dma_setup: 0
>> > dev.ixl.1.pf.que0.tso_tx: 0
>> > dev.ixl.1.pf.que0.irqs: 174905480
>> > dev.ixl.1.pf.que0.dropped: 0
>> > dev.ixl.1.pf.que0.mbuf_defrag_failed: 0
>> > dev.ixl.1.pf.bcast_pkts_txd: 277
>> > dev.ixl.1.pf.mcast_pkts_txd: 0
>> > dev.ixl.1.pf.ucast_pkts_txd: 5319743945
>> > dev.ixl.1.pf.good_octets_txd: 2613178367282
>> > dev.ixl.1.pf.rx_discards: 0
>> > dev.ixl.1.pf.bcast_pkts_rcvd: 1
>> > dev.ixl.1.pf.mcast_pkts_rcvd: 0
>> > dev.ixl.1.pf.ucast_pkts_rcvd: 7203747890
>> > dev.ixl.1.pf.good_octets_rcvd: 7770230490224
>> > dev.ixl.1.vc_debug_level: 1
>> > dev.ixl.1.admin_irq: 0
>> > dev.ixl.1.watchdog_events: 0
>> > dev.ixl.1.debug: 0
>> > dev.ixl.1.dynamic_tx_itr: 0
>> > dev.ixl.1.tx_itr: 122
>> > dev.ixl.1.dynamic_rx_itr: 0
>> > dev.ixl.1.rx_itr: 62
>> > dev.ixl.1.fw_version: f4.33 a1.2 n04.42 e8000191d
>> > dev.ixl.1.current_speed: 10G
>> > dev.ixl.1.advertise_speed: 0
>> > dev.ixl.1.fc: 0
>> > dev.ixl.1.%parent: pci129
>> > dev.ixl.1.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000
>> > dev.ixl.1.%location: slot=0 function=1 handle=\_SB_.PCI1.QR3A.H001
>> > dev.ixl.1.%driver: ixl
>> > dev.ixl.1.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0
>> > dev.ixl.0.mac.xoff_recvd: 0
>> > dev.ixl.0.mac.xoff_txd: 0
>> > dev.ixl.0.mac.xon_recvd: 0
>> > dev.ixl.0.mac.xon_txd: 0
>> > dev.ixl.0.mac.tx_frames_big: 0
>> > dev.ixl.0.mac.tx_frames_1024_1522: 4961134019
>> > dev.ixl.0.mac.tx_frames_512_1023: 113082136
>> > dev.ixl.0.mac.tx_frames_256_511: 123538450
>> > dev.ixl.0.mac.tx_frames_128_255: 185051082
>> > dev.ixl.0.mac.tx_frames_65_127: 1332798493
>> > dev.ixl.0.mac.tx_frames_64: 243338964
>> > dev.ixl.0.mac.checksum_errors: 0
>> > dev.ixl.0.mac.rx_jabber: 0
>> > dev.ixl.0.mac.rx_oversized: 0
>> > dev.ixl.0.mac.rx_fragmented: 0
>> > dev.ixl.0.mac.rx_undersize: 0
>> > dev.ixl.0.mac.rx_frames_big: 0
>> > dev.ixl.0.mac.rx_frames_1024_1522: 1566499069
>> > dev.ixl.0.mac.rx_frames_512_1023: 101390143
>> > dev.ixl.0.mac.rx_frames_256_511: 49831970
>> > dev.ixl.0.mac.rx_frames_128_255: 231738168
>> > dev.ixl.0.mac.rx_frames_65_127: 2123185819
>> > dev.ixl.0.mac.rx_frames_64: 1320404300
>> > dev.ixl.0.mac.rx_length_errors: 0
>> > dev.ixl.0.mac.remote_faults: 0
>> > dev.ixl.0.mac.local_faults: 0
>> > dev.ixl.0.mac.illegal_bytes: 0
>> > dev.ixl.0.mac.crc_errors: 0
>> > dev.ixl.0.mac.bcast_pkts_txd: 302
>> > dev.ixl.0.mac.mcast_pkts_txd: 33965
>> > dev.ixl.0.mac.ucast_pkts_txd: 6958908862
>> > dev.ixl.0.mac.good_octets_txd: 7698936138858
>> > dev.ixl.0.mac.rx_discards: 0
>> > dev.ixl.0.mac.bcast_pkts_rcvd: 1
>> > dev.ixl.0.mac.mcast_pkts_rcvd: 49693
>> > dev.ixl.0.mac.ucast_pkts_rcvd: 5392999771
>> > dev.ixl.0.mac.good_octets_rcvd: 2648906893811
>> > dev.ixl.0.pf.que23.rx_bytes: 0
>> > dev.ixl.0.pf.que23.rx_packets: 0
>> > dev.ixl.0.pf.que23.tx_bytes: 2371273
>> > dev.ixl.0.pf.que23.tx_packets: 7313
>> > dev.ixl.0.pf.que23.no_desc_avail: 0
>> > dev.ixl.0.pf.que23.tx_dma_setup: 0
>> > dev.ixl.0.pf.que23.tso_tx: 0
>> > dev.ixl.0.pf.que23.irqs: 7313
>> > dev.ixl.0.pf.que23.dropped: 0
>> > dev.ixl.0.pf.que23.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que22.rx_bytes: 0
>> > dev.ixl.0.pf.que22.rx_packets: 0
>> > dev.ixl.0.pf.que22.tx_bytes: 1908468
>> > dev.ixl.0.pf.que22.tx_packets: 6626
>> > dev.ixl.0.pf.que22.no_desc_avail: 0
>> > dev.ixl.0.pf.que22.tx_dma_setup: 0
>> > dev.ixl.0.pf.que22.tso_tx: 0
>> > dev.ixl.0.pf.que22.irqs: 6627
>> > dev.ixl.0.pf.que22.dropped: 0
>> > dev.ixl.0.pf.que22.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que21.rx_bytes: 0
>> > dev.ixl.0.pf.que21.rx_packets: 0
>> > dev.ixl.0.pf.que21.tx_bytes: 2092668
>> > dev.ixl.0.pf.que21.tx_packets: 6739
>> > dev.ixl.0.pf.que21.no_desc_avail: 0
>> > dev.ixl.0.pf.que21.tx_dma_setup: 0
>> > dev.ixl.0.pf.que21.tso_tx: 0
>> > dev.ixl.0.pf.que21.irqs: 6728
>> > dev.ixl.0.pf.que21.dropped: 0
>> > dev.ixl.0.pf.que21.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que20.rx_bytes: 0
>> > dev.ixl.0.pf.que20.rx_packets: 0
>> > dev.ixl.0.pf.que20.tx_bytes: 1742176
>> > dev.ixl.0.pf.que20.tx_packets: 6246
>> > dev.ixl.0.pf.que20.no_desc_avail: 0
>> > dev.ixl.0.pf.que20.tx_dma_setup: 0
>> > dev.ixl.0.pf.que20.tso_tx: 0
>> > dev.ixl.0.pf.que20.irqs: 6249
>> > dev.ixl.0.pf.que20.dropped: 0
>> > dev.ixl.0.pf.que20.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que19.rx_bytes: 0
>> > dev.ixl.0.pf.que19.rx_packets: 0
>> > dev.ixl.0.pf.que19.tx_bytes: 2102284
>> > dev.ixl.0.pf.que19.tx_packets: 6979
>> > dev.ixl.0.pf.que19.no_desc_avail: 0
>> > dev.ixl.0.pf.que19.tx_dma_setup: 0
>> > dev.ixl.0.pf.que19.tso_tx: 0
>> > dev.ixl.0.pf.que19.irqs: 6979
>> > dev.ixl.0.pf.que19.dropped: 0
>> > dev.ixl.0.pf.que19.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que18.rx_bytes: 0
>> > dev.ixl.0.pf.que18.rx_packets: 0
>> > dev.ixl.0.pf.que18.tx_bytes: 1532360
>> > dev.ixl.0.pf.que18.tx_packets: 5588
>> > dev.ixl.0.pf.que18.no_desc_avail: 0
>> > dev.ixl.0.pf.que18.tx_dma_setup: 0
>> > dev.ixl.0.pf.que18.tso_tx: 0
>> > dev.ixl.0.pf.que18.irqs: 5588
>> > dev.ixl.0.pf.que18.dropped: 0
>> > dev.ixl.0.pf.que18.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que17.rx_bytes: 0
>> > dev.ixl.0.pf.que17.rx_packets: 0
>> > dev.ixl.0.pf.que17.tx_bytes: 1809684
>> > dev.ixl.0.pf.que17.tx_packets: 6136
>> > dev.ixl.0.pf.que17.no_desc_avail: 0
>> > dev.ixl.0.pf.que17.tx_dma_setup: 0
>> > dev.ixl.0.pf.que17.tso_tx: 0
>> > dev.ixl.0.pf.que17.irqs: 6136
>> > dev.ixl.0.pf.que17.dropped: 0
>> > dev.ixl.0.pf.que17.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que16.rx_bytes: 0
>> > dev.ixl.0.pf.que16.rx_packets: 0
>> > dev.ixl.0.pf.que16.tx_bytes: 286836299105
>> > dev.ixl.0.pf.que16.tx_packets: 263532601
>> > dev.ixl.0.pf.que16.no_desc_avail: 0
>> > dev.ixl.0.pf.que16.tx_dma_setup: 0
>> > dev.ixl.0.pf.que16.tso_tx: 0
>> > dev.ixl.0.pf.que16.irqs: 83232941
>> > dev.ixl.0.pf.que16.dropped: 0
>> > dev.ixl.0.pf.que16.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que15.rx_bytes: 106345323488
>> > dev.ixl.0.pf.que15.rx_packets: 208869912
>> > dev.ixl.0.pf.que15.tx_bytes: 298825179301
>> > dev.ixl.0.pf.que15.tx_packets: 288517504
>> > dev.ixl.0.pf.que15.no_desc_avail: 0
>> > dev.ixl.0.pf.que15.tx_dma_setup: 0
>> > dev.ixl.0.pf.que15.tso_tx: 0
>> > dev.ixl.0.pf.que15.irqs: 223322408
>> > dev.ixl.0.pf.que15.dropped: 0
>> > dev.ixl.0.pf.que15.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que14.rx_bytes: 106721900547
>> > dev.ixl.0.pf.que14.rx_packets: 208566121
>> > dev.ixl.0.pf.que14.tx_bytes: 288657751920
>> > dev.ixl.0.pf.que14.tx_packets: 263556000
>> > dev.ixl.0.pf.que14.no_desc_avail: 0
>> > dev.ixl.0.pf.que14.tx_dma_setup: 0
>> > dev.ixl.0.pf.que14.tso_tx: 0
>> > dev.ixl.0.pf.que14.irqs: 220377537
>> > dev.ixl.0.pf.que14.dropped: 0
>> > dev.ixl.0.pf.que14.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que13.rx_bytes: 111978971378
>> > dev.ixl.0.pf.que13.rx_packets: 218447354
>> > dev.ixl.0.pf.que13.tx_bytes: 298439860675
>> > dev.ixl.0.pf.que13.tx_packets: 276806617
>> > dev.ixl.0.pf.que13.no_desc_avail: 0
>> > dev.ixl.0.pf.que13.tx_dma_setup: 0
>> > dev.ixl.0.pf.que13.tso_tx: 0
>> > dev.ixl.0.pf.que13.irqs: 227474625
>> > dev.ixl.0.pf.que13.dropped: 0
>> > dev.ixl.0.pf.que13.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que12.rx_bytes: 112969704706
>> > dev.ixl.0.pf.que12.rx_packets: 220275562
>> > dev.ixl.0.pf.que12.tx_bytes: 304750620079
>> > dev.ixl.0.pf.que12.tx_packets: 272244483
>> > dev.ixl.0.pf.que12.no_desc_avail: 0
>> > dev.ixl.0.pf.que12.tx_dma_setup: 0
>> > dev.ixl.0.pf.que12.tso_tx: 183
>> > dev.ixl.0.pf.que12.irqs: 230111291
>> > dev.ixl.0.pf.que12.dropped: 0
>> > dev.ixl.0.pf.que12.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que11.rx_bytes: 96405343036
>> > dev.ixl.0.pf.que11.rx_packets: 202329448
>> > dev.ixl.0.pf.que11.tx_bytes: 302481707696
>> > dev.ixl.0.pf.que11.tx_packets: 271689246
>> > dev.ixl.0.pf.que11.no_desc_avail: 0
>> > dev.ixl.0.pf.que11.tx_dma_setup: 0
>> > dev.ixl.0.pf.que11.tso_tx: 0
>> > dev.ixl.0.pf.que11.irqs: 220717612
>> > dev.ixl.0.pf.que11.dropped: 0
>> > dev.ixl.0.pf.que11.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que10.rx_bytes: 111280008670
>> > dev.ixl.0.pf.que10.rx_packets: 214900261
>> > dev.ixl.0.pf.que10.tx_bytes: 318638566198
>> > dev.ixl.0.pf.que10.tx_packets: 295011389
>> > dev.ixl.0.pf.que10.no_desc_avail: 0
>> > dev.ixl.0.pf.que10.tx_dma_setup: 0
>> > dev.ixl.0.pf.que10.tso_tx: 0
>> > dev.ixl.0.pf.que10.irqs: 230681709
>> > dev.ixl.0.pf.que10.dropped: 0
>> > dev.ixl.0.pf.que10.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que9.rx_bytes: 93566025126
>> > dev.ixl.0.pf.que9.rx_packets: 198726483
>> > dev.ixl.0.pf.que9.tx_bytes: 288858818348
>> > dev.ixl.0.pf.que9.tx_packets: 258926864
>> > dev.ixl.0.pf.que9.no_desc_avail: 0
>> > dev.ixl.0.pf.que9.tx_dma_setup: 0
>> > dev.ixl.0.pf.que9.tso_tx: 0
>> > dev.ixl.0.pf.que9.irqs: 217918160
>> > dev.ixl.0.pf.que9.dropped: 0
>> > dev.ixl.0.pf.que9.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que8.rx_bytes: 117169019041
>> > dev.ixl.0.pf.que8.rx_packets: 226938172
>> > dev.ixl.0.pf.que8.tx_bytes: 665794492752
>> > dev.ixl.0.pf.que8.tx_packets: 593519436
>> > dev.ixl.0.pf.que8.no_desc_avail: 0
>> > dev.ixl.0.pf.que8.tx_dma_setup: 0
>> > dev.ixl.0.pf.que8.tso_tx: 0
>> > dev.ixl.0.pf.que8.irqs: 244643578
>> > dev.ixl.0.pf.que8.dropped: 0
>> > dev.ixl.0.pf.que8.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que7.rx_bytes: 206974266022
>> > dev.ixl.0.pf.que7.rx_packets: 449899895
>> > dev.ixl.0.pf.que7.tx_bytes: 638527685820
>> > dev.ixl.0.pf.que7.tx_packets: 580750916
>> > dev.ixl.0.pf.que7.no_desc_avail: 0
>> > dev.ixl.0.pf.que7.tx_dma_setup: 0
>> > dev.ixl.0.pf.que7.tso_tx: 0
>> > dev.ixl.0.pf.que7.irqs: 391760959
>> > dev.ixl.0.pf.que7.dropped: 0
>> > dev.ixl.0.pf.que7.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que6.rx_bytes: 204373984670
>> > dev.ixl.0.pf.que6.rx_packets: 449990985
>> > dev.ixl.0.pf.que6.tx_bytes: 655511068125
>> > dev.ixl.0.pf.que6.tx_packets: 600735086
>> > dev.ixl.0.pf.que6.no_desc_avail: 0
>> > dev.ixl.0.pf.que6.tx_dma_setup: 0
>> > dev.ixl.0.pf.que6.tso_tx: 0
>> > dev.ixl.0.pf.que6.irqs: 394961024
>> > dev.ixl.0.pf.que6.dropped: 0
>> > dev.ixl.0.pf.que6.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que5.rx_bytes: 222919535872
>> > dev.ixl.0.pf.que5.rx_packets: 466659705
>> > dev.ixl.0.pf.que5.tx_bytes: 647689764751
>> > dev.ixl.0.pf.que5.tx_packets: 582532691
>> > dev.ixl.0.pf.que5.no_desc_avail: 0
>> > dev.ixl.0.pf.que5.tx_dma_setup: 0
>> > dev.ixl.0.pf.que5.tso_tx: 5
>> > dev.ixl.0.pf.que5.irqs: 404552229
>> > dev.ixl.0.pf.que5.dropped: 0
>> > dev.ixl.0.pf.que5.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que4.rx_bytes: 231706806551
>> > dev.ixl.0.pf.que4.rx_packets: 464397112
>> > dev.ixl.0.pf.que4.tx_bytes: 669945424739
>> > dev.ixl.0.pf.que4.tx_packets: 598527594
>> > dev.ixl.0.pf.que4.no_desc_avail: 0
>> > dev.ixl.0.pf.que4.tx_dma_setup: 0
>> > dev.ixl.0.pf.que4.tso_tx: 452
>> > dev.ixl.0.pf.que4.irqs: 405018727
>> > dev.ixl.0.pf.que4.dropped: 0
>> > dev.ixl.0.pf.que4.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que3.rx_bytes: 217942511336
>> > dev.ixl.0.pf.que3.rx_packets: 456454137
>> > dev.ixl.0.pf.que3.tx_bytes: 674027217503
>> > dev.ixl.0.pf.que3.tx_packets: 604815959
>> > dev.ixl.0.pf.que3.no_desc_avail: 0
>> > dev.ixl.0.pf.que3.tx_dma_setup: 0
>> > dev.ixl.0.pf.que3.tso_tx: 0
>> > dev.ixl.0.pf.que3.irqs: 399890434
>> > dev.ixl.0.pf.que3.dropped: 0
>> > dev.ixl.0.pf.que3.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que2.rx_bytes: 235057952930
>> > dev.ixl.0.pf.que2.rx_packets: 470668205
>> > dev.ixl.0.pf.que2.tx_bytes: 653598762323
>> > dev.ixl.0.pf.que2.tx_packets: 595468539
>> > dev.ixl.0.pf.que2.no_desc_avail: 0
>> > dev.ixl.0.pf.que2.tx_dma_setup: 0
>> > dev.ixl.0.pf.que2.tso_tx: 0
>> > dev.ixl.0.pf.que2.irqs: 410972406
>> > dev.ixl.0.pf.que2.dropped: 0
>> > dev.ixl.0.pf.que2.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que1.rx_bytes: 212570053522
>> > dev.ixl.0.pf.que1.rx_packets: 456981561
>> > dev.ixl.0.pf.que1.tx_bytes: 677227126330
>> > dev.ixl.0.pf.que1.tx_packets: 612428010
>> > dev.ixl.0.pf.que1.no_desc_avail: 0
>> > dev.ixl.0.pf.que1.tx_dma_setup: 0
>> > dev.ixl.0.pf.que1.tso_tx: 0
>> > dev.ixl.0.pf.que1.irqs: 404727745
>> > dev.ixl.0.pf.que1.dropped: 0
>> > dev.ixl.0.pf.que1.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.que0.rx_bytes: 239424279142
>> > dev.ixl.0.pf.que0.rx_packets: 479078356
>> > dev.ixl.0.pf.que0.tx_bytes: 513283
>> > dev.ixl.0.pf.que0.tx_packets: 3990
>> > dev.ixl.0.pf.que0.no_desc_avail: 0
>> > dev.ixl.0.pf.que0.tx_dma_setup: 0
>> > dev.ixl.0.pf.que0.tso_tx: 0
>> > dev.ixl.0.pf.que0.irqs: 178414974
>> > dev.ixl.0.pf.que0.dropped: 0
>> > dev.ixl.0.pf.que0.mbuf_defrag_failed: 0
>> > dev.ixl.0.pf.bcast_pkts_txd: 302
>> > dev.ixl.0.pf.mcast_pkts_txd: 33965
>> > dev.ixl.0.pf.ucast_pkts_txd: 6958908879
>> > dev.ixl.0.pf.good_octets_txd: 7669637462330
>> > dev.ixl.0.pf.rx_discards: 0
>> > dev.ixl.0.pf.bcast_pkts_rcvd: 1
>> > dev.ixl.0.pf.mcast_pkts_rcvd: 49549
>> > dev.ixl.0.pf.ucast_pkts_rcvd: 5392999777
>> > dev.ixl.0.pf.good_octets_rcvd: 2648906886817
>> > dev.ixl.0.vc_debug_level: 1
>> > dev.ixl.0.admin_irq: 0
>> > dev.ixl.0.watchdog_events: 0
>> > dev.ixl.0.debug: 0
>> > dev.ixl.0.dynamic_tx_itr: 0
>> > dev.ixl.0.tx_itr: 122
>> > dev.ixl.0.dynamic_rx_itr: 0
>> > dev.ixl.0.rx_itr: 62
>> > dev.ixl.0.fw_version: f4.33 a1.2 n04.42 e8000191d
>> > dev.ixl.0.current_speed: 10G
>> > dev.ixl.0.advertise_speed: 0
>> > dev.ixl.0.fc: 0
>> > dev.ixl.0.%parent: pci129
>> > dev.ixl.0.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0002 class=0x020000
>> > dev.ixl.0.%location: slot=0 function=0 handle=\_SB_.PCI1.QR3A.H000
>> > dev.ixl.0.%driver: ixl
>> > dev.ixl.0.%desc: Intel(R) Ethernet Connection XL710 Driver, Version - 1.4.0
>> > dev.ixl.%parent:
>>
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to
"freebsd-net-unsubscribe@freebsd.org"

From owner-freebsd-net@freebsd.org Wed Aug 19 20:01:30 2015
From: David Wolfskill
Date: Wed, 19 Aug 2015 13:01:24 -0700
To: stable@freebsd.org, net@freebsd.org, wireless@freebsd.org
Cc: Adrian Chadd
Subject: Re: Panic [page fault] in _ieee80211_crypto_delkey(): stable/10/amd64 @r286878
Message-ID: <20150819200124.GR63584@albert.catwhisker.org>

On Wed, Aug 19, 2015 at 12:25:38PM -0700, Adrian Chadd wrote:
> ... But we definitely have enough to put into a PR..
> ...

Bug 202494 - Panic [page fault] in _ieee80211_crypto_delkey()

Peace,
david
--
David H. Wolfskill                              david@catwhisker.org
Those who would murder in the name of God or prophet are blasphemous cowards.
See http://www.catwhisker.org/~david/publickey.gpg for my public key.
From owner-freebsd-net@freebsd.org Wed Aug 19 20:16:14 2015
From: Eric Joyner
Date: Wed, 19 Aug 2015 20:16:02 +0000
To: Adrian Chadd, Evgeny Khorokhorin
Cc: hiren panchasara, FreeBSD Net
Subject: Re: FreeBSD 10.2-STABLE + Intel XL710 - free queues

Yeah; it should be able to do up to 64 queues for the PF's. It's possible for the NVM to limit the RSS table size and entry width, but that seems unlikely.
- Eric On Wed, Aug 19, 2015 at 12:41 PM Adrian Chadd wrote: > no, it's not the RSS option - it's the RSS configuration in the NIC > for steering traffic into different queues based on header contents. > > The RSS kernel option includes the framework that ties it all together > into the network stack - if you don't use it (which is the default), > the NICs are free to do whatever they want and there's no affinity in > the network stack. > > Eric - does the intel driver / hardware here support receive traffic > distribution into > 16 queues? > > > > -adrian > > > On 19 August 2015 at 12:36, Evgeny Khorokhorin wrote: > > Eric, > > I updated this driver in kernel, not as module. And I removed > > #include "opt_rss.h" > > > > from if_ixl.c and ixl_txrx.c: > > > > #ifndef IXL_STANDALONE_BUILD > > #include "opt_inet.h" > > #include "opt_inet6.h" > > #include "opt_rss.h" > > #endif > > > > because RSS is only in HEAD > > Could I break something by doing this? > > > > Best regards, > > Evgeny Khorokhorin > > > > 19.08.2015 21:17, Eric Joyner writes: > >> > >> The IXLV_MAX_QUEUES value is for the VF driver; the standard driver > should > >> be able to allocate and properly use up to 64 queues. > >> > >> That said, you're only getting rx traffic on the first 16 queues, so > that > >> looks like a bug in the driver. I'll take a look at it. > >> > >> - Eric > >> > >> On Wed, Aug 19, 2015 at 11:00 AM hiren panchasara > >> > wrote: > >> > >> On 08/19/15 at 05:43P, Evgeny Khorokhorin wrote: > >> > Hi All, > >> > > >> > FreeBSD 10.2-STABLE > >> > 2*CPU Intel E5-2643v3 with HyperThreading enabled > >> > Intel XL710 network adapter > >> > I updated the ixl driver to version 1.4.0 from > >> download.intel.com > >> > >> > Every ixl interface creates 24 queues (6 cores *2 HT *2 CPUs) but > >> > utilizes only 16-17 of them. What is the reason for this behavior, > >> > or is it a driver bug?
> >> > >> Not sure what is the h/w limit but this may be a possible cause: > >> #define IXLV_MAX_QUEUES 16 > >> in sys/dev/ixl/ixlv.h > >> > >> and ixlv_init_msix() doing: > >> if (queues > IXLV_MAX_QUEUES) > >> queues = IXLV_MAX_QUEUES; > >> > >> Adding eric from intel to confirm. > >> > >> Cheers, > >> Hiren > >> > > >> > irq284: ixl0:q0 177563088 2054 > >> > irq285: ixl0:q1 402668179 4659 > >> > irq286: ixl0:q2 408885088 4731 > >> > irq287: ixl0:q3 397744300 4602 > >> > irq288: ixl0:q4 403040766 4663 > >> > irq289: ixl0:q5 402499314 4657 > >> > irq290: ixl0:q6 392693663 4543 > >> > irq291: ixl0:q7 389364966 4505 > >> > irq292: ixl0:q8 243244346 2814 > >> > irq293: ixl0:q9 216834450 2509 > >> > irq294: ixl0:q10 229460056 2655 > >> > irq295: ixl0:q11 219591953 2540 > >> > irq296: ixl0:q12 228944960 2649 > >> > irq297: ixl0:q13 226385454 2619 > >> > irq298: ixl0:q14 219174953 2536 > >> > irq299: ixl0:q15 222151378 2570 > >> > irq300: ixl0:q16 82799713 958 > >> > irq301: ixl0:q17 6131 0 > >> > irq302: ixl0:q18 5586 0 > >> > irq303: ixl0:q19 6975 0 > >> > irq304: ixl0:q20 6243 0 > >> > irq305: ixl0:q21 6729 0 > >> > irq306: ixl0:q22 6623 0 > >> > irq307: ixl0:q23 7306 0 > >> > irq309: ixl1:q0 174074462 2014 > >> > irq310: ixl1:q1 435716449 5041 > >> > irq311: ixl1:q2 431030443 4987 > >> > irq312: ixl1:q3 424156413 4907 > >> > irq313: ixl1:q4 414791657 4799 > >> > irq314: ixl1:q5 420260382 4862 > >> > irq315: ixl1:q6 415645708 4809 > >> > irq316: ixl1:q7 422783859 4892 > >> > irq317: ixl1:q8 252737383 2924 > >> > irq318: ixl1:q9 269655708 3120 > >> > irq319: ixl1:q10 252397826 2920 > >> > irq320: ixl1:q11 255649144 2958 > >> > irq321: ixl1:q12 246025621 2846 > >> > irq322: ixl1:q13 240176554 2779 > >> > irq323: ixl1:q14 254882418 2949 > >> > irq324: ixl1:q15 236846536 2740 > >> > irq325: ixl1:q16 86794467 1004 > >> > irq326: ixl1:q17 83 0 > >> > irq327: ixl1:q18 74 0 > >> > irq328: ixl1:q19 202 0 > >> > irq329: ixl1:q20 99 0 > >> > irq330: ixl1:q21 96 0 > >> >
irq331: ixl1:q22 91 0 > >> > irq332: ixl1:q23 89 0 > >> > > >> > last pid: 28710; load averages: 7.16, 6.76, 6.49 up > >> 1+00:00:41 17:40:46 > >> > 391 processes: 32 running, 215 sleeping, 144 waiting > >> > CPU 0: 0.0% user, 0.0% nice, 0.0% system, 49.2% interrupt, > >> 50.8% idle > >> > CPU 1: 0.0% user, 0.0% nice, 0.4% system, 41.3% interrupt, > >> 58.3% idle > >> > CPU 2: 0.0% user, 0.0% nice, 0.0% system, 39.0% interrupt, > >> 61.0% idle > >> > CPU 3: 0.0% user, 0.0% nice, 0.0% system, 46.5% interrupt, > >> 53.5% idle > >> > CPU 4: 0.0% user, 0.0% nice, 0.0% system, 37.4% interrupt, > >> 62.6% idle > >> > CPU 5: 0.0% user, 0.0% nice, 0.0% system, 40.9% interrupt, > >> 59.1% idle > >> > CPU 6: 0.0% user, 0.0% nice, 0.0% system, 40.2% interrupt, > >> 59.8% idle > >> > CPU 7: 0.0% user, 0.0% nice, 0.0% system, 45.3% interrupt, > >> 54.7% idle > >> > CPU 8: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, > >> 79.5% idle > >> > CPU 9: 0.0% user, 0.0% nice, 0.0% system, 25.2% interrupt, > >> 74.8% idle > >> > CPU 10: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, > >> 76.8% idle > >> > CPU 11: 0.0% user, 0.0% nice, 0.0% system, 19.3% interrupt, > >> 80.7% idle > >> > CPU 12: 0.0% user, 0.0% nice, 0.0% system, 28.7% interrupt, > >> 71.3% idle > >> > CPU 13: 0.0% user, 0.0% nice, 0.0% system, 20.5% interrupt, > >> 79.5% idle > >> > CPU 14: 0.0% user, 0.0% nice, 0.0% system, 35.0% interrupt, > >> 65.0% idle > >> > CPU 15: 0.0% user, 0.0% nice, 0.0% system, 23.2% interrupt, > >> 76.8% idle > >> > CPU 16: 0.0% user, 0.0% nice, 0.4% system, 1.2% interrupt, > >> 98.4% idle > >> > CPU 17: 0.0% user, 0.0% nice, 2.0% system, 0.0% interrupt, > >> 98.0% idle > >> > CPU 18: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, > >> 97.6% idle > >> > CPU 19: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, > >> 97.2% idle > >> > CPU 20: 0.0% user, 0.0% nice, 2.4% system, 0.0% interrupt, > >> 97.6% idle > >> > CPU 21: 0.0% user, 0.0% nice, 1.6% system, 0.0% interrupt, > 
>> 98.4% idle > >> > CPU 22: 0.0% user, 0.0% nice, 2.8% system, 0.0% interrupt, > >> 97.2% idle > >> > CPU 23: 0.0% user, 0.0% nice, 0.4% system, 0.0% interrupt, > >> 99.6% idle > >> > > >> > # netstat -I ixl0 -w1 -h > >> > input ixl0 output > >> > packets errs idrops bytes packets errs bytes colls > >> > 253K 0 0 136M 311K 0 325M 0 > >> > 251K 0 0 129M 314K 0 334M 0 > >> > 250K 0 0 135M 313K 0 333M 0 > >> > > >> > hw.ixl.tx_itr: 122 > >> > hw.ixl.rx_itr: 62 > >> > hw.ixl.dynamic_tx_itr: 0 > >> > hw.ixl.dynamic_rx_itr: 0 > >> > hw.ixl.max_queues: 0 > >> > hw.ixl.ring_size: 4096 > >> > hw.ixl.enable_msix: 1 > >> > dev.ixl.3.mac.xoff_recvd: 0 > >> > dev.ixl.3.mac.xoff_txd: 0 > >> > dev.ixl.3.mac.xon_recvd: 0 > >> > dev.ixl.3.mac.xon_txd: 0 > >> > dev.ixl.3.mac.tx_frames_big: 0 > >> > dev.ixl.3.mac.tx_frames_1024_1522: 0 > >> > dev.ixl.3.mac.tx_frames_512_1023: 0 > >> > dev.ixl.3.mac.tx_frames_256_511: 0 > >> > dev.ixl.3.mac.tx_frames_128_255: 0 > >> > dev.ixl.3.mac.tx_frames_65_127: 0 > >> > dev.ixl.3.mac.tx_frames_64: 0 > >> > dev.ixl.3.mac.checksum_errors: 0 > >> > dev.ixl.3.mac.rx_jabber: 0 > >> > dev.ixl.3.mac.rx_oversized: 0 > >> > dev.ixl.3.mac.rx_fragmented: 0 > >> > dev.ixl.3.mac.rx_undersize: 0 > >> > dev.ixl.3.mac.rx_frames_big: 0 > >> > dev.ixl.3.mac.rx_frames_1024_1522: 0 > >> > dev.ixl.3.mac.rx_frames_512_1023: 0 > >> > dev.ixl.3.mac.rx_frames_256_511: 0 > >> > dev.ixl.3.mac.rx_frames_128_255: 0 > >> > dev.ixl.3.mac.rx_frames_65_127: 0 > >> > dev.ixl.3.mac.rx_frames_64: 0 > >> > dev.ixl.3.mac.rx_length_errors: 0 > >> > dev.ixl.3.mac.remote_faults: 0 > >> > dev.ixl.3.mac.local_faults: 0 > >> > dev.ixl.3.mac.illegal_bytes: 0 > >> > dev.ixl.3.mac.crc_errors: 0 > >> > dev.ixl.3.mac.bcast_pkts_txd: 0 > >> > dev.ixl.3.mac.mcast_pkts_txd: 0 > >> > dev.ixl.3.mac.ucast_pkts_txd: 0 > >> > dev.ixl.3.mac.good_octets_txd: 0 > >> > dev.ixl.3.mac.rx_discards: 0 > >> > dev.ixl.3.mac.bcast_pkts_rcvd: 0 > >> > dev.ixl.3.mac.mcast_pkts_rcvd: 0 > >> > 
dev.ixl.3.mac.ucast_pkts_rcvd: 0 > >> > dev.ixl.3.mac.good_octets_rcvd: 0 > >> > dev.ixl.3.pf.que23.rx_bytes: 0 > >> > dev.ixl.3.pf.que23.rx_packets: 0 > >> > dev.ixl.3.pf.que23.tx_bytes: 0 > >> > dev.ixl.3.pf.que23.tx_packets: 0 > >> > dev.ixl.3.pf.que23.no_desc_avail: 0 > >> > dev.ixl.3.pf.que23.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que23.tso_tx: 0 > >> > dev.ixl.3.pf.que23.irqs: 0 > >> > dev.ixl.3.pf.que23.dropped: 0 > >> > dev.ixl.3.pf.que23.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que22.rx_bytes: 0 > >> > dev.ixl.3.pf.que22.rx_packets: 0 > >> > dev.ixl.3.pf.que22.tx_bytes: 0 > >> > dev.ixl.3.pf.que22.tx_packets: 0 > >> > dev.ixl.3.pf.que22.no_desc_avail: 0 > >> > dev.ixl.3.pf.que22.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que22.tso_tx: 0 > >> > dev.ixl.3.pf.que22.irqs: 0 > >> > dev.ixl.3.pf.que22.dropped: 0 > >> > dev.ixl.3.pf.que22.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que21.rx_bytes: 0 > >> > dev.ixl.3.pf.que21.rx_packets: 0 > >> > dev.ixl.3.pf.que21.tx_bytes: 0 > >> > dev.ixl.3.pf.que21.tx_packets: 0 > >> > dev.ixl.3.pf.que21.no_desc_avail: 0 > >> > dev.ixl.3.pf.que21.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que21.tso_tx: 0 > >> > dev.ixl.3.pf.que21.irqs: 0 > >> > dev.ixl.3.pf.que21.dropped: 0 > >> > dev.ixl.3.pf.que21.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que20.rx_bytes: 0 > >> > dev.ixl.3.pf.que20.rx_packets: 0 > >> > dev.ixl.3.pf.que20.tx_bytes: 0 > >> > dev.ixl.3.pf.que20.tx_packets: 0 > >> > dev.ixl.3.pf.que20.no_desc_avail: 0 > >> > dev.ixl.3.pf.que20.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que20.tso_tx: 0 > >> > dev.ixl.3.pf.que20.irqs: 0 > >> > dev.ixl.3.pf.que20.dropped: 0 > >> > dev.ixl.3.pf.que20.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que19.rx_bytes: 0 > >> > dev.ixl.3.pf.que19.rx_packets: 0 > >> > dev.ixl.3.pf.que19.tx_bytes: 0 > >> > dev.ixl.3.pf.que19.tx_packets: 0 > >> > dev.ixl.3.pf.que19.no_desc_avail: 0 > >> > dev.ixl.3.pf.que19.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que19.tso_tx: 0 > >> > dev.ixl.3.pf.que19.irqs: 0 > >> > 
dev.ixl.3.pf.que19.dropped: 0 > >> > dev.ixl.3.pf.que19.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que18.rx_bytes: 0 > >> > dev.ixl.3.pf.que18.rx_packets: 0 > >> > dev.ixl.3.pf.que18.tx_bytes: 0 > >> > dev.ixl.3.pf.que18.tx_packets: 0 > >> > dev.ixl.3.pf.que18.no_desc_avail: 0 > >> > dev.ixl.3.pf.que18.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que18.tso_tx: 0 > >> > dev.ixl.3.pf.que18.irqs: 0 > >> > dev.ixl.3.pf.que18.dropped: 0 > >> > dev.ixl.3.pf.que18.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que17.rx_bytes: 0 > >> > dev.ixl.3.pf.que17.rx_packets: 0 > >> > dev.ixl.3.pf.que17.tx_bytes: 0 > >> > dev.ixl.3.pf.que17.tx_packets: 0 > >> > dev.ixl.3.pf.que17.no_desc_avail: 0 > >> > dev.ixl.3.pf.que17.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que17.tso_tx: 0 > >> > dev.ixl.3.pf.que17.irqs: 0 > >> > dev.ixl.3.pf.que17.dropped: 0 > >> > dev.ixl.3.pf.que17.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que16.rx_bytes: 0 > >> > dev.ixl.3.pf.que16.rx_packets: 0 > >> > dev.ixl.3.pf.que16.tx_bytes: 0 > >> > dev.ixl.3.pf.que16.tx_packets: 0 > >> > dev.ixl.3.pf.que16.no_desc_avail: 0 > >> > dev.ixl.3.pf.que16.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que16.tso_tx: 0 > >> > dev.ixl.3.pf.que16.irqs: 0 > >> > dev.ixl.3.pf.que16.dropped: 0 > >> > dev.ixl.3.pf.que16.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que15.rx_bytes: 0 > >> > dev.ixl.3.pf.que15.rx_packets: 0 > >> > dev.ixl.3.pf.que15.tx_bytes: 0 > >> > dev.ixl.3.pf.que15.tx_packets: 0 > >> > dev.ixl.3.pf.que15.no_desc_avail: 0 > >> > dev.ixl.3.pf.que15.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que15.tso_tx: 0 > >> > dev.ixl.3.pf.que15.irqs: 0 > >> > dev.ixl.3.pf.que15.dropped: 0 > >> > dev.ixl.3.pf.que15.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que14.rx_bytes: 0 > >> > dev.ixl.3.pf.que14.rx_packets: 0 > >> > dev.ixl.3.pf.que14.tx_bytes: 0 > >> > dev.ixl.3.pf.que14.tx_packets: 0 > >> > dev.ixl.3.pf.que14.no_desc_avail: 0 > >> > dev.ixl.3.pf.que14.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que14.tso_tx: 0 > >> > dev.ixl.3.pf.que14.irqs: 0 > >> > 
dev.ixl.3.pf.que14.dropped: 0 > >> > dev.ixl.3.pf.que14.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que13.rx_bytes: 0 > >> > dev.ixl.3.pf.que13.rx_packets: 0 > >> > dev.ixl.3.pf.que13.tx_bytes: 0 > >> > dev.ixl.3.pf.que13.tx_packets: 0 > >> > dev.ixl.3.pf.que13.no_desc_avail: 0 > >> > dev.ixl.3.pf.que13.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que13.tso_tx: 0 > >> > dev.ixl.3.pf.que13.irqs: 0 > >> > dev.ixl.3.pf.que13.dropped: 0 > >> > dev.ixl.3.pf.que13.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que12.rx_bytes: 0 > >> > dev.ixl.3.pf.que12.rx_packets: 0 > >> > dev.ixl.3.pf.que12.tx_bytes: 0 > >> > dev.ixl.3.pf.que12.tx_packets: 0 > >> > dev.ixl.3.pf.que12.no_desc_avail: 0 > >> > dev.ixl.3.pf.que12.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que12.tso_tx: 0 > >> > dev.ixl.3.pf.que12.irqs: 0 > >> > dev.ixl.3.pf.que12.dropped: 0 > >> > dev.ixl.3.pf.que12.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que11.rx_bytes: 0 > >> > dev.ixl.3.pf.que11.rx_packets: 0 > >> > dev.ixl.3.pf.que11.tx_bytes: 0 > >> > dev.ixl.3.pf.que11.tx_packets: 0 > >> > dev.ixl.3.pf.que11.no_desc_avail: 0 > >> > dev.ixl.3.pf.que11.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que11.tso_tx: 0 > >> > dev.ixl.3.pf.que11.irqs: 0 > >> > dev.ixl.3.pf.que11.dropped: 0 > >> > dev.ixl.3.pf.que11.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que10.rx_bytes: 0 > >> > dev.ixl.3.pf.que10.rx_packets: 0 > >> > dev.ixl.3.pf.que10.tx_bytes: 0 > >> > dev.ixl.3.pf.que10.tx_packets: 0 > >> > dev.ixl.3.pf.que10.no_desc_avail: 0 > >> > dev.ixl.3.pf.que10.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que10.tso_tx: 0 > >> > dev.ixl.3.pf.que10.irqs: 0 > >> > dev.ixl.3.pf.que10.dropped: 0 > >> > dev.ixl.3.pf.que10.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que9.rx_bytes: 0 > >> > dev.ixl.3.pf.que9.rx_packets: 0 > >> > dev.ixl.3.pf.que9.tx_bytes: 0 > >> > dev.ixl.3.pf.que9.tx_packets: 0 > >> > dev.ixl.3.pf.que9.no_desc_avail: 0 > >> > dev.ixl.3.pf.que9.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que9.tso_tx: 0 > >> > dev.ixl.3.pf.que9.irqs: 0 > >> > dev.ixl.3.pf.que9.dropped: 
0 > >> > dev.ixl.3.pf.que9.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que8.rx_bytes: 0 > >> > dev.ixl.3.pf.que8.rx_packets: 0 > >> > dev.ixl.3.pf.que8.tx_bytes: 0 > >> > dev.ixl.3.pf.que8.tx_packets: 0 > >> > dev.ixl.3.pf.que8.no_desc_avail: 0 > >> > dev.ixl.3.pf.que8.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que8.tso_tx: 0 > >> > dev.ixl.3.pf.que8.irqs: 0 > >> > dev.ixl.3.pf.que8.dropped: 0 > >> > dev.ixl.3.pf.que8.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que7.rx_bytes: 0 > >> > dev.ixl.3.pf.que7.rx_packets: 0 > >> > dev.ixl.3.pf.que7.tx_bytes: 0 > >> > dev.ixl.3.pf.que7.tx_packets: 0 > >> > dev.ixl.3.pf.que7.no_desc_avail: 0 > >> > dev.ixl.3.pf.que7.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que7.tso_tx: 0 > >> > dev.ixl.3.pf.que7.irqs: 0 > >> > dev.ixl.3.pf.que7.dropped: 0 > >> > dev.ixl.3.pf.que7.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que6.rx_bytes: 0 > >> > dev.ixl.3.pf.que6.rx_packets: 0 > >> > dev.ixl.3.pf.que6.tx_bytes: 0 > >> > dev.ixl.3.pf.que6.tx_packets: 0 > >> > dev.ixl.3.pf.que6.no_desc_avail: 0 > >> > dev.ixl.3.pf.que6.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que6.tso_tx: 0 > >> > dev.ixl.3.pf.que6.irqs: 0 > >> > dev.ixl.3.pf.que6.dropped: 0 > >> > dev.ixl.3.pf.que6.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que5.rx_bytes: 0 > >> > dev.ixl.3.pf.que5.rx_packets: 0 > >> > dev.ixl.3.pf.que5.tx_bytes: 0 > >> > dev.ixl.3.pf.que5.tx_packets: 0 > >> > dev.ixl.3.pf.que5.no_desc_avail: 0 > >> > dev.ixl.3.pf.que5.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que5.tso_tx: 0 > >> > dev.ixl.3.pf.que5.irqs: 0 > >> > dev.ixl.3.pf.que5.dropped: 0 > >> > dev.ixl.3.pf.que5.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que4.rx_bytes: 0 > >> > dev.ixl.3.pf.que4.rx_packets: 0 > >> > dev.ixl.3.pf.que4.tx_bytes: 0 > >> > dev.ixl.3.pf.que4.tx_packets: 0 > >> > dev.ixl.3.pf.que4.no_desc_avail: 0 > >> > dev.ixl.3.pf.que4.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que4.tso_tx: 0 > >> > dev.ixl.3.pf.que4.irqs: 0 > >> > dev.ixl.3.pf.que4.dropped: 0 > >> > dev.ixl.3.pf.que4.mbuf_defrag_failed: 0 > >> > 
dev.ixl.3.pf.que3.rx_bytes: 0 > >> > dev.ixl.3.pf.que3.rx_packets: 0 > >> > dev.ixl.3.pf.que3.tx_bytes: 0 > >> > dev.ixl.3.pf.que3.tx_packets: 0 > >> > dev.ixl.3.pf.que3.no_desc_avail: 0 > >> > dev.ixl.3.pf.que3.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que3.tso_tx: 0 > >> > dev.ixl.3.pf.que3.irqs: 0 > >> > dev.ixl.3.pf.que3.dropped: 0 > >> > dev.ixl.3.pf.que3.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que2.rx_bytes: 0 > >> > dev.ixl.3.pf.que2.rx_packets: 0 > >> > dev.ixl.3.pf.que2.tx_bytes: 0 > >> > dev.ixl.3.pf.que2.tx_packets: 0 > >> > dev.ixl.3.pf.que2.no_desc_avail: 0 > >> > dev.ixl.3.pf.que2.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que2.tso_tx: 0 > >> > dev.ixl.3.pf.que2.irqs: 0 > >> > dev.ixl.3.pf.que2.dropped: 0 > >> > dev.ixl.3.pf.que2.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que1.rx_bytes: 0 > >> > dev.ixl.3.pf.que1.rx_packets: 0 > >> > dev.ixl.3.pf.que1.tx_bytes: 0 > >> > dev.ixl.3.pf.que1.tx_packets: 0 > >> > dev.ixl.3.pf.que1.no_desc_avail: 0 > >> > dev.ixl.3.pf.que1.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que1.tso_tx: 0 > >> > dev.ixl.3.pf.que1.irqs: 0 > >> > dev.ixl.3.pf.que1.dropped: 0 > >> > dev.ixl.3.pf.que1.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.que0.rx_bytes: 0 > >> > dev.ixl.3.pf.que0.rx_packets: 0 > >> > dev.ixl.3.pf.que0.tx_bytes: 0 > >> > dev.ixl.3.pf.que0.tx_packets: 0 > >> > dev.ixl.3.pf.que0.no_desc_avail: 0 > >> > dev.ixl.3.pf.que0.tx_dma_setup: 0 > >> > dev.ixl.3.pf.que0.tso_tx: 0 > >> > dev.ixl.3.pf.que0.irqs: 0 > >> > dev.ixl.3.pf.que0.dropped: 0 > >> > dev.ixl.3.pf.que0.mbuf_defrag_failed: 0 > >> > dev.ixl.3.pf.bcast_pkts_txd: 0 > >> > dev.ixl.3.pf.mcast_pkts_txd: 0 > >> > dev.ixl.3.pf.ucast_pkts_txd: 0 > >> > dev.ixl.3.pf.good_octets_txd: 0 > >> > dev.ixl.3.pf.rx_discards: 0 > >> > dev.ixl.3.pf.bcast_pkts_rcvd: 0 > >> > dev.ixl.3.pf.mcast_pkts_rcvd: 0 > >> > dev.ixl.3.pf.ucast_pkts_rcvd: 0 > >> > dev.ixl.3.pf.good_octets_rcvd: 0 > >> > dev.ixl.3.vc_debug_level: 1 > >> > dev.ixl.3.admin_irq: 0 > >> > dev.ixl.3.watchdog_events: 0 > >> > 
dev.ixl.3.debug: 0 > >> > dev.ixl.3.dynamic_tx_itr: 0 > >> > dev.ixl.3.tx_itr: 122 > >> > dev.ixl.3.dynamic_rx_itr: 0 > >> > dev.ixl.3.rx_itr: 62 > >> > dev.ixl.3.fw_version: f4.33 a1.2 n04.42 e8000191d > >> > dev.ixl.3.current_speed: Unknown > >> > dev.ixl.3.advertise_speed: 0 > >> > dev.ixl.3.fc: 0 > >> > dev.ixl.3.%parent: pci129 > >> > dev.ixl.3.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 > >> > subdevice=0x0000 class=0x020000 > >> > dev.ixl.3.%location: slot=0 function=3 handle=\_SB_.PCI1.QR3A.H003 > >> > dev.ixl.3.%driver: ixl > >> > dev.ixl.3.%desc: Intel(R) Ethernet Connection XL710 Driver, > >> Version - 1.4.0 > >> > dev.ixl.2.mac.xoff_recvd: 0 > >> > dev.ixl.2.mac.xoff_txd: 0 > >> > dev.ixl.2.mac.xon_recvd: 0 > >> > dev.ixl.2.mac.xon_txd: 0 > >> > dev.ixl.2.mac.tx_frames_big: 0 > >> > dev.ixl.2.mac.tx_frames_1024_1522: 0 > >> > dev.ixl.2.mac.tx_frames_512_1023: 0 > >> > dev.ixl.2.mac.tx_frames_256_511: 0 > >> > dev.ixl.2.mac.tx_frames_128_255: 0 > >> > dev.ixl.2.mac.tx_frames_65_127: 0 > >> > dev.ixl.2.mac.tx_frames_64: 0 > >> > dev.ixl.2.mac.checksum_errors: 0 > >> > dev.ixl.2.mac.rx_jabber: 0 > >> > dev.ixl.2.mac.rx_oversized: 0 > >> > dev.ixl.2.mac.rx_fragmented: 0 > >> > dev.ixl.2.mac.rx_undersize: 0 > >> > dev.ixl.2.mac.rx_frames_big: 0 > >> > dev.ixl.2.mac.rx_frames_1024_1522: 0 > >> > dev.ixl.2.mac.rx_frames_512_1023: 0 > >> > dev.ixl.2.mac.rx_frames_256_511: 0 > >> > dev.ixl.2.mac.rx_frames_128_255: 0 > >> > dev.ixl.2.mac.rx_frames_65_127: 0 > >> > dev.ixl.2.mac.rx_frames_64: 0 > >> > dev.ixl.2.mac.rx_length_errors: 0 > >> > dev.ixl.2.mac.remote_faults: 0 > >> > dev.ixl.2.mac.local_faults: 0 > >> > dev.ixl.2.mac.illegal_bytes: 0 > >> > dev.ixl.2.mac.crc_errors: 0 > >> > dev.ixl.2.mac.bcast_pkts_txd: 0 > >> > dev.ixl.2.mac.mcast_pkts_txd: 0 > >> > dev.ixl.2.mac.ucast_pkts_txd: 0 > >> > dev.ixl.2.mac.good_octets_txd: 0 > >> > dev.ixl.2.mac.rx_discards: 0 > >> > dev.ixl.2.mac.bcast_pkts_rcvd: 0 > >> >
dev.ixl.2.mac.mcast_pkts_rcvd: 0 > >> > dev.ixl.2.mac.ucast_pkts_rcvd: 0 > >> > dev.ixl.2.mac.good_octets_rcvd: 0 > >> > dev.ixl.2.pf.que23.rx_bytes: 0 > >> > dev.ixl.2.pf.que23.rx_packets: 0 > >> > dev.ixl.2.pf.que23.tx_bytes: 0 > >> > dev.ixl.2.pf.que23.tx_packets: 0 > >> > dev.ixl.2.pf.que23.no_desc_avail: 0 > >> > dev.ixl.2.pf.que23.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que23.tso_tx: 0 > >> > dev.ixl.2.pf.que23.irqs: 0 > >> > dev.ixl.2.pf.que23.dropped: 0 > >> > dev.ixl.2.pf.que23.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que22.rx_bytes: 0 > >> > dev.ixl.2.pf.que22.rx_packets: 0 > >> > dev.ixl.2.pf.que22.tx_bytes: 0 > >> > dev.ixl.2.pf.que22.tx_packets: 0 > >> > dev.ixl.2.pf.que22.no_desc_avail: 0 > >> > dev.ixl.2.pf.que22.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que22.tso_tx: 0 > >> > dev.ixl.2.pf.que22.irqs: 0 > >> > dev.ixl.2.pf.que22.dropped: 0 > >> > dev.ixl.2.pf.que22.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que21.rx_bytes: 0 > >> > dev.ixl.2.pf.que21.rx_packets: 0 > >> > dev.ixl.2.pf.que21.tx_bytes: 0 > >> > dev.ixl.2.pf.que21.tx_packets: 0 > >> > dev.ixl.2.pf.que21.no_desc_avail: 0 > >> > dev.ixl.2.pf.que21.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que21.tso_tx: 0 > >> > dev.ixl.2.pf.que21.irqs: 0 > >> > dev.ixl.2.pf.que21.dropped: 0 > >> > dev.ixl.2.pf.que21.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que20.rx_bytes: 0 > >> > dev.ixl.2.pf.que20.rx_packets: 0 > >> > dev.ixl.2.pf.que20.tx_bytes: 0 > >> > dev.ixl.2.pf.que20.tx_packets: 0 > >> > dev.ixl.2.pf.que20.no_desc_avail: 0 > >> > dev.ixl.2.pf.que20.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que20.tso_tx: 0 > >> > dev.ixl.2.pf.que20.irqs: 0 > >> > dev.ixl.2.pf.que20.dropped: 0 > >> > dev.ixl.2.pf.que20.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que19.rx_bytes: 0 > >> > dev.ixl.2.pf.que19.rx_packets: 0 > >> > dev.ixl.2.pf.que19.tx_bytes: 0 > >> > dev.ixl.2.pf.que19.tx_packets: 0 > >> > dev.ixl.2.pf.que19.no_desc_avail: 0 > >> > dev.ixl.2.pf.que19.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que19.tso_tx: 0 > >> > 
dev.ixl.2.pf.que19.irqs: 0 > >> > dev.ixl.2.pf.que19.dropped: 0 > >> > dev.ixl.2.pf.que19.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que18.rx_bytes: 0 > >> > dev.ixl.2.pf.que18.rx_packets: 0 > >> > dev.ixl.2.pf.que18.tx_bytes: 0 > >> > dev.ixl.2.pf.que18.tx_packets: 0 > >> > dev.ixl.2.pf.que18.no_desc_avail: 0 > >> > dev.ixl.2.pf.que18.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que18.tso_tx: 0 > >> > dev.ixl.2.pf.que18.irqs: 0 > >> > dev.ixl.2.pf.que18.dropped: 0 > >> > dev.ixl.2.pf.que18.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que17.rx_bytes: 0 > >> > dev.ixl.2.pf.que17.rx_packets: 0 > >> > dev.ixl.2.pf.que17.tx_bytes: 0 > >> > dev.ixl.2.pf.que17.tx_packets: 0 > >> > dev.ixl.2.pf.que17.no_desc_avail: 0 > >> > dev.ixl.2.pf.que17.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que17.tso_tx: 0 > >> > dev.ixl.2.pf.que17.irqs: 0 > >> > dev.ixl.2.pf.que17.dropped: 0 > >> > dev.ixl.2.pf.que17.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que16.rx_bytes: 0 > >> > dev.ixl.2.pf.que16.rx_packets: 0 > >> > dev.ixl.2.pf.que16.tx_bytes: 0 > >> > dev.ixl.2.pf.que16.tx_packets: 0 > >> > dev.ixl.2.pf.que16.no_desc_avail: 0 > >> > dev.ixl.2.pf.que16.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que16.tso_tx: 0 > >> > dev.ixl.2.pf.que16.irqs: 0 > >> > dev.ixl.2.pf.que16.dropped: 0 > >> > dev.ixl.2.pf.que16.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que15.rx_bytes: 0 > >> > dev.ixl.2.pf.que15.rx_packets: 0 > >> > dev.ixl.2.pf.que15.tx_bytes: 0 > >> > dev.ixl.2.pf.que15.tx_packets: 0 > >> > dev.ixl.2.pf.que15.no_desc_avail: 0 > >> > dev.ixl.2.pf.que15.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que15.tso_tx: 0 > >> > dev.ixl.2.pf.que15.irqs: 0 > >> > dev.ixl.2.pf.que15.dropped: 0 > >> > dev.ixl.2.pf.que15.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que14.rx_bytes: 0 > >> > dev.ixl.2.pf.que14.rx_packets: 0 > >> > dev.ixl.2.pf.que14.tx_bytes: 0 > >> > dev.ixl.2.pf.que14.tx_packets: 0 > >> > dev.ixl.2.pf.que14.no_desc_avail: 0 > >> > dev.ixl.2.pf.que14.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que14.tso_tx: 0 > >> > 
dev.ixl.2.pf.que14.irqs: 0 > >> > dev.ixl.2.pf.que14.dropped: 0 > >> > dev.ixl.2.pf.que14.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que13.rx_bytes: 0 > >> > dev.ixl.2.pf.que13.rx_packets: 0 > >> > dev.ixl.2.pf.que13.tx_bytes: 0 > >> > dev.ixl.2.pf.que13.tx_packets: 0 > >> > dev.ixl.2.pf.que13.no_desc_avail: 0 > >> > dev.ixl.2.pf.que13.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que13.tso_tx: 0 > >> > dev.ixl.2.pf.que13.irqs: 0 > >> > dev.ixl.2.pf.que13.dropped: 0 > >> > dev.ixl.2.pf.que13.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que12.rx_bytes: 0 > >> > dev.ixl.2.pf.que12.rx_packets: 0 > >> > dev.ixl.2.pf.que12.tx_bytes: 0 > >> > dev.ixl.2.pf.que12.tx_packets: 0 > >> > dev.ixl.2.pf.que12.no_desc_avail: 0 > >> > dev.ixl.2.pf.que12.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que12.tso_tx: 0 > >> > dev.ixl.2.pf.que12.irqs: 0 > >> > dev.ixl.2.pf.que12.dropped: 0 > >> > dev.ixl.2.pf.que12.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que11.rx_bytes: 0 > >> > dev.ixl.2.pf.que11.rx_packets: 0 > >> > dev.ixl.2.pf.que11.tx_bytes: 0 > >> > dev.ixl.2.pf.que11.tx_packets: 0 > >> > dev.ixl.2.pf.que11.no_desc_avail: 0 > >> > dev.ixl.2.pf.que11.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que11.tso_tx: 0 > >> > dev.ixl.2.pf.que11.irqs: 0 > >> > dev.ixl.2.pf.que11.dropped: 0 > >> > dev.ixl.2.pf.que11.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que10.rx_bytes: 0 > >> > dev.ixl.2.pf.que10.rx_packets: 0 > >> > dev.ixl.2.pf.que10.tx_bytes: 0 > >> > dev.ixl.2.pf.que10.tx_packets: 0 > >> > dev.ixl.2.pf.que10.no_desc_avail: 0 > >> > dev.ixl.2.pf.que10.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que10.tso_tx: 0 > >> > dev.ixl.2.pf.que10.irqs: 0 > >> > dev.ixl.2.pf.que10.dropped: 0 > >> > dev.ixl.2.pf.que10.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que9.rx_bytes: 0 > >> > dev.ixl.2.pf.que9.rx_packets: 0 > >> > dev.ixl.2.pf.que9.tx_bytes: 0 > >> > dev.ixl.2.pf.que9.tx_packets: 0 > >> > dev.ixl.2.pf.que9.no_desc_avail: 0 > >> > dev.ixl.2.pf.que9.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que9.tso_tx: 0 > >> > dev.ixl.2.pf.que9.irqs: 0 
> >> > dev.ixl.2.pf.que9.dropped: 0 > >> > dev.ixl.2.pf.que9.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que8.rx_bytes: 0 > >> > dev.ixl.2.pf.que8.rx_packets: 0 > >> > dev.ixl.2.pf.que8.tx_bytes: 0 > >> > dev.ixl.2.pf.que8.tx_packets: 0 > >> > dev.ixl.2.pf.que8.no_desc_avail: 0 > >> > dev.ixl.2.pf.que8.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que8.tso_tx: 0 > >> > dev.ixl.2.pf.que8.irqs: 0 > >> > dev.ixl.2.pf.que8.dropped: 0 > >> > dev.ixl.2.pf.que8.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que7.rx_bytes: 0 > >> > dev.ixl.2.pf.que7.rx_packets: 0 > >> > dev.ixl.2.pf.que7.tx_bytes: 0 > >> > dev.ixl.2.pf.que7.tx_packets: 0 > >> > dev.ixl.2.pf.que7.no_desc_avail: 0 > >> > dev.ixl.2.pf.que7.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que7.tso_tx: 0 > >> > dev.ixl.2.pf.que7.irqs: 0 > >> > dev.ixl.2.pf.que7.dropped: 0 > >> > dev.ixl.2.pf.que7.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que6.rx_bytes: 0 > >> > dev.ixl.2.pf.que6.rx_packets: 0 > >> > dev.ixl.2.pf.que6.tx_bytes: 0 > >> > dev.ixl.2.pf.que6.tx_packets: 0 > >> > dev.ixl.2.pf.que6.no_desc_avail: 0 > >> > dev.ixl.2.pf.que6.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que6.tso_tx: 0 > >> > dev.ixl.2.pf.que6.irqs: 0 > >> > dev.ixl.2.pf.que6.dropped: 0 > >> > dev.ixl.2.pf.que6.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que5.rx_bytes: 0 > >> > dev.ixl.2.pf.que5.rx_packets: 0 > >> > dev.ixl.2.pf.que5.tx_bytes: 0 > >> > dev.ixl.2.pf.que5.tx_packets: 0 > >> > dev.ixl.2.pf.que5.no_desc_avail: 0 > >> > dev.ixl.2.pf.que5.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que5.tso_tx: 0 > >> > dev.ixl.2.pf.que5.irqs: 0 > >> > dev.ixl.2.pf.que5.dropped: 0 > >> > dev.ixl.2.pf.que5.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que4.rx_bytes: 0 > >> > dev.ixl.2.pf.que4.rx_packets: 0 > >> > dev.ixl.2.pf.que4.tx_bytes: 0 > >> > dev.ixl.2.pf.que4.tx_packets: 0 > >> > dev.ixl.2.pf.que4.no_desc_avail: 0 > >> > dev.ixl.2.pf.que4.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que4.tso_tx: 0 > >> > dev.ixl.2.pf.que4.irqs: 0 > >> > dev.ixl.2.pf.que4.dropped: 0 > >> > 
dev.ixl.2.pf.que4.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que3.rx_bytes: 0 > >> > dev.ixl.2.pf.que3.rx_packets: 0 > >> > dev.ixl.2.pf.que3.tx_bytes: 0 > >> > dev.ixl.2.pf.que3.tx_packets: 0 > >> > dev.ixl.2.pf.que3.no_desc_avail: 0 > >> > dev.ixl.2.pf.que3.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que3.tso_tx: 0 > >> > dev.ixl.2.pf.que3.irqs: 0 > >> > dev.ixl.2.pf.que3.dropped: 0 > >> > dev.ixl.2.pf.que3.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que2.rx_bytes: 0 > >> > dev.ixl.2.pf.que2.rx_packets: 0 > >> > dev.ixl.2.pf.que2.tx_bytes: 0 > >> > dev.ixl.2.pf.que2.tx_packets: 0 > >> > dev.ixl.2.pf.que2.no_desc_avail: 0 > >> > dev.ixl.2.pf.que2.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que2.tso_tx: 0 > >> > dev.ixl.2.pf.que2.irqs: 0 > >> > dev.ixl.2.pf.que2.dropped: 0 > >> > dev.ixl.2.pf.que2.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que1.rx_bytes: 0 > >> > dev.ixl.2.pf.que1.rx_packets: 0 > >> > dev.ixl.2.pf.que1.tx_bytes: 0 > >> > dev.ixl.2.pf.que1.tx_packets: 0 > >> > dev.ixl.2.pf.que1.no_desc_avail: 0 > >> > dev.ixl.2.pf.que1.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que1.tso_tx: 0 > >> > dev.ixl.2.pf.que1.irqs: 0 > >> > dev.ixl.2.pf.que1.dropped: 0 > >> > dev.ixl.2.pf.que1.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.que0.rx_bytes: 0 > >> > dev.ixl.2.pf.que0.rx_packets: 0 > >> > dev.ixl.2.pf.que0.tx_bytes: 0 > >> > dev.ixl.2.pf.que0.tx_packets: 0 > >> > dev.ixl.2.pf.que0.no_desc_avail: 0 > >> > dev.ixl.2.pf.que0.tx_dma_setup: 0 > >> > dev.ixl.2.pf.que0.tso_tx: 0 > >> > dev.ixl.2.pf.que0.irqs: 0 > >> > dev.ixl.2.pf.que0.dropped: 0 > >> > dev.ixl.2.pf.que0.mbuf_defrag_failed: 0 > >> > dev.ixl.2.pf.bcast_pkts_txd: 0 > >> > dev.ixl.2.pf.mcast_pkts_txd: 0 > >> > dev.ixl.2.pf.ucast_pkts_txd: 0 > >> > dev.ixl.2.pf.good_octets_txd: 0 > >> > dev.ixl.2.pf.rx_discards: 0 > >> > dev.ixl.2.pf.bcast_pkts_rcvd: 0 > >> > dev.ixl.2.pf.mcast_pkts_rcvd: 0 > >> > dev.ixl.2.pf.ucast_pkts_rcvd: 0 > >> > dev.ixl.2.pf.good_octets_rcvd: 0 > >> > dev.ixl.2.vc_debug_level: 1 > >> > dev.ixl.2.admin_irq: 0 > >> 
> dev.ixl.2.watchdog_events: 0 > >> > dev.ixl.2.debug: 0 > >> > dev.ixl.2.dynamic_tx_itr: 0 > >> > dev.ixl.2.tx_itr: 122 > >> > dev.ixl.2.dynamic_rx_itr: 0 > >> > dev.ixl.2.rx_itr: 62 > >> > dev.ixl.2.fw_version: f4.33 a1.2 n04.42 e8000191d > >> > dev.ixl.2.current_speed: Unknown > >> > dev.ixl.2.advertise_speed: 0 > >> > dev.ixl.2.fc: 0 > >> > dev.ixl.2.%parent: pci129 > >> > dev.ixl.2.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 > >> > subdevice=0x0000 class=0x020000 > >> > dev.ixl.2.%location: slot=0 function=2 handle=\_SB_.PCI1.QR3A.H002 > >> > dev.ixl.2.%driver: ixl > >> > dev.ixl.2.%desc: Intel(R) Ethernet Connection XL710 Driver, > >> Version - 1.4.0 > >> > dev.ixl.1.mac.xoff_recvd: 0 > >> > dev.ixl.1.mac.xoff_txd: 0 > >> > dev.ixl.1.mac.xon_recvd: 0 > >> > dev.ixl.1.mac.xon_txd: 0 > >> > dev.ixl.1.mac.tx_frames_big: 0 > >> > dev.ixl.1.mac.tx_frames_1024_1522: 1565670684 > >> > dev.ixl.1.mac.tx_frames_512_1023: 101286418 > >> > dev.ixl.1.mac.tx_frames_256_511: 49713129 > >> > dev.ixl.1.mac.tx_frames_128_255: 231617277 > >> > dev.ixl.1.mac.tx_frames_65_127: 2052767669 > >> > dev.ixl.1.mac.tx_frames_64: 1318689044 > >> > dev.ixl.1.mac.checksum_errors: 0 > >> > dev.ixl.1.mac.rx_jabber: 0 > >> > dev.ixl.1.mac.rx_oversized: 0 > >> > dev.ixl.1.mac.rx_fragmented: 0 > >> > dev.ixl.1.mac.rx_undersize: 0 > >> > dev.ixl.1.mac.rx_frames_big: 0 > >> > dev.ixl.1.mac.rx_frames_1024_1522: 4960403414 > >> > dev.ixl.1.mac.rx_frames_512_1023: 113675084 > >> > dev.ixl.1.mac.rx_frames_256_511: 253904920 > >> > dev.ixl.1.mac.rx_frames_128_255: 196369726 > >> > dev.ixl.1.mac.rx_frames_65_127: 1436626211 > >> > dev.ixl.1.mac.rx_frames_64: 242768681 > >> > dev.ixl.1.mac.rx_length_errors: 0 > >> > dev.ixl.1.mac.remote_faults: 0 > >> > dev.ixl.1.mac.local_faults: 0 > >> > dev.ixl.1.mac.illegal_bytes: 0 > >> > dev.ixl.1.mac.crc_errors: 0 > >> > dev.ixl.1.mac.bcast_pkts_txd: 277 > >> > dev.ixl.1.mac.mcast_pkts_txd: 0 > >> > dev.ixl.1.mac.ucast_pkts_txd: 
5319743942 > >> > dev.ixl.1.mac.good_octets_txd: 2642351885737 > >> > dev.ixl.1.mac.rx_discards: 0 > >> > dev.ixl.1.mac.bcast_pkts_rcvd: 5 > >> > dev.ixl.1.mac.mcast_pkts_rcvd: 144 > >> > dev.ixl.1.mac.ucast_pkts_rcvd: 7203747879 > >> > dev.ixl.1.mac.good_octets_rcvd: 7770230492434 > >> > dev.ixl.1.pf.que23.rx_bytes: 0 > >> > dev.ixl.1.pf.que23.rx_packets: 0 > >> > dev.ixl.1.pf.que23.tx_bytes: 7111 > >> > dev.ixl.1.pf.que23.tx_packets: 88 > >> > dev.ixl.1.pf.que23.no_desc_avail: 0 > >> > dev.ixl.1.pf.que23.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que23.tso_tx: 0 > >> > dev.ixl.1.pf.que23.irqs: 88 > >> > dev.ixl.1.pf.que23.dropped: 0 > >> > dev.ixl.1.pf.que23.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que22.rx_bytes: 0 > >> > dev.ixl.1.pf.que22.rx_packets: 0 > >> > dev.ixl.1.pf.que22.tx_bytes: 6792 > >> > dev.ixl.1.pf.que22.tx_packets: 88 > >> > dev.ixl.1.pf.que22.no_desc_avail: 0 > >> > dev.ixl.1.pf.que22.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que22.tso_tx: 0 > >> > dev.ixl.1.pf.que22.irqs: 89 > >> > dev.ixl.1.pf.que22.dropped: 0 > >> > dev.ixl.1.pf.que22.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que21.rx_bytes: 0 > >> > dev.ixl.1.pf.que21.rx_packets: 0 > >> > dev.ixl.1.pf.que21.tx_bytes: 7486 > >> > dev.ixl.1.pf.que21.tx_packets: 93 > >> > dev.ixl.1.pf.que21.no_desc_avail: 0 > >> > dev.ixl.1.pf.que21.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que21.tso_tx: 0 > >> > dev.ixl.1.pf.que21.irqs: 95 > >> > dev.ixl.1.pf.que21.dropped: 0 > >> > dev.ixl.1.pf.que21.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que20.rx_bytes: 0 > >> > dev.ixl.1.pf.que20.rx_packets: 0 > >> > dev.ixl.1.pf.que20.tx_bytes: 7850 > >> > dev.ixl.1.pf.que20.tx_packets: 98 > >> > dev.ixl.1.pf.que20.no_desc_avail: 0 > >> > dev.ixl.1.pf.que20.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que20.tso_tx: 0 > >> > dev.ixl.1.pf.que20.irqs: 99 > >> > dev.ixl.1.pf.que20.dropped: 0 > >> > dev.ixl.1.pf.que20.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que19.rx_bytes: 0 > >> > dev.ixl.1.pf.que19.rx_packets: 0 > >> > 
dev.ixl.1.pf.que19.tx_bytes: 64643 > >> > dev.ixl.1.pf.que19.tx_packets: 202 > >> > dev.ixl.1.pf.que19.no_desc_avail: 0 > >> > dev.ixl.1.pf.que19.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que19.tso_tx: 0 > >> > dev.ixl.1.pf.que19.irqs: 202 > >> > dev.ixl.1.pf.que19.dropped: 0 > >> > dev.ixl.1.pf.que19.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que18.rx_bytes: 0 > >> > dev.ixl.1.pf.que18.rx_packets: 0 > >> > dev.ixl.1.pf.que18.tx_bytes: 5940 > >> > dev.ixl.1.pf.que18.tx_packets: 74 > >> > dev.ixl.1.pf.que18.no_desc_avail: 0 > >> > dev.ixl.1.pf.que18.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que18.tso_tx: 0 > >> > dev.ixl.1.pf.que18.irqs: 74 > >> > dev.ixl.1.pf.que18.dropped: 0 > >> > dev.ixl.1.pf.que18.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que17.rx_bytes: 0 > >> > dev.ixl.1.pf.que17.rx_packets: 0 > >> > dev.ixl.1.pf.que17.tx_bytes: 11675 > >> > dev.ixl.1.pf.que17.tx_packets: 83 > >> > dev.ixl.1.pf.que17.no_desc_avail: 0 > >> > dev.ixl.1.pf.que17.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que17.tso_tx: 0 > >> > dev.ixl.1.pf.que17.irqs: 83 > >> > dev.ixl.1.pf.que17.dropped: 0 > >> > dev.ixl.1.pf.que17.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que16.rx_bytes: 0 > >> > dev.ixl.1.pf.que16.rx_packets: 0 > >> > dev.ixl.1.pf.que16.tx_bytes: 105750457831 > >> > dev.ixl.1.pf.que16.tx_packets: 205406766 > >> > dev.ixl.1.pf.que16.no_desc_avail: 0 > >> > dev.ixl.1.pf.que16.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que16.tso_tx: 0 > >> > dev.ixl.1.pf.que16.irqs: 87222978 > >> > dev.ixl.1.pf.que16.dropped: 0 > >> > dev.ixl.1.pf.que16.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que15.rx_bytes: 289558174088 > >> > dev.ixl.1.pf.que15.rx_packets: 272466190 > >> > dev.ixl.1.pf.que15.tx_bytes: 106152524681 > >> > dev.ixl.1.pf.que15.tx_packets: 205379247 > >> > dev.ixl.1.pf.que15.no_desc_avail: 0 > >> > dev.ixl.1.pf.que15.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que15.tso_tx: 0 > >> > dev.ixl.1.pf.que15.irqs: 238145862 > >> > dev.ixl.1.pf.que15.dropped: 0 > >> > dev.ixl.1.pf.que15.mbuf_defrag_failed: 0 > >> > 
dev.ixl.1.pf.que14.rx_bytes: 301934533473 > >> > dev.ixl.1.pf.que14.rx_packets: 298452930 > >> > dev.ixl.1.pf.que14.tx_bytes: 111420393725 > >> > dev.ixl.1.pf.que14.tx_packets: 215722532 > >> > dev.ixl.1.pf.que14.no_desc_avail: 0 > >> > dev.ixl.1.pf.que14.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que14.tso_tx: 0 > >> > dev.ixl.1.pf.que14.irqs: 256291617 > >> > dev.ixl.1.pf.que14.dropped: 0 > >> > dev.ixl.1.pf.que14.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que13.rx_bytes: 291380746253 > >> > dev.ixl.1.pf.que13.rx_packets: 273037957 > >> > dev.ixl.1.pf.que13.tx_bytes: 112417776222 > >> > dev.ixl.1.pf.que13.tx_packets: 217500943 > >> > dev.ixl.1.pf.que13.no_desc_avail: 0 > >> > dev.ixl.1.pf.que13.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que13.tso_tx: 0 > >> > dev.ixl.1.pf.que13.irqs: 241422331 > >> > dev.ixl.1.pf.que13.dropped: 0 > >> > dev.ixl.1.pf.que13.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que12.rx_bytes: 301105585425 > >> > dev.ixl.1.pf.que12.rx_packets: 286137817 > >> > dev.ixl.1.pf.que12.tx_bytes: 95851784579 > >> > dev.ixl.1.pf.que12.tx_packets: 199715765 > >> > dev.ixl.1.pf.que12.no_desc_avail: 0 > >> > dev.ixl.1.pf.que12.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que12.tso_tx: 0 > >> > dev.ixl.1.pf.que12.irqs: 247322880 > >> > dev.ixl.1.pf.que12.dropped: 0 > >> > dev.ixl.1.pf.que12.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que11.rx_bytes: 307105398143 > >> > dev.ixl.1.pf.que11.rx_packets: 281046463 > >> > dev.ixl.1.pf.que11.tx_bytes: 110710957789 > >> > dev.ixl.1.pf.que11.tx_packets: 211784031 > >> > dev.ixl.1.pf.que11.no_desc_avail: 0 > >> > dev.ixl.1.pf.que11.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que11.tso_tx: 0 > >> > dev.ixl.1.pf.que11.irqs: 256987179 > >> > dev.ixl.1.pf.que11.dropped: 0 > >> > dev.ixl.1.pf.que11.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que10.rx_bytes: 304288000453 > >> > dev.ixl.1.pf.que10.rx_packets: 278987858 > >> > dev.ixl.1.pf.que10.tx_bytes: 93022244338 > >> > dev.ixl.1.pf.que10.tx_packets: 195869210 > >> > dev.ixl.1.pf.que10.no_desc_avail: 0 > 
>> > dev.ixl.1.pf.que10.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que10.tso_tx: 0 > >> > dev.ixl.1.pf.que10.irqs: 253622192 > >> > dev.ixl.1.pf.que10.dropped: 0 > >> > dev.ixl.1.pf.que10.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que9.rx_bytes: 320340203822 > >> > dev.ixl.1.pf.que9.rx_packets: 302309010 > >> > dev.ixl.1.pf.que9.tx_bytes: 116604776460 > >> > dev.ixl.1.pf.que9.tx_packets: 223949025 > >> > dev.ixl.1.pf.que9.no_desc_avail: 0 > >> > dev.ixl.1.pf.que9.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que9.tso_tx: 0 > >> > dev.ixl.1.pf.que9.irqs: 271165440 > >> > dev.ixl.1.pf.que9.dropped: 0 > >> > dev.ixl.1.pf.que9.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que8.rx_bytes: 291403725592 > >> > dev.ixl.1.pf.que8.rx_packets: 267859568 > >> > dev.ixl.1.pf.que8.tx_bytes: 205745654558 > >> > dev.ixl.1.pf.que8.tx_packets: 443349835 > >> > dev.ixl.1.pf.que8.no_desc_avail: 0 > >> > dev.ixl.1.pf.que8.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que8.tso_tx: 0 > >> > dev.ixl.1.pf.que8.irqs: 254116755 > >> > dev.ixl.1.pf.que8.dropped: 0 > >> > dev.ixl.1.pf.que8.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que7.rx_bytes: 673363127346 > >> > dev.ixl.1.pf.que7.rx_packets: 617269774 > >> > dev.ixl.1.pf.que7.tx_bytes: 203162891886 > >> > dev.ixl.1.pf.que7.tx_packets: 443709339 > >> > dev.ixl.1.pf.que7.no_desc_avail: 0 > >> > dev.ixl.1.pf.que7.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que7.tso_tx: 0 > >> > dev.ixl.1.pf.que7.irqs: 424706771 > >> > dev.ixl.1.pf.que7.dropped: 0 > >> > dev.ixl.1.pf.que7.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que6.rx_bytes: 644709094218 > >> > dev.ixl.1.pf.que6.rx_packets: 601892919 > >> > dev.ixl.1.pf.que6.tx_bytes: 221661735032 > >> > dev.ixl.1.pf.que6.tx_packets: 460127064 > >> > dev.ixl.1.pf.que6.no_desc_avail: 0 > >> > dev.ixl.1.pf.que6.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que6.tso_tx: 0 > >> > dev.ixl.1.pf.que6.irqs: 417748074 > >> > dev.ixl.1.pf.que6.dropped: 0 > >> > dev.ixl.1.pf.que6.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que5.rx_bytes: 661904432231 > >> > 
dev.ixl.1.pf.que5.rx_packets: 622012837 > >> > dev.ixl.1.pf.que5.tx_bytes: 230514282876 > >> > dev.ixl.1.pf.que5.tx_packets: 458571100 > >> > dev.ixl.1.pf.que5.no_desc_avail: 0 > >> > dev.ixl.1.pf.que5.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que5.tso_tx: 0 > >> > dev.ixl.1.pf.que5.irqs: 422305039 > >> > dev.ixl.1.pf.que5.dropped: 0 > >> > dev.ixl.1.pf.que5.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que4.rx_bytes: 653522179234 > >> > dev.ixl.1.pf.que4.rx_packets: 603345546 > >> > dev.ixl.1.pf.que4.tx_bytes: 216761219483 > >> > dev.ixl.1.pf.que4.tx_packets: 450329641 > >> > dev.ixl.1.pf.que4.no_desc_avail: 0 > >> > dev.ixl.1.pf.que4.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que4.tso_tx: 3 > >> > dev.ixl.1.pf.que4.irqs: 416920533 > >> > dev.ixl.1.pf.que4.dropped: 0 > >> > dev.ixl.1.pf.que4.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que3.rx_bytes: 676494225882 > >> > dev.ixl.1.pf.que3.rx_packets: 620605168 > >> > dev.ixl.1.pf.que3.tx_bytes: 233854020454 > >> > dev.ixl.1.pf.que3.tx_packets: 464425616 > >> > dev.ixl.1.pf.que3.no_desc_avail: 0 > >> > dev.ixl.1.pf.que3.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que3.tso_tx: 0 > >> > dev.ixl.1.pf.que3.irqs: 426349030 > >> > dev.ixl.1.pf.que3.dropped: 0 > >> > dev.ixl.1.pf.que3.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que2.rx_bytes: 677779337711 > >> > dev.ixl.1.pf.que2.rx_packets: 620883699 > >> > dev.ixl.1.pf.que2.tx_bytes: 211297141668 > >> > dev.ixl.1.pf.que2.tx_packets: 450501525 > >> > dev.ixl.1.pf.que2.no_desc_avail: 0 > >> > dev.ixl.1.pf.que2.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que2.tso_tx: 0 > >> > dev.ixl.1.pf.que2.irqs: 433146278 > >> > dev.ixl.1.pf.que2.dropped: 0 > >> > dev.ixl.1.pf.que2.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que1.rx_bytes: 661360798018 > >> > dev.ixl.1.pf.que1.rx_packets: 619700636 > >> > dev.ixl.1.pf.que1.tx_bytes: 238264220772 > >> > dev.ixl.1.pf.que1.tx_packets: 473425354 > >> > dev.ixl.1.pf.que1.no_desc_avail: 0 > >> > dev.ixl.1.pf.que1.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que1.tso_tx: 0 > >> > 
dev.ixl.1.pf.que1.irqs: 437959829 > >> > dev.ixl.1.pf.que1.dropped: 0 > >> > dev.ixl.1.pf.que1.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.que0.rx_bytes: 685201226330 > >> > dev.ixl.1.pf.que0.rx_packets: 637772348 > >> > dev.ixl.1.pf.que0.tx_bytes: 124808 > >> > dev.ixl.1.pf.que0.tx_packets: 1782 > >> > dev.ixl.1.pf.que0.no_desc_avail: 0 > >> > dev.ixl.1.pf.que0.tx_dma_setup: 0 > >> > dev.ixl.1.pf.que0.tso_tx: 0 > >> > dev.ixl.1.pf.que0.irqs: 174905480 > >> > dev.ixl.1.pf.que0.dropped: 0 > >> > dev.ixl.1.pf.que0.mbuf_defrag_failed: 0 > >> > dev.ixl.1.pf.bcast_pkts_txd: 277 > >> > dev.ixl.1.pf.mcast_pkts_txd: 0 > >> > dev.ixl.1.pf.ucast_pkts_txd: 5319743945 > >> > dev.ixl.1.pf.good_octets_txd: 2613178367282 > >> > dev.ixl.1.pf.rx_discards: 0 > >> > dev.ixl.1.pf.bcast_pkts_rcvd: 1 > >> > dev.ixl.1.pf.mcast_pkts_rcvd: 0 > >> > dev.ixl.1.pf.ucast_pkts_rcvd: 7203747890 > >> > dev.ixl.1.pf.good_octets_rcvd: 7770230490224 > >> > dev.ixl.1.vc_debug_level: 1 > >> > dev.ixl.1.admin_irq: 0 > >> > dev.ixl.1.watchdog_events: 0 > >> > dev.ixl.1.debug: 0 > >> > dev.ixl.1.dynamic_tx_itr: 0 > >> > dev.ixl.1.tx_itr: 122 > >> > dev.ixl.1.dynamic_rx_itr: 0 > >> > dev.ixl.1.rx_itr: 62 > >> > dev.ixl.1.fw_version: f4.33 a1.2 n04.42 e8000191d > >> > dev.ixl.1.current_speed: 10G > >> > dev.ixl.1.advertise_speed: 0 > >> > dev.ixl.1.fc: 0 > >> > dev.ixl.1.%parent: pci129 > >> > dev.ixl.1.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 > >> > subdevice=0x0000 class=0x020000 > >> > dev.ixl.1.%location: slot=0 function=1 handle=\_SB_.PCI1.QR3A.H001 > >> > dev.ixl.1.%driver: ixl > >> > dev.ixl.1.%desc: Intel(R) Ethernet Connection XL710 Driver, > >> Version - 1.4.0 > >> > dev.ixl.0.mac.xoff_recvd: 0 > >> > dev.ixl.0.mac.xoff_txd: 0 > >> > dev.ixl.0.mac.xon_recvd: 0 > >> > dev.ixl.0.mac.xon_txd: 0 > >> > dev.ixl.0.mac.tx_frames_big: 0 > >> > dev.ixl.0.mac.tx_frames_1024_1522: 4961134019 > >> > dev.ixl.0.mac.tx_frames_512_1023: 113082136 > >> > 
dev.ixl.0.mac.tx_frames_256_511: 123538450 > >> > dev.ixl.0.mac.tx_frames_128_255: 185051082 > >> > dev.ixl.0.mac.tx_frames_65_127: 1332798493 > >> > dev.ixl.0.mac.tx_frames_64: 243338964 > >> > dev.ixl.0.mac.checksum_errors: 0 > >> > dev.ixl.0.mac.rx_jabber: 0 > >> > dev.ixl.0.mac.rx_oversized: 0 > >> > dev.ixl.0.mac.rx_fragmented: 0 > >> > dev.ixl.0.mac.rx_undersize: 0 > >> > dev.ixl.0.mac.rx_frames_big: 0 > >> > dev.ixl.0.mac.rx_frames_1024_1522: 1566499069 > >> > dev.ixl.0.mac.rx_frames_512_1023: 101390143 > >> > dev.ixl.0.mac.rx_frames_256_511: 49831970 > >> > dev.ixl.0.mac.rx_frames_128_255: 231738168 > >> > dev.ixl.0.mac.rx_frames_65_127: 2123185819 > >> > dev.ixl.0.mac.rx_frames_64: 1320404300 > >> > dev.ixl.0.mac.rx_length_errors: 0 > >> > dev.ixl.0.mac.remote_faults: 0 > >> > dev.ixl.0.mac.local_faults: 0 > >> > dev.ixl.0.mac.illegal_bytes: 0 > >> > dev.ixl.0.mac.crc_errors: 0 > >> > dev.ixl.0.mac.bcast_pkts_txd: 302 > >> > dev.ixl.0.mac.mcast_pkts_txd: 33965 > >> > dev.ixl.0.mac.ucast_pkts_txd: 6958908862 > >> > dev.ixl.0.mac.good_octets_txd: 7698936138858 > >> > dev.ixl.0.mac.rx_discards: 0 > >> > dev.ixl.0.mac.bcast_pkts_rcvd: 1 > >> > dev.ixl.0.mac.mcast_pkts_rcvd: 49693 > >> > dev.ixl.0.mac.ucast_pkts_rcvd: 5392999771 > >> > dev.ixl.0.mac.good_octets_rcvd: 2648906893811 > >> > dev.ixl.0.pf.que23.rx_bytes: 0 > >> > dev.ixl.0.pf.que23.rx_packets: 0 > >> > dev.ixl.0.pf.que23.tx_bytes: 2371273 > >> > dev.ixl.0.pf.que23.tx_packets: 7313 > >> > dev.ixl.0.pf.que23.no_desc_avail: 0 > >> > dev.ixl.0.pf.que23.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que23.tso_tx: 0 > >> > dev.ixl.0.pf.que23.irqs: 7313 > >> > dev.ixl.0.pf.que23.dropped: 0 > >> > dev.ixl.0.pf.que23.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que22.rx_bytes: 0 > >> > dev.ixl.0.pf.que22.rx_packets: 0 > >> > dev.ixl.0.pf.que22.tx_bytes: 1908468 > >> > dev.ixl.0.pf.que22.tx_packets: 6626 > >> > dev.ixl.0.pf.que22.no_desc_avail: 0 > >> > dev.ixl.0.pf.que22.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que22.tso_tx: 0 
> >> > dev.ixl.0.pf.que22.irqs: 6627 > >> > dev.ixl.0.pf.que22.dropped: 0 > >> > dev.ixl.0.pf.que22.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que21.rx_bytes: 0 > >> > dev.ixl.0.pf.que21.rx_packets: 0 > >> > dev.ixl.0.pf.que21.tx_bytes: 2092668 > >> > dev.ixl.0.pf.que21.tx_packets: 6739 > >> > dev.ixl.0.pf.que21.no_desc_avail: 0 > >> > dev.ixl.0.pf.que21.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que21.tso_tx: 0 > >> > dev.ixl.0.pf.que21.irqs: 6728 > >> > dev.ixl.0.pf.que21.dropped: 0 > >> > dev.ixl.0.pf.que21.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que20.rx_bytes: 0 > >> > dev.ixl.0.pf.que20.rx_packets: 0 > >> > dev.ixl.0.pf.que20.tx_bytes: 1742176 > >> > dev.ixl.0.pf.que20.tx_packets: 6246 > >> > dev.ixl.0.pf.que20.no_desc_avail: 0 > >> > dev.ixl.0.pf.que20.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que20.tso_tx: 0 > >> > dev.ixl.0.pf.que20.irqs: 6249 > >> > dev.ixl.0.pf.que20.dropped: 0 > >> > dev.ixl.0.pf.que20.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que19.rx_bytes: 0 > >> > dev.ixl.0.pf.que19.rx_packets: 0 > >> > dev.ixl.0.pf.que19.tx_bytes: 2102284 > >> > dev.ixl.0.pf.que19.tx_packets: 6979 > >> > dev.ixl.0.pf.que19.no_desc_avail: 0 > >> > dev.ixl.0.pf.que19.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que19.tso_tx: 0 > >> > dev.ixl.0.pf.que19.irqs: 6979 > >> > dev.ixl.0.pf.que19.dropped: 0 > >> > dev.ixl.0.pf.que19.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que18.rx_bytes: 0 > >> > dev.ixl.0.pf.que18.rx_packets: 0 > >> > dev.ixl.0.pf.que18.tx_bytes: 1532360 > >> > dev.ixl.0.pf.que18.tx_packets: 5588 > >> > dev.ixl.0.pf.que18.no_desc_avail: 0 > >> > dev.ixl.0.pf.que18.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que18.tso_tx: 0 > >> > dev.ixl.0.pf.que18.irqs: 5588 > >> > dev.ixl.0.pf.que18.dropped: 0 > >> > dev.ixl.0.pf.que18.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que17.rx_bytes: 0 > >> > dev.ixl.0.pf.que17.rx_packets: 0 > >> > dev.ixl.0.pf.que17.tx_bytes: 1809684 > >> > dev.ixl.0.pf.que17.tx_packets: 6136 > >> > dev.ixl.0.pf.que17.no_desc_avail: 0 > >> > 
dev.ixl.0.pf.que17.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que17.tso_tx: 0 > >> > dev.ixl.0.pf.que17.irqs: 6136 > >> > dev.ixl.0.pf.que17.dropped: 0 > >> > dev.ixl.0.pf.que17.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que16.rx_bytes: 0 > >> > dev.ixl.0.pf.que16.rx_packets: 0 > >> > dev.ixl.0.pf.que16.tx_bytes: 286836299105 > >> > dev.ixl.0.pf.que16.tx_packets: 263532601 > >> > dev.ixl.0.pf.que16.no_desc_avail: 0 > >> > dev.ixl.0.pf.que16.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que16.tso_tx: 0 > >> > dev.ixl.0.pf.que16.irqs: 83232941 > >> > dev.ixl.0.pf.que16.dropped: 0 > >> > dev.ixl.0.pf.que16.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que15.rx_bytes: 106345323488 > >> > dev.ixl.0.pf.que15.rx_packets: 208869912 > >> > dev.ixl.0.pf.que15.tx_bytes: 298825179301 > >> > dev.ixl.0.pf.que15.tx_packets: 288517504 > >> > dev.ixl.0.pf.que15.no_desc_avail: 0 > >> > dev.ixl.0.pf.que15.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que15.tso_tx: 0 > >> > dev.ixl.0.pf.que15.irqs: 223322408 > >> > dev.ixl.0.pf.que15.dropped: 0 > >> > dev.ixl.0.pf.que15.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que14.rx_bytes: 106721900547 > >> > dev.ixl.0.pf.que14.rx_packets: 208566121 > >> > dev.ixl.0.pf.que14.tx_bytes: 288657751920 > >> > dev.ixl.0.pf.que14.tx_packets: 263556000 > >> > dev.ixl.0.pf.que14.no_desc_avail: 0 > >> > dev.ixl.0.pf.que14.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que14.tso_tx: 0 > >> > dev.ixl.0.pf.que14.irqs: 220377537 > >> > dev.ixl.0.pf.que14.dropped: 0 > >> > dev.ixl.0.pf.que14.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que13.rx_bytes: 111978971378 > >> > dev.ixl.0.pf.que13.rx_packets: 218447354 > >> > dev.ixl.0.pf.que13.tx_bytes: 298439860675 > >> > dev.ixl.0.pf.que13.tx_packets: 276806617 > >> > dev.ixl.0.pf.que13.no_desc_avail: 0 > >> > dev.ixl.0.pf.que13.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que13.tso_tx: 0 > >> > dev.ixl.0.pf.que13.irqs: 227474625 > >> > dev.ixl.0.pf.que13.dropped: 0 > >> > dev.ixl.0.pf.que13.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que12.rx_bytes: 112969704706 > >> > 
dev.ixl.0.pf.que12.rx_packets: 220275562 > >> > dev.ixl.0.pf.que12.tx_bytes: 304750620079 > >> > dev.ixl.0.pf.que12.tx_packets: 272244483 > >> > dev.ixl.0.pf.que12.no_desc_avail: 0 > >> > dev.ixl.0.pf.que12.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que12.tso_tx: 183 > >> > dev.ixl.0.pf.que12.irqs: 230111291 > >> > dev.ixl.0.pf.que12.dropped: 0 > >> > dev.ixl.0.pf.que12.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que11.rx_bytes: 96405343036 > >> > dev.ixl.0.pf.que11.rx_packets: 202329448 > >> > dev.ixl.0.pf.que11.tx_bytes: 302481707696 > >> > dev.ixl.0.pf.que11.tx_packets: 271689246 > >> > dev.ixl.0.pf.que11.no_desc_avail: 0 > >> > dev.ixl.0.pf.que11.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que11.tso_tx: 0 > >> > dev.ixl.0.pf.que11.irqs: 220717612 > >> > dev.ixl.0.pf.que11.dropped: 0 > >> > dev.ixl.0.pf.que11.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que10.rx_bytes: 111280008670 > >> > dev.ixl.0.pf.que10.rx_packets: 214900261 > >> > dev.ixl.0.pf.que10.tx_bytes: 318638566198 > >> > dev.ixl.0.pf.que10.tx_packets: 295011389 > >> > dev.ixl.0.pf.que10.no_desc_avail: 0 > >> > dev.ixl.0.pf.que10.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que10.tso_tx: 0 > >> > dev.ixl.0.pf.que10.irqs: 230681709 > >> > dev.ixl.0.pf.que10.dropped: 0 > >> > dev.ixl.0.pf.que10.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que9.rx_bytes: 93566025126 > >> > dev.ixl.0.pf.que9.rx_packets: 198726483 > >> > dev.ixl.0.pf.que9.tx_bytes: 288858818348 > >> > dev.ixl.0.pf.que9.tx_packets: 258926864 > >> > dev.ixl.0.pf.que9.no_desc_avail: 0 > >> > dev.ixl.0.pf.que9.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que9.tso_tx: 0 > >> > dev.ixl.0.pf.que9.irqs: 217918160 > >> > dev.ixl.0.pf.que9.dropped: 0 > >> > dev.ixl.0.pf.que9.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que8.rx_bytes: 117169019041 > >> > dev.ixl.0.pf.que8.rx_packets: 226938172 > >> > dev.ixl.0.pf.que8.tx_bytes: 665794492752 > >> > dev.ixl.0.pf.que8.tx_packets: 593519436 > >> > dev.ixl.0.pf.que8.no_desc_avail: 0 > >> > dev.ixl.0.pf.que8.tx_dma_setup: 0 > >> > 
dev.ixl.0.pf.que8.tso_tx: 0 > >> > dev.ixl.0.pf.que8.irqs: 244643578 > >> > dev.ixl.0.pf.que8.dropped: 0 > >> > dev.ixl.0.pf.que8.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que7.rx_bytes: 206974266022 > >> > dev.ixl.0.pf.que7.rx_packets: 449899895 > >> > dev.ixl.0.pf.que7.tx_bytes: 638527685820 > >> > dev.ixl.0.pf.que7.tx_packets: 580750916 > >> > dev.ixl.0.pf.que7.no_desc_avail: 0 > >> > dev.ixl.0.pf.que7.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que7.tso_tx: 0 > >> > dev.ixl.0.pf.que7.irqs: 391760959 > >> > dev.ixl.0.pf.que7.dropped: 0 > >> > dev.ixl.0.pf.que7.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que6.rx_bytes: 204373984670 > >> > dev.ixl.0.pf.que6.rx_packets: 449990985 > >> > dev.ixl.0.pf.que6.tx_bytes: 655511068125 > >> > dev.ixl.0.pf.que6.tx_packets: 600735086 > >> > dev.ixl.0.pf.que6.no_desc_avail: 0 > >> > dev.ixl.0.pf.que6.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que6.tso_tx: 0 > >> > dev.ixl.0.pf.que6.irqs: 394961024 > >> > dev.ixl.0.pf.que6.dropped: 0 > >> > dev.ixl.0.pf.que6.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que5.rx_bytes: 222919535872 > >> > dev.ixl.0.pf.que5.rx_packets: 466659705 > >> > dev.ixl.0.pf.que5.tx_bytes: 647689764751 > >> > dev.ixl.0.pf.que5.tx_packets: 582532691 > >> > dev.ixl.0.pf.que5.no_desc_avail: 0 > >> > dev.ixl.0.pf.que5.tx_dma_setup: 0 > >> > dev.ixl.0.pf.que5.tso_tx: 5 > >> > dev.ixl.0.pf.que5.irqs: 404552229 > >> > dev.ixl.0.pf.que5.dropped: 0 > >> > dev.ixl.0.pf.que5.mbuf_defrag_failed: 0 > >> > dev.ixl.0.pf.que4.rx_bytes: 231706806551 > >> > From owner-freebsd-net@freebsd.org Wed Aug 19 20:57:30 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 43C499BE035; Wed, 19 Aug 2015 20:57:30 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id DB090646; Wed, 19 Aug 
2015 20:57:29 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Date: Wed, 19 Aug 2015 16:57:27 -0400 (EDT) From: Rick Macklem 
To: Daniel Braniss Cc: Hans Petter Selasky, pyunyh@gmail.com, FreeBSD stable, FreeBSD Net, Slawa Olhovchenkov, Christopher Forgeron Message-ID: <796827231.26478408.1440017847125.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <2BF7FA92-2DDD-452C-822C-534C0DC0B49F@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> <2BF7FA92-2DDD-452C-822C-534C0DC0B49F@cs.huji.ac.il> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance

Daniel Braniss wrote: > > > On 19 Aug 2015, at 16:00, Rick Macklem wrote: > > > > Hans Petter Selasky wrote: > >> On 08/19/15 09:42, Yonghyeon PYUN wrote: > >>> On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > >>>> On 08/18/15 23:54, Rick Macklem wrote: > >>>>> Ouch! Yes, I now see that the code that counts the # of mbufs is before > >>>>> the > >>>>> code that adds the tcp/ip header mbuf. > >>>>> > >>>>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to > >>>>> whatever > >>>>> the driver provides - 1. 
It is not the driver's responsibility to know > >>>>> if > >>>>> a tcp/ip > >>>>> header mbuf will be added and is a lot less confusing than expecting > >>>>> the > >>>>> driver > >>>>> author to know to subtract one. (I had mistakenly thought that > >>>>> tcp_output() had > >>>>> added the tcp/ip header mbuf before the loop that counts mbufs in the > >>>>> list. > >>>>> Btw, > >>>>> this tcp/ip header mbuf also has leading space for the MAC layer > >>>>> header.) > >>>>> > >>>> > >>>> Hi Rick, > >>>> > >>>> Your question is good. With the Mellanox hardware we have separate > >>>> so-called inline data space for the TCP/IP headers, so if the TCP stack > >>>> subtracts something, then we would need to add something to the limit, > >>>> because then the scatter gather list is only used for the data part. > >>>> > >>> > >>> I think all drivers in tree don't subtract 1 for > >>> if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > >>> simpler than fixing all other drivers in tree. > >>> > >>>> Maybe it can be controlled by some kind of flag, if all the three TSO > >>>> limits should include the TCP/IP/ethernet headers too. I'm pretty sure > >>>> we want both versions. > >>>> > >>> > >>> Hmm, I'm afraid it's already complex. Drivers have to tell almost > >>> the same information to both bus_dma(9) and network stack. > >> > >> Don't forget that not all drivers in the tree set the TSO limits before > >> if_attach(), so possibly the subtraction of one TSO fragment needs to go > >> into ip_output() .... > >> > > Ok, I realized that some drivers may not know the answers before > > ether_ifattach(), > > due to the way they are configured/written (I saw the use of > > if_hw_tsomax_update() > > in the patch). > > > > If it is subtracted as a part of the assignment to if_hw_tsomaxsegcount in > > tcp_output() > > at line #791 like the following, I don't think it should > > matter if the > > values are set before ether_ifattach()? 
> > /* > > * Subtract 1 for the tcp/ip header mbuf that > > * will be prepended to the mbuf chain in this > > * function in the code below this block. > > */ > > if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1; > > Well, you can replace the line in sys/netinet/tcp_output.c that looks like: if_hw_tsomaxsegcount = tp->t_tsomaxsegcount; with the above line (at line #797 in head). Any other patch for this will have the same effect, rick > > I don't have a good solution for the case where a driver doesn't plan on > > using the > > tcp/ip header provided by tcp_output() except to say the driver can add one > > to the > > setting to compensate for that (and if they fail to do so, it still works, > > although > > somewhat suboptimally). When I now read the comment in sys/net/if_var.h it > > is clear > > what it means, but for some reason I didn't read it that way before? (I > > think it was > > the part that said the driver didn't have to subtract for the headers that > > confused me?) > > In any case, we need to try and come up with a clear definition of what > > they need to > > be set to. > > > > I can now think of two ways to deal with this: > > 1 - Leave tcp_output() as is, but provide a macro for the device driver > > authors to use > > that sets if_hw_tsomaxsegcount with a flag for "driver uses tcp/ip > > header mbuf", > > documenting that this flag should normally be true. > > OR > > 2 - Change tcp_output() as above, noting that this is a workaround for > > confusion w.r.t. > > whether or not if_hw_tsomaxsegcount should include the tcp/ip header > > mbuf and > > update the comment in if_var.h to reflect this. Then drivers that don't > > use the > > tcp/ip header mbuf can increase their value for if_hw_tsomaxsegcount by > > 1. > > (The comment should also mention that a value of 35 or greater is much > > preferred to > > 32 if the hardware will support that.) > > > > Also, I'd like to apologize for some of my emails getting a little "blunt". 
> > I just find > > it flustrating that this problem is still showing up and is even in 10.2. > > This is partly > > my fault for not making it clearer to driver authors what > > if_hw_tsomaxsegcount should be > > set to, because I had it incorrect. > > > > Hopefully we can come up with a solution that everyone is comfortable with, > > rick > > > ok guys, > when you have some code for me to try just let me know. > > danny > > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Wed Aug 19 21:06:42 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 318469BE2D5 for ; Wed, 19 Aug 2015 21:06:42 +0000 (UTC) (envelope-from john@jnielsen.net) Received: from webmail2.jnielsen.net (webmail2.jnielsen.net [50.114.224.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "webmail2.jnielsen.net", Issuer "freebsdsolutions.net" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 15D1EEB8; Wed, 19 Aug 2015 21:06:41 +0000 (UTC) (envelope-from john@jnielsen.net) Received: from [10.10.1.196] ([199.58.199.60]) (authenticated bits=0) by webmail2.jnielsen.net (8.15.1/8.15.1) with ESMTPSA id t7JL6XcK033832 (version=TLSv1 cipher=ECDHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 19 Aug 2015 15:06:35 -0600 (MDT) (envelope-from john@jnielsen.net) X-Authentication-Warning: webmail2.jnielsen.net: Host [199.58.199.60] claimed to be [10.10.1.196] Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2102\)) Subject: Re: RFC7084 "Basic Requirements for IPv6 Customer Edge Routers" From: John Nielsen In-Reply-To: <20150817112408.GB13503@in-addr.com> Date: Wed, 19 Aug 2015 15:06:32 -0600 Cc: 
freebsd-net@freebsd.org Content-Transfer-Encoding: quoted-printable Message-Id: <2B3FA2D7-B0A9-4489-A65B-A6E8630DE62F@jnielsen.net> References: <20150817112408.GB13503@in-addr.com> To: Gary Palmer X-Mailer: Apple Mail (2.2102) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 19 Aug 2015 21:06:42 -0000 Since FreeBSD is a general-purpose operating system, a fresh install with default options will certainly not meet all the requirements. However, from a quick read of the RFC it looks like it would be straightforward to configure a FreeBSD box to meet them. For simple routing, the included rtadvd(8) may be adequate; however, I am unsure whether its default behavior would meet requirement G-4 from section 4.1. (If it doesn't, a small script that adds interfaces using rtadvctl once the appropriate WAN routes are available would suffice.) FreeBSD does include a RIP6 routing daemon (route6d); other routing protocols are supported by third-party programs such as quagga or openbgpd. FreeBSD supports SLAAC but requires third-party software for DHCPv6 client or server operation. Examples of such software include dhcp6, isc-dhcp43-client, and isc-dhcp43-server. Those are all the potential sticking points that stood out to me. On Aug 17, 2015, at 5:24 AM, Gary Palmer wrote: > > Hi, > > Does anyone know if FreeBSD 9.3 is compliant with RFC7034?
=20 >=20 > Thanks, >=20 > Gary > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" >=20 From owner-freebsd-net@freebsd.org Thu Aug 20 02:30:37 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 8FE929BD6AD; Thu, 20 Aug 2015 02:30:37 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: from mail-pd0-x22f.google.com (mail-pd0-x22f.google.com [IPv6:2607:f8b0:400e:c02::22f]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 5650E1D29; Thu, 20 Aug 2015 02:30:37 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: by pdbmi9 with SMTP id mi9so8247316pdb.3; Wed, 19 Aug 2015 19:30:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:date:to:cc:subject:message-id:reply-to:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=eQTimaEialIMMPKihgRhyaqLUInJ7JYcDtq5NJmch/I=; b=JRe0lBHMhWSn+1p95Bnf9eLM6MkepZq9Bqaiz42vniZTCxj4UXD3CsRnS6HxO5P4eA js3PXcW32KC0MX59qkX4T5nmuMnFXOLHrP1wpaqtJUqIKgZUJDArcg5u5Vj7bl3lroJY AkTNO6Ledl6HE8OkwLdbz6HIZ1tCBPsQM+rZKhtPhY0broHi3Ckrcwi2DZfzU3EZHR8I c8c7K8o7DLxr9mZwfjmHlao5mOVJIIEXArzZ+SKWR/GTPZMzV8mVsHf9Lj+xrPamcwjx Ize2seGECskFd5LcGGo7gqJchVN8Vfpg+qDVUA3yXy0WR3R1kUq+hQJ4wpt/ETh9svbh Tiqg== X-Received: by 10.70.103.74 with SMTP id fu10mr1578826pdb.11.1440037836738; Wed, 19 Aug 2015 19:30:36 -0700 (PDT) Received: from pyunyh@gmail.com ([106.247.248.2]) by smtp.gmail.com with ESMTPSA id em1sm2330892pbd.42.2015.08.19.19.30.31 (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 19 Aug 2015 19:30:35 -0700 (PDT) From: 
Yonghyeon PYUN X-Google-Original-From: "Yonghyeon PYUN" Received: by pyunyh@gmail.com (sSMTP sendmail emulation); Thu, 20 Aug 2015 11:30:24 +0900 Date: Thu, 20 Aug 2015 11:30:24 +0900 To: Rick Macklem Cc: Hans Petter Selasky , FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron , Daniel Braniss , Gleb Smirnoff Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150820023024.GB996@michelle.fasterthan.com> Reply-To: pyunyh@gmail.com References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> User-Agent: Mutt/1.4.2.3i X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 20 Aug 2015 02:30:37 -0000 On Wed, Aug 19, 2015 at 09:00:35AM -0400, Rick Macklem wrote: > Hans Petter Selasky wrote: > > On 08/19/15 09:42, Yonghyeon PYUN wrote: > > > On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > > >> On 08/18/15 23:54, Rick Macklem wrote: > > >>> Ouch! Yes, I now see that the code that counts the # of mbufs is before > > >>> the > > >>> code that adds the tcp/ip header mbuf. > > >>> > > >>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to > > >>> whatever > > >>> the driver provides - 1. 
It is not the driver's responsibility to know if > > >>> a tcp/ip > > >>> header mbuf will be added and is a lot less confusing that expecting the > > >>> driver > > >>> author to know to subtract one. (I had mistakenly thought that > > >>> tcp_output() had > > >>> added the tc/ip header mbuf before the loop that counts mbufs in the > > >>> list. > > >>> Btw, > > >>> this tcp/ip header mbuf also has leading space for the MAC layer header.) > > >>> > > >> > > >> Hi Rick, > > >> > > >> Your question is good. With the Mellanox hardware we have separate > > >> so-called inline data space for the TCP/IP headers, so if the TCP stack > > >> subtracts something, then we would need to add something to the limit, > > >> because then the scatter gather list is only used for the data part. > > >> > > > > > > I think all drivers in tree don't subtract 1 for > > > if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > > > simpler than fixing all other drivers in tree. > > > > > >> Maybe it can be controlled by some kind of flag, if all the three TSO > > >> limits should include the TCP/IP/ethernet headers too. I'm pretty sure > > >> we want both versions. > > >> > > > > > > Hmm, I'm afraid it's already complex. Drivers have to tell almost > > > the same information to both bus_dma(9) and network stack. > > > > Don't forget that not all drivers in the tree set the TSO limits before > > if_attach(), so possibly the subtraction of one TSO fragment needs to go > > into ip_output() .... > > > Ok, I realized that some drivers may not know the answers before ether_ifattach(), > due to the way they are configured/written (I saw the use of if_hw_tsomax_update() > in the patch). I was not able to find an interface that configures TSO parameters after if_t conversion. I'm under the impression if_hw_tsomax_update() is not designed to use this way. Probably we need a better one?(CCed to Gleb). 
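[Editor's note: the "tighten-only" merge behavior that an update helper such as if_hw_tsomax_update() would need for stacked interfaces can be sketched as below. The struct and function names are illustrative, not the kernel types; the assumption is only that each limit may be lowered, never raised, so e.g. a lagg(4) ends up with the most restrictive limits of its members.]

```c
#include <assert.h>

/*
 * Illustrative mirror of the three if_hw_tso* limits; NOT the
 * kernel structure.
 */
struct tso_limits {
	unsigned tsomaxbytes;     /* max TSO burst length in bytes */
	unsigned tsomaxsegcount;  /* max number of mbufs in a chain */
	unsigned tsomaxsegsize;   /* max size of a single segment */
};

/*
 * Merge new limits into the current ones, only ever tightening.
 * A value of 0 means "not set".  Returns nonzero if anything changed.
 */
static int
tso_limits_update(struct tso_limits *cur, const struct tso_limits *upd)
{
	int changed = 0;

	if (upd->tsomaxbytes != 0 && (cur->tsomaxbytes == 0 ||
	    upd->tsomaxbytes < cur->tsomaxbytes)) {
		cur->tsomaxbytes = upd->tsomaxbytes;
		changed = 1;
	}
	if (upd->tsomaxsegcount != 0 && (cur->tsomaxsegcount == 0 ||
	    upd->tsomaxsegcount < cur->tsomaxsegcount)) {
		cur->tsomaxsegcount = upd->tsomaxsegcount;
		changed = 1;
	}
	if (upd->tsomaxsegsize != 0 && (cur->tsomaxsegsize == 0 ||
	    upd->tsomaxsegsize < cur->tsomaxsegsize)) {
		cur->tsomaxsegsize = upd->tsomaxsegsize;
		changed = 1;
	}
	return (changed);
}
```

A driver that learns its limits only after attach would call such a helper once; a virtual interface would call it once per member, keeping the minimum of each field.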
> > If it is subtracted as a part of the assignment to if_hw_tsomaxsegcount in tcp_output() > at line#791 in tcp_output() like the following, I don't think it should matter if the > values are set before ether_ifattach()? > /* > * Subtract 1 for the tcp/ip header mbuf that > * will be prepended to the mbuf chain in this > * function in the code below this block. > */ > if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1; > > I don't have a good solution for the case where a driver doesn't plan on using the > tcp/ip header provided by tcp_output() except to say the driver can add one to the > setting to compensate for that (and if they fail to do so, it still works, although > somewhat suboptimally). When I now read the comment in sys/net/if_var.h it is clear > what it means, but for some reason I didn't read it that way before? (I think it was > the part that said the driver didn't have to subtract for the headers that confused me?) > In any case, we need to try and come up with a clear definition of what they need to > be set to. > > I can now think of two ways to deal with this: > 1 - Leave tcp_output() as is, but provide a macro for the device driver authors to use > that sets if_hw_tsomaxsegcount with a flag for "driver uses tcp/ip header mbuf", > documenting that this flag should normally be true. > OR > 2 - Change tcp_output() as above, noting that this is a workaround for confusion w.r.t. > whether or not if_hw_tsomaxsegcount should include the tcp/ip header mbuf and > update the comment in if_var.h to reflect this. Then drivers that don't use the > tcp/ip header mbuf can increase their value for if_hw_tsomaxsegcount by 1. > (The comment should also mention that a value of 35 or greater is much preferred to > 32 if the hardware will support that.) > Both works for me. My preference is 2 just because it's very common for most drivers that use tcp/ip header mbuf. 
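[Editor's note: option 2 above, the stack-side subtraction, reduces to the following one-line adjustment. Names are illustrative, not the exact tcp_output() variables; the point is that the driver reports the raw hardware scatter/gather limit and the stack reserves one slot for the TCP/IP header mbuf it prepends.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Number of mbufs available for payload data in a TSO chain, given
 * the hardware's total segment limit.  One slot is reserved for the
 * tcp/ip header mbuf that tcp_output() prepends after counting the
 * data mbufs; the guard avoids an underflow for a bogus limit of 0/1.
 */
static uint32_t
tso_data_segcount(uint32_t hw_segcount)
{
	return (hw_segcount > 1 ? hw_segcount - 1 : 1);
}
```

A driver that builds its own headers (e.g. Mellanox-style inline header space) would then add one back when setting its limit, as discussed above.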
From owner-freebsd-net@freebsd.org Thu Aug 20 04:51:39 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 4D6FB9BEDEE; Thu, 20 Aug 2015 04:51:39 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: from mail-pd0-x22d.google.com (mail-pd0-x22d.google.com [IPv6:2607:f8b0:400e:c02::22d]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 1AF47E22; Thu, 20 Aug 2015 04:51:39 +0000 (UTC) (envelope-from pyunyh@gmail.com) Received: by pdbmi9 with SMTP id mi9so9547853pdb.3; Wed, 19 Aug 2015 21:51:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:date:to:cc:subject:message-id:reply-to:references:mime-version :content-type:content-disposition:in-reply-to:user-agent; bh=IjJI9xKS/Ee/lQYdIJZfNB/0g0GYNXctkYnPkN380Kg=; b=f/DCgY01KHz4pNVqTRvx6LkT3cCpnOnx+vNiYHmtEZWPjjC20SHZCeuvP3GeJ1qi1N Qj4Y/zCILAujFnZfxwFs04LZco/Wgl1wywGNeGnjP3NjkviV9kKDmddqVF/hg72Hcd4G aJ8ZFz8wYr3YvgDu+Ui55zYCMjjr25/XHe9kYRNHcRfNtxS3Ehuzfsl2VJFDeewCSe4f 8xbZ1IzdARwpFubECeh3/SIYH5zIM9xQFpN2T7HRt3n0DDP/yhVN2ZplDBh8yOHzcVZj 0otr+GYB3UEZ4jIXGRWyYeMN9oiREkx2eKpZRnYK/an/yA+uFrhi3EoVHufnmnEXIEow Q3Ag== X-Received: by 10.70.90.98 with SMTP id bv2mr2684918pdb.36.1440046298588; Wed, 19 Aug 2015 21:51:38 -0700 (PDT) Received: from pyunyh@gmail.com ([106.247.248.2]) by smtp.gmail.com with ESMTPSA id xp10sm2637722pac.34.2015.08.19.21.51.33 (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 19 Aug 2015 21:51:37 -0700 (PDT) From: Yonghyeon PYUN X-Google-Original-From: "Yonghyeon PYUN" Received: by pyunyh@gmail.com (sSMTP sendmail emulation); Thu, 20 Aug 2015 13:51:25 +0900 Date: Thu, 20 Aug 2015 13:51:25 +0900 To: Rick Macklem Cc: Hans Petter Selasky , FreeBSD stable , FreeBSD Net , 
Slawa Olhovchenkov , Christopher Forgeron , Daniel Braniss Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150820045125.GA982@michelle.fasterthan.com> Reply-To: pyunyh@gmail.com References: <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43590.8050508@selasky.org> <20150819081308.GC964@michelle.fasterthan.com> <1154739904.25677089.1439986439408.JavaMail.zimbra@uoguelph.ca> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1154739904.25677089.1439986439408.JavaMail.zimbra@uoguelph.ca> User-Agent: Mutt/1.4.2.3i X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 20 Aug 2015 04:51:39 -0000 On Wed, Aug 19, 2015 at 08:13:59AM -0400, Rick Macklem wrote: > Yonghyeon PYUN wrote: > > On Wed, Aug 19, 2015 at 09:51:44AM +0200, Hans Petter Selasky wrote: > > > On 08/19/15 09:42, Yonghyeon PYUN wrote: > > > >On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > > > >>On 08/18/15 23:54, Rick Macklem wrote: > > > >>>Ouch! Yes, I now see that the code that counts the # of mbufs is before > > > >>>the > > > >>>code that adds the tcp/ip header mbuf. > > > >>> > > > >>>In my opinion, this should be fixed by setting if_hw_tsomaxsegcount to > > > >>>whatever > > > >>>the driver provides - 1. It is not the driver's responsibility to know > > > >>>if > > > >>>a tcp/ip > > > >>>header mbuf will be added and is a lot less confusing that expecting the > > > >>>driver > > > >>>author to know to subtract one. 
(I had mistakenly thought that > > > >>>tcp_output() had > > > >>>added the tc/ip header mbuf before the loop that counts mbufs in the > > > >>>list. > > > >>>Btw, > > > >>>this tcp/ip header mbuf also has leading space for the MAC layer > > > >>>header.) > > > >>> > > > >> > > > >>Hi Rick, > > > >> > > > >>Your question is good. With the Mellanox hardware we have separate > > > >>so-called inline data space for the TCP/IP headers, so if the TCP stack > > > >>subtracts something, then we would need to add something to the limit, > > > >>because then the scatter gather list is only used for the data part. > > > >> > > > > > > > >I think all drivers in tree don't subtract 1 for > > > >if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > > > >simpler than fixing all other drivers in tree. > > > > > > Hi, > > > > > > If you change the behaviour don't forget to update and/or add comments > > > describing it. Maybe the amount of subtraction could be defined by some > > > macro? Then drivers which inline the headers can subtract it? > > > > > > > I'm also ok with your suggestion. > > > > > Your suggestion is fine by me. > > > > > > > > The initial TSO limits were tried to be preserved, and I believe that > > > TSO limits never accounted for IP/TCP/ETHERNET/VLAN headers! > > > > > > > I guess FreeBSD used to follow MS LSOv1 specification with minor > > exception in pseudo checksum computation. If I recall correctly the > > specification says upper stack can generate up to IP_MAXPACKET sized > > packet. Other L2 headers like ethernet/vlan header size is not > > included in the packet and it's drivers responsibility to allocate > > additional DMA buffers/segments for L2 headers. > > > Yep. The default for if_hw_tsomax was reduced from IP_MAXPACKET to > 32 * MCLBYTES - max_ethernet_header_size as a workaround/hack so that > devices limited to 32 transmit segments would work (ie. the entire packet, > including MAC header would fit in 32 MCLBYTE clusters). 
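[Editor's note: the workaround described above, the if_hw_tsomax default of 32 * MCLBYTES minus the maximum Ethernet header, works out as below. The constants assume the common configuration of MCLBYTES = 2048 and a 14 + 4 byte Ethernet + VLAN header; sys/param.h and net/ethernet.h are authoritative.]

```c
#include <assert.h>

#define MCLBYTES_ASSUMED	2048	/* assumed mbuf cluster size */
#define ETH_VLAN_HDR_ASSUMED	18	/* 14-byte Ethernet + 4-byte VLAN tag */
#define IP_MAXPACKET_VAL	65535	/* maximum IP packet length */

/*
 * Default TSO burst limit: the whole packet, MAC header included,
 * must fit in 32 MCLBYTE clusters, and must never exceed what the
 * 16-bit IP length field can carry.
 */
static unsigned
default_hw_tsomax(void)
{
	unsigned limit = 32 * MCLBYTES_ASSUMED - ETH_VLAN_HDR_ASSUMED;

	return (limit < IP_MAXPACKET_VAL ? limit : IP_MAXPACKET_VAL);
}
```

With these assumed constants the default comes to 65518 bytes, i.e. slightly below IP_MAXPACKET, which is why devices limited to 32 transmit segments could cope after m_defrag().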
> This implied that many drivers did end up using m_defrag() to copy the mbuf > list to one made up of 32 MCLBYTE clusters. > > If a driver sets if_hw_tsomaxsegcount correctly, then it can set if_hw_tsomax > to whatever it can handle as the largest TSO packet (without MAC header) the > hardware can handle. If it can handle > IP_MAXPACKET, then it can set it to that. > I thought the upper limit was still IP_MAXPACKET. If driver increase it (i.e. > IP_MAXPACKET, the length field in the IP header would overflow which in turn may break firewalls and other packet handling in IPv4/IPv6 code path. If the limit no longer apply to network stack, that's great. Some controllers can handle up to 256KB TCP/UDP segmentation and supporting that feature wouldn't be hard. From owner-freebsd-net@freebsd.org Thu Aug 20 08:10:34 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 8E3029BE82F; Thu, 20 Aug 2015 08:10:34 +0000 (UTC) (envelope-from glebius@FreeBSD.org) Received: from cell.glebius.int.ru (glebius.int.ru [81.19.69.10]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "cell.glebius.int.ru", Issuer "cell.glebius.int.ru" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 1386D1832; Thu, 20 Aug 2015 08:10:33 +0000 (UTC) (envelope-from glebius@FreeBSD.org) Received: from cell.glebius.int.ru (localhost [127.0.0.1]) by cell.glebius.int.ru (8.15.2/8.15.2) with ESMTPS id t7K8ADUE040211 (version=TLSv1.2 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Thu, 20 Aug 2015 11:10:13 +0300 (MSK) (envelope-from glebius@FreeBSD.org) Received: (from glebius@localhost) by cell.glebius.int.ru (8.15.2/8.15.2/Submit) id t7K8ACxi040210; Thu, 20 Aug 2015 11:10:12 +0300 (MSK) (envelope-from glebius@FreeBSD.org) X-Authentication-Warning: cell.glebius.int.ru: glebius set sender to glebius@FreeBSD.org using -f 
Date: Thu, 20 Aug 2015 11:10:12 +0300 From: Gleb Smirnoff To: Yonghyeon PYUN Cc: Rick Macklem , Hans Petter Selasky , FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron , Daniel Braniss Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance Message-ID: <20150820081012.GY75813@glebius.int.ru> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <9D8B0503-E8FA-43CA-88F0-01F184F84D9B@cs.huji.ac.il> <1721122651.24481798.1439902381663.JavaMail.zimbra@uoguelph.ca> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> <20150820023024.GB996@michelle.fasterthan.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20150820023024.GB996@michelle.fasterthan.com> User-Agent: Mutt/1.5.23 (2014-03-12) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 20 Aug 2015 08:10:34 -0000 Yonghyeon, On Thu, Aug 20, 2015 at 11:30:24AM +0900, Yonghyeon PYUN wrote: Y> > > >> Maybe it can be controlled by some kind of flag, if all the three TSO Y> > > >> limits should include the TCP/IP/ethernet headers too. I'm pretty sure Y> > > >> we want both versions. Y> > > >> Y> > > > Y> > > > Hmm, I'm afraid it's already complex. Drivers have to tell almost Y> > > > the same information to both bus_dma(9) and network stack. Y> > > Y> > > Don't forget that not all drivers in the tree set the TSO limits before Y> > > if_attach(), so possibly the subtraction of one TSO fragment needs to go Y> > > into ip_output() .... 
Y> > > Y> > Ok, I realized that some drivers may not know the answers before ether_ifattach(), Y> > due to the way they are configured/written (I saw the use of if_hw_tsomax_update() Y> > in the patch). Y> Y> I was not able to find an interface that configures TSO parameters Y> after if_t conversion. I'm under the impression Y> if_hw_tsomax_update() is not designed to use this way. Probably we Y> need a better one?(CCed to Gleb). Yes. In the projects/ifnet all the TSO stuff is configured differently. I'd really appreciate if other developers look there and review it, try it, give some input. Here is a snippet from net/if.h in projects/ifnet: /* * Structure describing TSO properties of an interface. Known both to ifnet * layer and TCP. Most interfaces point to a static tsomax in ifdriver * definition. However, vlan(4) and lagg(4) require a dynamic tsomax. */ struct iftsomax { uint32_t tsomax_bytes; /* TSO total burst length limit in bytes */ uint32_t tsomax_segcount; /* TSO maximum segment count */ uint32_t tsomax_segsize; /* TSO maximum segment size in bytes */ }; Now closer to your original question. I haven't yet converted lagg(4), so haven't yet worked on if_hw_tsomax_update(). I am convinced that it shouldn't be needed for a regular driver (save lagg(4). A proper driver should first study its hardware and only then call if_attach(). Correct me if am wrong, please. Also, I suppose, that a piece of hardware can't change its TSO maximums at runtime, so I don't see reason for changing them at runtime (save lagg(4)). -- Totus tuus, Glebius. 
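[Editor's note: a driver under the projects/ifnet scheme Gleb quotes would point at a static instance of the iftsomax structure. The sketch below uses the layout quoted above; the numeric limits are invented for a hypothetical NIC with 35 scatter/gather entries and are not taken from any real driver.]

```c
#include <assert.h>
#include <stdint.h>

/* Layout as quoted from net/if.h in projects/ifnet. */
struct iftsomax {
	uint32_t tsomax_bytes;    /* TSO total burst length limit in bytes */
	uint32_t tsomax_segcount; /* TSO maximum segment count */
	uint32_t tsomax_segsize;  /* TSO maximum segment size in bytes */
};

/* Hypothetical static limits a driver's ifdriver definition could reference. */
static const struct iftsomax example_tsomax = {
	.tsomax_bytes = 65535,
	.tsomax_segcount = 35,	/* >= 35 preferred over 32, per the thread */
	.tsomax_segsize = 2048,
};
```

Because the structure is static and shared between the ifnet layer and TCP, only interfaces with genuinely dynamic limits (vlan(4), lagg(4)) would need a mutable copy, which matches Gleb's argument that runtime updates are the exception.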
From owner-freebsd-net@freebsd.org Thu Aug 20 12:03:49 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id C3F769BED08; Thu, 20 Aug 2015 12:03:49 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-annu.net.uoguelph.ca (esa-annu.mail.uoguelph.ca [131.104.91.36]) by mx1.freebsd.org (Postfix) with ESMTP id 4C89C159; Thu, 20 Aug 2015 12:03:48 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:c4mtjRVinxux5MmOj2cbwepSweHV8LGtZVwlr6E/grcLSJyIuqrYZhGAt8tkgFKBZ4jH8fUM07OQ6PC7HzBQqsvZ+Fk5M7VyFDY9wf0MmAIhBMPXQWbaF9XNKxIAIcJZSVV+9Gu6O0UGUOz3ZlnVv2HgpWVKQka3CwN5K6zPF5LIiIzvjqbpq8aVP1UD2WL1SIgxBSv1hD2ZjtMRj4pmJ/R54TryiVwMRd5rw3h1L0mYhRf265T41pdi9yNNp6BprJYYAu3SNp41Rr1ADTkgL3t9pIiy7UGCHkOz4S4EQ3gQgxpgDA3M7RW8VZD04QXgse8o4iiRPoXTRLs3XTmnp/NxTRbjiyMKMhYk927Kh8hojORQqUTy9FRE34fIbdTNZ7JFdaTHcIZfHDIZUw== X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2BYAgBLwdVV/61jaINdg29pBoMfui0BCYFtCoUxSgKBaBQBAQEBAQEBAYEJgh2CBgEBAQMBAQEBICsgCwULAgEIGAICDRkCAicBCSYCDAcEARwEiAUIDbkclgQBAQEBAQEBAwEBAQEBGQSBIooxhDEBBgEBHDQHgmmBQwWVKIUFhQiELJA/hEiDZwImgg4cgW8iMwd+AQgXI4EEAQEB X-IronPort-AV: E=Sophos;i="5.15,714,1432612800"; d="scan'208";a="233568830" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-annu.net.uoguelph.ca with ESMTP; 20 Aug 2015 08:03:41 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 039F715F55D; Thu, 20 Aug 2015 08:03:41 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id ydQ8_x38jF7C; Thu, 20 Aug 2015 08:03:40 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 2012815F563; Thu, 20 Aug 2015 08:03:40 -0400 (EDT) X-Virus-Scanned: 
amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id QiqjKPhbu5_J; Thu, 20 Aug 2015 08:03:39 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id EA2C315F55D; Thu, 20 Aug 2015 08:03:39 -0400 (EDT) Date: Thu, 20 Aug 2015 08:03:39 -0400 (EDT) From: Rick Macklem To: pyunyh@gmail.com Cc: Hans Petter Selasky , FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Christopher Forgeron Message-ID: <1935256446.26896702.1440072219573.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <20150820045125.GA982@michelle.fasterthan.com> References: <473274181.23263108.1439814072514.JavaMail.zimbra@uoguelph.ca> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43590.8050508@selasky.org> <20150819081308.GC964@michelle.fasterthan.com> <1154739904.25677089.1439986439408.JavaMail.zimbra@uoguelph.ca> <20150820045125.GA982@michelle.fasterthan.com> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: rwJq7qWbysh2G9XrT4qz/e9qa1lX4w== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 20 Aug 2015 12:03:50 -0000 Yonghyeon PYUN wrote: > On Wed, Aug 19, 2015 at 08:13:59AM -0400, Rick Macklem wrote: > > Yonghyeon PYUN wrote: > > > On Wed, Aug 19, 2015 at 09:51:44AM +0200, Hans Petter Selasky wrote: > > > > On 08/19/15 09:42, Yonghyeon PYUN wrote: > > > > >On Wed, 
Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > > > > >>On 08/18/15 23:54, Rick Macklem wrote: > > > > >>>Ouch! Yes, I now see that the code that counts the # of mbufs is > > > > >>>before > > > > >>>the > > > > >>>code that adds the tcp/ip header mbuf. > > > > >>> > > > > >>>In my opinion, this should be fixed by setting if_hw_tsomaxsegcount > > > > >>>to > > > > >>>whatever > > > > >>>the driver provides - 1. It is not the driver's responsibility to > > > > >>>know > > > > >>>if > > > > >>>a tcp/ip > > > > >>>header mbuf will be added and is a lot less confusing that expecting > > > > >>>the > > > > >>>driver > > > > >>>author to know to subtract one. (I had mistakenly thought that > > > > >>>tcp_output() had > > > > >>>added the tc/ip header mbuf before the loop that counts mbufs in the > > > > >>>list. > > > > >>>Btw, > > > > >>>this tcp/ip header mbuf also has leading space for the MAC layer > > > > >>>header.) > > > > >>> > > > > >> > > > > >>Hi Rick, > > > > >> > > > > >>Your question is good. With the Mellanox hardware we have separate > > > > >>so-called inline data space for the TCP/IP headers, so if the TCP > > > > >>stack > > > > >>subtracts something, then we would need to add something to the > > > > >>limit, > > > > >>because then the scatter gather list is only used for the data part. > > > > >> > > > > > > > > > >I think all drivers in tree don't subtract 1 for > > > > >if_hw_tsomaxsegcount. Probably touching Mellanox driver would be > > > > >simpler than fixing all other drivers in tree. > > > > > > > > Hi, > > > > > > > > If you change the behaviour don't forget to update and/or add comments > > > > describing it. Maybe the amount of subtraction could be defined by some > > > > macro? Then drivers which inline the headers can subtract it? > > > > > > > > > > I'm also ok with your suggestion. > > > > > > > Your suggestion is fine by me. 
> > > > > > > > > > > The initial TSO limits were tried to be preserved, and I believe that > > > > TSO limits never accounted for IP/TCP/ETHERNET/VLAN headers! > > > > > > > > > > I guess FreeBSD used to follow MS LSOv1 specification with minor > > > exception in pseudo checksum computation. If I recall correctly the > > > specification says upper stack can generate up to IP_MAXPACKET sized > > > packet. Other L2 headers like ethernet/vlan header size is not > > > included in the packet and it's drivers responsibility to allocate > > > additional DMA buffers/segments for L2 headers. > > > > > Yep. The default for if_hw_tsomax was reduced from IP_MAXPACKET to > > 32 * MCLBYTES - max_ethernet_header_size as a workaround/hack so that > > devices limited to 32 transmit segments would work (ie. the entire packet, > > including MAC header would fit in 32 MCLBYTE clusters). > > This implied that many drivers did end up using m_defrag() to copy the mbuf > > list to one made up of 32 MCLBYTE clusters. > > > > If a driver sets if_hw_tsomaxsegcount correctly, then it can set > > if_hw_tsomax > > to whatever it can handle as the largest TSO packet (without MAC header) > > the > > hardware can handle. If it can handle > IP_MAXPACKET, then it can set it to > > that. > > > > I thought the upper limit was still IP_MAXPACKET. If driver > increase it (i.e. > IP_MAXPACKET, the length field in the IP > header would overflow which in turn may break firewalls and other > packet handling in IPv4/IPv6 code path. I have no idea if a bogus value in the ip_len field of the TSO segment would break something in ip_output() or not. This would need to be checked before anyone configures if_hw_tsomax > IP_MAXPACKET. I didn't think of any effect this would have in ip_output(), I just knew that the hardware would be replacing ip_len when it generated the TCP/IP segments from the TSO segment. 
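[Editor's note: the overflow Pyun warns about is simple 16-bit truncation, shown below. The helper name is illustrative; the only assumption is that ip_len is a 16-bit field, so any TSO burst length above IP_MAXPACKET wraps modulo 65536 when written into the pseudo IP header.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * What a >64KB TSO burst length would look like once forced into the
 * 16-bit ip_len field: a silent modulo-65536 wrap.  A 256KB burst,
 * for instance, truncates to 0, which is what could confuse firewalls
 * and other code inspecting the pre-segmentation packet.
 */
static uint16_t
ip_len_field(uint32_t tso_burst_bytes)
{
	return ((uint16_t)tso_burst_bytes);
}
```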
As you note, I vaguely recall some hardware being able to handle a TSO segment > IP_MAXPACKET (presumably getting the TSO segment's length some other way). It would be nice if this was checked, but yes, the comment should specify an upper bound on if_hw_tsomax of IP_MAXPACKET until then. rick > If the limit no longer apply to network stack, that's great. Some > controllers can handle up to 256KB TCP/UDP segmentation and > supporting that feature wouldn't be hard. > _______________________________________________ > freebsd-stable@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-stable > To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Thu Aug 20 20:55:38 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0D5589BF78F for ; Thu, 20 Aug 2015 20:55:38 +0000 (UTC) (envelope-from feld@FreeBSD.org) Received: from out3-smtp.messagingengine.com (out3-smtp.messagingengine.com [66.111.4.27]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id DA1C1199F for ; Thu, 20 Aug 2015 20:55:37 +0000 (UTC) (envelope-from feld@FreeBSD.org) Received: from compute2.internal (compute2.nyi.internal [10.202.2.42]) by mailout.nyi.internal (Postfix) with ESMTP id 072452095E for ; Thu, 20 Aug 2015 16:55:37 -0400 (EDT) Received: from web3 ([10.202.2.213]) by compute2.internal (MEProxy); Thu, 20 Aug 2015 16:55:37 -0400 DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; d= messagingengine.com; h=content-transfer-encoding:content-type :date:from:in-reply-to:message-id:mime-version:references :subject:to:x-sasl-enc:x-sasl-enc; s=smtpout; bh=u2GezBLjSo6wFZv CL2dMu9+9HPQ=; b=Vl0gKl8zsKHKdfjpiH6QcNIsF79gK8SyTqYfk57a4/oZTMh 
g75zAmuHNTodsFHDqQPBZrjSF7tNA6Gkg2b3MOS9TDBWmNHoSPSxoq6/McLWyz4+ RJrv9J1uOHEoRMHBM++gVKsSCHfceBjJutDXKyD0w8+nZ6bm4uKtaS1JmSJ0= Received: by web3.nyi.internal (Postfix, from userid 99) id DB9AF10A65A; Thu, 20 Aug 2015 16:55:36 -0400 (EDT) Message-Id: <1440104136.948992.361675545.1F50290F@webmail.messagingengine.com> X-Sasl-Enc: UxgWlEBuWHWzQ/OhDkB+GVI1Jadu4jQKgdTtqIdFKL/z 1440104136 From: Mark Felder To: Gary Palmer , freebsd-net@freebsd.org MIME-Version: 1.0 Content-Transfer-Encoding: 7bit Content-Type: text/plain X-Mailer: MessagingEngine.com Webmail Interface - ajax-4fee8ba5 In-Reply-To: <20150817112408.GB13503@in-addr.com> References: <20150817112408.GB13503@in-addr.com> Subject: Re: RFC7084 "Basic Requirements for IPv6 Customer Edge Routers" Date: Thu, 20 Aug 2015 15:55:36 -0500 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 20 Aug 2015 20:55:38 -0000 On Mon, Aug 17, 2015, at 06:24, Gary Palmer wrote: > > Hi, > > Does anyone know if FreeBSD 9.3 is compliant with RFC7084? > > Thanks, > > Gary > One of the requirements is 6rd. This is not supported out of the box by FreeBSD. It is handled by the patched "stf" driver you can get from ports/packages: net/stf-6rd-kmod It has not made it into the kernel yet. As I understand it, it is incomplete. I need it for IPv6 from my ISP, but you can also use a GRE tunnel to get functional IPv6 from a provider who uses 6rd. The caveat is that you can't communicate with others on your same subnet, but that's not something a consumer is probably trying to do. They're trying to get to the greater internet, not connect to a device on their neighbor's IPv6 network.
From owner-freebsd-net@freebsd.org Thu Aug 20 21:29:48 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2392A9BFCAF for ; Thu, 20 Aug 2015 21:29:48 +0000 (UTC) (envelope-from aurfalien@gmail.com) Received: from mail-pa0-x22e.google.com (mail-pa0-x22e.google.com [IPv6:2607:f8b0:400e:c03::22e]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id E3879D79 for ; Thu, 20 Aug 2015 21:29:47 +0000 (UTC) (envelope-from aurfalien@gmail.com) Received: by paom9 with SMTP id m9so2781330pao.1 for ; Thu, 20 Aug 2015 14:29:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=from:content-type:subject:message-id:date:to:mime-version; bh=Hz8a/Zj8aMi6r47UVdLkgI452zOBucPMRa+dBnHMINU=; b=qWaroKJeUiSXRg+u4QhsvQnwkWgnDI1XtVZDRFnpCW3DVru+wA8sE+c0ZCxIXpWPwJ CyYAgAlrW7mfpM6A291dIBPHVy3Pejzur25XV0EIe9zOsSklT+EOPnAlSeUYWiJUpOa5 8XOveArUKt2j1Ogwo9audyzhKhE5sJ+4l80nemp9hjT2d/vHIogVT/qMDK1Y8oB+S+Ci MOOXw/6z9D8Bu9ecbrUMBmPUCGFXWZdXr77bvjFtmXU96qvreU6Z/M3QX02Aooo3IrjC M9ReNW1ZBXwi9O4Yy0cik0TtO3ui36/GsVj+XWSKbnNgYd4etM0m/LeYuwgAmeixsI8L pQdQ== X-Received: by 10.68.162.99 with SMTP id xz3mr10455352pbb.134.1440106187321; Thu, 20 Aug 2015 14:29:47 -0700 (PDT) Received: from briankrusicw.logan.tv ([64.17.255.138]) by smtp.gmail.com with ESMTPSA id f5sm5401129pas.23.2015.08.20.14.29.46 for (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Thu, 20 Aug 2015 14:29:46 -0700 (PDT) From: aurfalien Subject: Mellanox 40Gb support Message-Id: <39463A45-148F-431E-9C75-87952B27033A@gmail.com> Date: Thu, 20 Aug 2015 14:29:48 -0700 To: freebsd-net@freebsd.org Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.6\)) X-Mailer: Apple Mail (2.1878.6) Content-Type: text/plain; 
charset=us-ascii Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 20 Aug 2015 21:29:48 -0000 Hi, Curious if the community has any new driver support for the Mellanox = 40Gb ethernet card? The Mellanox site has an OFED driver v2.1.6 date 5/11/15. Thanks in advance, =20 - aurf "Janitorial Services" From owner-freebsd-net@freebsd.org Thu Aug 20 21:49:10 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 485D09BE084 for ; Thu, 20 Aug 2015 21:49:10 +0000 (UTC) (envelope-from kmacybsd@gmail.com) Received: from mail-ig0-x236.google.com (mail-ig0-x236.google.com [IPv6:2607:f8b0:4001:c05::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id 143F519F4 for ; Thu, 20 Aug 2015 21:49:10 +0000 (UTC) (envelope-from kmacybsd@gmail.com) Received: by igxp17 with SMTP id p17so1313422igx.1 for ; Thu, 20 Aug 2015 14:49:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=mime-version:sender:in-reply-to:references:date:message-id:subject :from:to:cc:content-type; bh=5Uiz9Kd8nT+DCPn/8Uc8Cz7oDrCLDuIskxJK2SCDeA4=; b=uQNxUYCs21bJhuAhgNf1JxaYKwDwNkHcBx+bi/PIFAfDl5ZgWGCRsy0Z3akTEwFN3H bQ6YYCqet2sTVd471RlUcVK3QBHrgD4zAy7Klu4lys1etNdXYpDx4t+lqYrOo81//ao5 Gi9uSCvoq/HBbJyi+K/I2bHa//OIJVpLvrniaB7i+CqFw4d9Cf1oaCn/cBuXgVGq9WmT cZvpd0Su31LHmWUckbBUzCga8hPnKyNklmphvmwIKTe5efHPFQBPUhozzVzT0XQMvWvS UtKJqMsScT8m7cHw/AbjbJGcU4G4Ln3niuSxtHAnLBKfNi++pzL1m1Lx+/gYY1fvCG1z T9Cw== MIME-Version: 1.0 X-Received: by 
10.50.30.197 with SMTP id u5mr258259igh.9.1440107349438; Thu, 20 Aug 2015 14:49:09 -0700 (PDT) Sender: kmacybsd@gmail.com Received: by 10.36.29.193 with HTTP; Thu, 20 Aug 2015 14:49:09 -0700 (PDT) Received: by 10.36.29.193 with HTTP; Thu, 20 Aug 2015 14:49:09 -0700 (PDT) In-Reply-To: <39463A45-148F-431E-9C75-87952B27033A@gmail.com> References: <39463A45-148F-431E-9C75-87952B27033A@gmail.com> Date: Thu, 20 Aug 2015 14:49:09 -0700 X-Google-Sender-Auth: Ws27lu3jGVDya3tcHIzcwRJNIRM Message-ID: Subject: Re: Mellanox 40Gb support From: "K. Macy" To: aurfalien Cc: freebsd-net@freebsd.org Content-Type: text/plain; charset=UTF-8 X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 20 Aug 2015 21:49:10 -0000 Mlxen supports all ConnectX 3 to the best of my knowledge. Are there bug fixes in the latest version that aren't in svn? The ConnectX 4 driver is under development. I will be converting mlxen to iflib in the next few weeks. -K On Aug 20, 2015 2:29 PM, "aurfalien" wrote: > Hi, > > Curious if the community has any new driver support for the Mellanox 40Gb > ethernet card? > > The Mellanox site has an OFED driver v2.1.6 date 5/11/15. 
> > Thanks in advance, > > - aurf > > "Janitorial Services" > > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Fri Aug 21 09:00:06 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B335B9BFDCC for ; Fri, 21 Aug 2015 09:00:06 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from citadel.icyb.net.ua (citadel.icyb.net.ua [212.40.38.140]) by mx1.freebsd.org (Postfix) with ESMTP id DB9E5919 for ; Fri, 21 Aug 2015 09:00:05 +0000 (UTC) (envelope-from avg@FreeBSD.org) Received: from porto.starpoint.kiev.ua (porto-e.starpoint.kiev.ua [212.40.38.100]) by citadel.icyb.net.ua (8.8.8p3/ICyb-2.3exp) with ESMTP id MAA21849; Fri, 21 Aug 2015 12:00:03 +0300 (EEST) (envelope-from avg@FreeBSD.org) Received: from localhost ([127.0.0.1]) by porto.starpoint.kiev.ua with esmtp (Exim 4.34 (FreeBSD)) id 1ZSiB9-000DcK-FV; Fri, 21 Aug 2015 12:00:03 +0300 Subject: Re: pf and new interface To: Reko Turja , freebsd-net@FreeBSD.org References: <55D2E9B3.2040301@FreeBSD.org> <3FEB78C5597F471D94843F93EC1EC5CE@Rivendell> From: Andriy Gapon Message-ID: <55D6E873.1000306@FreeBSD.org> Date: Fri, 21 Aug 2015 11:59:31 +0300 User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0 MIME-Version: 1.0 In-Reply-To: <3FEB78C5597F471D94843F93EC1EC5CE@Rivendell> Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: 7bit X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Aug 2015 09:00:06 -0000 On 18/08/2015 20:43, Reko Turja 
wrote: > Hmm does the: > > set skip on (tap) > > syntax work in this case? Basically parentheses around the alias should > tell pf that the IP is volatile and can be either activated at later > time or it can be dynamic via dhcp etc. It seems that this would be a syntax error. ($if) is a way of specifying an ip address, not an interface. -- Andriy Gapon From owner-freebsd-net@freebsd.org Fri Aug 21 14:29:42 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2E79C9BFD7E for ; Fri, 21 Aug 2015 14:29:42 +0000 (UTC) (envelope-from aurfalien@gmail.com) Received: from mail-pa0-x22b.google.com (mail-pa0-x22b.google.com [IPv6:2607:f8b0:400e:c03::22b]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id EFFECAC7; Fri, 21 Aug 2015 14:29:41 +0000 (UTC) (envelope-from aurfalien@gmail.com) Received: by pawq9 with SMTP id q9so53974084paw.3; Fri, 21 Aug 2015 07:29:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=content-type:mime-version:subject:from:in-reply-to:date:cc :message-id:references:to; bh=xTF/aJHHGhEbfQz+3qQLosto1ZxKQbtWS7r+Ft5+ejg=; b=0ixVYibBGpcvz+SZriFBDLT6MAiLvDfQplMlnJWRJMhXL8RCCvB6ZB07X8o1ZtklSc qy8e+Cr8uuYNcKacNl2UPlqKVk/x9115VmEG3rYyRqbjvMc4MZgzB7qb3Nk3v8AMjafn x/2fNWwI4S4qxrN6WglpBJtQtynw5YVP9qXdGGsFP0DLqCn4r6lJ+SPZo1xFBij6Z7nr LBif7KiPQf42v28eVcywOekBMSoNKw5uwBVnS9qN34agjE7IgC20Nh34wLX+fhnjgK/f OHCzlv2DqhlgmZQcOuMXUy+eRxaEeeLpoXFW3NidFWz9yJ8U2gA9qL+kLVo9ayaKB71E el5w== X-Received: by 10.69.3.228 with SMTP id bz4mr17954952pbd.79.1440167381482; Fri, 21 Aug 2015 07:29:41 -0700 (PDT) Received: from heidegger.home (pool-98-119-79-32.lsanca.fios.verizon.net. 
[98.119.79.32]) by smtp.gmail.com with ESMTPSA id jr12sm8027327pbb.91.2015.08.21.07.29.40 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Fri, 21 Aug 2015 07:29:41 -0700 (PDT) Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.6\)) Subject: Re: Mellanox 40Gb support From: aurfalien In-Reply-To: Date: Fri, 21 Aug 2015 07:29:39 -0700 Cc: freebsd-net@freebsd.org Message-Id: References: <39463A45-148F-431E-9C75-87952B27033A@gmail.com> To: "K. Macy" X-Mailer: Apple Mail (2.1878.6) Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Aug 2015 14:29:42 -0000 Hi, Thanks very much for the response. Well, I'm implementing NFSoRDMA and, as a best practice, Mellanox suggested I use the very latest drivers. So there are no bug fixes I am looking for, as I haven't implemented it yet. And I was wrong about the date: it's 6/11/15, not 5/11/15, sorry about that, typo. - aurf "Janitorial Services" On Aug 20, 2015, at 2:49 PM, K. Macy wrote: > Mlxen supports all ConnectX 3 to the best of my knowledge. Are there bug fixes in the latest version that aren't in svn? The ConnectX 4 driver is under development. I will be converting mlxen to iflib in the next few weeks. > > -K > On Aug 20, 2015 2:29 PM, "aurfalien" wrote: > Hi, > > Curious if the community has any new driver support for the Mellanox 40Gb ethernet card? > > The Mellanox site has an OFED driver v2.1.6 date 5/11/15.
>=20 > Thanks in advance, >=20 > - aurf >=20 > "Janitorial Services" >=20 > _______________________________________________ > freebsd-net@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-net > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org" From owner-freebsd-net@freebsd.org Fri Aug 21 15:21:56 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B0A0C9BF6EF for ; Fri, 21 Aug 2015 15:21:56 +0000 (UTC) (envelope-from julian@freebsd.org) Received: from vps1.elischer.org (vps1.elischer.org [204.109.63.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "vps1.elischer.org", Issuer "CA Cert Signing Authority" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 71800897; Fri, 21 Aug 2015 15:21:55 +0000 (UTC) (envelope-from julian@freebsd.org) Received: from Julian-MBP3.local (50-196-156-133-static.hfc.comcastbusiness.net [50.196.156.133]) (authenticated bits=0) by vps1.elischer.org (8.15.2/8.15.2) with ESMTPSA id t7LFLeDr048044 (version=TLSv1.2 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO); Fri, 21 Aug 2015 08:21:46 -0700 (PDT) (envelope-from julian@freebsd.org) Subject: Re: Mellanox 40Gb support To: aurfalien , "K. 
Macy" References: <39463A45-148F-431E-9C75-87952B27033A@gmail.com> Cc: freebsd-net@freebsd.org From: Julian Elischer Message-ID: <55D741FE.3090009@freebsd.org> Date: Fri, 21 Aug 2015 23:21:34 +0800 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Thunderbird/38.2.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 8bit X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Aug 2015 15:21:56 -0000 On 8/21/15 10:29 PM, aurfalien wrote: > Hi, > > Thanks very much for the response. > > Well, I’m implementing NFSoRDMA and as a best practices, Mellanox suggested I use the very latest drivers. > > really? On FreeBSD? Is this a fresh implementation of NFS or using the NFS in head? From owner-freebsd-net@freebsd.org Fri Aug 21 15:24:38 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 16B509BF7BC for ; Fri, 21 Aug 2015 15:24:38 +0000 (UTC) (envelope-from aurfalien@gmail.com) Received: from mail-pa0-x236.google.com (mail-pa0-x236.google.com [IPv6:2607:f8b0:400e:c03::236]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id DAB82CF6; Fri, 21 Aug 2015 15:24:37 +0000 (UTC) (envelope-from aurfalien@gmail.com) Received: by pawq9 with SMTP id q9so54866318paw.3; Fri, 21 Aug 2015 08:24:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=content-type:mime-version:subject:from:in-reply-to:date:cc :message-id:references:to; bh=TghL5lObTs4xbL5u3rzpKAFWFOC9PdAtY1Yk2hn7MkA=; 
b=EtH1IAMppv9SsBaKIMWpIzshvRfslLGyhd8E3VZqBwZ9ot+GT9ycgKeVJFriF0tGPW xsCI5tGtYZLyb8vNbx/xaESNCGnXV4SJp81K/OHOA2zRnOSOOf9y+Pg5mXKtwOn0fQBp C8T0FLqMsOtpRqoIjq05MGZIpCBi4BY9XMilhVY2y0+InNXRWgUFkBa5bLcexG3Lk1oR XMtZOZT+SvhPJAy90oN7udMRoBOZv1mrPmzdIeJCYJ1P4w/kOcI3QUsuqY1tyYUfV30Q JEfdxDmVkJGaqaWeBKwRaXELcONbKyG+W/fHgqWWQi9VcUQaB30rYFubdj77U1aDpVoj WVQg== X-Received: by 10.66.175.162 with SMTP id cb2mr17961262pac.91.1440170677456; Fri, 21 Aug 2015 08:24:37 -0700 (PDT) Received: from heidegger.home (pool-98-119-79-32.lsanca.fios.verizon.net. [98.119.79.32]) by smtp.gmail.com with ESMTPSA id oq3sm8194350pdb.75.2015.08.21.08.24.36 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Fri, 21 Aug 2015 08:24:36 -0700 (PDT) Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.6\)) Subject: Re: Mellanox 40Gb support From: aurfalien In-Reply-To: <55D741FE.3090009@freebsd.org> Date: Fri, 21 Aug 2015 08:24:36 -0700 Cc: "K. Macy" , freebsd-net@freebsd.org Message-Id: References: <39463A45-148F-431E-9C75-87952B27033A@gmail.com> <55D741FE.3090009@freebsd.org> To: Julian Elischer X-Mailer: Apple Mail (2.1878.6) Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Aug 2015 15:24:38 -0000 Hi, Well, this is all in a test env of course, but I=92m planning to use = head. What are your thoughts? =20 - aurf "Janitorial Services" On Aug 21, 2015, at 8:21 AM, Julian Elischer wrote: > On 8/21/15 10:29 PM, aurfalien wrote: >> Hi, >>=20 >> Thanks very much for the response. >>=20 >> Well, I=92m implementing NFSoRDMA and as a best practices, Mellanox = suggested I use the very latest drivers. >>=20 >>=20 > really? > On FreeBSD? > Is this a fresh implementation of NFS or using the NFS in head? 
>=20 From owner-freebsd-net@freebsd.org Fri Aug 21 15:26:36 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id B72579BF86E for ; Fri, 21 Aug 2015 15:26:36 +0000 (UTC) (envelope-from julian@freebsd.org) Received: from vps1.elischer.org (vps1.elischer.org [204.109.63.16]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "vps1.elischer.org", Issuer "CA Cert Signing Authority" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 9825AF3D; Fri, 21 Aug 2015 15:26:36 +0000 (UTC) (envelope-from julian@freebsd.org) Received: from Julian-MBP3.local (50-196-156-133-static.hfc.comcastbusiness.net [50.196.156.133]) (authenticated bits=0) by vps1.elischer.org (8.15.2/8.15.2) with ESMTPSA id t7LFQPTL048100 (version=TLSv1.2 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO); Fri, 21 Aug 2015 08:26:32 -0700 (PDT) (envelope-from julian@freebsd.org) Subject: Re: Mellanox 40Gb support To: aurfalien References: <39463A45-148F-431E-9C75-87952B27033A@gmail.com> <55D741FE.3090009@freebsd.org> Cc: "K. Macy" , freebsd-net@freebsd.org From: Julian Elischer Message-ID: <55D7431C.4010502@freebsd.org> Date: Fri, 21 Aug 2015 23:26:20 +0800 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Thunderbird/38.2.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=windows-1252; format=flowed Content-Transfer-Encoding: 8bit X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Aug 2015 15:26:36 -0000 On 8/21/15 11:24 PM, aurfalien wrote: > Hi, > > Well, this is all in a test env of course, but I’m planning to use head. > > What are your thoughts? 
My curiosity was as to whether you hook this into the current NFS or whether it's so different that it's almost a new implementation..? > > - aurf > > "Janitorial Services" > > On Aug 21, 2015, at 8:21 AM, Julian Elischer > wrote: > >> On 8/21/15 10:29 PM, aurfalien wrote: >>> Hi, >>> >>> Thanks very much for the response. >>> >>> Well, I’m implementing NFSoRDMA and as a best practices, Mellanox >>> suggested I use the very latest drivers. >>> >>> >> really? >> On FreeBSD? >> Is this a fresh implementation of NFS or using the NFS in head? >> > From owner-freebsd-net@freebsd.org Fri Aug 21 15:39:04 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 1036F9BFBC3 for ; Fri, 21 Aug 2015 15:39:04 +0000 (UTC) (envelope-from aurfalien@gmail.com) Received: from mail-pa0-x232.google.com (mail-pa0-x232.google.com [IPv6:2607:f8b0:400e:c03::232]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (Client CN "smtp.gmail.com", Issuer "Google Internet Authority G2" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id D00E41D14; Fri, 21 Aug 2015 15:39:03 +0000 (UTC) (envelope-from aurfalien@gmail.com) Received: by pawq9 with SMTP id q9so55100158paw.3; Fri, 21 Aug 2015 08:39:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113; h=content-type:mime-version:subject:from:in-reply-to:date:cc :message-id:references:to; bh=jLslOqy05eP+dGUyzcuDJ9NE7yAUaOmRgVBBL+s3/r4=; b=sl1qtaUSsvkk4VktnNJRoxOWz8QEA4iYNr/nnxI1jdEusxgsC1TFxMsmqcpGgFwjnx BLil0shBMfx9H2gso1NcMxtKZAEXQZyTuHkkyrd1wcwR3NN2rm/qHs2SzoEjnCNFrMfE v9OgIRvVCzumjfcJB3K6GRR2D0WGV1wobfBRlrSVnWiO4l2XudoFqW/4/4xsK2moHXtp F7vHIwzf4c0Ru6uiQF/F99s/2jmX5lcictJ8OUR+2fFP7e8TeC9lj3OSj2mu2A2SkVyw TZM694A35px5ebUsPL+6niIIUnCEZhWL9kz7dAIkc1FS8qN2ihOSPTEjHV78aWQfitos Dbwg== X-Received: by 10.68.200.72 with SMTP id 
jq8mr18109659pbc.91.1440171543350; Fri, 21 Aug 2015 08:39:03 -0700 (PDT) Received: from heidegger.home (pool-98-119-79-32.lsanca.fios.verizon.net. [98.119.79.32]) by smtp.gmail.com with ESMTPSA id r1sm8255669pdm.31.2015.08.21.08.39.02 (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Fri, 21 Aug 2015 08:39:02 -0700 (PDT) Mime-Version: 1.0 (Mac OS X Mail 7.3 \(1878.6\)) Subject: Re: Mellanox 40Gb support From: aurfalien In-Reply-To: <55D7431C.4010502@freebsd.org> Date: Fri, 21 Aug 2015 08:39:00 -0700 Cc: "K. Macy" , freebsd-net@freebsd.org Message-Id: <1D67A5BA-D9A1-4729-A6F2-4A3241C13EF9@gmail.com> References: <39463A45-148F-431E-9C75-87952B27033A@gmail.com> <55D741FE.3090009@freebsd.org> <55D7431C.4010502@freebsd.org> To: Julian Elischer X-Mailer: Apple Mail (2.1878.6) Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.20 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 Aug 2015 15:39:04 -0000 Ah, I see. Well, I'm primitive and use what's in current. I had attempted this before using their standard EN driver and failed massively. So I'll be using their OFED driver. - aurf "Janitorial Services" On Aug 21, 2015, at 8:26 AM, Julian Elischer wrote: > On 8/21/15 11:24 PM, aurfalien wrote: >> Hi, >> >> Well, this is all in a test env of course, but I'm planning to use head. >> >> What are your thoughts? > My curiosity was as to whether you hook this into the current NFS or whether > it's so different that it's almost a new implementation..? > >> >> - aurf >> >> "Janitorial Services" >> >> On Aug 21, 2015, at 8:21 AM, Julian Elischer wrote: >> >>> On 8/21/15 10:29 PM, aurfalien wrote: >>>> Hi, >>>> >>>> Thanks very much for the response.
>>>>=20 >>>> Well, I=92m implementing NFSoRDMA and as a best practices, Mellanox = suggested I use the very latest drivers. >>>>=20 >>>>=20 >>> really? >>> On FreeBSD? >>> Is this a fresh implementation of NFS or using the NFS in head? >>>=20 >>=20 >=20 From owner-freebsd-net@freebsd.org Fri Aug 21 21:46:16 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 66E5A9C0558; Fri, 21 Aug 2015 21:46:16 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id EA31B1F43; Fri, 21 Aug 2015 21:46:15 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:19NhgxYJD9UAAixx11n0lZD/LSx+4OfEezUN459isYplN5qZpcW/bnLW6fgltlLVR4KTs6sC0LqN9f25EjRZqb+681k8M7V0HycfjssXmwFySOWkMmbcaMDQUiohAc5ZX0Vk9XzoeWJcGcL5ekGA6ibqtW1aJBzzOEJPK/jvHcaK1oLsh7v0psSYO1wArQH+SI0xBS3+lR/WuMgSjNkqAYcK4TyNnEF1ff9Lz3hjP1OZkkW0zM6x+Jl+73YY4Kp5pIZoGJ/3dKUgTLFeEC9ucyVsvJWq5lH/Sl6X92YaQ2U+nR9BAgyD5xb/Gt/duy37u+418jOTO8ztVvhgVT2k6bZDQwSuiDoFNngw+yfWjpojorhcpUebphd8i6vda4KROf82KrnYdNgZQWdEdttWWDFMBpu8KYAGWblSdd1EppXw8gNd5SC1AhOhUaa2kmdF X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2BEAgBFm9dV/61jaINdDoNhaQaDH7pEAQmBbQqFMUoCgW0UAQEBAQEBAQGBCYIdggcBAQQBAQEgBCcgCwULAgEIGAICDRkCAicBCSYCBAEHBwQBGgIEiA0NuHWVfwEBAQEBAQEBAQEBAQEBAQEBARYEgSKKMoQyBgEBHDQHgmmBQwWVLYUFhQiELIdKiH6ESYNoAiaCDhyBFVoiMwd/CBcjgQQBAQE X-IronPort-AV: E=Sophos;i="5.15,723,1432612800"; d="scan'208";a="232337900" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 21 Aug 2015 17:46:08 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id E45C915F55D; Fri, 21 Aug 2015 17:46:08 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) 
by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id l7J6bMg-1Frh; Fri, 21 Aug 2015 17:46:08 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 25FC715F563; Fri, 21 Aug 2015 17:46:08 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 9cPtlj9JK9l1; Fri, 21 Aug 2015 17:46:08 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 0244415F55D; Fri, 21 Aug 2015 17:46:08 -0400 (EDT) Date: Fri, 21 Aug 2015 17:46:07 -0400 (EDT) From: Rick Macklem To: pyunyh@gmail.com, Daniel Braniss Cc: Hans Petter Selasky , FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Gleb Smirnoff , Christopher Forgeron Message-ID: <1153838447.28656490.1440193567940.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <20150820023024.GB996@michelle.fasterthan.com> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> <20150820023024.GB996@michelle.fasterthan.com> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-Originating-IP: [172.17.95.12] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: Zq6MViNwCd2Hhr1mTS/9wAk6UKubQA== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , X-List-Received-Date: Fri, 21 Aug 2015 21:46:16 -0000 Yonghyeon PYUN wrote: > On Wed, Aug 19, 2015 at 09:00:35AM -0400, Rick Macklem wrote: > > Hans Petter Selasky wrote: > > > On 08/19/15 09:42, Yonghyeon PYUN wrote: > > > > On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote: > > > >> On 08/18/15 23:54, Rick Macklem wrote: > > > >>> Ouch! Yes, I now see that the code that counts the # of mbufs is > > > >>> before > > > >>> the > > > >>> code that adds the tcp/ip header mbuf. > > > >>> > > > >>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount > > > >>> to > > > >>> whatever > > > >>> the driver provides - 1. It is not the driver's responsibility to > > > >>> know if > > > >>> a tcp/ip > > > >>> header mbuf will be added and is a lot less confusing than expecting > > > >>> the > > > >>> driver > > > >>> author to know to subtract one. (I had mistakenly thought that > > > >>> tcp_output() had > > > >>> added the tcp/ip header mbuf before the loop that counts mbufs in the > > > >>> list. > > > >>> Btw, > > > >>> this tcp/ip header mbuf also has leading space for the MAC layer > > > >>> header.) > > > >>> > > > >> > > > >> Hi Rick, > > > >> > > > >> Your question is good. With the Mellanox hardware we have separate > > > >> so-called inline data space for the TCP/IP headers, so if the TCP > > > >> stack > > > >> subtracts something, then we would need to add something to the limit, > > > >> because then the scatter gather list is only used for the data part. > > > >> > > > > > > > > I think none of the drivers in the tree subtract 1 for > > > > if_hw_tsomaxsegcount. Probably touching the Mellanox driver would be > > > > simpler than fixing all other drivers in the tree. > > > > > > > >> Maybe it can be controlled by some kind of flag, if all the three TSO > > > >> limits should include the TCP/IP/ethernet headers too. I'm pretty sure > > > >> we want both versions. > > > >> > > > > > > > > Hmm, I'm afraid it's already complex.
Drivers have to tell almost > > > > the same information to both bus_dma(9) and network stack. > > > > > > Don't forget that not all drivers in the tree set the TSO limits before > > > if_attach(), so possibly the subtraction of one TSO fragment needs to go > > > into ip_output() .... > > > > > Ok, I realized that some drivers may not know the answers before > > ether_ifattach(), > > due to the way they are configured/written (I saw the use of > > if_hw_tsomax_update() > > in the patch). > > I was not able to find an interface that configures TSO parameters > after if_t conversion. I'm under the impression > if_hw_tsomax_update() is not designed to use this way. Probably we > need a better one?(CCed to Gleb). > > > > > If it is subtracted as a part of the assignment to if_hw_tsomaxsegcount in > > tcp_output() > > at line#791 in tcp_output() like the following, I don't think it should > > matter if the > > values are set before ether_ifattach()? > > /* > > * Subtract 1 for the tcp/ip header mbuf that > > * will be prepended to the mbuf chain in this > > * function in the code below this block. > > */ > > if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1; > > > > I don't have a good solution for the case where a driver doesn't plan on > > using the > > tcp/ip header provided by tcp_output() except to say the driver can add one > > to the > > setting to compensate for that (and if they fail to do so, it still works, > > although > > somewhat suboptimally). When I now read the comment in sys/net/if_var.h it > > is clear > > what it means, but for some reason I didn't read it that way before? (I > > think it was > > the part that said the driver didn't have to subtract for the headers that > > confused me?) > > In any case, we need to try and come up with a clear definition of what > > they need to > > be set to. 
> > > > I can now think of two ways to deal with this: > > 1 - Leave tcp_output() as is, but provide a macro for the device driver > > authors to use > > that sets if_hw_tsomaxsegcount with a flag for "driver uses tcp/ip > > header mbuf", > > documenting that this flag should normally be true. > > OR > > 2 - Change tcp_output() as above, noting that this is a workaround for > > confusion w.r.t. > > whether or not if_hw_tsomaxsegcount should include the tcp/ip header > > mbuf, and > > update the comment in if_var.h to reflect this. Then drivers that don't > > use the > > tcp/ip header mbuf can increase their value for if_hw_tsomaxsegcount by > > 1. > > (The comment should also mention that a value of 35 or greater is much > > preferred to > > 32 if the hardware will support that.) > > > > Both work for me. My preference is 2, just because using the > tcp/ip header mbuf is the common case for drivers. Thanks for this comment. I tend to agree, both for the reason you state and also because the patch is simple enough that it might qualify as an errata for 10.2. I am hoping Daniel Braniss will be able to test the patch and let us know whether it improves performance with TSO enabled.
rick > _______________________________________________ > freebsd-stable@freebsd.org mailing list > https://lists.freebsd.org/mailman/listinfo/freebsd-stable > To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org" > From owner-freebsd-net@freebsd.org Sat Aug 22 07:28:20 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 461AD9BE683; Sat, 22 Aug 2015 07:28:20 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from kabab.cs.huji.ac.il (kabab.cs.huji.ac.il [132.65.116.210]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id EB7111697; Sat, 22 Aug 2015 07:28:19 +0000 (UTC) (envelope-from danny@cs.huji.ac.il) Received: from mbpro2.bs.cs.huji.ac.il ([132.65.179.20]) by kabab.cs.huji.ac.il with esmtp id 1ZT3Dh-000P8u-6j; Sat, 22 Aug 2015 10:28:05 +0300 Content-Type: text/plain; charset=utf-8 Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2104\)) Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance From: Daniel Braniss In-Reply-To: <1153838447.28656490.1440193567940.JavaMail.zimbra@uoguelph.ca> Date: Sat, 22 Aug 2015 10:28:03 +0300 Cc: pyunyh@gmail.com, Hans Petter Selasky , FreeBSD stable , FreeBSD Net , Slawa Olhovchenkov , Gleb Smirnoff , Christopher Forgeron Content-Transfer-Encoding: quoted-printable Message-Id: <15D19823-08F7-4E55-BBD0-CE230F67D26E@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <55D333D6.5040102@selasky.org> <1325951625.25292515.1439934848268.JavaMail.zimbra@uoguelph.ca> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> <20150820023024.GB996@michelle.fasterthan.com> <1153838447.28656490.1440193567940.JavaMail.zimbra@uoguelph.ca> To: 
Rick Macklem X-Mailer: Apple Mail (2.2104) X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 22 Aug 2015 07:28:20 -0000

> On Aug 22, 2015, at 12:46 AM, Rick Macklem wrote:
> 
> Yonghyeon PYUN wrote:
>> On Wed, Aug 19, 2015 at 09:00:35AM -0400, Rick Macklem wrote:
>>> Hans Petter Selasky wrote:
>>>> On 08/19/15 09:42, Yonghyeon PYUN wrote:
>>>>> On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote:
>>>>>> On 08/18/15 23:54, Rick Macklem wrote:
>>>>>>> Ouch! Yes, I now see that the code that counts the # of mbufs is
>>>>>>> before the code that adds the tcp/ip header mbuf.
>>>>>>> 
>>>>>>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount
>>>>>>> to whatever the driver provides - 1. It is not the driver's
>>>>>>> responsibility to know if a tcp/ip header mbuf will be added and is
>>>>>>> a lot less confusing than expecting the driver author to know to
>>>>>>> subtract one. (I had mistakenly thought that tcp_output() had added
>>>>>>> the tcp/ip header mbuf before the loop that counts mbufs in the
>>>>>>> list. Btw, this tcp/ip header mbuf also has leading space for the
>>>>>>> MAC layer header.)
>>>>>>> 
>>>>>> 
>>>>>> Hi Rick,
>>>>>> 
>>>>>> Your question is good. With the Mellanox hardware we have separate
>>>>>> so-called inline data space for the TCP/IP headers, so if the TCP
>>>>>> stack subtracts something, then we would need to add something to
>>>>>> the limit, because then the scatter gather list is only used for the
>>>>>> data part.
>>>>>> 
>>>>> 
>>>>> I think all drivers in tree don't subtract 1 for
>>>>> if_hw_tsomaxsegcount.  Probably touching the Mellanox driver would be
>>>>> simpler than fixing all other drivers in tree.
>>>>> 
>>>>>> Maybe it can be controlled by some kind of flag, if all the three TSO
>>>>>> limits should include the TCP/IP/ethernet headers too. I'm pretty
>>>>>> sure we want both versions.
>>>>>> 
>>>>> 
>>>>> Hmm, I'm afraid it's already complex.  Drivers have to tell almost
>>>>> the same information to both bus_dma(9) and network stack.
>>>> 
>>>> Don't forget that not all drivers in the tree set the TSO limits before
>>>> if_attach(), so possibly the subtraction of one TSO fragment needs to
>>>> go into ip_output() ....
>>>> 
>>> Ok, I realized that some drivers may not know the answers before
>>> ether_ifattach(), due to the way they are configured/written (I saw the
>>> use of if_hw_tsomax_update() in the patch).
>> 
>> I was not able to find an interface that configures TSO parameters
>> after if_t conversion.  I'm under the impression
>> if_hw_tsomax_update() is not designed to be used this way.  Probably we
>> need a better one? (CCed to Gleb).
>> 
>>> 
>>> If it is subtracted as part of the assignment to if_hw_tsomaxsegcount
>>> at line #791 in tcp_output() like the following, I don't think it
>>> should matter if the values are set before ether_ifattach()?
>>> 			/*
>>> 			 * Subtract 1 for the tcp/ip header mbuf that
>>> 			 * will be prepended to the mbuf chain in this
>>> 			 * function in the code below this block.
>>> 			 */
>>> 			if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1;
>>> 
>>> I don't have a good solution for the case where a driver doesn't plan
>>> on using the tcp/ip header provided by tcp_output() except to say the
>>> driver can add one to the setting to compensate for that (and if they
>>> fail to do so, it still works, although somewhat suboptimally).  When I
>>> now read the comment in sys/net/if_var.h it is clear what it means, but
>>> for some reason I didn't read it that way before?  (I think it was the
>>> part that said the driver didn't have to subtract for the headers that
>>> confused me?)
>>> In any case, we need to try and come up with a clear definition of what
>>> they need to be set to.
>>> 
>>> I can now think of two ways to deal with this:
>>> 1 - Leave tcp_output() as is, but provide a macro for the device driver
>>>     authors to use that sets if_hw_tsomaxsegcount with a flag for
>>>     "driver uses tcp/ip header mbuf", documenting that this flag should
>>>     normally be true.
>>> OR
>>> 2 - Change tcp_output() as above, noting that this is a workaround for
>>>     confusion w.r.t. whether or not if_hw_tsomaxsegcount should include
>>>     the tcp/ip header mbuf, and update the comment in if_var.h to
>>>     reflect this.  Then drivers that don't use the tcp/ip header mbuf
>>>     can increase their value for if_hw_tsomaxsegcount by 1.
>>>     (The comment should also mention that a value of 35 or greater is
>>>     much preferred to 32 if the hardware will support that.)
>>> 
>> 
>> Both work for me.  My preference is 2 just because it's very
>> common for most drivers that use tcp/ip header mbuf.
> Thanks for this comment. I tend to agree, both for the reason you state
> and also because the patch is simple enough that it might qualify as an
> errata for 10.2.
> 
> I am hoping Daniel Braniss will be able to test the patch and let us know
> if it improves performance with TSO enabled?

send me the patch and I'll test it ASAP.
danny >=20 > rick >=20 >> _______________________________________________ >> freebsd-stable@freebsd.org mailing list >> https://lists.freebsd.org/mailman/listinfo/freebsd-stable >> To unsubscribe, send any mail to = "freebsd-stable-unsubscribe@freebsd.org" >>=20 From owner-freebsd-net@freebsd.org Sat Aug 22 11:59:21 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 86D239C0207; Sat, 22 Aug 2015 11:59:21 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) Received: from esa-jnhn.mail.uoguelph.ca (esa-jnhn.mail.uoguelph.ca [131.104.91.44]) by mx1.freebsd.org (Postfix) with ESMTP id 020CBAE5; Sat, 22 Aug 2015 11:59:20 +0000 (UTC) (envelope-from rmacklem@uoguelph.ca) IronPort-PHdr: 9a23:GxIpMRGQwlqbIKuInsPTQp1GYnF86YWxBRYc798ds5kLTJ74o82wAkXT6L1XgUPTWs2DsrQf27GQ7vqrADBIyK3CmU5BWaQEbwUCh8QSkl5oK+++Imq/EsTXaTcnFt9JTl5v8iLzG0FUHMHjew+a+SXqvnYsExnyfTB4Ov7yUtaLyZ/njKbvodaKP01hv3mUX/BbFF2OtwLft80b08NJC50a7V/3mEZOYPlc3mhyJFiezF7W78a0+4N/oWwL46pyv+YJa6jxfrw5QLpEF3xmdjltvIy4/STFVhaFs3sATn0NwF0PBwne8Aq8UI38vyHhuqx6wibdOMT3SbU9X3Om7rx3SRnmj2AJLTM0+nrbz9dshahfrUGdoElTyojVbYXdHuB3eKLGZptOSWNHWNd5XDcHAp6+bs0GBKwAObALgZP6og40rBC9TSylD+DrxzoA0mXz1KY51+kkORzB0xEtG8oO9n/d+oamfJwOWPy4mfGbhQ7IaOlbjHKksNDF X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: A2CwAgD1YthV/61jaINeDoNhaQaDH7o3CoFtCoUxSgKBYRQBAQEBAQEBAYEJgh2CBwEBBAEBASAEJyALEAIBCA4KERkCAgIlAQkmAgQIBwQBGgIEiA0NrhOVOwEBAQEBAQEBAQEBAQEBAQEBFwSLV4QyBgEBGwEZFgUHgmmBQwWVNII/gkaFCYQsh0yJBIRJg2gCJoIOHIEVWiIzB38IFyOBBAEBAQ X-IronPort-AV: E=Sophos;i="5.15,728,1432612800"; d="scan'208";a="232435417" Received: from nipigon.cs.uoguelph.ca (HELO zcs1.mail.uoguelph.ca) ([131.104.99.173]) by esa-jnhn.mail.uoguelph.ca with ESMTP; 22 Aug 2015 07:59:18 -0400 Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id C562115F55D; Sat, 22 Aug 2015 
07:59:18 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id xdpi1x1-IhQx; Sat, 22 Aug 2015 07:59:17 -0400 (EDT) Received: from localhost (localhost [127.0.0.1]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 7342815F563; Sat, 22 Aug 2015 07:59:17 -0400 (EDT) X-Virus-Scanned: amavisd-new at zcs1.mail.uoguelph.ca Received: from zcs1.mail.uoguelph.ca ([127.0.0.1]) by localhost (zcs1.mail.uoguelph.ca [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id ewYCfPJexIum; Sat, 22 Aug 2015 07:59:17 -0400 (EDT) Received: from zcs1.mail.uoguelph.ca (zcs1.mail.uoguelph.ca [172.17.95.18]) by zcs1.mail.uoguelph.ca (Postfix) with ESMTP id 397B115F55D; Sat, 22 Aug 2015 07:59:17 -0400 (EDT) Date: Sat, 22 Aug 2015 07:59:16 -0400 (EDT) From: Rick Macklem To: Daniel Braniss Cc: pyunyh@gmail.com, Hans Petter Selasky , FreeBSD stable , FreeBSD Net , Christopher Forgeron , Gleb Smirnoff , Slawa Olhovchenkov Message-ID: <818666007.28930310.1440244756872.JavaMail.zimbra@uoguelph.ca> In-Reply-To: <15D19823-08F7-4E55-BBD0-CE230F67D26E@cs.huji.ac.il> References: <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <55D429A4.3010407@selasky.org> <20150819074212.GB964@michelle.fasterthan.com> <55D43615.1030401@selasky.org> <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> <20150820023024.GB996@michelle.fasterthan.com> <1153838447.28656490.1440193567940.JavaMail.zimbra@uoguelph.ca> <15D19823-08F7-4E55-BBD0-CE230F67D26E@cs.huji.ac.il> Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_Part_28930308_990593625.1440244756870" X-Originating-IP: [172.17.95.11] X-Mailer: Zimbra 8.0.9_GA_6191 (ZimbraWebClient - FF34 (Win)/8.0.9_GA_6191) Thread-Topic: ix(intel) vs mlxen(mellanox) 10Gb performance Thread-Index: QZitH8l4Q7RLj+jgyGyTQ4NdEz1bXA== X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: 
list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 22 Aug 2015 11:59:21 -0000

------=_Part_28930308_990593625.1440244756870
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Daniel Braniss wrote:
> 
> > On Aug 22, 2015, at 12:46 AM, Rick Macklem wrote:
> > 
> > Yonghyeon PYUN wrote:
> [...]
> >> Both works for me.  My preference is 2 just because it's very
> >> common for most drivers that use tcp/ip header mbuf.
> > Thanks for this comment. I tend to agree, both for the reason you state
> > and also because the patch is simple enough that it might qualify as an
> > errata for 10.2.
> > 
> > I am hoping Daniel Braniss will be able to test the patch and let us
> > know if it improves performance with TSO enabled?
> 
> send me the patch and I'll test it ASAP.
> 	danny
> 

Patch is attached. The one for head will also include an update to the
comment in sys/net/if_var.h, but that isn't needed for testing.

Thanks for testing this, rick

> > 
> > rick
> > 
> >> _______________________________________________
> >> freebsd-stable@freebsd.org mailing list
> >> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> >> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
> >> 
> 
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"

------=_Part_28930308_990593625.1440244756870
Content-Type: text/x-patch; name=tsooutby1.patch
Content-Disposition: attachment; filename=tsooutby1.patch
Content-Transfer-Encoding: base64

LS0tIG5ldGluZXQvdGNwX291dHB1dC5jLnNhdgkyMDE1LTA4LTIyIDA3OjQ4OjEyLjAwMDAwMDAw
MCAtMDQwMAorKysgbmV0aW5ldC90Y3Bfb3V0cHV0LmMJMjAxNS0wOC0yMiAwNzo1MDo1Mi4wMDAw
MDAwMDAgLTA0MDAKQEAgLTc5NCw3ICs3OTQsMTMgQEAgc2VuZDoKIAogCQkJLyogZXh0cmFjdCBU
U08gaW5mb3JtYXRpb24gKi8KIAkJCWlmX2h3X3Rzb21heCA9IHRwLT50X3Rzb21heDsKLQkJCWlm
X2h3X3Rzb21heHNlZ2NvdW50ID0gdHAtPnRfdHNvbWF4c2VnY291bnQ7CisJCQkvKgorCQkJICog
U3VidHJhY3QgMSBmb3IgdGhlIHRjcC9pcCBoZWFkZXIgbWJ1ZiB0aGF0CisJCQkgKiB3aWxsIGJl
IHByZXBlbmRlZCB0byB0aGlzIG1idWYgY2hhaW4gYWZ0ZXIKKwkJCSAqIHRoZSBjb2RlIGluIHRo
aXMgc2VjdGlvbiBsaW1pdHMgdGhlIG51bWJlciBvZgorCQkJICogbWJ1ZnMgaW4gdGhlIGNoYWlu
IHRvIGlmX2h3X3Rzb21heHNlZ2NvdW50LgorCQkJICovCisJCQlpZl9od190c29tYXhzZWdjb3Vu
dCA9IHRwLT50X3Rzb21heHNlZ2NvdW50IC0gMTsKIAkJCWlmX2h3X3Rzb21heHNlZ3NpemUgPSB0
cC0+dF90c29tYXhzZWdzaXplOwogCiAJCQkvKgo=
------=_Part_28930308_990593625.1440244756870--

From owner-freebsd-net@freebsd.org Sat Aug 22 14:02:40 2015
Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 2AD4F9BDF4D for ; Sat, 22 Aug 2015 14:02:40 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from kenobi.freebsd.org (kenobi.freebsd.org [IPv6:2001:1900:2254:206a::16:76]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client did not present a certificate) by mx1.freebsd.org (Postfix) with ESMTPS id 17E8C226 for ; Sat, 22 Aug 2015 14:02:40 +0000 (UTC) (envelope-from bugzilla-noreply@freebsd.org) Received: from bugs.freebsd.org ([127.0.1.118]) by kenobi.freebsd.org (8.15.2/8.15.2) with ESMTP id t7ME2dDX091003 for ; Sat, 22 Aug 2015 14:02:39 GMT (envelope-from bugzilla-noreply@freebsd.org) From: bugzilla-noreply@freebsd.org To: freebsd-net@FreeBSD.org Subject: [Bug 200379] SCTP stack is not FIB aware Date: Sat, 22 Aug 2015 14:02:39 +0000 X-Bugzilla-Reason: CC X-Bugzilla-Type: changed X-Bugzilla-Watch-Reason: None X-Bugzilla-Product: Base System X-Bugzilla-Component: kern X-Bugzilla-Version: 11.0-CURRENT X-Bugzilla-Keywords: X-Bugzilla-Severity: Affects Only Me X-Bugzilla-Who: tuexen@freebsd.org X-Bugzilla-Status: Closed X-Bugzilla-Priority: --- X-Bugzilla-Assigned-To: tuexen@freebsd.org X-Bugzilla-Target-Milestone: --- X-Bugzilla-Flags: X-Bugzilla-Changed-Fields: bug_status resolution Message-ID: In-Reply-To: References: Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: 7bit X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/ Auto-Submitted: auto-generated MIME-Version: 1.0 X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 22 Aug 2015 14:02:40 -0000 https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=200379 Michael Tuexen changed: What |Removed 
|Added
----------------------------------------------------------------------------
          Status|In Progress                 |Closed
      Resolution|---                         |FIXED

-- 
You are receiving this mail because:
You are on the CC list for the bug.

From owner-freebsd-net@freebsd.org Sat Aug 22 21:03:49 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id 7EE4C9BE2FA for ; Sat, 22 Aug 2015 21:03:49 +0000 (UTC) (envelope-from truckman@FreeBSD.org) Received: from gw.catspoiler.org (cl-1657.chi-02.us.sixxs.net [IPv6:2001:4978:f:678::2]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "gw.catspoiler.org", Issuer "gw.catspoiler.org" (not verified)) by mx1.freebsd.org (Postfix) with ESMTPS id 277A11C01 for ; Sat, 22 Aug 2015 21:03:49 +0000 (UTC) (envelope-from truckman@FreeBSD.org) Received: from FreeBSD.org (mousie.catspoiler.org [192.168.101.2]) by gw.catspoiler.org (8.15.2/8.15.2) with ESMTP id t7ML3gAx000794 for ; Sat, 22 Aug 2015 14:03:45 -0700 (PDT) (envelope-from truckman@FreeBSD.org) Message-Id: <201508222103.t7ML3gAx000794@gw.catspoiler.org> Date: Sat, 22 Aug 2015 12:46:46 -0700 (PDT) From: Don Lewis Subject: a couple /etc/rc.firewall questions To: freebsd-net@FreeBSD.org MIME-Version: 1.0 Content-Type: TEXT/plain; charset=us-ascii X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 22 Aug 2015 21:03:49 -0000

The example /etc/rc.firewall has provisions to use either in-kernel NAT
or natd for the open and client firewall types, but the simple firewall
type only has code for natd.  Is there any reason that in-kernel NAT
could not be used with the simple firewall type?
After allowing connections to selected TCP ports and then denying all
other incoming TCP setup connections from ${oif}, the simple firewall
code in /etc/rc.firewall then permits all other TCP setup connections:

	# Allow setup of any other TCP connection
	${fwcmd} add pass tcp from any to any setup

This is potentially undesirable since it allows unrestricted TCP
connections between "me" and the inside network.  When I changed this to

	${fwcmd} add pass tcp from any to any out via ${oif} setup

I was able to open TCP connections from the firewall box to the outside,
but NATed connections from the inside network to the outside were blocked.
If I run "ipfw show", it appears that the TCP setup packets are falling
through to the final implicit deny-all rule, but I don't see any obvious
reason.

From owner-freebsd-net@freebsd.org Sat Aug 22 23:45:29 2015 Return-Path: Delivered-To: freebsd-net@mailman.ysv.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by mailman.ysv.freebsd.org (Postfix) with ESMTP id A6A6C9C010B for ; Sat, 22 Aug 2015 23:45:29 +0000 (UTC) (envelope-from hrs@FreeBSD.org) Received: from mail.allbsd.org (gatekeeper.allbsd.org [IPv6:2001:2f0:104:e001::32]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (Client CN "*.allbsd.org", Issuer "RapidSSL CA" (verified OK)) by mx1.freebsd.org (Postfix) with ESMTPS id BE7FA1FF8; Sat, 22 Aug 2015 23:45:28 +0000 (UTC) (envelope-from hrs@FreeBSD.org) Received: from alph.d.allbsd.org (alph.d.allbsd.org [IPv6:2001:2f0:104:e010:862b:2bff:febc:8956] (may be forged)) (authenticated bits=56) by mail.allbsd.org (8.14.9/8.14.9) with ESMTP id t7MNjEIm020298 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO); Sun, 23 Aug 2015 08:45:16 +0900 (JST) (envelope-from hrs@FreeBSD.org) Received: from localhost (localhost [IPv6:::1]) (authenticated bits=0) by alph.d.allbsd.org (8.14.9/8.14.9) with ESMTP id t7MNjChq081452; Sun, 23 Aug 2015 08:45:14 +0900 (JST)
(envelope-from hrs@FreeBSD.org) Date: Sun, 23 Aug 2015 08:44:53 +0900 (JST) Message-Id: <20150823.084453.1715908115913144015.hrs@allbsd.org> To: truckman@FreeBSD.org Cc: freebsd-net@FreeBSD.org Subject: Re: a couple /etc/rc.firewall questions From: Hiroki Sato In-Reply-To: <201508222103.t7ML3gAx000794@gw.catspoiler.org> References: <201508222103.t7ML3gAx000794@gw.catspoiler.org> X-PGPkey-fingerprint: BDB3 443F A5DD B3D0 A530 FFD7 4F2C D3D8 2793 CF2D X-Mailer: Mew version 6.7 on Emacs 24.5 / Mule 6.0 (HANACHIRUSATO) Mime-Version: 1.0 Content-Type: Multipart/Signed; protocol="application/pgp-signature"; micalg=pgp-sha1; boundary="--Security_Multipart(Sun_Aug_23_08_44_53_2015_031)--" Content-Transfer-Encoding: 7bit X-Virus-Scanned: clamav-milter 0.98.6 at gatekeeper.allbsd.org X-Virus-Status: Clean X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (mail.allbsd.org [IPv6:2001:2f0:104:e001::32]); Sun, 23 Aug 2015 08:45:21 +0900 (JST) X-Spam-Status: No, score=-98.0 required=13.0 tests=CONTENT_TYPE_PRESENT, RCVD_IN_AHBL, RCVD_IN_AHBL_PROXY, RCVD_IN_AHBL_SPAM, RDNS_NONE, USER_IN_WHITELIST autolearn=no autolearn_force=no version=3.4.0 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on gatekeeper.allbsd.org X-BeenThere: freebsd-net@freebsd.org X-Mailman-Version: 2.1.20 Precedence: list List-Id: Networking and TCP/IP with FreeBSD List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 22 Aug 2015 23:45:29 -0000 ----Security_Multipart(Sun_Aug_23_08_44_53_2015_031)-- Content-Type: Text/Plain; charset=us-ascii Content-Transfer-Encoding: 7bit Don Lewis wrote in <201508222103.t7ML3gAx000794@gw.catspoiler.org>: tr> The example /etc/rc.firewall has provisions to use either in-kernel NAT tr> or natd for the open and client firewall types, but the simple filewall tr> type only has code for natd. Is there any reason that in-kernel NAT tr> could not be used with the simple firewall type? 
I think there is no particular reason.  The simple ruleset was just not
updated.

tr> After allowing connections to selected TCP ports and then denying all
tr> other incoming TCP setup connections from ${oif}, the simple firewall
tr> code in /etc/rc.firewall then permits all other TCP setup connections:
tr> 
tr> 	# Allow setup of any other TCP connection
tr> 	${fwcmd} add pass tcp from any to any setup
tr> 
tr> This is potentially undesirable since it allows unrestricted TCP
tr> connections between "me" and the inside network.  When I changed this to
tr> 
tr> 	${fwcmd} add pass tcp from any to any out via ${oif} setup
tr> 
tr> I was able to open TCP connections from the firewall box to the outside,
tr> but NATed connections from inside network to the outside were blocked.
tr> If I run "ipfw show", it appears that the TCP setup packets are falling
tr> through to the final implicit deny all rule, but I don't see any obvious
tr> reason.

A TCP setup packet coming from a host on the internal LAN to the NAPT
router falls into the last deny-all rule because it no longer matches
once "out via ${oif}" is added to that rule.  Does the following
additional rule pair work for you?

 ${fwcmd} add pass tcp from any to any out via ${oif} setup
 ${fwcmd} add pass tcp from any to not me in via ${iif} setup

-- Hiroki

----Security_Multipart(Sun_Aug_23_08_44_53_2015_031)--
Content-Type: application/pgp-signature
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlXZCXUACgkQTyzT2CeTzy3c0gCaAnwy7kqPzgurLxz6zWIVahSh
m3gAoKGK41yyfHtdKEYLJMevRu/nw0o3
=V1kB
-----END PGP SIGNATURE-----
----Security_Multipart(Sun_Aug_23_08_44_53_2015_031)----
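[Editor's note: for the first question above (using in-kernel NAT with the
"simple" firewall type), a possible shape for such a section is sketched
below, modeled on the in-kernel NAT code rc.firewall already has for the
"open" and "client" types.  This is untested and hypothetical: the NAT
instance number 123 is arbitrary, ${fwcmd}, ${oif}, ${inet} and ${imask}
are the variables the "simple" section already uses, and the two rules
replace the existing "divert natd" pair.]

```sh
# Editor's sketch (untested): in-kernel NAT for the "simple" firewall type.
case ${firewall_nat_enable} in
[Yy][Ee][Ss])
	# Configure a NAT instance on the outside interface...
	${fwcmd} nat 123 config if ${oif} log same_ports reset
	# ...and pass outbound/inbound traffic on ${oif} through it,
	# mirroring the existing "divert natd" rules:
	${fwcmd} add nat 123 all from ${inet}:${imask} to any out via ${oif}
	${fwcmd} add nat 123 all from any to any in via ${oif}
	;;
esac
```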