Date:      Sat, 17 May 2014 14:40:06 +0200
From:      Damian Danielecki <danieleckid@gmail.com>
To:        freebsd-stable@freebsd.org
Subject:   [9.3 PRE] Intel i210AT ethernet adapter MSI-X problems (igb driver)
Message-ID:  <CANwN-8SsKoEPL6NQCYYcoezJjuYsohwDH9BpipzCKnVc7CDAjA@mail.gmail.com>

I am receiving as many as 30,000-40,000 interrupts per second under
light NFS traffic.
Average traffic is 20 Mb/s, and the average interrupt rate on the igb
device is 13,000/s.
IMHO, compared to similar igb adapters on other FreeBSD 9 servers,
there should be far fewer interrupts under such light traffic
conditions.
Additionally, all of these interrupts land on only one IRQ.
The interesting thing is that the CPU usage caused by interrupts is
very small (under 1%), which is why I am still able to use this
Ethernet card.

# uname -a
FreeBSD nfsd.xxx.pl 9.3-PRERELEASE FreeBSD 9.3-PRERELEASE #5: Fri May
16 15:41:36 CEST 2014
root@nfsd.xxx.pl:/usr/obj/usr/src/sys/FREEBSD9  amd64
This is a custom, minimal kernel.


I see in the igb driver sources that the I210 is generally supported:
/usr/src/sys/dev/e1000 # grep 'I210' * |wc -l
      64

I guess my adapter (an onboard quad-gigabit I210-AT) is not handled
correctly in the driver sources, although the system recognizes it as igb.

# dmidecode
(...)
Manufacturer: Supermicro
Product Name: X10SLM+-LN4F
(...)

# dmesg |grep igb0
igb0: <Intel(R) PRO/1000 Network Connection version - 2.3.10> port
0xc000-0xc01f mem 0xf7400000-0xf747ffff,0xf7480000-0xf7483fff irq 18
at device 0.0 on pci4
igb0: Using MSIX interrupts with 5 vectors
igb0: Ethernet address: 0c:c4:7a:01:e3:50
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: Bound queue 2 to cpu 2
igb0: Bound queue 3 to cpu 3
igb0: link state changed to UP
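
The queue-to-CPU binding shown above can also be changed after boot with
cpuset(1), in case somebody wants me to test a different pinning. A
minimal sketch, using the IRQ number of que 0 from the vmstat -i output
below (the exact IRQ numbers will of course differ per machine):

# cpuset -l 2 -x 269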


There is no device description string:
# pciconf -vl
igb0@pci0:4:0:0:        class=0x020000 card=0x153315d9 chip=0x15338086
rev=0x03 hdr=0x00
    vendor     = 'Intel Corporation'
    class      = network
    subclass   = ethernet


Number of interrupts taken by the device since system startup:
# vmstat -i
irq269: igb0:que 0             501886805      13364
irq270: igb0:que 1                 40477          1
irq271: igb0:que 2                 40417          1
irq272: igb0:que 3               7526720        200
irq273: igb0:link                     12          0

Sample of the current interrupt rate under light NFS traffic:
# systat -vmstat 1
Interrupts
34352 total (!!!)
(...)
29937 igb0:que 0
1 igb0:que 1
1 igb0:que 2
1 igb0:que 3
1 igb0:link

As you can see above, the interrupt rate is abnormally high and nearly
all of the interrupts sit on que 0.
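
To check whether the traffic itself (and not only the interrupts) is
landing on a single queue, the per-queue counters under dev.igb.0 can be
looked at. A sketch, assuming the driver on this box exposes the usual
queueN stat nodes:

# sysctl dev.igb.0 | egrep 'queue[0-3]\.(rx_packets|tx_packets|interrupt_rate)'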

These sysctls are untouched by me (driver defaults):

hw.igb.rx_process_limit: 100
hw.igb.num_queues: 4
hw.igb.header_split: 0
hw.igb.buf_ring_size: 4096
hw.igb.max_interrupt_rate: 8000
hw.igb.enable_msix: 1
hw.igb.enable_aim: 1
hw.igb.txd: 1024
hw.igb.rxd: 1024
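
If it helps, I can experiment with these. As far as I understand they
are loader tunables, so they would go into /boot/loader.conf and take
effect after a reboot. A rough sketch of what I was thinking of trying
(not a recommendation, just test values for the tunables listed above):

hw.igb.enable_aim="0"             # turn off adaptive interrupt moderation
hw.igb.max_interrupt_rate="4000"  # cap the per-queue interrupt rate lower
hw.igb.enable_msix="0"            # fall back to MSI as a test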



For comparison, this is properly working IRQ balancing on an I350
Gigabit Ethernet adapter running FreeBSD 9.2-RELEASE:

otherserver# vmstat -i
irq264: igb0:que 0           14586306636        962
irq265: igb0:que 1           12313136472        812
irq266: igb0:que 2           12400518935        818
irq267: igb0:que 3           12230694398        807
irq268: igb0:que 4           12624900681        833
irq269: igb0:que 5           12311037080        812
irq270: igb0:que 6           31682657476       2090
irq271: igb0:que 7           12203814868        805
irq272: igb0:link                     77          0


Regards,
DD


