Date:      Wed, 10 Nov 2010 13:04:28 +0200
From:      Eugene Perevyazko <john@dnepro.net>
To:        freebsd-net@freebsd.org
Subject:   igb dual-port adapter 1200Mbps limit - what to tune?
Message-ID:  <20101110110428.GA3505@traktor.dnepro.net>

Hello, freebsd-net.

I have a router running RELENG_7 with two dual-port igb adapters: igb0 and igb1
are the 82575 on an Intel S5520UR motherboard, and igb2 and igb3 are the 82576
on an ET dual-port card. The 82576 card is in an x8 slot.
Most traffic flows from igb0+igb1 to igb2+igb3, with less going back.
There is no traffic between igb0 and igb1, or between igb2 and igb3.

There are vlans on all interfaces.

igb0 and igb1 are outbound links.
igb2 and igb3 are connected to switch.
The CPU is an E5620 @ 2.4 GHz (8 cores); IRQs are bound to different cores,
skipping the HT siblings.
I have tried both 2 queues and 1 queue per interface; neither hits the CPU limit.
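For reference, this kind of IRQ binding is usually done on FreeBSD with
cpuset(1). A minimal sketch follows; the IRQ numbers and core assignments
below are purely illustrative (the real ones come from `vmstat -i` on the
machine in question), not taken from this host:

```shell
# Illustrative only: first look up the actual igb interrupt sources.
vmstat -i | grep igb

# Pin each igb MSI-X queue vector to its own physical core (0, 2, 4, ...),
# skipping the odd-numbered HT siblings. IRQ numbers are hypothetical.
cpuset -l 0 -x 256   # igb0:que 0
cpuset -l 2 -x 257   # igb1:que 0
cpuset -l 4 -x 258   # igb2:que 0
cpuset -l 6 -x 259   # igb3:que 0
```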

The problem is that traffic through igb2+igb3 tops out at around 1200Mbps Tx,
while I was hoping for 1600-1800Mbps Tx.

I have tried aggregating igb2 and igb3 into a roundrobin lagg: the load is then
distributed evenly between them, at ~600Mbps each. Without aggregation, a peak
in load on one of them corresponds to a sag on the other. Either way, the sum
of igb2 Tx + igb3 Tx never exceeds 1200Mbps.
A combination of forwarded traffic plus netperf run from this host is limited
to the same number.
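For completeness, the roundrobin lagg test described above would look roughly
like this in rc.conf. This is a sketch, not the exact configuration used: the
vlan placement on lagg0 is an assumption, and addressing is omitted:

```shell
# Hypothetical rc.conf fragment for the roundrobin lagg experiment.
ifconfig_igb2="-tso -lro up"
ifconfig_igb3="-tso -lro up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto roundrobin laggport igb2 laggport igb3"
# vlan moved from the physical port onto the lagg (illustrative):
ifconfig_vlan3003="vlan 3003 vlandev lagg0"
```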

Is it possible at all to get 1600+Mbps Tx out of a dual-port card?
If so, what should I tune to get there?
If not, will adding one more ET adapter help?

Current settings are:

rc.conf:
ifconfig_igb0="-tso -lro up"
ifconfig_igb1="-tso -lro up"
ifconfig_igb2="-tso -lro up"
ifconfig_igb3="-tso -lro up"
ifconfig_vlan37="vlan 37 vlandev igb0"
ifconfig_vlan1812="vlan 1812 vlandev igb1"
ifconfig_vlan3003="vlan 3003 vlandev igb2"
ifconfig_vlan3004="vlan 3004 vlandev igb3"

sysctl.conf:
net.inet.ip.intr_queue_maxlen=5000
net.inet.ip.redirect=0
net.inet.icmp.drop_redirect=1
net.inet.ip.fastforwarding=1

loader.conf:
hw.igb.num_queues=1
hw.igb.enable_aim=1
hw.igb.low_latency=1000
hw.igb.ave_latency=2000
hw.igb.bulk_latency=4000
hw.igb.rx_process_limit=1000
hw.igb.fc_setting=0

ifconfig:
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
        ether 00:15:17:bd:27:48
        media: Ethernet autoselect (1000baseTX <full-duplex>)
        status: active
igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
        ether 00:15:17:bd:27:49
        media: Ethernet autoselect (1000baseTX <full-duplex>)
        status: active
igb2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
        ether 00:1b:21:5e:f4:34
        media: Ethernet autoselect (1000baseTX <full-duplex>)
        status: active
igb3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
        ether 00:1b:21:5e:f4:35
        media: Ethernet autoselect (1000baseTX <full-duplex>)
        status: active
 
dmesg:
igb0: <Intel(R) PRO/1000 Network Connection version - 1.9.6> port 0x2020-0x203f mem 0xb2520000-0xb253ffff,0xb2544000-0xb2547fff irq 40 at device 0.0 on pci1
igb0: Using MSIX interrupts with 2 vectors
igb0: [ITHREAD]
igb0: [ITHREAD]
igb0: Ethernet address: 00:15:17:bd:27:48
igb1: <Intel(R) PRO/1000 Network Connection version - 1.9.6> port 0x2000-0x201f mem 0xb2500000-0xb251ffff,0xb2540000-0xb2543fff irq 28 at device 0.1 on pci1
igb1: Using MSIX interrupts with 2 vectors
igb1: [ITHREAD]
igb1: [ITHREAD]
igb1: Ethernet address: 00:15:17:bd:27:49
igb2: <Intel(R) PRO/1000 Network Connection version - 1.9.6> port 0x1020-0x103f mem 0xb2420000-0xb243ffff,0xb2000000-0xb23fffff,0xb24c4000-0xb24c7fff irq 30 at device 0.0 on pci4
igb2: Using MSIX interrupts with 2 vectors
igb2: [ITHREAD]
igb2: [ITHREAD]
igb2: Ethernet address: 00:1b:21:5e:f4:34
igb3: <Intel(R) PRO/1000 Network Connection version - 1.9.6> port 0x1000-0x101f mem 0xb2400000-0xb241ffff,0xb1c00000-0xb1ffffff,0xb24c0000-0xb24c3fff irq 37 at device 0.1 on pci4
igb3: Using MSIX interrupts with 2 vectors
igb3: [ITHREAD]
igb3: [ITHREAD]
igb3: Ethernet address: 00:1b:21:5e:f4:35

pciconf -lvc:
igb0@pci0:1:0:0:        class=0x020000 card=0x34de8086 chip=0x10a78086 rev=0x02 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82575EB Gigabit Network Connection'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 2  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit
    cap 11[60] = MSI-X supports 10 messages in map 0x1c enabled
    cap 10[a0] = PCI-Express 2 endpoint max data 256(256) link x4(x4)
igb1@pci0:1:0:1:        class=0x020000 card=0x34de8086 chip=0x10a78086 rev=0x02 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82575EB Gigabit Network Connection'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 2  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit
    cap 11[60] = MSI-X supports 10 messages in map 0x1c enabled
    cap 10[a0] = PCI-Express 2 endpoint max data 256(256) link x4(x4)
igb2@pci0:4:0:0:        class=0x020000 card=0xa03c8086 chip=0x10c98086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)
igb3@pci0:4:0:1:        class=0x020000 card=0xa03c8086 chip=0x10c98086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3  supports D0 D3  current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)


-- 
Eugene Perevyazko


