Date:      Tue, 30 Jan 2007 09:30:23 -0800
From:      "Jack Vogel" <jfvogel@gmail.com>
To:        "Mike Tancsa" <mike@sentex.net>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: Intel EM tuning (PT1000 adaptors)
Message-ID:  <2a41acea0701300930u4f920b95n61d20972c14576a9@mail.gmail.com>
In-Reply-To: <200701301719.l0UHJ1Kk002345@lava.sentex.ca>
References:  <200701301719.l0UHJ1Kk002345@lava.sentex.ca>

On 1/30/07, Mike Tancsa <mike@sentex.net> wrote:
> On one of my servers (RELENG_6 as of yesterday), I am seeing what
> appears to be RX overruns. Load avg does not seem to be high, and the
> only odd thing I have done to the kernel is to define
>
> #define EM_FAST_INTR 1
>
>
> The man page talks about setting hw.em.* variables, but does not
> discuss any of the tunables under dev.em.*.  Is there anything there
> that can be tuned to improve performance?  Also, the man page says
> various controllers have different maximum values.  How do I find out
> what this particular card supports, since its controller (82572GI) is
> not mentioned in the man page?
>
>
> # sysctl -a dev.em.2
> dev.em.2.%desc: Intel(R) PRO/1000 Network Connection Version - 6.2.9
> dev.em.2.%driver: em
> dev.em.2.%location: slot=0 function=0
> dev.em.2.%pnpinfo: vendor=0x8086 device=0x105e subvendor=0x8086
> subdevice=0x115e class=0x020000
> dev.em.2.%parent: pci1
> dev.em.2.debug_info: -1
> dev.em.2.stats: -1
> dev.em.2.rx_int_delay: 0
> dev.em.2.tx_int_delay: 66
> dev.em.2.rx_abs_int_delay: 66
> dev.em.2.tx_abs_int_delay: 66
> dev.em.2.rx_processing_limit: 100
>
>
> Jan 30 11:04:31 FW4a-tor kernel: em2: Adapter hardware address = 0xc4b6f948
> Jan 30 11:04:31 FW4a-tor kernel: em2: CTRL = 0x80c0241 RCTL = 0x8002
> Jan 30 11:04:31 FW4a-tor kernel: em2: Packet buffer = Tx=16k Rx=32k
> Jan 30 11:04:31 FW4a-tor kernel: em2: Flow control watermarks high =
> 30720 low = 29220
> Jan 30 11:04:31 FW4a-tor kernel: em2: tx_int_delay = 66, tx_abs_int_delay = 66
> Jan 30 11:04:31 FW4a-tor kernel: em2: rx_int_delay = 0, rx_abs_int_delay = 66
> Jan 30 11:04:31 FW4a-tor kernel: em2: fifo workaround = 0, fifo_reset_count = 0
> Jan 30 11:04:31 FW4a-tor kernel: em2: hw tdh = 246, hw tdt = 246
> Jan 30 11:04:31 FW4a-tor kernel: em2: Num Tx descriptors avail = 231
> Jan 30 11:04:31 FW4a-tor kernel: em2: Tx Descriptors not avail1 = 0
> Jan 30 11:04:31 FW4a-tor kernel: em2: Tx Descriptors not avail2 = 0
> Jan 30 11:04:31 FW4a-tor kernel: em2: Std mbuf failed = 0
> Jan 30 11:04:31 FW4a-tor kernel: em2: Std mbuf cluster failed = 0
> Jan 30 11:04:31 FW4a-tor kernel: em2: Driver dropped packets = 0
> Jan 30 11:04:31 FW4a-tor kernel: em2: Driver tx dma failure in encap = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: Excessive collisions = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: Sequence errors = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: Defer count = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: Missed Packets = 47990
> Jan 30 11:04:40 FW4a-tor kernel: em2: Receive No Buffers = 2221
> Jan 30 11:04:40 FW4a-tor kernel: em2: Receive Length Errors = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: Receive errors = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: Crc errors = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: Alignment errors = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: Carrier extension errors = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: RX overruns = 61
> Jan 30 11:04:40 FW4a-tor kernel: em2: watchdog timeouts = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: XON Rcvd = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: XON Xmtd = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: XOFF Rcvd = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: XOFF Xmtd = 0
> Jan 30 11:04:40 FW4a-tor kernel: em2: Good Packets Rcvd = 126019287
> Jan 30 11:04:40 FW4a-tor kernel: em2: Good Packets Xmtd = 78181054
>
>
> em2@pci1:0:0:   class=0x020000 card=0x115e8086 chip=0x105e8086
> rev=0x06 hdr=0x00
>      vendor   = 'Intel Corporation'
>      device   = 'PRO/1000 PT'
>      class    = network
>      subclass = ethernet
> em3@pci1:0:1:   class=0x020000 card=0x115e8086 chip=0x105e8086
> rev=0x06 hdr=0x00
>      vendor   = 'Intel Corporation'
>      device   = 'PRO/1000 PT'
>      class    = network
>      subclass = ethernet
>
>
> em2: <Intel(R) PRO/1000 Network Connection Version - 6.2.9> port
> 0x9000-0x901f mem 0xd1020000-0xd103ffff,0xd1000000-0xd101ffff
> irq 18 at device 0.0 on pci1
> em2: Ethernet address: 00:15:17:0b:46:7c
> em2: [FAST]
> em3: <Intel(R) PRO/1000 Network Connection Version - 6.2.9> port
> 0x9400-0x941f mem 0xd1040000-0xd105ffff,0xd1060000-0xd107ffff
> irq 19 at device 0.1 on pci1
> em3: Ethernet address: 00:15:17:0b:46:7d
> em3: [FAST]

Performance tuning is not something I have yet had time to focus on;
our Linux team is able to do a lot more of that. Just at a glance,
try increasing your mbuf pool size and the number of receive descriptors
for a start. Oh, and try increasing your processing limit to 200 and see
what effect that has.
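
In FreeBSD terms that advice maps roughly onto the sketch below. This is
only a sketch: it assumes the hw.em.rxd/hw.em.txd loader tunables and the
kern.ipc.nmbclusters limit described in em(4) for this driver generation,
and the 65536/1024 values are illustrative, not measured. The descriptor
ceiling is controller-specific, so check em(4) for your driver version
before committing to numbers.

    # /boot/loader.conf -- takes effect at next boot
    kern.ipc.nmbclusters="65536"   # enlarge the mbuf cluster pool
    hw.em.rxd="1024"               # more RX descriptors (must be a
    hw.em.txd="1024"               #  multiple of 8; max varies by chip)

    # at runtime, raise the per-interrupt RX processing limit on em2
    sysctl dev.em.2.rx_processing_limit=200

After the change you can re-trigger the stats dump shown above (e.g.
sysctl dev.em.2.stats=1) and watch whether Missed Packets and RX overruns
keep climbing; netstat -m will show whether the larger cluster pool is
actually being drawn on.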

Jack


