Date:      Fri, 03 Dec 2010 16:17:51 -0600
From:      Tom Judge <tom@tomjudge.com>
To:        Jack Vogel <jfvogel@gmail.com>
Cc:        freebsd-net@freebsd.org
Subject:   Re: igb and jumbo frames
Message-ID:  <4CF96C8F.6020203@tomjudge.com>
In-Reply-To: <AANLkTinsJhZPjZRM_hiGV0htDc8JUO9M38fATm-6T8Nj@mail.gmail.com>
References:  <4CF93E43.8010801@tomjudge.com> <AANLkTinsJhZPjZRM_hiGV0htDc8JUO9M38fATm-6T8Nj@mail.gmail.com>

Hi Jack,

Thanks for the response.

On 12/03/2010 04:05 PM, Jack Vogel wrote:
> Since you're already configuring the system in a special, non-standard
> way you are playing the admin, so I'd expect you to also configure memory
> pool resources, not to have the driver do so. It's also going to depend on
> the number of queues you have; you can reduce those manually as well.
> 

In light of this, is it worth documenting in igb(4) that it can use a lot
of resources?  Adding a little more information about the allocation to
the error messages might be a good idea as well.  The reason I say this is
that, for some reason, the driver's requests for jumbo clusters that were
failing did not show up in the denied counters in netstat -m.
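
For anyone else chasing this, the per-zone counters from vmstat -z may be
more revealing than netstat -m.  Something like the following should show
the 9k jumbo zone with its limit, usage and failure counts (I'm assuming
the zone is still named mbuf_jumbo_9k on 8.x):

  # netstat -m
  # vmstat -z | egrep 'ITEM|mbuf_jumbo_9k'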


I'm currently using the following settings in loader.conf:

kern.ipc.nmbclusters="131072"
kern.ipc.nmbjumbo9="38400"

I'm not sure where the first one should end up, but I had to raise it from
the default to get things to work.

The second leaves me at about 50% utilisation of 9k clusters when running
2 NICs with an MTU of 8192:
16385/2214/18599/32768 9k jumbo clusters in use (current/cache/total/max)
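
As a rough sanity check on that figure: if the driver pins one 9k cluster
per RX descriptor, then with what I believe are the defaults for this card
(8 queues per port and 1024 RX descriptors per queue -- both assumptions on
my part, check hw.igb.rxd and the queue count reported at attach time) the
two ports alone would account for roughly:

  2 ports * 8 queues * 1024 descriptors = 16384 clusters

which lines up with the ~16k shown in use above, before counting anything
sitting in socket buffers.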

> I'm glad you're trying this out. However, the 9K cluster use is new, and not
> uncontroversial either. I decided to put it in, but if problems occur, or
> someone has a strong, valid-sounding argument for not using them, I could be
> persuaded to take it out and just use 2K and 4K sizes.

I'm not sure it is or will be an issue, but I was hitting my head on the
desk for a few hours before I worked out why this was happening.  I had
to read the code to deduce where the error was coming from.

Hopefully we don't hit the same issue here as with bce(4), where memory
gets fragmented and the system can't allocate any new 9k clusters.
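
If it does happen, I'd expect it to show up as a steadily climbing failure
count on the 9k zone even while memory is nominally free; a crude way to
watch for that would be something like (again assuming the mbuf_jumbo_9k
zone name):

  # while sleep 60; do vmstat -z | grep mbuf_jumbo_9k; done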

> 
> So... any feedback is good right now.
> 

I will provide more feedback in the coming weeks as we load up these four
systems.  Currently they are idle, waiting for the application jails to be
deployed on them.

Tom

> Jack
> 
> 
> On Fri, Dec 3, 2010 at 11:00 AM, Tom Judge <tom@tomjudge.com> wrote:
> 
>     Hi,
> 
>     So I have been playing around with some new hosts I have been deploying
>     (Dell R710's).
> 
>     The systems have a single dual port card in them:
> 
>     igb0@pci0:5:0:0:        class=0x020000 card=0xa04c8086 chip=0x10c98086
>     rev=0x01 hdr=0x00
>        vendor     = 'Intel Corporation'
>        class      = network
>        subclass   = ethernet
>        cap 01[40] = powerspec 3  supports D0 D3  current D0
>        cap 05[50] = MSI supports 1 message, 64 bit, vector masks
>        cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
>        cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)
>     igb1@pci0:5:0:1:        class=0x020000 card=0xa04c8086 chip=0x10c98086
>     rev=0x01 hdr=0x00
>        vendor     = 'Intel Corporation'
>        class      = network
>        subclass   = ethernet
>        cap 01[40] = powerspec 3  supports D0 D3  current D0
>        cap 05[50] = MSI supports 1 message, 64 bit, vector masks
>        cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
>        cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)
> 
> 
>     Running 8.1, these cards panic the system at boot when initializing the
>     jumbo MTU, so to solve this I back-ported the stable/8 driver to 8.1 and
>     booted with that kernel.  So far so good.
> 
>     However, when configuring the interfaces with an MTU of 8192, the system
>     is unable to allocate the required mbufs for the receive queues.
> 
>     I believe the message was from here:
>     http://fxr.watson.org/fxr/source/dev/e1000/if_igb.c#L1209
> 
>     After a little digging and playing with just one interface, I discovered
>     that the default tuning for kern.ipc.nmbjumbo9 was insufficient to run a
>     single interface with jumbo frames, as it seemed that the TX queue alone
>     consumed 90% of the available 9k jumbo clusters.
> 
>     So my question is (well 2 questions really):
> 
>     1) Should igb be auto-tuning kern.ipc.nmbjumbo9 and kern.ipc.nmbclusters
>     up to suit its needs?
> 
>     2) Should this be documented in igb(4)?
> 
>     Tom
> 
>     --
>     TJU13-ARIN
> 
> 


-- 
TJU13-ARIN


