Date:      Tue, 08 May 2012 17:09:53 +0300
From:      Sergey Saley <sergeysaley@gmail.com>
To:        freebsd-net@freebsd.org
Subject:   Re: Too much interrupts on ixgbe
Message-ID:  <4FA92931.7040700@gmail.com>
In-Reply-To: <1319530877390-4935427.post@n5.nabble.com>
References:  <1319449307149-4931883.post@n5.nabble.com> <CAFOYbc=vM6_UMXCTOUz_mTyTVpCxp61Np68ERH8tx4KE9aT8gA@mail.gmail.com> <1319478384269-4933498.post@n5.nabble.com> <CAFOYbcnF8Q-E0Jocja6LQ8HuwTwkL5ghuo_KWM0uku0mxpNHDw@mail.gmail.com> <1319483324861-4933765.post@n5.nabble.com> <CAFMmRNzYiJz_FigTMLr4iNTr4dhVy%2BbiCgUJ%2BT5uuCvYw6vEPA@mail.gmail.com> <1319485884830-4933934.post@n5.nabble.com> <CAFMmRNwOieMM83wHwqukDQy=D5y43ULZOiazVQV0Af5Ad1Tzrw@mail.gmail.com> <1319527328469-4935272.post@n5.nabble.com> <CAFOYbcnk4cUxWHt1pjQt-9XGo=1erqvWoDR7B8HV1LUTQ9iDBw@mail.gmail.com> <1319530877390-4935427.post@n5.nabble.com>

On 25.10.2011 11:21, Sergey Saley wrote:
> Jack Vogel wrote:
>> On Tue, Oct 25, 2011 at 12:22 AM, Sergey Saley <sergeysaley@> wrote:
>>
>>> Ryan Stone-2 wrote:
>>>> On Mon, Oct 24, 2011 at 3:51 PM, Sergey Saley <sergeysaley@> wrote:
>>>>> MPD5, netgraph, PPPoE. Types of traffic - any (customer traffic).
>>>>> Buying this card, I counted on 3-4G of traffic at 3-4K PPPoE sessions.
>>>>> It turned out to be 600-700 Mbit/s, about 50K pps, at 700-800 PPPoE sessions.
>>>> PPPoE is your problem.  The Intel cards can't load-balance PPPoE
>>>> traffic, so everything goes to one queue.  It may be possible to write
>>>> a netgraph module to load-balance the traffic across your CPUs.
>>>>
>>> OK, thank you for the explanation.
>>> And what about the large number of interrupts?
>>> As for me, it's too much...
>>> irq256: ix0:que 0              240536944       6132
>>> irq257: ix0:que 1               89090444       2271
>>> irq258: ix0:que 2               93222085       2376
>>> irq259: ix0:que 3               89435179       2280
>>> irq260: ix0:link                       1          0
>>> irq261: ix1:que 0              269468769       6870
>>> irq262: ix1:que 1                 110974          2
>>> irq263: ix1:que 2                 434214         11
>>> irq264: ix1:que 3                 112281          2
>>> irq265: ix1:link                       1          0
>>>
>>>
>> How do you decide it's 'too much'?  It may be that with your traffic you
>> end up not being able to use offloads, just thinking.  It's not like the
>> hardware just "makes it up"; it interrupts on the last descriptor of a
>> packet which has the RS bit set.  With TSO you will get larger chunks of
>> data and thus fewer interrupts, but your traffic probably doesn't qualify
>> for it.
>>
> It's easy. I have several servers with a similar task and load:
> about 30K pps, about 500-600M of traffic, about 600-700 PPPoE connections.
> The one difference is that they use em(4) instead of ix(4).
> Here is a typical vmstat -i output:
>
> point06# vmstat -i
> interrupt                          total       rate
> irq17: atapci0                   6173367          0
> cpu0: timer                   3904389748        465
> irq256: em0                   3754877950        447
> irq257: em1                   2962728160        352
> cpu2: timer                   3904389720        465
> cpu1: timer                   3904389720        465
> cpu3: timer                   3904389721        465
> Total                        22341338386       2661
>
> point05# vmstat -i
> interrupt                          total       rate
> irq14: ata0                           35          0
> irq19: atapci1                   8323568          0
> cpu0: timer                   3905440143        465
> irq256: em0                   3870403571        461
> irq257: em1                   1541695487        183
> cpu1: timer                   3905439895        465
> cpu3: timer                   3905439895        465
> cpu2: timer                   3905439895        465
> Total                        21042182489       2506
>
> point04# vmstat -i
> interrupt                          total       rate
> irq19: atapci0                   6047874          0
> cpu0: timer                   3901683760        464
> irq256: em0                    823774953         98
> irq257: em1                   1340659093        159
> cpu1: timer                   3901683730        464
> cpu2: timer                   3901683730        464
> cpu3: timer                   3901683730        464
> Total                        17777216870       2117
>
>
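
Just to check that I understand the RS-bit point above: a toy model in
plain C (nothing driver-specific, all names invented) of why one interrupt
per RS-marked descriptor means a TSO-sized send costs far fewer interrupts
than the same data handed over as many small packets.

#include <stdio.h>

#define RS 0x1                          /* hypothetical "report status" flag */

struct desc {
        int flags;
};

/*
 * Queue one send split across ndesc descriptors, setting RS only on the
 * last one, and return how many completion interrupts that send costs.
 */
static int
queue_send(struct desc *ring, int ndesc)
{
        int i, irqs = 0;

        for (i = 0; i < ndesc; i++)
                ring[i].flags = (i == ndesc - 1) ? RS : 0;
        for (i = 0; i < ndesc; i++)
                if (ring[i].flags & RS)
                        irqs++;
        return (irqs);
}

int
main(void)
{
        struct desc ring[64];
        int i, small = 0;

        /* 40 small packets, one descriptor each -> 40 interrupts */
        for (i = 0; i < 40; i++)
                small += queue_send(ring, 1);
        /* one large TSO send spanning 40 descriptors -> 1 interrupt */
        printf("per-packet: %d interrupts, TSO: %d interrupt\n",
            small, queue_send(ring, 40));
        return (0);
}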

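Rough arithmetic on the numbers above (the vmstat -i rate column is an
average since boot, so this is only a ballpark, and it assumes the ~50K pps
and ~30K pps figures are typical for those boxes):

  ix box:  ~50,000 pps / (6132+2271+2376+2280+6870) = ~19,900 irq/s  ->  ~2.5 packets per interrupt
  em box:  ~30,000 pps / (447+352) = ~800 irq/s                      ->  ~37 packets per interrupt

So the em(4) boxes coalesce roughly fifteen times more packets per
interrupt at a similar load, which is why the ix(4) rate looks excessive
to me.
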
BTW, maybe there is a way to separate the traffic across several queues by
VLAN tag?
That would be a partial solution...
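
For what it's worth, here is a minimal sketch in plain C (not a netgraph
node and not the actual ix(4)/RSS logic - just an illustration with
made-up names) of the kind of distribution Ryan described above: pick a
queue from a hash of some per-flow field, e.g. the 16-bit PPPoE session ID
or the 802.1Q VLAN ID.

#include <stdint.h>
#include <stdio.h>

#define NQUEUES 4                       /* matches the 4 ix(4) queues above */

/*
 * Toy flow->queue mapping: hash whatever per-flow key is visible (PPPoE
 * session ID, VLAN ID, ...) and take it modulo the number of queues.
 * A real implementation would likely use something like a Toeplitz hash.
 */
static unsigned
pick_queue(uint16_t flow_key)
{
        uint32_t h = flow_key * 2654435761u;    /* cheap multiplicative hash */
        return ((h >> 16) % NQUEUES);
}

int
main(void)
{
        unsigned count[NQUEUES] = { 0 };
        unsigned q;
        uint16_t sess;

        /* pretend there are 800 PPPoE sessions with ids 1..800 */
        for (sess = 1; sess <= 800; sess++)
                count[pick_queue(sess)]++;

        for (q = 0; q < NQUEUES; q++)
                printf("queue %u: %u sessions\n", q, count[q]);
        return (0);
}

Of course this only shows the mapping itself; where it would have to live
(a netgraph node, the driver, or the card's own filters) is the real
question.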
