Date:      Mon, 3 Aug 2009 17:14:49 +0100
From:      Ray Kinsella <raykinsella78@gmail.com>
To:        Invernizzi Fabrizio <fabrizio.invernizzi@telecomitalia.it>
Cc:        freebsd-performance@freebsd.org
Subject:   Re: Test on 10GBE Intel based network card
Message-ID:  <584ec6bb0908030914m74b79dceq9af2581e1b02449a@mail.gmail.com>
In-Reply-To: <584ec6bb0908030819vee58480p43989b742e1b7fd2@mail.gmail.com>
References:  <36A93B31228D3B49B691AD31652BCAE9A4560DF911@GRFMBX702BA020.griffon.local> <584ec6bb0908030819vee58480p43989b742e1b7fd2@mail.gmail.com>

Hi Fabrizio,

Ignore my last mail sent directly to you; 638976 PPS
((312*1024*1024)/(64*8), i.e. the earlier figure divided by 8 bits per byte)
is awful.
(Today is a national holiday here, so my brain is not switched on.)

To me it looks like interrupt coalescing is not switched on for some reason.
Are you passing any parameters to the driver in boot.conf?
Could you retest with "vmstat 3" running and send us the output?
I expect we are going to see a lot of interrupts.
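
For example, a minimal way to check this (just a sketch, assuming a stock
FreeBSD userland; adjust the device name to match your box):

    # per-interrupt totals and rates since boot; look for the ix0 lines
    vmstat -i

    # summary every 3 seconds; the "in" column under "faults" shows
    # device interrupts
    vmstat 3

If the interrupt rate tracks the packet rate almost one-to-one, interrupt
moderation is effectively off.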

Regards

Ray Kinsella



On Mon, Aug 3, 2009 at 4:19 PM, Ray Kinsella <raykinsella78@gmail.com> wrote:

> Hi Fabrizio,
>
> I am an Intel Network Software Engineer; I test/improve the performance of
> network device drivers, among other things. I will do my best to help you.
>
> The first thing I would say is that I haven't used the 10GbE NICs yet, but a
> rate of ~5 million PPS ((312*1024*1024)/64) is good or bad depending on what
> you are doing, i.e. how many NICs are sending and how many are receiving. In
> a situation where you operate cards in pairs, for instance all the traffic
> from card A goes to card B and all the traffic from card B goes to card A, I
> would consider this quite low. In a situation where any card can talk to any
> card, for instance traffic from card A can go to card B, C or D, 5 million
> PPS might be OK.
>
> The first thing you need to do is play with IRQ affinities; check out this
> blog post: http://bramp.net/blog/post. You want to set the IRQ affinities
> such that the rx threads are each bound to one SMP thread on one core. The
> next thing, if possible, is to configure network cards that have a 1-1
> relationship so that they execute on separate SMP threads of the same core.
> This should improve the line rate you are seeing.
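>
> For example, if your version of cpuset(1) supports binding interrupts (the
> -x flag), a minimal sketch would look like the following, using the
> irq260-irq263 numbers that show up for ix0 in your top output (check
> vmstat -i for the real numbers on your system):
>
>     # pin each ix0 interrupt to its own CPU
>     cpuset -l 0 -x 260
>     cpuset -l 1 -x 261
>     cpuset -l 2 -x 262
>     cpuset -l 3 -x 263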
>
> Regards
>
> Ray Kinsella
>
>
> On Mon, Aug 3, 2009 at 9:46 AM, Invernizzi Fabrizio
> <fabrizio.invernizzi@telecomitalia.it> wrote:
>
>> Hi all
>>
>> I am doing some tests on a BSD system with a 10 GbE Intel-based network
>> card, and I have some doubts about the configuration since the performance
>> I am experiencing looks (very) poor.
>>
>> This is the system I am doing test on:
>>
>>
>>
>> - HP 380 G5 (Xeon X5420, CPU speed: 2.50 GHz, bus speed: 1333 MHz, L2 cache
>> size: 12 MB, L2 cache speed: 2.5 GHz) with 1 quad-core CPU installed.
>>
>> - Network card: Silicom PE10G2i-LR - Dual Port Fiber (LR) 10 Gigabit
>> Ethernet PCI Express Server Adapter, Intel®-based (chip 82598EB).
>>        Driver: ixgbe-1.8.6
>>
>> - FreeBSD 7.2-RELEASE (64 bit) with these options compiled into the kernel:
>>        options             ZERO_COPY_SOCKETS        # Turn on zero copy send code
>>        options             HZ=1000
>>        options             BPF_JITTER
>>
>>
>>
>> I worked on the driver settings in order to have big TX/RX rings and low
>> interrupt rate (traffic latency is not an issue).
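>>
>> (As an aside, a quick way to see which tunables and counters a given
>> driver version actually exposes, rather than guessing at names, is
>> something like:
>>
>>        sysctl -a | grep -i ixgbe
>>        sysctl dev.ix.0
>>
>> and the corresponding loader tunables then go in /boot/loader.conf.)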
>>
>>
>>
>> In order to tune up the system, I started with some echo request tests.
>>
>> These are the maximum bandwidths I can send without loss:
>>
>> - 64-byte packets: 312 Mbps (1.64% CPU idle)
>>
>> - 512-byte packets: 2117 Mbps (1.63% CPU idle)
>>
>> - 1492-byte packets: 5525 Mbps (1.93% CPU idle)
>>
>>
>>
>> Am I right to consider these figures lower than expected?
>> The system is doing nothing but handling network traffic!
>>
>>
>>
>> Now I have started with netgraph tests, in particular with ng_bpf, and the
>> overall system is doing even worse.
>>
>> I sent some HTTP traffic (597-byte packets) and I configured an ng_bpf node
>> to filter TCP traffic out of the incoming interface (ix0).
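>>
>> The hookup is roughly the following (only a sketch; the hook name
>> "ingress" and node name "tcpfilter" are illustrative, and the BPF filter
>> itself is loaded with a setprogram message built from a tcpdump -ddd
>> expression, as described in ng_bpf(4)):
>>
>>        # attach a bpf node to the lower hook of ix0 and give it a name
>>        ngctl mkpeer ix0: bpf lower ingress
>>        ngctl name ix0:lower tcpfilter
>>
>>        # read the per-hook packet/octet counters
>>        ngctl msg tcpfilter: getstats \"ingress\"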
>>
>> If I use ngctl msg to read the counters on the ng_bpf node, I see extremely
>> poor performance:
>>
>>
>>
>> - Sending 96 Mbps of this traffic, I measured 0.1% packet loss. This
>> looks very strange. Maybe some counter bug?
>>
>> - Sending 5500 Mbps, netgraph (not the network card driver) is losing
>> 21% of the sent packets. See below a snapshot of the CPU load under
>> traffic load:
>>
>>
>>
>> CPU:  0.0% user,  0.0% nice, 87.0% system,  9.1% interrupt,  3.9% idle
>>
>> Mem: 16M Active, 317M Inact, 366M Wired, 108K Cache, 399M Buf, 7222M Free
>>
>> Swap: 2048M Total, 2048M Free
>>
>>
>>
>>  PID USERNAME    THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
>>   12 root          1 171 ki31     0K    16K RUN    2  20.2H 68.80% idle: cpu2
>>   11 root          1 171 ki31     0K    16K RUN    3  20.1H 64.70% idle: cpu3
>>   14 root          1 171 ki31     0K    16K RUN    0  20.2H 64.26% idle: cpu0
>>   13 root          1 171 ki31     0K    16K RUN    1  20.2H 63.67% idle: cpu1
>>   38 root          1 -68    -     0K    16K CPU1   1   1:28 34.67% ix0 rxq
>>   40 root          1 -68    -     0K    16K CPU2   0   1:26 34.18% ix0 rxq
>>   34 root          1 -68    -     0K    16K CPU3   3   1:27 34.08% ix0 rxq
>>   36 root          1 -68    -     0K    16K RUN    2   1:26 34.08% ix0 rxq
>>   33 root          1 -68    -     0K    16K WAIT   3   0:40  4.05% irq260: ix0
>>   39 root          1 -68    -     0K    16K WAIT   2   0:41  3.96% irq263: ix0
>>   35 root          1 -68    -     0K    16K WAIT   0   0:39  3.66% irq261: ix0
>>   37 root          1 -68    -     0K    16K WAIT   1   0:42  3.47% irq262: ix0
>>   16 root          1 -32    -     0K    16K WAIT   0  14:53  2.49% swi4: clock sio
>>
>>
>>
>>
>>
>>
>>
>> Am I missing something?
>>
>> Does anyone know of any (more) system tuning that would support a higher
>> traffic rate?
>>
>>
>>
>> Any help is greatly appreciated.
>>
>>
>>
>> Fabrizio
>>
>>
>>
>>
>>
>> ------------------------------------------------------------------
>> Telecom Italia
>> Fabrizio INVERNIZZI
>> Technology - TILAB
>> Accesso Fisso e Trasporto
>> Via Reiss Romoli, 274 10148 Torino
>> Tel.  +39 011 2285497
>> Mob. +39 3316001344
>> Fax +39 06 41867287
>>
>>
>>
>> This e-mail and any attachments are confidential and may contain privileged
>> information intended for the addressee(s) only. Dissemination, copying,
>> printing or use by anybody else is unauthorised. If you are not the intended
>> recipient, please delete this message and any attachments and advise the
>> sender by return e-mail. Thanks.
>>
>>
>
>


