Date:      Mon, 3 Aug 2009 12:22:23 +0300
From:      Stefan Lambrev <stefan.lambrev@moneybookers.com>
To:        Invernizzi Fabrizio <fabrizio.invernizzi@telecomitalia.it>
Cc:        "freebsd-performance@freebsd.org" <freebsd-performance@freebsd.org>
Subject:   Re: Test on 10GBE Intel based network card
Message-ID:  <0E567C7E-4EAA-4B89-9A8D-FD0450D32ED7@moneybookers.com>
In-Reply-To: <36A93B31228D3B49B691AD31652BCAE9A4560DF911@GRFMBX702BA020.griffon.local>
References:  <36A93B31228D3B49B691AD31652BCAE9A4560DF911@GRFMBX702BA020.griffon.local>

Hi,

The limitation you are seeing is the maximum number of packets per second
that FreeBSD can handle - it looks like your best packet rate is reached
with 64-byte packets. Am I correct that the maximum you can reach is
around 639,000 packets per second?
Also, you are not routing the traffic; the server handles the requests
itself and burns CPU generating the replies?
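For reference, the packet rates implied by the bandwidth figures quoted below can be estimated with a little arithmetic. This is an editor's illustration, not part of the thread, and it assumes the Mbps figures count Ethernet frame bytes only (no preamble or inter-frame gap):

```python
# Rough packets-per-second estimates from the bandwidths reported in the
# quoted message. Assumption (mine, not from the thread): the Mbps figures
# count frame bytes only, ignoring the 20 bytes of preamble + IFG per frame.

def pps(mbps: float, frame_bytes: int) -> int:
    """Packets per second for a given throughput and frame size."""
    return int(mbps * 1_000_000 / (frame_bytes * 8))

for mbps, size in [(312, 64), (2117, 512), (5525, 1492)]:
    print(f"{size:4d}-byte frames at {mbps} Mbps -> ~{pps(mbps, size):,} pps")
```

Under this assumption the 64-byte case works out to roughly 600k pps, the same ballpark as the ~639k figure above; accounting for per-frame preamble and inter-frame gap would lower the estimates somewhat.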

On Aug 3, 2009, at 11:46 AM, Invernizzi Fabrizio wrote:

> Hi all
>
> I am doing some tests on a BSD system with a 10gbe Intel based
> network card and I have some doubts about the configuration, since
> the performance I am experiencing looks (very) poor.
>
> This is the system I am doing test on:
>
>
>
> - HP 380 G5 (XEON X5420, CPU speed: 2.50 GHz, BUS speed: 1333 MHz,
> L2 cache size: 12 MB, L2 cache speed: 2.5 GHz) with 1 quad-core
> installed.
>
> - Network card: Silicom PE10G2i-LR - Dual Port Fiber (LR) 10 Gigabit
> Ethernet PCI Express Server Adapter, Intel® based (chip 82598EB).
>        Driver ixgbe-1.8.6
>
> - FreeBSD 7.2-RELEASE (64 bit) with these options compiled into the
> kernel:
>        options             ZERO_COPY_SOCKETS        # Turn on zero
> copy send code
>        options             HZ=1000
>        options             BPF_JITTER
>
>
>
> I worked on the driver settings in order to have big TX/RX rings and
> a low interrupt rate (traffic latency is not an issue).
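[Editor's note: the exact settings are not shown in the thread. As an illustration only, ring sizes for ixgbe drivers of that era were typically set via loader tunables such as the following; the tunable names vary by driver version, so verify them against `sysctl -d hw.ix` before relying on them.]

```
# /boot/loader.conf - illustrative sketch, names are assumptions
hw.ixgbe.rxd=4096            # larger RX descriptor ring
hw.ixgbe.txd=4096            # larger TX descriptor ring
kern.ipc.nmbclusters=262144  # more mbuf clusters to back the big rings
```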
>
>
>
> In order to tune up the system, I started with some echo request tests.
>
> These are the maximum Bandwidths I can send without loss:
>
> - 64 byte packets: 312 Mbps (1.64% CPU idle)
>
> - 512 byte packets: 2117 Mbps (1.63% CPU idle)
>
> - 1492 byte packets: 5525 Mbps (1.93% CPU idle)
>
>
>
> Am I right considering these figures lower than expected?
> The system is just managing network traffic!
>
>
>
> Now I have started with netgraph tests, in particular with ng_bpf,
> and the overall system is doing even worse.
>
> I sent some HTTP traffic (597-byte packets) and I configured an
> ng_bpf node to filter TCP traffic out of the incoming interface (ix0).
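[Editor's note: the exact node wiring is not shown. A typical way to attach an ng_bpf node to an interface and load a "tcp" filter, following the pattern in the ng_bpf(4) manual page, looks roughly like the sketch below. The node and hook names (tcp_filter, ix0in, matched) are illustrative, not from the thread.]

```
# Sketch only - requires FreeBSD with netgraph; hook names are assumptions.
# Compile a "tcp" filter to BPF bytecode with tcpdump -ddd and wrap it in
# the setprogram message format expected by ng_bpf(4).
BPFPROG=$( tcpdump -s 8192 -ddd tcp | ( read len ; \
    echo -n "bpf_prog_len=$len bpf_prog=[" ; \
    while read code jt jf k ; do \
        echo -n " { code=$code jt=$jt jf=$jf k=$k }" ; \
    done ; echo " ]" ) )

ngctl mkpeer ix0: bpf lower ix0in          # attach ng_bpf below ix0
ngctl name ix0:lower tcp_filter
ngctl msg tcp_filter: setprogram { thisHook=\"ix0in\" ifMatch=\"matched\" \
    ifNotMatch=\"ix0in\" $BPFPROG }
ngctl msg tcp_filter: getstats \"ix0in\"   # per-hook packet counters
```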
>
> If I use the ngctl msg to see counters on the ng_bpf node, I see =20
> extremely poor performance:
>
>
>
> - Sending 96 Mbps of this traffic, I measured 0.1% packet loss.
> This looks very strange. Maybe a counter bug?
>
> - Sending 5500 Mbps, netgraph (not the network card driver) is
> losing 21% of the sent packets. See below a snapshot of the CPU
> load under traffic.
>
>
>
> CPU:  0.0% user,  0.0% nice, 87.0% system,  9.1% interrupt,  3.9% idle
>
> Mem: 16M Active, 317M Inact, 366M Wired, 108K Cache, 399M Buf, 7222M Free
>
> Swap: 2048M Total, 2048M Free
>
>
>
>  PID USERNAME    THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
>   12 root          1 171 ki31     0K    16K RUN    2  20.2H 68.80% idle: cpu2
>   11 root          1 171 ki31     0K    16K RUN    3  20.1H 64.70% idle: cpu3
>   14 root          1 171 ki31     0K    16K RUN    0  20.2H 64.26% idle: cpu0
>   13 root          1 171 ki31     0K    16K RUN    1  20.2H 63.67% idle: cpu1
>   38 root          1 -68    -     0K    16K CPU1   1   1:28 34.67% ix0 rxq
>   40 root          1 -68    -     0K    16K CPU2   0   1:26 34.18% ix0 rxq
>   34 root          1 -68    -     0K    16K CPU3   3   1:27 34.08% ix0 rxq
>   36 root          1 -68    -     0K    16K RUN    2   1:26 34.08% ix0 rxq
>   33 root          1 -68    -     0K    16K WAIT   3   0:40  4.05% irq260: ix0
>   39 root          1 -68    -     0K    16K WAIT   2   0:41  3.96% irq263: ix0
>   35 root          1 -68    -     0K    16K WAIT   0   0:39  3.66% irq261: ix0
>   37 root          1 -68    -     0K    16K WAIT   1   0:42  3.47% irq262: ix0
>   16 root          1 -32    -     0K    16K WAIT   0  14:53  2.49% swi4: clock sio
>
>
>
>
>
>
>
> Am I missing something?
>
> Does anyone know of any (more) system tuning to support a higher
> traffic rate?
>
>
>
> Any help is greatly appreciated.
>
>
>
> Fabrizio
>
>
>
>
>
> ------------------------------------------------------------------
> Telecom Italia
> Fabrizio INVERNIZZI
> Technology - TILAB
> Accesso Fisso e Trasporto
> Via Reiss Romoli, 274 10148 Torino
> Tel.  +39 011 2285497
> Mob. +39 3316001344
> Fax +39 06 41867287
>
>
>
> This e-mail and any attachments are confidential and may contain
> privileged information intended for the addressee(s) only.
> Dissemination, copying, printing or use by anybody else is
> unauthorised. If you are not the intended recipient, please delete
> this message and any attachments and advise the sender by return
> e-mail. Thanks.
>
> _______________________________________________
> freebsd-performance@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
> To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"

--
Best Wishes,
Stefan Lambrev
ICQ# 24134177
