Date:      Tue, 05 Sep 2006 18:21:07 +0200
From:      Thomas Herrlin <junics-fbsdstable@atlantis.maniacs.se>
To:        Danny Braniss <danny@cs.huji.ac.il>
Cc:        freebsd-net@freebsd.org, freebsd-stable@freebsd.org
Subject:   Re: tcp/udp performance
Message-ID:  <44FDA3F3.6090003@atlantis.maniacs.se>
In-Reply-To: <2a41acea0608301145j7bbed961j33ce903a27d8963d@mail.gmail.com>
References:  <E1GIMNJ-0000Dd-QH@cs1.cs.huji.ac.il> <2a41acea0608301145j7bbed961j33ce903a27d8963d@mail.gmail.com>

Jack Vogel wrote:
> On 8/30/06, Danny Braniss <danny@cs.huji.ac.il> wrote:
>>
>> ever since 6.1 I've seen fluctuations in the performance of
>> the em (Intel(R) PRO/1000 Gigabit Ethernet).
>>
>>             motherboard                 OBN (On Board NIC)
>>             ----------------            ------------------
>>         1- Intel SE7501WV2S             Intel 82546EB::2.1
>>         2- Intel SE7320VP2D2            INTEL 82541
>>         3- Sun Fire X4100 Server        Intel(R) PRO/1000
>>
>> test 1: writing to a NetApp filer via NFS/UDP
>>            FreeBSD              Linux
>>                       MegaBytes/sec
>>         1- Average: 18.48       32.61
>>         2- Average: 15.69       35.72
>>         3- Average: 16.61       29.69
>> (interestingly, doing NFS/TCP instead of NFS/UDP shows an increase in
>> speed of around 60% on FreeBSD but none on Linux)
>>
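For reference, switching a mount from UDP to TCP is just a mount option. A
minimal sketch for a FreeBSD 6.x client using NFSv3; the filer export path
and mount point below are hypothetical, and the 32k read/write sizes are
only example values:

    # remount the filer export over TCP with 32k read/write sizes
    # (filer:/vol/vol0 and /mnt/filer are placeholders)
    umount /mnt/filer
    mount -t nfs -o tcp,nfsv3,rsize=32768,wsize=32768 filer:/vol/vol0 /mnt/filer

    # equivalent mount_nfs invocation
    mount_nfs -T -3 -r 32768 -w 32768 filer:/vol/vol0 /mnt/filer
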
>> test2: iperf using 1 as server:
>>                 FreeBSD(*)      Linux
>>                      Mbits/sec
>>         1-      926             905 (this machine was busy)
>>         2-      545             798
>>         3-      910             912
>>  *: did a 'sysctl net.inet.tcp.sendspace=65536'
>>
>>
>> So, it seems to me something is not that good in the UDP department, but
>> I can't find what to tweak.
>>
>> Any help?
>>
>>         danny
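On the "can't find what to tweak" question, the socket buffer sysctls are
the usual first stop. A rough sketch, made persistent via /etc/sysctl.conf;
the values are examples only (the tcp.sendspace line is the tweak used for
the iperf runs above), not something tested against this hardware:

    # /etc/sysctl.conf -- example values, not tuned for this setup
    kern.ipc.maxsockbuf=2097152      # ceiling for any single socket buffer
    net.inet.tcp.sendspace=65536     # the tweak used for the iperf runs above
    net.inet.tcp.recvspace=65536
    net.inet.udp.recvspace=65536     # room to absorb NFS/UDP reply bursts
    net.inet.udp.maxdgram=57344      # max outgoing UDP datagram size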
>
> We've discussed this some internally; the best idea I've heard is that
> UDP is not giving us the interrupt rate that TCP would, so we end up
> not cleaning up as often, and thus descriptors might not be as quickly
> available. It's just speculation at this point.
If a high interrupt rate is a problem and your NIC+driver supports it,
then try enabling polling(4) as well. This has helped me for bulk
transfers on slower boxes, but I have noticed problems with ALTQ/dummynet
and other highly real-time-dependent networking code. YMMV.
More info in the polling(4) man page.
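A rough sketch of what that looks like on 6.x, assuming em(4) (which does
support polling): build a kernel with the polling options, then switch it
on per interface:

    # kernel configuration additions (rebuild and reboot)
    options DEVICE_POLLING
    options HZ=1000          # polling works best with a higher clock rate

    # then enable (or disable) polling per interface at runtime
    ifconfig em0 polling
    ifconfig em0 -polling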
I think recent Linux kernels/drivers have this implemented so that it is
enabled dynamically under high load. However, I only skimmed the documents,
and I'm not a Linux expert, so I may be wrong on that.
/Junics
>
> Try this: the default is only 256 descriptors; try going for the MAX,
> which is 4K.
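A sketch of how that could be set, assuming an em(4) driver new enough to
expose the hw.em.* loader tunables (older versions hard-code the descriptor
counts in the driver source instead, so check which driver you have first):

    # /boot/loader.conf -- takes effect on the next boot
    hw.em.rxd=4096    # receive descriptors per ring (default 256)
    hw.em.txd=4096    # transmit descriptors per ring (default 256)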
>
> Cheers,
>
> Jack
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
>



