Date:      Fri, 10 Nov 2006 15:04:51 -0500
From:      Mike Tancsa <mike@sentex.net>
To:        Scott Long <scottl@samsco.org>
Cc:        freebsd-net <freebsd-net@freebsd.org>, freebsd-stable@freebsd.org, Jack Vogel <jfvogel@gmail.com>
Subject:   Re: Proposed 6.2 em RELEASE patch
Message-ID:  <200611102004.kAAK4iO9027778@lava.sentex.ca>
In-Reply-To: <200611092200.kA9M0q1E020473@lava.sentex.ca>
References:  <2a41acea0611081719h31be096eu614d2f2325aff511@mail.gmail.com> <200611091536.kA9FaltD018819@lava.sentex.ca> <45534E76.6020906@samsco.org> <200611092200.kA9M0q1E020473@lava.sentex.ca>

At 05:00 PM 11/9/2006, Mike Tancsa wrote:
>At 10:51 AM 11/9/2006, Scott Long wrote:
>>Mike Tancsa wrote:
>>>At 08:19 PM 11/8/2006, Jack Vogel wrote:
>>>
>>>>BUT, I've added the FAST_INTR changes back into the code, so
>>>>if you go into your Makefile and add -DEM_FAST_INTR you will
>>>>then get the taskqueue stuff.
>>>It certainly does make a difference performance-wise.  I did some
>>>quick testing with netperf and netrate: back-to-back boxes, using
>>>an AMD X2 with a bge NIC and one Intel box.
>>>CPU: AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ (2009.27-MHz 
>>>686-class CPU)
>>>CPU: Intel(R) Core(TM)2 CPU          6400  @ 2.13GHz (2144.01-MHz 
>>>686-class CPU)
>>>The Intel is a DG965SS with an integrated em NIC, the AMD a Tyan
>>>with integrated bge.  Both run SMP kernels with pf built in, no inet6.
>>>
>>>Intel box as sender.  This test is with the patch from
>>>yesterday: the first set with the patch as-is, the second
>>>with -DEM_FAST_INTR.
>>
>>Thanks for the tests.  One thing to note is that Gleb reported a higher
>>rate of dropped packets with INTR_FAST.  He is the only one who has
>>reported this, so I'd like to find out if there is something unique to
>>his environment, or if there is a larger problem to be addressed.  There
>>are ways that we can change the driver to not drop any packets at all
>>for Gleb, but they expose the system to risk if there is ever an
>>accidental (or malicious) RX flood on the interface.
>
>With a high rate of packets, I am able to livelock the box.  I
>set up the following

Some more tests.  I tried again with what was committed to today's
RELENG_6; I am guessing it's pretty much the same patch.  Polling is
the only way to avoid livelock at a high pps rate.  Does anyone know
of any simple tools to measure end-to-end packet loss?  Polling will
end up dropping some packets and I want to be able to compare.  Same
hardware as in the previous post.
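
Lacking a dedicated tool, one rough way to compare runs is to diff the
interface packet counters on both boxes around each netblast run.  This
is only a sketch: the interface names (em0 on the sender, bge0 on the
receiver) are assumptions based on the hardware above, and it assumes no
other traffic on the link:

sender#   netstat -I em0 -n     (note the Opkts counter before the run)
receiver# netstat -I bge0 -n    (note the Ipkts counter before the run)
sender#   ./netblast 192.168.44.1 500 10 10
sender#   netstat -I em0 -n     (Opkts delta = packets put on the wire)
receiver# netstat -I bge0 -n    (Ipkts delta = packets that arrived)

loss ~= (sender Opkts delta) - (receiver Ipkts delta).  On the receiver,
netstat -s -p udp also shows datagrams dropped due to full socket buffers.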

SMP kernel  fastfwd  pf  ipfw  FAST_INTR  streams  np (Mb)  result
x           x                  x          2                 livelock
x           x        x         x          2        468      livelock
x                              x          2        453      lost packets, box sluggish
x                    x         x          2                 lost packets, box sluggish
x                                         2        468      lost packets, box sluggish
x           x                             2        468      livelock
x           x        x                    2        468      livelock
                                          2        475      livelock
            x                             2                 livelock

P                                         2                 OK
P                    x                    2                 OK
P                        x                2                 OK


The P rows are a uniprocessor (UP) kernel with polling enabled (and kern.polling.idle_poll=1).
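
For anyone wanting to reproduce this, roughly how polling gets turned on
under 6.x; em0 and HZ=1000 here are just illustrative (HZ=1000 is the
commonly recommended value with polling, not necessarily what I used):

# kernel config, then rebuild and reboot:
#   options DEVICE_POLLING
#   options HZ=1000
ifconfig em0 polling                 # per-interface polling (6.x style)
sysctl kern.polling.idle_poll=1      # also poll from the idle loop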


UP kernel, single stream at ~587Kpps, no polling (the netblast arguments are target IP, UDP port, payload size in bytes, and duration in seconds):
[bsd6-32bit]# ./netblast 192.168.44.1 500 10 10


start:             1163184051.627479975
finish:            1163184061.628200458
send calls:        5869051
send errors:       0
approx send rate:  586905
approx error rate: 0


With polling:

[bsd6-32bit]# ./netblast 192.168.44.1 500 10 10

start:             1163184606.651001121
finish:            1163184616.651288588
send calls:        5866199
send errors:       1
approx send rate:  586619
approx error rate: 0



With polling and 2 streams at the same time (a lot of pps, and it's
still totally responsive!):

[r6-32bit]# ./netblast 192.168.88.218 500 10 10

start:             1163184712.103954688
finish:            1163184722.104388542
send calls:        4528941
send errors:       0
approx send rate:  452894
approx error rate: 0
[r6-32bit]#


[bsd6-32bit]# ./netblast 192.168.44.1 500 10 20

start:             1163184793.172036336
finish:            1163184813.173028921
send calls:        11550594
send errors:       0
approx send rate:  577529
approx error rate: 0
[bsd6-32bit]#


Polling, 2 streams at the same time:

[bsd6-32bit]# ./netblast 192.168.44.1 500 10 20

start:             1163185058.477137404
finish:            1163185078.478025226
send calls:        11679831
send errors:       0
approx send rate:  583991
approx error rate: 0
[bsd6-32bit]# ./netblast 192.168.44.1 500 10 20

start:             1163185167.969551943
finish:            1163185187.970435295
send calls:        11706825
send errors:       0
approx send rate:  585341
approx error rate: 0
[bsd6-32bit]#