Date:      Mon, 18 Apr 2011 19:59:58 +0200
From:      "K. Macy" <kmacy@freebsd.org>
To:        Ingo Flaschberger <if@xip.at>
Cc:        freebsd-net@freebsd.org, Ingo Flaschberger <if@freebsd.org>
Subject:   Re: Routing enhancement - reduce routing table locking
Message-ID:  <BANLkTim6HMGibDB4ucs+tEfqv-LBnF4O-w@mail.gmail.com>
In-Reply-To: <BANLkTim0hoHDnrweYz+vc7zOvMubddJmGg@mail.gmail.com>
References:  <alpine.LRH.2.00.1104050303140.2152@filebunker.xip.at> <alpine.LRH.2.00.1104061426350.2152@filebunker.xip.at> <alpine.LRH.2.00.1104180051450.8693@filebunker.xip.at> <BANLkTik39HvVire6Hzi9U6J2BwKV7apCCg@mail.gmail.com> <alpine.LRH.2.00.1104181852420.8693@filebunker.xip.at> <BANLkTim0hoHDnrweYz+vc7zOvMubddJmGg@mail.gmail.com>

On Mon, Apr 18, 2011 at 7:28 PM, K. Macy <kmacy@freebsd.org> wrote:
> 400kpps is not a high enough rate to draw any conclusions. A
> system like that should be able to push at least 2.3Mpps with
> flowtable. I'm not saying that what you've done is not an improvement,
> but rather that you're hitting some other bottleneck. The output of
> pmc and LOCK_PROFILING might be insightful.


It occurred to me that I should add a couple of qualifications to the
previous statements. 1.6Mpps is line rate for GigE, and I only know of
it being achievable with igb hardware. The most I've seen em hardware
achieve is 1.1Mpps. Furthermore, in order to reach that you would
have to enable IFNET_MULTIQUEUE in the driver, because by default the
driver uses the traditional (slow) IFQ as opposed to overloading
if_transmit and doing its own queueing when needed. Support for
efficient multi-queue software queueing is provided by buf_ring, a
lock-free multi-producer ring buffer written just for this purpose.
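
As an aside, here is a minimal sketch of that if_transmit/buf_ring pattern.
The "mydrv" names, the single-queue assumption and the drop-on-full policy
are placeholders of mine for illustration, not the actual em/igb code:
senders enqueue onto the lock-free ring without contending, and only the
thread that wins the per-queue mutex drains it to the hardware.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/buf_ring.h>
#include <net/if.h>
#include <net/if_var.h>

struct mydrv_txq {
        struct mtx       txq_mtx;       /* protects the hardware descriptor ring */
        struct buf_ring *txq_br;        /* lock-free software queue */
};

/* Drain the software ring into the hardware; caller holds txq_mtx. */
static void
mydrv_txq_drain(struct mydrv_txq *txq)
{
        struct mbuf *m;

        mtx_assert(&txq->txq_mtx, MA_OWNED);
        while ((m = buf_ring_dequeue_sc(txq->txq_br)) != NULL) {
                /* ... set up a hardware tx descriptor for m here ... */
                m_freem(m);             /* placeholder for the real tx path */
        }
}

/*
 * Installed as ifp->if_transmit instead of using the IFQ: multiple
 * senders enqueue without blocking, and whoever takes the tx lock
 * drains; everyone else returns immediately.
 */
static int
mydrv_transmit(struct ifnet *ifp, struct mbuf *m)
{
        struct mydrv_txq *txq = ifp->if_softc; /* one queue, for brevity */
        int error;

        error = buf_ring_enqueue(txq->txq_br, m);
        if (error != 0) {
                m_freem(m);             /* ring full: drop, like drbr_enqueue */
                return (error);
        }
        if (mtx_trylock(&txq->txq_mtx)) {
                mydrv_txq_drain(txq);
                mtx_unlock(&txq->txq_mtx);
        }
        return (0);
}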

Thus, the fairly low transmit rate may be attributable to driver locking.

Cheers

>
> Thanks,
> Kip
>
> On Mon, Apr 18, 2011 at 7:12 PM, Ingo Flaschberger <if@xip.at> wrote:
>>
>>> It would be great to see flowtable going back to its intended use.
>>> However, I would be surprised if this actually scales to Mpps. I don't
>>> have any high-end hardware to test with at the moment; what is the
>>> highest packet rate you've seen, i.e. simply generating small packets?
>>
>> Currently I have no tests available, but I have seen this on an appliance
>> with:
>> Intel Q35
>> Quad Core cpu
>> Intel em desktop pcie cards
>>
>> ~200 Mbit of 64-byte packets, i.e. ~400 kpps, without packet loss.
>>
>> Without the patch, flowtable and fastforward had the same speed as
>> flowtable, fastforward and standard forwarding have with the patch.
>>
>> That means, with the patch the standard forwarding path had the same
>> speed as the fastforward path.
>>
>> It seems I'm hitting some other speed limit on my system, so there was no
>> real difference between flowtable and fastforward, with or without the
>> patch.
>>
>> It would be great if someone could load a system with a full table (400k
>> routes) and do some tests at 10GbE speed.
>>
>> Kind regards,
>>        Ingo Flaschberger
>>
>> _______________________________________________
>> freebsd-net@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-net
>> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>>
>


