Date:      Tue, 2 Apr 2013 14:16:34 +0800
From:      Sepherosa Ziehau <sepherosa@gmail.com>
To:        Andre Oppermann <andre@freebsd.org>
Cc:        Sami Halabi <sodynet1@gmail.com>, "Alexander V. Chernikov" <melifaro@freebsd.org>, "Alexander V. Chernikov" <melifaro@ipfw.ru>, "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Re: MPLS
Message-ID:  <CAMOc5czL9V6LH+xD6OXTA0y6Nc=wMdfiPn_ssANx7yBYHHSDSA@mail.gmail.com>
In-Reply-To: <51471974.3090300@freebsd.org>
References:  <CAEW+ogb_b6fYLvcEJdhzRnoyjr0ORto9iNyJ-iiNfniBRnPxmA@mail.gmail.com> <CAEW+ogZTE4Uw-0ROEoSex=VtC+0tChupE2RAW5RFOn=OQEuLLw@mail.gmail.com> <CAEW+ogYbCkCfbFHT0t2v-VmqUkXLGVHgAHPET3X5c2DnsT=Enw@mail.gmail.com> <5146121B.5080608@FreeBSD.org> <514649A5.4090200@freebsd.org> <3659B942-7C37-431F-8945-C8A5BCD8DC67@ipfw.ru> <51471974.3090300@freebsd.org>

On Mon, Mar 18, 2013 at 9:41 PM, Andre Oppermann <andre@freebsd.org> wrote:
> On 18.03.2013 13:20, Alexander V. Chernikov wrote:
>>
>> On 17.03.2013, at 23:54, Andre Oppermann <andre@freebsd.org> wrote:
>>
>>> On 17.03.2013 19:57, Alexander V. Chernikov wrote:
>>>>
>>>> On 17.03.2013 13:20, Sami Halabi wrote:
>>>>>>
>>>>>> OTOH OpenBSD has a complete implementation of MPLS out of the box,
>>>>>> maybe
>>>>
>>>> Their control plane code is mostly useless due to its design approach
>>>> (routing daemons talk via the kernel).
>>>
>>>
>>> What's your approach?
>>
>> It is actually not mine. We have discussed this a bit in the
>> radix-related thread. Generally quagga/bird (and other
>> high-performance hardware-accelerated and software routers) have a
>> feature-rich RIB from which the best routes (possibly multipath) are
>> installed into the kernel FIB. The kernel's main task should be to do
>> efficient lookups, while every other advanced feature should be
>> implemented in userland.
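
For concreteness, a minimal sketch of the split described above, with
hypothetical structures (not the real quagga/bird or FreeBSD ones): the
userland RIB entry carries every attribute the routing daemons need,
while the kernel FIB entry is pared down to what a lookup must return.

#include <stdint.h>

/* Userland (bird/quagga): feature-rich RIB entry. */
struct rib_entry {
        uint32_t prefix;                /* IPv4 destination prefix */
        uint8_t  plen;                  /* prefix length */
        uint32_t nexthop[4];            /* candidates for multipath */
        uint32_t local_pref, med;      /* BGP attributes */
        /* ... communities, AS path, policy state, timers ... */
};

/* Kernel: FIB entry, only what the lookup has to return. */
struct fib_entry {
        uint32_t prefix;
        uint8_t  plen;
        uint16_t nh_idx;                /* index into a shared next-hop table */
};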
>
>
> Yes, we have started discussing it but haven't reached a conclusion
> among the two philosophies.  We have also agreed that the current
> radix code is horrible in terms of cache misses per lookup.  That
> however doesn't preclude an agnostic FIB+RIB approach.  It's mostly a
> matter of structure layout to keep it efficient.
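
As a hypothetical illustration of the cache-miss point: the classic BSD
radix code chases several pointers per bit tested, while a multibit
stride layout pays at most one dependent load (and so at most one miss)
per 8 bits of the address.

#include <stdint.h>

/* One level of an 8-bit stride trie: a lookup step reads exactly one
 * 4-byte slot, i.e. touches a single cache line per level, instead of
 * one (or more) per bit tested as in the classic radix tree. */
struct trie_level {
        uint32_t child[256];    /* indices into a node array; denser than pointers */
};

static inline uint32_t
trie_step(const struct trie_level *l, uint32_t addr, int shift)
{
        return (l->child[(addr >> shift) & 0xff]);
}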
>
>
>>>> Their data plane code, well.. Yes, we can use some defines from their
>>>> headers, but that's all :)
>>>>>>
>>>>>> porting it would be shorter and more straightforward than porting
>>>>>> the Linux LDP implementation of BIRD.
>>>>
>>>>
>>>> It is not a 'linux' implementation. LDP itself is cross-platform.
>>>> The trickiest part here is the control plane.
>>>> However, making _fast_ MPLS switching is tricky too, since it
>>>> requires changes in our netisr/ethernet handling code.
>>>
>>>
>>> Can you explain what changes you think are necessary and why?
>
>>
>>
>> We definitely need the ability to dispatch a chain of mbufs - this was
>> already discussed in the Intel RX ring lock thread on -net.
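
A rough sketch of what such batched dispatch could look like
(hypothetical entry points; only the m_nextpkt linkage mirrors the real
mbuf): the lock and netisr hand-off cost is paid once per chain instead
of once per packet.

#include <stddef.h>

struct mbuf {
        struct mbuf *m_nextpkt;         /* next packet in the chain */
        /* ... data pointers, lengths, flags elided ... */
};

/* Per-packet entry point: every call pays the full dispatch cost. */
static void
ip_input(struct mbuf *m)
{
        (void)m;                        /* real stack processing elided */
}

/* Batched entry point: the driver hands over everything it drained
 * from an RX ring in one call; queue locks and wakeups are amortized
 * over the whole chain. */
static void
ip_input_chain(struct mbuf *head)
{
        struct mbuf *m, *next;

        for (m = head; m != NULL; m = next) {
                next = m->m_nextpkt;
                m->m_nextpkt = NULL;    /* unlink before handing off */
                ip_input(m);
        }
}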
>
>
> Actually I'm not so convinced of that.  Packet handling is a tradeoff
> between doing process-to-completion on each packet and doing context
> switches on batches of packets.
>
> Every few years the balance tilts back and forth between
> process-to-completion and batch processing.  DragonFly went with a
> batch-lite token-passing approach throughout their kernel.  It seems
> it didn't work out to the extent they expected.  Now many parts are
> moving back to the more traditional locking approach.

At least the per-CPU netisr and the other related per-CPU network
structures (e.g. the routing table) work quite well, as we _expected_
(the measured bi-directional IPv4 forwarding performance w/
fastforwarding is 5.6Mpps+, w/o fastforwarding 4.6Mpps+, w/ 4 igb(4) on
an i7-2600, using 90% CPU time on each HT in Dfly's polling(4) mode);
it is _not_ using the traditional locking approach on the major network
paths at all, and for IPv4 forwarding Dfly is _not_ doing
"process-to-completion".

And as a side note: there was a paper that compared a message-based
parallelism TCP implementation, a connection-based thread serialization
TCP implementation (which Dfly uses) and a connection-based lock
serialization TCP implementation.  Its conclusion was that the
connection-based thread serialization implementation (Dfly's) had too
much scheduling cost.  That conclusion _no longer_ holds for Dfly
nowadays; we have wiped out the major scheduling cost on the hot TCP
paths.  So as far as I can see, sometimes it is _not_ a problem of the
model itself, but of how the model is implemented.
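
A rough sketch of the thread-serialization model (hypothetical types,
not the real DragonFly msgport code): any CPU may enqueue work for a
connection, but only the owning thread ever touches the tcpcb.  The
per-message wakeup is the scheduling cost the paper measured; draining
the whole queue per wakeup is the kind of batching that removes most of
it.

#include <stddef.h>

struct tcpcb;                           /* owned by exactly one protocol thread */

struct netmsg {
        struct netmsg *nm_next;
        void (*nm_handler)(struct tcpcb *, struct netmsg *);
        struct tcpcb *nm_tp;
};

struct msgport {                        /* one per protocol thread */
        struct netmsg *mp_head, **mp_tailp;
};

static void
msgport_init(struct msgport *port)
{
        port->mp_head = NULL;
        port->mp_tailp = &port->mp_head;
}

/* Producer: any thread queues work; only the queue is shared, the
 * tcpcb itself never is.  (A real port would use a lock-free queue
 * and wake the owner here -- that wakeup is the scheduling cost.) */
static void
msgport_send(struct msgport *port, struct netmsg *msg)
{
        msg->nm_next = NULL;
        *port->mp_tailp = msg;
        port->mp_tailp = &msg->nm_next;
}

/* Consumer: the owning thread drains its queue and touches the
 * connection state with no locks at all. */
static void
msgport_run(struct msgport *port)
{
        struct netmsg *msg;

        while ((msg = port->mp_head) != NULL) {
                port->mp_head = msg->nm_next;
                if (port->mp_head == NULL)
                        port->mp_tailp = &port->mp_head;
                msg->nm_handler(msg->nm_tp, msg);
        }
}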

Best Regards,
sephe

--
Tomorrow Will Never Die


