Date:      Sat, 11 Jun 2011 16:16:34 +0200
From:      "K. Macy" <kmacy@freebsd.org>
To:        "K. Macy" <kmacy@freebsd.org>
Cc:        "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>, grarpamp <grarpamp@gmail.com>, "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Re: FreeBSD I/OAT (QuickData now?) driver
Message-ID:  <BANLkTikuDT3kUgZcmX7dVSK73umygsSuOw@mail.gmail.com>
In-Reply-To: <BANLkTin+MRY0VvKYkmBvbjfEwD0iQm3DKw@mail.gmail.com>
References:  <BANLkTinuOS_yZYrqZ4cmU4cim+KFHNA=hQ@mail.gmail.com> <BANLkTin+MRY0VvKYkmBvbjfEwD0iQm3DKw@mail.gmail.com>

Oops, second 10 GigE should obviously be 1GigE

On Tuesday, June 7, 2011, K. Macy <kmacy@freebsd.org> wrote:
> All 10GigE NICs and some newer 10 GigE NICs have multiple hardware
> queues with a separate MSI-X vector per queue, where each vector is
> directed to a different CPU. The current operating model is to have a
> separate interrupt thread per vector. This obviously gets bogged down
> when one has multiple cards, because multiple interrupt threads end
> up running on the same CPUs and the scheduler is left to distribute
> the work fairly between the cards. Nokia had a reasonable interface
> for coping with this, reminiscent of NAPI: cooperative sharing
> between interfaces was provided by a single taskqueue thread per
> core, and the cards would queue tasks (re-queued if more than a
> certain amount of work was required) as interrupts were delivered.
> There has been talk off and on of porting this "net_task" interface
> to FreeBSD.
>
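A rough userland sketch of that budget-and-requeue model follows. Every name
in it (net_task_enqueue, WORK_BUDGET, struct core_queue) is made up for
illustration; this is not the Nokia interface or a FreeBSD KPI, just the
shape of the idea.

/*
 * Userland model of the per-core "net_task" dispatch described above:
 * one worker thread per core drains a core-local task queue; each task
 * (one per card) handles at most WORK_BUDGET packets per invocation and
 * re-queues itself if more work remains.  All names are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define WORK_BUDGET 64              /* packets handled per task invocation */

struct net_task {
    struct net_task *next;
    int (*poll)(void *arg, int budget);   /* returns packets still pending */
    void *arg;
};

struct core_queue {                 /* one of these per core */
    pthread_mutex_t  lock;
    pthread_cond_t   wakeup;
    struct net_task *head;
};

/* Queue a task on a core-local queue (would be called from the ISR). */
static void
net_task_enqueue(struct core_queue *cq, struct net_task *t)
{
    pthread_mutex_lock(&cq->lock);
    t->next = cq->head;
    cq->head = t;
    pthread_cond_signal(&cq->wakeup);
    pthread_mutex_unlock(&cq->lock);
}

/* The single per-core worker: the only thread doing RX work on its core. */
static void *
net_task_worker(void *arg)
{
    struct core_queue *cq = arg;

    for (;;) {
        pthread_mutex_lock(&cq->lock);
        while (cq->head == NULL)
            pthread_cond_wait(&cq->wakeup, &cq->lock);
        struct net_task *t = cq->head;
        cq->head = t->next;
        pthread_mutex_unlock(&cq->lock);

        /* Bounded work, then re-queue if the card still has packets,
         * so the other cards sharing this core get a turn. */
        if (t->poll(t->arg, WORK_BUDGET) > 0)
            net_task_enqueue(cq, t);
    }
    return (NULL);
}

/* Toy card with 200 pending packets, just to drive the model. */
static int
fake_poll(void *arg, int budget)
{
    int *pending = arg;

    *pending -= (*pending < budget) ? *pending : budget;
    printf("polled, %d packets left\n", *pending);
    return (*pending);
}

int
main(void)
{
    static struct core_queue cq = { PTHREAD_MUTEX_INITIALIZER,
                                    PTHREAD_COND_INITIALIZER, NULL };
    int pending = 200;
    struct net_task t = { NULL, fake_poll, &pending };
    pthread_t worker;

    pthread_create(&worker, NULL, net_task_worker, &cq);
    net_task_enqueue(&cq, &t);      /* the "interrupt" fires once */
    sleep(1);                       /* let the worker drain the card */
    return (0);
}

The bounded budget plus re-queue is the key point: a busy card cannot starve
the others sharing its core, and there are no extra interrupt threads for the
scheduler to juggle.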
> None of this addresses PF_RING's facility for pushing packets into
> userland - but presumably Rizzo's netmap work sufficiently addresses
> those in need of that.
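For those who do want the userland path, a minimal netmap receive loop looks
roughly like the sketch below. It assumes a netmap-capable kernel and an ix0
interface; the interface name and the handler are placeholders, and the
nm_open()/nm_nextpkt() helpers are the convenience API from netmap's later
netmap_user.h, so treat the exact calls as an assumption.

/*
 * Minimal netmap receive loop; "netmap:ix0" and handle_packet() are
 * placeholders.
 */
#include <poll.h>
#include <stdio.h>

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

static void
handle_packet(const u_char *buf, const struct nm_pkthdr *h)
{
    (void)buf;
    printf("got a %u-byte packet\n", h->len);    /* placeholder */
}

int
main(void)
{
    /* Attach to all hardware rings of ix0; RX bypasses the host stack. */
    struct nm_desc *d = nm_open("netmap:ix0", NULL, 0, NULL);
    if (d == NULL) {
        perror("nm_open");
        return (1);
    }

    struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
    struct nm_pkthdr h;
    const u_char *buf;

    for (;;) {
        poll(&pfd, 1, -1);                  /* sleep until packets arrive */
        while ((buf = nm_nextpkt(d, &h)) != NULL)
            handle_packet(buf, &h);         /* one syscall, many packets */
    }
    /* not reached; nm_close(d) on shutdown */
}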
>
> Cheers,
> Kip
>
> On Tue, Jun 7, 2011 at 4:13 AM, grarpamp <grarpamp@gmail.com> wrote:
>> Is this work part of what's needed to enable the FreeBSD
>> equivalent of TNAPI?
>>
>> I know we've got polling. And probably MSI-X in a couple drivers.
>> Pretty sure there is still one CPU doing the interrupt work?
>> And none of the multiple queue thread spreading tech exists?
>>
>> http://www.ntop.org/blog
>> http://www.ntop.org/TNAPI.html
>> TNAPI attempts to solve the following problems:
>>    * Distribute the traffic across cores (i.e. the more cores, the
>> more scalable your networking application is) for improving
>> scalability.
>>    * Poll packets simultaneously from each RX queue (contrary to
>> sequential NAPI polling) for fetching packets as fast as possible,
>> hence improving performance.
>>    * Through PF_RING, expose the RX queues to userland so that the
>> application can spawn one thread per queue and hence avoid using
>> semaphores at all.
>> TNAPI achieves all this by starting one thread per RX queue. Received
>> packets are then pushed to PF_RING (if available) or through the
>> standard Linux stack. However, in order to fully exploit this
>> technology it is necessary to use PF_RING, as it provides a straight
>> packet path from kernel to userland. Furthermore, it allows creating
>> a virtual ethernet card per RX queue.
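For reference, the one-thread-per-RX-queue model quoted above looks roughly
like the sketch below on FreeBSD, with each consumer pinned to the CPU its
queue's MSI-X vector is steered to. NQUEUES and rx_queue_recv() are
placeholders for the real queue count and whatever per-queue packet source
(a PF_RING ring, a netmap ring, ...) is actually used.

/*
 * One thread per RX queue, each pinned to its own CPU.
 */
#include <pthread.h>
#include <pthread_np.h>
#include <sys/param.h>
#include <sys/cpuset.h>
#include <stdint.h>

#define NQUEUES 4

static void rx_queue_recv(int queue);       /* placeholder per-queue poll */

static void *
rx_thread(void *arg)
{
    int queue = (int)(intptr_t)arg;

    /* Each queue has exactly one consumer, so no semaphores or locks
     * are shared between queues. */
    for (;;)
        rx_queue_recv(queue);
    return (NULL);
}

int
main(void)
{
    pthread_t tid[NQUEUES];

    for (int q = 0; q < NQUEUES; q++) {
        pthread_create(&tid[q], NULL, rx_thread, (void *)(intptr_t)q);

        /* Pin queue q's consumer to CPU q, mirroring where the
         * hardware already steers that queue's MSI-X vector. */
        cpuset_t cpus;
        CPU_ZERO(&cpus);
        CPU_SET(q, &cpus);
        pthread_setaffinity_np(tid[q], sizeof(cpus), &cpus);
    }

    for (int q = 0; q < NQUEUES; q++)
        pthread_join(tid[q], NULL);
    return (0);
}

/* Stub so the sketch compiles; a real consumer would pull packets from
 * the queue's ring here. */
static void
rx_queue_recv(int queue)
{
    (void)queue;
}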
>>
>


