Date:      Tue, 7 Jun 2011 07:24:00 +0200
From:      Luigi Rizzo <rizzo@iet.unipi.it>
To:        grarpamp <grarpamp@gmail.com>
Cc:        freebsd-hackers@freebsd.org, freebsd-net@freebsd.org
Subject:   Re: FreeBSD I/OAT (QuickData now?) driver
Message-ID:  <20110607052400.GC4840@onelab2.iet.unipi.it>
In-Reply-To: <BANLkTinuOS_yZYrqZ4cmU4cim+KFHNA=hQ@mail.gmail.com>
References:  <BANLkTinuOS_yZYrqZ4cmU4cim+KFHNA=hQ@mail.gmail.com>

On Mon, Jun 06, 2011 at 10:13:51PM -0400, grarpamp wrote:
> Is this work part of what's needed to enable the FreeBSD
> equivalent of TNAPI?
> 
> I know we've got polling, and probably MSI-X in a couple of drivers.
> Pretty sure there is still one CPU doing the interrupt work?
> And none of the multiqueue thread-spreading tech exists?

I have heard of some GSoC work that addresses the problem
for cards that have a single queue, but drivers for cards with
native multiqueue support (e.g. ixgbe, e1000) already seem to
have the ability to use one CPU per queue.
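As a rough illustration (a hypothetical sketch, not taken from any
real driver), the idiom those drivers follow is to allocate one MSI-X
vector per RX queue at attach time and pin each vector to its own CPU
with bus_bind_intr(9); my_queue and my_queue_intr below are made-up
names:

	/* Hypothetical per-queue interrupt setup for a multiqueue NIC. */
	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/bus.h>
	#include <sys/rman.h>
	#include <sys/smp.h>
	#include <machine/bus.h>
	#include <machine/resource.h>

	struct my_queue {		/* hypothetical per-queue state */
		struct resource	*res;
		void		*tag;
	};

	static void
	my_queue_intr(void *arg)	/* per-queue handler, body omitted */
	{
	}

	static int
	my_bind_queues(device_t dev, struct my_queue *q, int nqueues)
	{
		int error, i, rid;

		for (i = 0; i < nqueues; i++) {
			rid = i + 1;	/* MSI-X vectors use rids 1..n */
			q[i].res = bus_alloc_resource_any(dev, SYS_RES_IRQ,
			    &rid, RF_ACTIVE);
			if (q[i].res == NULL)
				return (ENXIO);
			error = bus_setup_intr(dev, q[i].res,
			    INTR_TYPE_NET | INTR_MPSAFE, NULL, my_queue_intr,
			    &q[i], &q[i].tag);
			if (error != 0)
				return (error);
			/* one CPU per queue */
			bus_bind_intr(dev, q[i].res, i % mp_ncpus);
		}
		return (0);
	}

Binding at attach time means the scheduler never migrates a queue's
interrupt handler, so each queue's cache state stays on one core.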

I'd argue that for many types of applications (basically all those
for which PF_RING/TNAPI were designed), spreading work across
cores is a second-order problem; you should first avoid doing
useless work.  Please have a look at

	http://info.iet.unipi.it/~luigi/netmap/

which addresses both issues.
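For a feel of the model, here is a condensed netmap receive loop
(adapted from the style of the examples on that page; names such as
ring->avail and NETMAP_RING_NEXT() follow the early 2011 API and may
differ in later releases, and process_pkt() is a caller-supplied
hypothetical consumer; error handling omitted):

	#include <fcntl.h>
	#include <poll.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <net/netmap.h>
	#include <net/netmap_user.h>

	void
	rx_all(const char *ifname, void (*process_pkt)(char *, int))
	{
		struct nmreq req;
		struct netmap_if *nifp;
		struct netmap_ring *ring;
		struct pollfd pfd;
		char *mem;
		int fd;

		fd = open("/dev/netmap", O_RDWR);
		memset(&req, 0, sizeof(req));
		strncpy(req.nr_name, ifname, sizeof(req.nr_name));
		ioctl(fd, NIOCREGIF, &req);	/* attach to the hw rings */
		mem = mmap(NULL, req.nr_memsize, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, 0);
		nifp = NETMAP_IF(mem, req.nr_offset);
		ring = NETMAP_RXRING(nifp, 0);	/* hw RX ring 0 */

		pfd.fd = fd;
		pfd.events = POLLIN;
		for (;;) {
			poll(&pfd, 1, -1);	/* sleep until packets arrive */
			while (ring->avail > 0) {
				struct netmap_slot *slot = &ring->slot[ring->cur];

				process_pkt(NETMAP_BUF(ring, slot->buf_idx),
				    slot->len);
				ring->cur = NETMAP_RING_NEXT(ring, ring->cur);
				ring->avail--;
			}
		}
	}

A real application would typically open one descriptor per hardware
ring (selected via nr_ringid) and run one such loop per core, which is
precisely the one-thread-per-queue model TNAPI aims at.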

cheers
luigi

> http://www.ntop.org/blog
> http://www.ntop.org/TNAPI.html
> TNAPI attempts to solve the following problems:
>     * Distribute the traffic across cores (i.e. the more cores, the
> more scalable your networking application is) to improve scalability.
>     * Poll packets simultaneously from each RX queue (contrary to
> sequential NAPI polling) to fetch packets as fast as possible and
> hence improve performance.
>     * Through PF_RING, expose the RX queues to userland so that
> the application can spawn one thread per queue and hence avoid
> using semaphores at all.
> TNAPI achieves all this by starting one thread per RX queue. Received
> packets are then pushed to PF_RING (if available) or through the
> standard Linux stack. However, in order to fully exploit this
> technology it is necessary to use PF_RING, as it provides a straight
> packet path from kernel to userland. Furthermore, it allows creating
> a virtual ethernet card per RX queue.
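To make the quoted one-thread-per-queue model concrete, here is a
minimal standalone sketch using POSIX threads (the rx_loop() body is
a stub; a real PF_RING application would open the per-queue devices,
which PF_RING names like "eth1@0", "eth1@1", inside it):

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>
	#include <stdio.h>

	#define NQUEUES 4

	static void *
	rx_loop(void *arg)
	{
		long q = (long)arg;

		printf("queue %ld: consuming on its own core\n", q);
		/* open "eth1@<q>" here and read packets in a loop */
		return NULL;
	}

	int
	main(void)
	{
		pthread_t t[NQUEUES];
		cpu_set_t set;
		long q;

		for (q = 0; q < NQUEUES; q++) {
			pthread_create(&t[q], NULL, rx_loop, (void *)q);
			CPU_ZERO(&set);
			CPU_SET(q, &set);	/* pin queue q's thread to core q */
			pthread_setaffinity_np(t[q], sizeof(set), &set);
		}
		for (q = 0; q < NQUEUES; q++)
			pthread_join(t[q], NULL);
		return 0;
	}

Because each queue has exactly one consumer thread, the threads never
contend on a lock, which is the point of the "no semaphores" claim
above.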


