Date:      Mon, 16 Jul 2018 14:32:01 +0000
From:      bugzilla-noreply@freebsd.org
To:        net@FreeBSD.org
Subject:   [Bug 203856] [igb] PPPoE RX traffic is limited to one queue
Message-ID:  <bug-203856-7501-bCPuP6aFmD@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-203856-7501@https.bugs.freebsd.org/bugzilla/>
References:  <bug-203856-7501@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203856

--- Comment #11 from Eugene Grosbein <eugen@freebsd.org> ---
There seems to be a common misunderstanding of how hardware receive
queues work in igb(4) chipsets.

First, one should read Intel's datasheet for the NIC. For an
82576-based NIC, that is:
https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/82576eb-gigabit-ethernet-controller-datasheet.pdf

Section 7.1.1.7 of the datasheet states that the NIC "supports a single
hash function, as defined by Microsoft RSS". Reading on, one learns that
this means only frames carrying IPv4 or IPv6 packets are hashed, using
their IP addresses as hash function arguments and, optionally, TCP port
numbers.

This means that incoming PPPoE ethernet frames are NOT hashed by such a
NIC in hardware, just like any other frames carrying neither plain IPv4
nor IPv6 packets. This is why all incoming PPPoE ethernet frames land in
the same (zero) queue.
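
One can observe this on a live system: igb(4) registers one interrupt
per queue, and the counters are visible via vmstat(8). With PPPoE-only
traffic, only the "que 0" counter is expected to grow; the interface
name igb0 below is just an example:

  # per-queue interrupt counters; watch which ones increase
  vmstat -i | grep igb0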

The igb(4) driver has nothing to do with this problem, and the
mentioned "patch" cannot solve it either. However, there are other ways.

The most performant way for production use is to combine several igb
NICs into a lagg(4) logical channel connected to a managed switch that
is configured to distribute traffic flows between the ports of the
logical channel based on the source MAC address of a frame. This is
useful for mass servicing of clients, when one has multiple PPPoE
clients generating flows of PPPoE frames, each with a distinct MAC
address. This approach is not really useful for a PPPoE client that
receives all frames from a single PPPoE server.
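
For illustration, a minimal rc.conf(5) sketch of such a setup; the
interface names and the choice of LACP as the lagg protocol are
assumptions, and the switch ports must be configured into a matching
channel that hashes on source MAC:

  ifconfig_igb0="up"
  ifconfig_igb1="up"
  cloned_interfaces="lagg0"
  ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 up"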

There is another way. By default, the FreeBSD kernel performs all
processing of a received PPPoE frame within the driver interrupt
context: decapsulation, optional decompression/decryption, network
address translation, routing lookups, packet filtering and so on. In
the default configuration, with sysctl net.isr.dispatch=direct, this
can overload a single CPU core. Since FreeBSD 8 we have the netisr(9)
network dispatch service, which allows any NIC driver to just enqueue a
received ethernet frame and cease further processing, freeing that CPU
core. Kernel threads on other CPU cores then dequeue the received
frames to complete the decapsulation etc., loading all CPU cores evenly.

So, one should just make sure that "net.isr.maxthreads" and
"net.isr.numthreads" are greater than 1 and switch net.isr.dispatch to
"deferred", which permits NIC drivers to use netisr(9) queues to
distribute the load between CPU cores.
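
For example (the thread count is illustrative; net.isr.maxthreads is a
boot-time tunable set in loader.conf, while net.isr.dispatch may also
be changed at runtime with sysctl):

  # /boot/loader.conf
  net.isr.maxthreads="4"     # allow up to 4 netisr worker threads
  net.isr.bindthreads="1"    # optional: pin each worker to its own CPU core

  # /etc/sysctl.conf
  net.isr.dispatch=deferred  # enqueue frames instead of processing
                             # them fully in interrupt context

  # verify after reboot that more than one thread is running:
  sysctl net.isr.numthreads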
