Date:      Mon, 26 Aug 2013 11:28:52 +0200
From:      Harald Schmalzbauer <h.schmalzbauer@omnilan.de>
To:        Adrian Chadd <adrian@freebsd.org>
Cc:        FreeBSD Stable Mailing List <freebsd-stable@freebsd.org>
Subject:   Re: if_em, legacy nic and GbE saturation
Message-ID:  <521B1FD4.4050702@omnilan.de>
In-Reply-To: <CAJ-VmokRNbDXC1Er6pxOWSLJs=DvCmbCfcViZ3K4Twxb9V5BKw@mail.gmail.com>
References:  <521AFE7E.2040705@omnilan.de> <CAJ-VmokRNbDXC1Er6pxOWSLJs=DvCmbCfcViZ3K4Twxb9V5BKw@mail.gmail.com>


Regarding Adrian Chadd's message of 26.08.2013 10:34 (localtime):
> Hi,
>
> There's bus limits on how much data you can push over a PCI bus. You
> can look around online to see what 32/64 bit, 33/66MHz PCI throughput
> estimates are.
>
> It changes massively if you use small versus large frames as well.
>
> The last time I tried it I couldn't hit GigE on PCI; I only managed to
> get to around 350 Mbit/s doing TCP tests.

Thanks, I'm roughly aware of the PCI bus limit, but I guess it should
be good for almost GbE: 33 MHz * 32 bit = 1056 Mbit/s, so even allowing
for overhead and other bus-blocking things (nothing of significance is
active on the PCI bus in this case), I'd expect at least 800 Mbit/s,
which is what I get with jumbo frames.
I also know that lagg won't help with regard to concurrent throughput
because of the PCI limit; redundancy is why I also use two NICs in
that parking machine.

I just have no explanation for the noticeable difference I see between
MTU 1500 and 9000 on the legacy if_em NIC, which doesn't show up with
the second on-board NIC (82566), which uses different if_em code.
I can imagine that it's related to PCI transfer limits (the 82566 is
integrated into the ICH9, which connects to the CPU via DMI, so no PCI
constraint), but if someone has more than an imagination, an explanation
would be highly appreciated :-)

But if you saw similar constraints on other (non-if_em?) PCI-connected
NICs, I'll leave it as it is. I just wanted some kind of confirmation
that it's normal for single GbE not to play well with PCI.

Thank you,

-Harry





