Date:      Thu, 18 Nov 2004 02:09:46 +0100
From:      Emanuel Strobl <Emanuel.Strobl@gmx.net>
To:        freebsd-stable@freebsd.org
Cc:        freebsd-current@freebsd.org
Subject:   Re: serious networking (em) performance (ggate and NFS) problem
Message-ID:  <200411180209.52817.Emanuel.Strobl@gmx.net>
In-Reply-To: <419BE654.6020705@mac.com>
References:  <200411172357.47735.Emanuel.Strobl@gmx.net> <419BE654.6020705@mac.com>


On Thursday, 18 November 2004 at 01:01, Chuck Swiger wrote:
> Emanuel Strobl wrote:
> [ ... ]
>
> > Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI
> > Desktop adapter MT) connected directly without a switch/hub
>
> If filesharing via NFS is your primary goal, it's reasonable to test that,

GEOM_GATE is my primary goal, and I can remember that when Pawel wrote this
great feature, he took care of performance and easily outperformed NFS
(with 100baseTX, AFAIK).

> however it would be easier to make sense of your results by testing your
> network hardware at a lower level.  Since you're already running
> portmap/RPC, consider using spray to blast some packets rapidly and see
> what kind of bandwidth you max out using that.  Or use ping with -i & -s
> set to reasonable values depending on whether you're using jumbo frames or
> not.
>
> If the problem is your connection is dropping a few packets, this will show
> up better here.  Using "ping -f" is also a pretty good troubleshooter.  If
> you can dig up a gigabit switch with management capabilities to test with,
> taking a look at the per-port statistics for errors would also be worth
> doing.  A dodgy network cable can still work well enough for the cards to
> have a green link light, but fail to handle high traffic properly.

I'll do some tests regarding these issues to make sure I'm not suffering from
ill conditions, but I'm quite sure my testbed is feeling fine. I don't have
one of these nice managed GigaBit switches, just a crossover cable...
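A rough sketch of the kind of low-level link test suggested above (the peer
address 10.0.0.2 and interface name em0 are placeholders, not from this
thread):

```shell
# Flood-ping the direct link; the summary line reports packet loss.
# Needs root.
ping -f -c 10000 10.0.0.2

# Probe with large payloads: -s 1472 fills a standard 1500-byte MTU
# (IP + ICMP headers add 28 bytes); -s 8972 would fill a 9000-byte jumbo.
ping -c 100 -s 1472 10.0.0.2

# Afterwards, check the per-interface error counters:
netstat -i -I em0
```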

>
> [ ... ]
>
> > - em seems to have problems with MTU greater than 1500
>
> Have you tried using an MTU of 3K or 7K?
>
> I also seem to recall that there were performance problems with em in 5.3
> and a fix is being tested in -CURRENT.  [I just saw Scott's response to the
> list, and your answer, so maybe never mind this point.]
>
> > - UDP seems to have performance disadvantages over TCP regarding NFS
> > which should be vice versa AFAIK
>
> Hmm, yeah...again, this makes me wonder whether you are dropping packets.
> NFS over TCP does better than UDP does in lossy network conditions.

Of course, but if I connect two GbE cards (which implies that auto-MDI-X and
full duplex are mandatory in 1000baseTX mode) I don't expect any UDP packet to
get lost.
But I'll verify tomorrow.
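One way to check, sketched here (em0, the 3000-byte MTU, and the peer address
are assumptions; both ends must be set to the same MTU):

```shell
# Snapshot the UDP drop counters, run the NFS test, then compare;
# rising counts point at packet loss rather than a protocol problem.
netstat -s -p udp | grep -Ei 'dropped|full'

# Try one of the suggested larger MTUs on both hosts:
ifconfig em0 mtu 3000

# Confirm the link really passes that size without fragmenting
# (-D sets Don't Fragment; 3000 minus 28 bytes of headers = 2972):
ping -D -c 10 -s 2972 10.0.0.2
```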

>
> > - polling and em (GbE) with HZ=256 is definitely no good idea, even
> > 10Base-2 can compete
>
> You should be setting HZ to 1000, 2000, or so when using polling, and a

Yep, I know that HZ set to 256 with polling enabled isn't really useful, but I
don't want to drive my GbE card in polling mode at all; instead I try to
prevent my machine from spending time doing nothing, so HZ shouldn't be too
high.
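For completeness, the usual polling recipe on 5.x, should anyone want to test
it with a sane HZ, looks roughly like this (values are Chuck's suggestions,
not measured settings):

```shell
# Kernel config additions (rebuild and reboot required):
#   options DEVICE_POLLING
#   options HZ=1000
#
# Runtime toggles once the kernel supports polling:
sysctl kern.polling.enable=1
sysctl kern.polling.user_frac=50    # share of each tick reserved for userland
```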

Thank you,

-Harry

> higher HZ is definitely recommended when you add in jumbo frames and GB
> speeds.



