Date:      Thu, 22 Dec 2005 09:41:47 -0800
From:      Jack Vogel <jfvogel@gmail.com>
To:        Gleb Smirnoff <glebius@freebsd.org>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: em bad performance
Message-ID:  <2a41acea0512220941y61c9b5acs8053e6df8a96a1e4@mail.gmail.com>
In-Reply-To: <20051222105215.GB41381@cell.sick.ru>
References:  <20051222103027.GZ41381@cell.sick.ru> <E1EpNpZ-000IKU-8d@cs1.cs.huji.ac.il> <20051222105215.GB41381@cell.sick.ru>

On 12/22/05, Gleb Smirnoff <glebius@freebsd.org> wrote:
> On Thu, Dec 22, 2005 at 12:37:53PM +0200, Danny Braniss wrote:
> D> > On Thu, Dec 22, 2005 at 12:24:42PM +0200, Danny Braniss wrote:
> D> > D> ------------------------------------------------------------
> D> > D> Server listening on TCP port 5001
> D> > D> TCP window size: 64.0 KByte (default)
> D> > D> ------------------------------------------------------------
> D> > D> [  4] local 132.65.16.100 port 5001 connected with [6.0/SE7501WV2] port 58122
> D> > D> (intel westvill)
> D> > D> [ ID] Interval       Transfer     Bandwidth
> D> > D> [  4]  0.0-10.0 sec  1.01 GBytes   867 Mbits/sec
> D> > D> [  4] local 132.65.16.100 port 5001 connected with [5.4/SE7501WV2] port 55269
> D> > D> (intel westvill)
> D> > D> [ ID] Interval       Transfer     Bandwidth
> D> > D> [  4]  0.0-10.0 sec   967 MBytes   811 Mbits/sec
> D> > D> [  5] local 132.65.16.100 port 5001 connected with [6.0/SR1435VP2] port 58363
> D> > D> (intel dual xeon/emt64)
> D> > D> [ ID] Interval       Transfer     Bandwidth
> D> > D> [  5]  0.0-10.0 sec   578 MBytes   485 Mbits/sec
> D> > D>
> D> > D> i've run this several times, and the results are very similar.
> D> > D> i also tried i386, with the same bad results.
> D> > D> all hosts are connected at 1gb to the same switch.
> D> >
> D> > So we see a big performance gap between the SE7501WV2 and SR1435VP2. Let's
> D> > compare the NIC hardware. Can you please show pciconf -lv | grep -A3 ^em on
> D> > both motherboards?
> D>
> D> on a SE7501WV2:
> D> em0@pci3:7:0:   class=0x020000 card=0x341a8086 chip=0x10108086 rev=0x01 hdr=0x00
> D>     vendor   = 'Intel Corporation'
> D>     device   = '82546EB Dual Port Gigabit Ethernet Controller (Copper)'
> D>     class    = network
> D>
> D> on a SR1435VP2:
> D> em0@pci4:3:0:   class=0x020000 card=0x34668086 chip=0x10768086 rev=0x05 hdr=0x00
> D>     vendor   = 'Intel Corporation'
> D>     device   = '82547EI Gigabit Ethernet Controller'
> D>     class    = network
>
> The first one, the 82546EB, is attached to a fast PCI-X bus, while the 82547EI is
> on the CSA bus. The CSA bus is twice as fast as the old PCI bus; CSA can handle
> 266 MB/s. I'm not sure, but it may have the same ~50% overhead as the old PCI bus.
>
> Probably our em(4) driver is not optimized enough and does too many accesses
> to the PCI bus, thus using more bus bandwidth than needed to handle the traffic.
> In this case we see that the NIC on the slower bus (though still fast enough for
> Gigabit) is much slower than the NIC on the faster bus. (This paragraph is my own
> theory; it could be complete bullshit.)

CSA bus? I've never heard of it.

To get the best gig performance you really want to see it on PCI Express.
I see 930-ish Mb/s. I'm not really familiar with this motherboard/LOM.
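
For rough numbers, taking the 266 MB/s CSA figure and the ~50% PCI overhead guess
quoted above at face value:

    gigabit line rate:           1000 Mbit/s ~= 125 MB/s of payload each way
    CSA link:                     266 MB/s
    CSA minus ~50% overhead:     ~133 MB/s usable
    PCI-X (64-bit/100 MHz):      ~800 MB/s

so a gigabit NIC hanging off CSA has very little headroom once descriptor fetches
and status write-backs share the same link, while on PCI-X there is plenty.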

You say you run iperf -s on the server side, but what are you using as
parameters on the client end of the test?
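
For reference, a minimal pairing with stock iperf (assuming no other flags were
used) would look something like:

    iperf -s                         # on the receiver, 132.65.16.100 above
    iperf -c 132.65.16.100 -t 10     # on the sender; -t is the run length in seconds

Client-side flags like -w (socket buffer / window size) or -P (parallel streams)
can move the numbers quite a bit, so the runs above are only comparable if the
client invocation was identical each time.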

Jack


