Date:      Fri, 23 Dec 2005 09:16:08 +0200
From:      Danny Braniss <danny@cs.huji.ac.il>
To:        Jack Vogel <jfvogel@gmail.com>
Cc:        Gleb Smirnoff <glebius@freebsd.org>, freebsd-stable@freebsd.org
Subject:   Re: em bad performance 
Message-ID:  <E1Eph9s-0002F4-8O@cs1.cs.huji.ac.il>
In-Reply-To: Message from Jack Vogel <jfvogel@gmail.com> of "Thu, 22 Dec 2005 09:41:47 PST." <2a41acea0512220941y61c9b5acs8053e6df8a96a1e4@mail.gmail.com> 

> On 12/22/05, Gleb Smirnoff <glebius@freebsd.org> wrote:
> > On Thu, Dec 22, 2005 at 12:37:53PM +0200, Danny Braniss wrote:
> > D> > On Thu, Dec 22, 2005 at 12:24:42PM +0200, Danny Braniss wrote:
> > D> > D> ------------------------------------------------------------
> > D> > D> Server listening on TCP port 5001
> > D> > D> TCP window size: 64.0 KByte (default)
> > D> > D> ------------------------------------------------------------
> > D> > D> [  4] local 132.65.16.100 port 5001 connected with [6.0/SE7501WV2] port 58122
> > D> > D> (intel westvill)
> > D> > D> [ ID] Interval       Transfer     Bandwidth
> > D> > D> [  4]  0.0-10.0 sec  1.01 GBytes   867 Mbits/sec
> > D> > D> [  4] local 132.65.16.100 port 5001 connected with [5.4/SE7501WV2] port 55269
> > D> > D> (intel westvill)
> > D> > D> [ ID] Interval       Transfer     Bandwidth
> > D> > D> [  4]  0.0-10.0 sec   967 MBytes   811 Mbits/sec
> > D> > D> [  5] local 132.65.16.100 port 5001 connected with [6.0/SR1435VP2] port 58363
> > D> > D> (intel dual xeon/emt64)
> > D> > D> [ ID] Interval       Transfer     Bandwidth
> > D> > D> [  5]  0.0-10.0 sec   578 MBytes   485 Mbits/sec
> > D> > D>
> > D> > D> i've run this several times, and the results are very similar.
> > D> > D> i also tried i386, and the same bad results.
> > D> > D> all hosts are connected at 1gb to the same switch.
> > D> >
> > D> > So we see a big performance gap between the SE7501WV2 and the SR1435VP2.
> > D> > Let's compare the NIC hardware. Can you please show
> > D> > pciconf -lv | grep -A3 ^em on both motherboards?
> > D>
> > D> on a SE7501WV2:
> > D> em0@pci3:7:0:   class=0x020000 card=0x341a8086 chip=0x10108086 rev=0x01 hdr=0x00
> > D>     vendor   = 'Intel Corporation'
> > D>     device   = '82546EB Dual Port Gigabit Ethernet Controller (Copper)'
> > D>     class    = network
> > D>
> > D> on a SR1435VP2:
> > D> em0@pci4:3:0:   class=0x020000 card=0x34668086 chip=0x10768086 rev=0x05 hdr=0x00
> > D>     vendor   = 'Intel Corporation'
> > D>     device   = '82547EI Gigabit Ethernet Controller'
> > D>     class    = network
> >
> > The first one, the 82546EB, is attached to a fast PCI-X bus, and the
> > 82547EI is on the CSA bus. The CSA bus is twice as fast as the old PCI
> > bus: CSA can handle 266 MB/s. I'm not sure, but it may have the same
> > ~50% overhead as the old PCI bus.
> >
> > Probably our em(4) driver is not optimized enough and does too many
> > accesses to the PCI bus, thus using more bandwidth than needed to handle
> > the traffic. In this case we see that a NIC on a slower bus (but one fast
> > enough to handle Gigabit) is much slower than a NIC on a faster bus.
> > (This paragraph is my own theory; it could be complete bullshit.)
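[A back-of-the-envelope check of the bus-bandwidth theory above. The bus
figures are nominal spec numbers, and the 50% overhead is the speculation
from the mail, not a measurement:]

```python
# Rough sanity check: can each bus carry Gigabit Ethernet payload,
# assuming the ~50% bus overhead speculated above?

GIGE_MB_S = 125.0  # 1 Gbit/s line rate expressed in MB/s (one direction)

buses_mb_s = {
    "PCI 32-bit/33MHz": 133.0,    # classic shared PCI bus
    "CSA": 266.0,                 # dedicated link, 2x classic PCI
    "PCI-X 64-bit/133MHz": 1066.0,
}

for name, peak in buses_mb_s.items():
    effective = peak * 0.5  # hypothetical ~50% protocol/driver overhead
    verdict = "enough" if effective >= GIGE_MB_S else "marginal"
    print(f"{name}: peak {peak:.0f} MB/s, effective ~{effective:.0f} MB/s "
          f"-> {verdict} for GigE")
```

[With that assumed overhead, CSA's effective ~133 MB/s sits only just above
the ~125 MB/s GigE needs, which would be consistent with the 485 Mbit/s
result if the real overhead is somewhat higher.]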
>

> CSA bus? I've never heard of it.
>

> To get the best gig performance you really want to see it on PCI Express.
> I see 930ish Mb/s. I'm not really familiar with this motherboard/lom.
>

> You say you run iperf -s on the server side, but what are you using as
> parameters on the client end of the test?
>

iperf -c host
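[a plain `iperf -c host` uses the defaults on both sides. a sketch of a
fuller client run with standard iperf flags (the address is the server from
the output above; the window/stream values are just illustrative):

```shell
# baseline run, as used above (default window, single stream):
iperf -c 132.65.16.100

# larger socket buffer, parallel streams, longer run -- helps show whether
# the bottleneck is the TCP window or the NIC/bus path:
iperf -c 132.65.16.100 -w 256k -P 4 -t 30
```

if -P 4 gets close to line rate while a single stream doesn't, the limit is
more likely per-connection tuning than the hardware.]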

i'm beginning to believe that the problem is elsewhere. i just put
an ethernet nic in a PCI-X/Express slot, and the performance is similarly bad.

danny

> Jack
