Date:      Fri, 19 Nov 2004 12:56:05 +0000 (GMT)
From:      Robert Watson <rwatson@freebsd.org>
To:        Emanuel Strobl <Emanuel.Strobl@gmx.net>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: serious networking (em) performance (ggate and NFS) problem
Message-ID:  <Pine.NEB.3.96L.1041119124719.92822D-100000@fledge.watson.org>
In-Reply-To: <200411191318.46405.Emanuel.Strobl@gmx.net>


On Fri, 19 Nov 2004, Emanuel Strobl wrote:

> Am Donnerstag, 18. November 2004 13:27 schrieb Robert Watson:
> > On Wed, 17 Nov 2004, Emanuel Strobl wrote:
> > > I really love 5.3 in many ways but here're some unbelievable transfer
> 
> First, thanks a lot to all of you for paying attention to my problem
> again. I'll use this as a cumulative answer to your many postings, first
> answering Robert's questions and, at the bottom, those of the others.
> 
> I changed cables and couldn't reproduce the bad results, then changed
> the cables back and still cannot reproduce them; in particular the ggate
> write, formerly at 2.6 MB/s, now performs at 15 MB/s. I haven't done any
> more polling tests, just interrupt-driven ones, since Matt explained
> that em doesn't benefit from polling in any way.
> 
> The results don't indicate a serious problem now, but they are still
> about a third of what I'd expect from my hardware. Do I really need
> gigahertz-class CPUs to transfer 30 MB/s over GbE?

Well, the claim that if_em doesn't benefit from polling is inaccurate in
the general case, but quite accurate in this specific case.  In a box with
multiple NICs, polling can make quite a big difference, not just by
mitigating interrupt load, but also by helping to prioritize and manage
the load, preventing livelock.  As I indicated in my earlier e-mail,
however, on your system it shouldn't make much difference -- 4k-8k
interrupts/second is not a big deal, and quite normal for an if_em card in
the interrupt-driven configuration.
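(For completeness, since polling came up: on 5.x it is compiled into the
kernel and toggled at runtime roughly as below.  The HZ value is a common
choice for polling setups rather than a requirement, and the user_frac
setting is optional:)

```shell
# kernel config additions (then rebuild and reboot):
#   options DEVICE_POLLING
#   options HZ=1000
# runtime: turn polling on...
sysctl kern.polling.enable=1
# ...and optionally reserve a share of CPU time for userland:
sysctl kern.polling.user_frac=50
```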
 
It looks like the netperf TCP test is getting just under 27MB/s, or
214Mb/s.  That does seem on the low side for the PCI bus, but it's also
instructive to look at the netperf UDP_STREAM results, which indicate that
the box believes it is transmitting 417Mb/s but only 67Mb/s are being
received or processed fast enough by netserver on the remote box.  This
means you've achieved a send rate to the card of about 54Mb/s.  Note that
you can actually do the math on cycles/packet or cycles/byte here -- with
TCP_STREAM, it looks like some combination of recipient CPU and latency
overhead is the limiting factor, with netserver running at 94% busy.
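To make the cycles/byte arithmetic concrete, here is the calculation for
the TCP_STREAM numbers above.  The CPU clock is an assumption -- your
mail never states it, so plug in the real value for your box:

```python
# Back-of-the-envelope cycles/byte estimate from the TCP_STREAM result.
cpu_hz = 1_000_000_000       # ASSUMED 1 GHz CPU; substitute your clock rate
busy_fraction = 0.94         # netserver at 94% busy, per the reported run
throughput_mbit = 214        # ~27 MB/s observed on the TCP test

# Convert Mb/s to bytes/s, then divide the busy cycles by the byte rate.
throughput_bytes = throughput_mbit * 1_000_000 / 8
cycles_per_byte = cpu_hz * busy_fraction / throughput_bytes
print(round(cycles_per_byte, 1))   # roughly 35 cycles per byte received
```

A number in the tens of cycles per byte is what you would compare against
a faster CPU or a configuration change to see where the time is going.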

Could you try using geom gate to export a malloc-backed md device, and see
what performance you see there?  This would eliminate the storage round
trip and guarantee the source is in memory, eliminating some possible
sources of synchronous operation (which would increase latency, reducing
throughput).  Looking at CPU consumption here would also be helpful, as it
would allow us to reason about where the CPU is going.
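For what it's worth, that test might look something like the following.
The md size, network range, and host name are all made-up placeholders,
and /etc/gg.exports is the exports file ggated(8) reads by default:

```shell
# On the exporting box: create a malloc-backed md device
# (256 MB is an arbitrary size for illustration).
mdconfig -a -t malloc -s 256m           # prints the device name, e.g. md0
echo "192.168.1.0/24 RW /dev/md0" >> /etc/gg.exports
ggated

# On the other box: attach the exported device and push data at it,
# watching top -S and systat -vmstat 1 while the transfer runs.
ggatec create -o rw otherhost /dev/md0  # creates e.g. /dev/ggate0
dd if=/dev/zero of=/dev/ggate0 bs=1m count=200
```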

> I was aware of that and because of lacking a GbE switch anyway I decided
> to use a simple cable ;) 

Yes, this is my favorite configuration :-).

> > (5) Next, I'd measure CPU consumption on the end box -- in particular, use
> >     top -S and systat -vmstat 1 to compare the idle condition of the
> >     system and the system under load.
> >
> 
> I additionally added these values to the netperf results.

Thanks for your very complete and careful testing and reporting :-).

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert@fledge.watson.org      Principal Research Scientist, McAfee Research


