Date:      Fri, 17 Sep 2004 04:39:01 -0700
From:      Julian Elischer <julian@elischer.org>
To:        donatas <donatas@lrtc.net>
Cc:        freebsd-net@freebsd.org
Subject:   Re: ng_one2many - very slow
Message-ID:  <414ACCD5.2090607@elischer.org>
In-Reply-To: <030a01c49c9f$7c215970$f2f109d9@donatas>
References:  <030a01c49c9f$7c215970$f2f109d9@donatas>

donatas wrote:
> Hello,
> 
> we need a 400 Mbit link between two Intel machines (Xeon 2.4 GHz, RAID, 512 MB DDR, 2 em ports (1000 Mbit), 2 fxp ports (100 Mbit))
> 
> configuration taken from ng_one2many man page:
> _____________________________________________________________________
> ifconfig em0 up media 100BaseTX mediaopt full-duplex
> ifconfig em1 up media 100BaseTX mediaopt full-duplex
> ifconfig fxp0 up media 100BaseTX mediaopt full-duplex
> ifconfig fxp1 up media 100BaseTX mediaopt full-duplex
> 
>        ngctl mkpeer em0: one2many upper one
>        ngctl connect em0: em0:upper lower many0
>        ngctl connect em1: em0:upper lower many1
>        ngctl connect fxp0: em0:upper lower many2
>        ngctl connect fxp1: em0:upper lower many3
>        ngctl msg em1: setpromisc 1
>        ngctl msg fxp0: setpromisc 1
>        ngctl msg fxp1: setpromisc 1
>        ngctl msg em1: setautosrc 0
>        ngctl msg fxp0: setautosrc 0
>        ngctl msg fxp1: setautosrc 0
>        ngctl msg em0:upper setconfig "{ xmitAlg=1 failAlg=1 enabledLinks=[ 1 1 1 1 ] }"
>        ifconfig em0 192.168.1.1/24 (and 1.2/24 on the second machine)
> _______________________________________________________________________
> kernel is compiled with the following options:
> NETGRAPH
> NETGRAPH_BRIDGE
> NETGRAPH_ECHO
> NETGRAPH_ETHER
> NETGRAPH_IFACE
> NETGRAPH_ONE2MANY
> ________________________________________________________________________
> OS - FreeBSD 5.2.1, freshly installed
> machines are connected directly (port to port) with crossover UTP Cat 5 cables
> ________________________________________________________________________
> we used iperf to test TCP throughput between those machines:
> Results:
> 10 sec.    Transferred 250 MBytes        Bandwidth 210 Mbits/sec        (simplex mode)
> 
> and in duplex mode:
> 10 sec.    Transferred 169 MBytes        Bandwidth 141 Mbits/sec
> 10 sec.    Transferred 163 MBytes        Bandwidth 136 Mbits/sec
> 
> after changing enabledLinks=[ 1 1 1 1 ] to [ 1 1 ], the results are almost the same:
> ________________________________________________________________________
> 10 sec.    Transferred 242 MBytes        Bandwidth 203 Mbits/sec        (simplex mode)
> 
> and in duplex mode:
> 10 sec.    Transferred 163 MBytes        Bandwidth 136 Mbits/sec
> 10 sec.    Transferred 150 MBytes        Bandwidth 125 Mbits/sec
> ________________________________________________________________________
> a 60-second transfer indicated 223 Mbits/sec in simplex mode
> 
> to be fair, we've tested a direct link between the em adapters in gigabit mode: 850 Mbit throughput was achieved with TCP packets, and nearly 1 Gbit with UDP.
> 
> as you see, the one2many test results aren't even close to 400 Mbit.
> Is it possible that em and fxp cannot work together, or something else? What can you suggest to solve this small problem?
> 
> thanks in advance

netgraph was not originally designed to be a super-high-speed facility,
but rather a convenient prototyping environment and a production
environment for convoluted but slower WAN-type links.

It has in fact turned out to be a lot more useful in normal networking
environments than we had expected. If you want to do bundling, however,
I suggest you look at the ng_fec node instead, as it handles issues
that ng_one2many does not, such as loss of link, and it is a bit
more optimised, needing fewer nodes.
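
A minimal ng_fec setup for your two gigabit ports might look like the
sketch below (untested, adapted from the ng_fec(4) man page; it assumes
the ng_fec module is available via kldload, or NETGRAPH_FEC compiled
into the kernel):

       kldload ng_fec                      # load the node type (or use options NETGRAPH_FEC)
       ngctl mkpeer fec dummy fec          # create a new fec node; this brings up interface fec0
       ngctl msg fec0: add_iface '"em0"'   # add the first gigabit port to the bundle
       ngctl msg fec0: add_iface '"em1"'   # add the second gigabit port
       ngctl msg fec0: set_mode_inet       # distribute outgoing traffic by hashing IP addresses
       ifconfig fec0 192.168.1.1/24 up     # configure the bundle as a single interface

One caveat: FEC-style bundling distributes traffic by hashing
addresses, so a single host-to-host stream may still end up on one
link, whereas ng_one2many's round-robin spreads individual packets.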


Having said that, we will be looking at netgraph performance in the
future. It has always been "fast enough", so we've never really looked
at tuning it until now, especially in 5.x.
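
In the meantime, if you want to see how your one2many node is actually
spreading the load, it keeps per-link statistics you can query (from
memory, untested; the link numbers correspond to the many0..many3 hooks
in your setup):

       ngctl msg em0:upper getstats 0      # packets/bytes sent over many0 (em0)
       ngctl msg em0:upper getstats 1      # many1 (em1)
       ngctl msg em0:upper getstats 2      # many2 (fxp0)
       ngctl msg em0:upper getstats 3      # many3 (fxp1)

If the counters are roughly equal across all four links, round-robin is
doing its job and the aggregate is being limited elsewhere.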


