Date:      Sat, 01 Mar 2008 00:33:32 +0100
From:      Willem Jan Withagen <wjw@digiware.nl>
To:        Ingo Flaschberger <if@xip.at>
Cc:        Daniel Dias Gonçalves <daniel@dgnetwork.com.br>, freebsd-net@freebsd.org, freebsd-performance@freebsd.org, Kevin Oberman <oberman@es.net>
Subject:   Re: FBSD 1GBit router?
Message-ID:  <47C8964C.9080309@digiware.nl>
In-Reply-To: <alpine.LFD.1.00.0802260132240.9719@filebunker.xip.at>
References:  <20080226003107.54CD94500E@ptavv.es.net> <alpine.LFD.1.00.0802260132240.9719@filebunker.xip.at>

> I have a 1.2 GHz Pentium-M appliance with 4x 32-bit, 33 MHz PCI Intel 
> e1000 cards.
> With maximum tuning I can "route" ~400 Mbps with big packets and ~80 Mbps 
> with 64-byte packets.
> That is around 100 kpps, which is not bad for a PCI architecture.
> 
> To reach higher bandwidths, better buses are needed.
> PCI Express cards are currently the best choice.
> One dedicated PCI Express lane (1.25 Gbps) has more bandwidth than a whole 
> 32-bit, 33 MHz PCI bus.
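
The "around 100 kpps" figure can be sanity-checked with a quick back-of-envelope calculation (a sketch in Python; the 8-byte preamble and 12-byte inter-frame gap are standard Ethernet wire overhead, assumed here, not stated in the thread):

```python
# Rough packet-rate check for ~80 Mbps of 64-byte packets.
FRAME = 64            # bytes: minimum Ethernet frame size
OVERHEAD = 8 + 12     # bytes on the wire per frame: preamble + inter-frame gap

def pps(throughput_bps, frame_bytes=FRAME):
    """Packets per second, counting frame bits only."""
    return throughput_bps / (frame_bytes * 8)

small_pkt_pps = pps(80e6)                        # ~156 kpps, frame bits only
on_wire_pps = 80e6 / ((FRAME + OVERHEAD) * 8)    # ~119 kpps with wire overhead
```

Either way the result lands in the 100-150 kpps range, consistent with the "around 100 kpps" observation.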

Like you say, routing 400 Mb/s is close to the max of the PCI bus, which
has a theoretical max of 33 MHz * 4 bytes * 8 bits ~ 1 Gbps. Now routing
400 Mb/s means 400 Mb/s in and 400 Mb/s out across the same shared bus,
so you are within about 80% of the bus max, not counting memory access
and other overhead.
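
The shared-bus arithmetic above works out as follows (a minimal sketch in Python, using only the numbers from this thread):

```python
# 32-bit / 33 MHz PCI: 4 bytes per clock on a bus shared by all devices.
bus_max_bps = 33e6 * 4 * 8         # theoretical max, ~1.056 Gb/s
routed_bps = 400e6                 # reported forwarding rate
bus_load_bps = 2 * routed_bps      # each routed bit crosses the bus in AND out
utilization = bus_load_bps / bus_max_bps   # ~0.76, i.e. within ~80% of max
```

At ~76% utilization there is little headroom left once memory traffic and PCI transaction overhead are added, which is why ~400 Mb/s is about the practical ceiling.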

PCI Express gives each PCI-E device its own dedicated link into a
central hub, raising the limit to the speed of the front-side bus on
Intel architectures, which at the moment is a lot higher than what a
single PCI bus can do.

What it does not explain is why you can only get 80 Mb/s with 64-byte 
packets, which suggests bottlenecks other than just the bus.
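
One way to see why small packets hurt is to look at the per-packet budget rather than the byte rate (an illustrative sketch using the 1.2 GHz CPU and ~100 kpps figures from this thread; the per-packet costs named in the comments are assumptions, not measurements):

```python
# Per-packet budget at the observed small-packet rate.
cpu_hz = 1.2e9                            # 1.2 GHz Pentium-M
pkt_rate = 100e3                          # ~100 kpps observed
cycles_per_packet = cpu_hz / pkt_rate     # ~12,000 CPU cycles per packet
time_per_packet_us = 1e6 / pkt_rate       # 10 microseconds per packet
```

Interrupt handling, routing-table lookup, and per-descriptor PCI transactions all charge a fixed cost per packet regardless of its size, so those 12,000 cycles get consumed long before the bus itself is saturated.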

--WjW


