Date:      Mon, 07 Jul 2008 11:02:06 +0200
From:      Andre Oppermann <andre@freebsd.org>
To:        Ingo Flaschberger <if@xip.at>
Cc:        FreeBSD Net <freebsd-net@freebsd.org>, Paul <paul@gtcomm.net>
Subject:   Re: Freebsd IP Forwarding performance (question, and some info) [7-stable, current, em, smp]
Message-ID:  <4871DB8E.5070903@freebsd.org>
In-Reply-To: <alpine.LFD.1.10.0807052356130.2145@filebunker.xip.at>
References:  <4867420D.7090406@gtcomm.net> <486986D9.3000607@monkeybrains.net>	<48699960.9070100@gtcomm.net>	<ea7b9c170806302005n2a66f592h2127f87a0ba2c6d2@mail.gmail.com>	<20080701033117.GH83626@cdnetworks.co.kr>	<ea7b9c170806302050p2a3a5480t29923a4ac2d7c852@mail.gmail.com>	<4869ACFC.5020205@gtcomm.net> <4869B025.9080006@gtcomm.net>	<486A7E45.3030902@gtcomm.net> <486A8F24.5010000@gtcomm.net>	<486A9A0E.6060308@elischer.org> <486B41D5.3060609@gtcomm.net>	<alpine.LFD.1.10.0807021052041.557@filebunker.xip.at>	<486B4F11.6040906@gtcomm.net>	<alpine.LFD.1.10.0807021155280.557@filebunker.xip.at>	<486BC7F5.5070604@gtcomm.net>	<20080703160540.W6369@delplex.bde.org>	<486C7F93.7010308@gtcomm.net>	<20080703195521.O6973@delplex.bde.org>	<486D35A0.4000302@gtcomm.net>	<alpine.LFD.1.10.0807041106591.19613@filebunker.xip.at>	<486DF1A3.9000409@gtcomm.net>	<alpine.LFD.1.10.0807041303490.20760@filebunker.xip.at>	<486E65E6.3060301@gtcomm.net> <alpine.LFD.1.10.0807052356130.2145@filebunker.xip.at>

Ingo Flaschberger wrote:
> Dear Paul,
> 
>> I tried all of this :/  still, 256/512 descriptors seem to work the best.
>> Happy to let you log into the machine and fiddle around if you want :)
> 
> Yes, but I'm sure I will also not be able to achieve much more pps.
> As it seems you are hitting hardware/software-level barriers, my only
> idea is to test DragonFly BSD, which seems to have less software overhead.

I tested DragonFly some time ago with an Agilent N2X tester and it
was by far the slowest of the pack.

> I don't think you will be able to route 64-byte packets at 1GBit
> wirespeed (2Mpps) with a current x86 platform.

You have to take the inter-frame gap and the other per-frame overheads
into account too.  That gives a maximum of about 1.488Mpps for
minimum-size (64-byte) frames on a 1GigE interface.
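
On the back of an envelope: a minimum-size Ethernet frame is 64 bytes
including header and FCS, and the preamble/SFD plus inter-frame gap add
another 8 + 12 bytes on the wire:

    (64 + 8 + 12) bytes * 8  = 672 bits per frame
    10^9 bits/s / 672 bits   = ~1.488Mpps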

In general the chipsets and buses are able to transfer quite a bit of
data.  On a dual-Opteron 848 I was able to sink 2.5Mpps into the machine
with "ifconfig em[01] monitor" without hitting the CPU ceiling.  This
means that the bus and interrupt handling are not where most of the
time is spent.
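
To repeat that sink test, it boils down to something like this (em0 as
the example interface; in monitor mode received packets are counted but
discarded after bpf(4) processing, so only the bus, DMA and interrupt
path is exercised):

    ifconfig em0 monitor
    netstat -I em0 -w 1

netstat then prints the input packet rate once per second.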

When I did my profiling, the limiting factor at the saturation point
was the cache-miss penalty for accessing the packet headers.  At that
point about 50% of the time was spent waiting for memory to make its
way into the CPU.
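
One way to take such a measurement on FreeBSD is hwpmc(4).  A sketch,
assuming the K8 data-cache-miss event for the Opteron mentioned above
(event names differ per CPU family):

    kldload hwpmc
    pmcstat -T -S k8-dc-miss

The top-like display then shows which functions accumulate the
cache-miss samples.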

> I hoped to reach 1Mpps with the hardware I mentioned a few mails back,
> but 2Mpps is far, far away.
> Currently I get 160kpps with a 32-bit/33MHz PCI bus and a 1.2GHz
> Mobile Pentium.

This is more or less expected.  PCI32 is not able to sustain high
packet rates; the per-transaction bus setup times kill the speed.  For
larger packets the payload-to-overhead ratio gets much better and
reasonable throughput can be achieved.
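
A quick sanity check with the numbers above (assuming 64-byte packets):
160kpps * 64 bytes is only about 10MB/s, a small fraction of the
~133MB/s theoretical peak of a 32-bit/33MHz PCI bus.  The
per-transaction arbitration and setup cost, not the raw bandwidth, is
what caps the small-packet rate.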

> Perhaps you'll have better luck with different hardware platforms
> (PPC, MIPS, ..?), or use FreeBSD only for routing-table updates and
> special network cards (NetFPGA) for the real forwarding.

NetFPGA doesn't have enough TCAM space to be useful for real routing
(as in an Internet-sized routing table).  The trick many embedded
networking CPUs use is cache prefetching that is integrated with the
network controller.  The first 64-128 bytes of every packet are
transferred automatically into the L2 cache by the hardware.  This
allows relatively slow CPUs (the 700MHz Broadcom BCM1250 in the Cisco
NPE-G1 or the 1.67GHz Freescale 7448 in the NPE-G2) to do more than
1Mpps.  Until something like this is possible on Intel or AMD x86 CPUs
we have a ceiling limited by RAM speed.
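
To illustrate what such an integrated prefetch buys, here is a minimal
sketch of the software equivalent: touch the first two cache lines of
each received packet before the forwarding path dereferences the
headers.  The rx_desc structure and the function are hypothetical,
purely for illustration; only GCC's __builtin_prefetch is real:

    #include <stddef.h>

    /* Hypothetical RX descriptor, for illustration only. */
    struct rx_desc {
        void   *buf;    /* start of the packet data */
        size_t  len;    /* packet length */
    };

    /*
     * Touch the first two cache lines (128 bytes) of each packet in the
     * RX ring before the code that parses the Ethernet/IP headers runs,
     * so the memory loads overlap with useful work instead of stalling.
     */
    static void
    rx_prefetch_headers(struct rx_desc *ring, int head, int count, int nslots)
    {
        int i;

        for (i = 0; i < count; i++) {
            char *p = ring[(head + i) % nslots].buf;

            __builtin_prefetch(p, 0, 3);        /* bytes 0-63   */
            __builtin_prefetch(p + 64, 0, 3);   /* bytes 64-127 */
        }
    }

This only hides part of the latency, since the prefetches are issued by
the same CPU that later stalls; the hardware-assisted variant starts
the transfer when the DMA completes and therefore wins much more.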

-- 
Andre


