Date:      Mon, 18 Apr 2011 19:12:15 +0200 (CEST)
From:      Ingo Flaschberger <if@xip.at>
To:        "K. Macy" <kmacy@freebsd.org>
Cc:        freebsd-net@freebsd.org, Ingo Flaschberger <if@freebsd.org>
Subject:   Re: Routing enhancement - reduce routing table locking
Message-ID:  <alpine.LRH.2.00.1104181852420.8693@filebunker.xip.at>
In-Reply-To: <BANLkTik39HvVire6Hzi9U6J2BwKV7apCCg@mail.gmail.com>
References:  <alpine.LRH.2.00.1104050303140.2152@filebunker.xip.at> <alpine.LRH.2.00.1104061426350.2152@filebunker.xip.at> <alpine.LRH.2.00.1104180051450.8693@filebunker.xip.at> <BANLkTik39HvVire6Hzi9U6J2BwKV7apCCg@mail.gmail.com>


> It would be great to see flowtable going back to its intended use.
> However, I would be surprised if this actually scales to Mpps. I don't
> have any high end hardware at the moment to test, what is the highest
> packet rate you've seen? i.e. simply generating small packets.

Currently I have no tests available, but I have seen this on an appliance 
with:
Intel Q35 chipset
quad-core CPU
Intel em desktop PCIe cards

~200 Mbit/s of 64-byte packets, i.e. ~400 kpps, without packet loss.
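
(Rough sanity check: 200*10^6 bit/s / (64 byte * 8 bit/byte) is about 
390,000 packets/s, which matches the ~400 kpps figure, assuming the 
200 Mbit counts frame bytes only and not preamble/inter-frame gap.)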

Without the patch, flowtable and fastforward ran at the same speed as 
flowtable, fastforward and the standard forwarding path do with the patch.

That means that with the patch, the standard forwarding path reaches the 
same speed as the fastforward path.

It seems I'm hitting some other speed limit on my system, so there was no 
real difference between flowtable and fastforward, with or without the 
patch.
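
To illustrate what "reducing routing table locking" buys the forwarding 
path: the idea is that per-packet route lookups only need a shared lock, 
so concurrent forwarding threads no longer serialize on an exclusive 
routing-table lock. A minimal userspace sketch of that idea, assuming a 
toy single-entry table (this is not the actual kernel patch; the kernel 
uses its own primitives such as rmlock(9), and the rtable_* names here 
are made up for illustration):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Toy routing table with a single route, protected by one rwlock.
 * The real table is a radix trie; only the locking pattern matters here.
 */
struct rtable {
	pthread_rwlock_t	lock;
	uint32_t		prefix;
	uint32_t		mask;
	uint32_t		gateway;
};

/*
 * Per-packet lookup: shared (read) lock only, so many forwarding
 * threads can look up routes in parallel instead of serializing.
 */
static int
rtable_lookup(struct rtable *rt, uint32_t dst, uint32_t *gw)
{
	int found = 0;

	pthread_rwlock_rdlock(&rt->lock);
	if ((dst & rt->mask) == rt->prefix) {
		*gw = rt->gateway;
		found = 1;
	}
	pthread_rwlock_unlock(&rt->lock);
	return (found);
}

/* Route changes are rare; only they take the exclusive lock. */
static void
rtable_update(struct rtable *rt, uint32_t prefix, uint32_t mask,
    uint32_t gw)
{
	pthread_rwlock_wrlock(&rt->lock);
	rt->prefix = prefix & mask;
	rt->mask = mask;
	rt->gateway = gw;
	pthread_rwlock_unlock(&rt->lock);
}

int
main(void)
{
	struct rtable rt;
	uint32_t gw;

	pthread_rwlock_init(&rt.lock, NULL);
	rtable_update(&rt, 0xc0a80000, 0xffff0000, 0xc0a80001); /* 192.168/16 */
	if (rtable_lookup(&rt, 0xc0a80142, &gw))	/* 192.168.1.66 */
		printf("gateway: 0x%08x\n", gw);
	pthread_rwlock_destroy(&rt.lock);
	return (0);
}

With an exclusive mutex in rtable_lookup() instead of the rdlock, every 
forwarding thread would contend on each lookup, which is the kind of 
per-packet locking cost the patch is meant to remove.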

It would be great if someone could load a system with a full table 
(~400k routes) and run some tests at 10GbE speed.

Kind regards,
 	Ingo Flaschberger



