Date: Thu, 26 Oct 2000 07:56:19 +1100
From: Peter Jeremy <peter.jeremy@alcatel.com.au>
To: David Miller <dmiller@search.sparks.net>
Cc: freebsd-hardware@FreeBSD.ORG
Subject: Re: Multiple PCI busses?
Message-ID: <00Oct26.075621est.115449@border.alcanet.com.au>
In-Reply-To: <no.id>; from dmiller@search.sparks.net on Sat, Sep 23, 2000 at 08:08:22AM -0400
[Catching up on old mail]

On 2000-Sep-23 08:08:22 -0400, David Miller <dmiller@search.sparks.net> wrote:
>Anyone have any idea what the upper end of thruput is?  I'm sure a few
>thousand packets per second is doable, but how about the tens of
>thousands?

Last February I did some experimenting using a P-133 box and could route
just over 10,000 (small) packets/sec (CPU limited) between different LANs.
The throughput testing gave me pretty much wire speed[1].  This was using a
couple of Intel Pro/100+ cards connected to 100baseTX half-duplex hubs.

Based on this, I'd say you'd be looking at hundreds of thousands of
packets/sec on a high-end processor.  Your overall throughput would come
down to bus bandwidth (PCI and RAM).

>                       Is this an area where a big cache on a
>xeon processor would help more than extra CPU cycles?

As long as the routing code, device driver code and your routing tables fit
into the cache, you should be OK.  Cache is pretty much irrelevant to the
actual packets you are routing - the CPU only needs to read the destination
IP address out of the header once for each packet (and do a few mbuf
management accesses).  It primarily just gets in the way of the PCI DMA :-).

[1] Given a decent NIC, the CPU load is pretty much determined by the
packet rate, independent of the packet size.

Peter

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hardware" in the body of the message
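[Editorial note: the "wire speed" and packets/sec figures discussed above can be sanity-checked with standard Ethernet framing arithmetic. The sketch below is not from the original email; it uses the usual 802.3 constants (8-byte preamble+SFD, 12-byte inter-frame gap) to compute the theoretical packet-rate ceiling of a 100 Mbit/s link.]

```python
# Back-of-envelope wire-speed packet rates for 100baseTX.
# Framing constants are standard Ethernet values, not figures from the email.

LINK_BPS = 100_000_000   # 100 Mbit/s link
PREAMBLE = 8             # preamble + start-of-frame delimiter, bytes
IFG = 12                 # minimum inter-frame gap, bytes

def max_pps(frame_bytes: int) -> float:
    """Theoretical maximum packets/sec for a given on-wire frame size."""
    wire_bits = (frame_bytes + PREAMBLE + IFG) * 8
    return LINK_BPS / wire_bits

if __name__ == "__main__":
    # Minimum (64-byte) frames: the worst case for a router's packet rate.
    print(f"64-byte frames:   {max_pps(64):,.0f} pps")
    # Maximum (1518-byte) frames: where "wire speed" is easiest to hit.
    print(f"1518-byte frames: {max_pps(1518):,.0f} pps")
```

This illustrates both claims in the email: saturating the link with large frames needs only ~8,000 packets/sec (so a P-133 routing 10,000 packets/sec sees "wire speed" there), while minimum-size frames demand nearly 149,000 packets/sec, which is why small-packet routing is CPU-limited and per-packet cost dominates.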