Date:      Mon, 29 Jun 1998 12:49:45 -0600 (MDT)
From:      Atipa <freebsd@atipa.com>
To:        Don Lewis <Don.Lewis@tsc.tdk.com>
Cc:        Chris Dillon <cdillon@wolves.k12.mo.us>, Ulf Zimmermann <ulf@Alameda.net>, hackers@FreeBSD.ORG
Subject:   Re: Will 8 Intel EtherExpress PRO 10/100's be a problem?
Message-ID:  <Pine.BSF.3.96.980629124322.7863C-100000@altrox.atipa.com>
In-Reply-To: <199806291935.MAA26508@salsa.gv.tsc.tdk.com>


> } Doh.. I knew that, but didn't put that in my calculation.  Anyway, I'm not
> } needing full wire-speed from these things.  I think I'd be happy with
> } 1/5th that. :-)  I'm expecting that if ftp.freebsd.org can do about
> } 5MB/sec on average, along with thousands of FTP clients, without breaking
> } a sweat on a PPro200, then a PII-350 or 400 should be able to do
> } line-speed at least between two networks at a time.  If and when I do
> } this, expect me to perform some benchmarks. :-)
> 
> With FTP clients, a sizeable percentage of the packets will be large
> and will account for most of the bandwidth.  You may find yourself
> running out of CPU if much of your bandwidth is used by small packets,
> since there is a fixed amount of per-packet CPU overhead.  We found out
> that while our Cisco 4000 can run at wire speed (10 Mb/s) while forwarding
> our normal traffic mix which contains many large packets, it runs out of
> CPU if it gets blasted with tinygrams.

I think a PII would keep up with that kind of load...
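
For what it's worth, here is a quick back-of-the-envelope (my own sketch,
assuming standard Ethernet framing overhead of an 8-byte preamble and a
12-byte inter-frame gap) showing why tinygrams are so much more work than
FTP-sized packets at the same bit rate:

/* packets/sec needed to fill a 100 Mb/s link at the two Ethernet
 * frame-size extremes; per-packet CPU cost is roughly constant, so the
 * small-frame case is ~18x the forwarding work for the same bandwidth */
#include <stdio.h>

int main(void)
{
    const double link_bps  = 100e6;                  /* 100 Mb/s wire speed    */
    const double min_frame = (8 + 64 + 12) * 8;      /* 672 bits on the wire   */
    const double max_frame = (8 + 1518 + 12) * 8;    /* 12304 bits on the wire */

    printf("64-byte frames:   %.0f pkt/s\n", link_bps / min_frame);
    printf("1518-byte frames: %.0f pkt/s\n", link_bps / max_frame);
    return 0;
}

That works out to roughly 148800 pkt/s versus 8100 pkt/s, so the interrupt
and per-packet processing load can swing by a factor of ~18 with no change
in bandwidth.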
 
> } As for the "main PCI bus" being the bottleneck, I'm really hoping they
> } used three host-to-PCI bridges, and not a single host-to-PCI bridge and
> } two PCI-to-PCI bridges.  Even if not, I could push about 100MB/sec across 
> } the bus (assuming the CPU could push that), and that's more than enough
> } for me.
> 
> I suspect that it only has one host-to-PCI bridge, since the silicon is
> pretty common for that.  Supporting multiple host-to-PCI bridges would
> either require a custom chipset with multiple bridges built in (which
> would require a *lot* of pins), or would require bridge chips that can
> arbitrate for access on the host side.  The latter would be difficult
> to get to work because of the high speeds on the host side of the bridge.

It would almost certainly be a single bridge chip: 133MB/sec of access to
CPU and RAM, with the same 133MB/sec shared by all inter-PCI transport.
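
Rough arithmetic on that (just a sketch, treating 133MB/sec as the
theoretical peak of a 32-bit/33MHz bus, ignoring arbitration and descriptor
overhead, and assuming every routed byte crosses the bus twice: card->RAM
on receive and RAM->card on transmit):

/* shared-PCI-bus load for a software router with eight 100 Mb/s ports */
#include <stdio.h>

int main(void)
{
    const double pci_peak = 33.33e6 * 4 / 1e6;   /* 32-bit @ 33 MHz ~= 133 MB/s */
    const double port     = 100e6 / 8 / 1e6;     /* 100 Mb/s = 12.5 MB/s        */

    double two_port = 2 * port;        /* one stream between two ports: in + out */
    double worst    = 8 * port * 2;    /* all 8 ports at line rate, both ways    */

    printf("PCI peak:                 %.0f MB/s\n", pci_peak);
    printf("Two ports at line speed:  %.0f MB/s\n", two_port);
    printf("All 8 ports, worst case:  %.0f MB/s\n", worst);
    return 0;
}

So the "line speed between two networks at a time" case he described is
only about 25MB/sec of bus traffic and fits easily; it's the
all-ports-blasting worst case (~200MB/sec of DMA) that would swamp a
single bridge.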

> } I imagine a Cisco of _equal price_ wouldn't even come close to the
> } throughput I'm going to do.  I could be wrong, of course.
> 
> When I was looking for a router to support a handful of 100 Mb/s ports,
> I came to the conclusion that it would be a lot cheaper to build it with
> a PC rather than buying a Cisco with enough grunt.  On the low end, a
> Cisco solution is more reasonably priced and has fewer pieces to break,
> and the PC solution runs out of gas on the high end.

That's why I recommended multiple PCs, for redundancy and scalability.
Any time you buy weird stuff (like the mobo he wants), you pay a high
price and run the risk of obscurity on the support side. Unless he wants
to buy one of those boards for each of the core arch developers, he won't
have much luck with patches, etc.

Mainstream stuff is well tested, easy to replace, and cheap to acquire.

Kevin




