Date:      Mon, 29 Jun 1998 12:35:48 -0700
From:      Don Lewis <Don.Lewis@tsc.tdk.com>
To:        Chris Dillon <cdillon@wolves.k12.mo.us>, Ulf Zimmermann <ulf@Alameda.net>
Cc:        Atipa <freebsd@atipa.com>, hackers@FreeBSD.ORG
Subject:   Re: Will 8 Intel EtherExpress PRO 10/100's be a problem?
Message-ID:  <199806291935.MAA26508@salsa.gv.tsc.tdk.com>
In-Reply-To: Chris Dillon <cdillon@wolves.k12.mo.us> "Re: Will 8 Intel EtherExpress PRO 10/100's be a problem?" (Jun 26,  5:47pm)

On Jun 26,  5:47pm, Chris Dillon wrote:
} Subject: Re: Will 8 Intel EtherExpress PRO 10/100's be a problem?
} On Fri, 26 Jun 1998, Ulf Zimmermann wrote:
} 
} > On Fri, Jun 26, 1998 at 11:03:01AM -0500, Chris Dillon wrote:
} > > On Thu, 25 Jun 1998, Atipa wrote:

} > > I'm rather hoping that three 133MB/sec PCI busses won't have any trouble
} > > passing at max about 30MB/sec worth of data (10MB/sec per card, three
} > > cards per bus).  Theoretically even one PCI bus could handle all 8 of
} > > those cards.. _theoretically_... :-) 
} > 
} > Double that number; Full Duplex is what you usually use in routers now.
} > I also wouldn't say the single bus is the problem, but the main PCI bus and
} > the CPU will be a bottleneck.  You will definitely not be able to run 8
} > cards at full speed (8 x 10 MByte/sec x 2 (full duplex) = 160 MByte/sec).

You can only use Full Duplex if the port is connected directly to another
host or to a switch.  From the initial description it sounded like each
port would be connected to a number of other hosts through a hub, which
would require Half Duplex to be used.
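
Just to put numbers on the above, a back-of-the-envelope sketch in C
(the 8-card, 10 MB/s-per-port, and three-cards-per-bus figures come
from this thread; none of it is measured):

#include <stdio.h>

/*
 * Aggregate bandwidth for 8 cards at ~10 MB/s each, three cards
 * per 133 MB/s PCI bus, half duplex (hub) vs. full duplex (switch).
 */
int
main(void)
{
    const double port = 10.0;           /* MB/s per direction, ~100 Mb/s */
    const int ncards = 8, per_bus = 3;

    printf("aggregate, half duplex: %3.0f MB/s\n", ncards * port);
    printf("aggregate, full duplex: %3.0f MB/s\n", ncards * port * 2);
    printf("one bus, full duplex:   %3.0f of 133 MB/s\n",
        per_bus * port * 2);
    return (0);
}

Even at full duplex each individual bus carries at most 60 of its
133 MB/s, so the per-bus numbers aren't the worry.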

} Doh.. I knew that, but didn't put that in my calculation.  Anyway, I'm not
} needing full wire-speed from these things.  I think I'd be happy with
} 1/5th that. :-)  I'm expecting that if ftp.freebsd.org can do about
} 5MB/sec on average, along with thousands of FTP clients, without breaking
} a sweat on a PPro200, then a PII-350 or 400 should be able to do
} line-speed at least between two networks at a time.  If and when I do
} this, expect me to perform some benchmarks. :-)

With FTP clients, a sizeable percentage of the packets will be large
and will account for most of the bandwidth.  You may find yourself
running out of CPU if much of your bandwidth is used by small packets,
since there is a fixed amount of per-packet CPU overhead.  We found
that our Cisco 4000 can forward our normal traffic mix, which contains
many large packets, at wire speed (10 Mb/s), but it runs out of CPU
when it gets blasted with tinygrams.
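
To illustrate the per-packet overhead point, a toy calculation (the
50000 packets/sec budget is a made-up round number, not a measurement
of any particular box):

#include <stdio.h>

/*
 * With a fixed packets-per-second budget, throughput scales with
 * packet size, so tinygrams eat the CPU long before the wire fills.
 */
int
main(void)
{
    const double pps = 50000.0;             /* hypothetical CPU limit */
    const int sizes[] = { 64, 576, 1500 };  /* bytes per packet */
    int i;

    for (i = 0; i < 3; i++)
        printf("%4d-byte packets: %5.1f MB/s\n",
            sizes[i], pps * sizes[i] / 1e6);
    return (0);
}

At the same packets-per-second ceiling, 1500-byte packets move more
than twenty times the data that 64-byte tinygrams do.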

} As for the "main PCI bus" being the bottleneck, I'm really hoping they
} used three host-to-PCI bridges, and not a single host-to-PCI bridge and
} two PCI-to-PCI bridges.  Even if not, I could push about 100MB/sec across 
} the bus (assuming the CPU could push that), and that's more than enough
} for me.

I suspect that it only has one host-to-PCI bridge, since the silicon is
pretty common for that.  Supporting multiple host-to-PCI bridges would
either require a custom chipset with multiple bridges built in (which
would require a *lot* of pins), or would require bridge chips that can
arbitrate for access on the host side.  The latter would be difficult
to get to work because of the high speeds on the host side of the bridge.
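
If it really is one host-to-PCI bridge plus two PCI-to-PCI bridges,
everything the secondary buses move also crosses the primary bus.
A sketch of that case, reusing the assumed numbers from above:

#include <stdio.h>

/*
 * Single host bridge: the two secondary buses hang off PCI-to-PCI
 * bridges, so all of their traffic shares the primary 133 MB/s bus.
 */
int
main(void)
{
    const double primary = 133.0;   /* MB/s, 32-bit/33 MHz, theoretical */
    const double full = 160.0;      /* 8 ports, full duplex */
    const double half = 80.0;       /* 8 ports, half duplex */

    printf("full duplex: need %.0f, have %.0f MB/s -> short\n",
        full, primary);
    printf("half duplex: need %.0f, have %.0f MB/s -> fits\n",
        half, primary);
    return (0);
}

And real PCI sustains well below the theoretical 133 MB/s, so even
the half duplex case would be tight in practice.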

} I imagine a Cisco of _equal price_ wouldn't even come close to the
} throughput I'm going to do.  I could be wrong, of course.

When I was looking for a router to support a handful of 100 Mb/s ports,
I came to the conclusion that it would be a lot cheaper to build it with
a PC rather than buying a Cisco with enough grunt.  At the low end, a
Cisco solution is more reasonably priced and has fewer pieces to break;
at the high end, the PC solution runs out of gas.



