Date:      Sun, 28 Jun 1998 14:29:28 -0500 (CDT)
From:      Chris Dillon <cdillon@wolves.k12.mo.us>
To:        Mike Tancsa <mike@sentex.net>
Cc:        hackers@FreeBSD.ORG
Subject:   Re: Will 8 Intel EtherExpress PRO 10/100's be a problem?
Message-ID:  <Pine.BSF.3.96.980628132323.22776A-100000@duey.hs.wolves.k12.mo.us>
In-Reply-To: <3595e10d.1180692826@mail.sentex.net>

On Sun, 28 Jun 1998, Mike Tancsa wrote:

> On Thu, 25 Jun 1998 23:32:35 -0500 (CDT), in sentex.lists.freebsd.misc
> you wrote:
> >Within the next few months, i will be needing to set up a router for our
> >internal network, tying together 7 networks, with some room to grow. I
> >plan on buying a rather expensive chassis from Industrial Computer source.
> 
> You could probably buy 3 plain-jane pentiums for the price of the
> fancy one you are talking about and not have to worry about cramming
> so many cards into one box and overloading the PCI bus.  If the boxes
> are going to act as routers and NAT machines, you really dont need
> that much horse power/RAM/HD space. 

Where did I mention how much RAM and HD space I was going to use? :-) I do
know that a router doesn't need an excess of either (well, RAM, sometimes).
I only planned on putting 64MB of ECC SDRAM in it.  It could
probably do with 16 or 32, but RAM is cheap these days.  I would even do
without an HD (without moving parts, that is), if I could.  I'm thinking
about using these flash based drives I've been seeing lately, if the price
is right.  Even PicoBSD on a floppy (or bootable ZIP disk?) might be an
option.  As for the horsepower, the more the better, since it would
increase my peak bandwidth capacity (assuming the bus isn't already
saturated, which it shouldn't be) and reduce latency. It is true that
going with something ultra-fast means a lot of heat, and reduced
reliability and life.

I do agree that multiple cheaper boxes might be better for removing a
single point of failure (i.e., if one box failed, the networks on that box
would be separated, but the rest would be communicating), but that would
require a few more NICs in total (not a big deal cost-wise) and a switch
to interconnect them all (now we're talking money; failing a switch, a
hub, or failing that, even more NICs for point-to-point links between
boxes). I'd be shoving each of these machines full of NICs for
interconnection on top of the NICs required to serve our networks,
filling up their relatively small PCI busses.  The total cost is back up
to where it was with the single high-capacity, high-reliability machine,
and the complexity is increased by several orders of magnitude. 

I'm going to take a quick guess at how I could use multiple boxes, just to
see if it would work for me.  Assuming I used common motherboards with 4
PCI slots, I would need one NIC for interconnection, leaving three slots
free for our internal networks.  So, I'd need three of these boxes and a
switch for the interconnection.  Four boxes if I want to grow.  The switch
just became a single point of failure, not to mention a 100Mbit/sec
bottleneck between any two boxes.  So, instead, I put two NICs in each
machine for interconnection, creating some kind of simple star or ring. 
This still creates a sort of bottleneck between boxes, but increases
availability significantly.  With two slots free, I'd now need four boxes
to meet my needs, five for growth.  Using boards with 5 PCI slots would
change this scenario a little bit, but would still pretty much stick me in
the same boat, especially if I grow.  Basically, I'm just buying a box
with 9 PCI slots.  If I grow past 8 networks (which I seriously doubt),
I'll buy another and I'll have "multiple" boxes to share the job.  :-) 
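The slot arithmetic above boils down to one formula: boxes = ceil(networks
/ (slots per box - slots burned on interconnect NICs)).  A quick sketch,
just to show the counts I worked out by hand (the function name and
numbers are mine, not anything standard):

```python
import math

def boxes_needed(networks, pci_slots, interconnect_nics):
    # Each box gives up some PCI slots to interconnect NICs;
    # whatever is left serves the internal networks.
    free_slots = pci_slots - interconnect_nics
    if free_slots <= 0:
        raise ValueError("no slots left for network NICs")
    return math.ceil(networks / free_slots)

# 4-slot boards, one NIC each to a shared switch (star):
print(boxes_needed(7, 4, 1))  # -> 3 boxes for our 7 networks
# 4-slot boards, two NICs each for a ring (no switch to fail):
print(boxes_needed(7, 4, 2))  # -> 4 boxes
print(boxes_needed(9, 4, 2))  # -> 5 boxes once growth pushes past 8 networks
```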

I do believe in splitting up work between multiple boxes when possible,
but in _this particular scenario_, it just doesn't seem ultimately
feasible. 

I do appreciate all of the comments people are giving me.  That was the
secondary reason I posted the question to the list (the first being
whether what I had originally planned would actually work), rather than
going in blindly. :-) All of the responses to this point have made me think twice
about one thing or another, or altered my direction slightly.  Your
response, Mike, along with Kevin's from Atipa, made me think about how I
actually _could_ use multiple boxes, which I basically just laid out
above. 


-- Chris Dillon - cdillon@wolves.k12.mo.us - cdillon@inter-linc.net
/* FreeBSD: The fastest and most stable server OS on the planet.
   For Intel x86 and compatibles (SPARC and Alpha under development)
   (http://www.freebsd.org)                                         */




To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message


