Date:      Thu, 25 Jun 1998 22:33:33 -0600 (MDT)
From:      Atipa <freebsd@atipa.com>
To:        Chris Dillon <cdillon@wolves.k12.mo.us>
Cc:        hackers@FreeBSD.ORG
Subject:   Re: Will 8 Intel EtherExpress PRO 10/100's be a problem?
Message-ID:  <Pine.BSF.3.96.980625222607.24370A-100000@altrox.atipa.com>
In-Reply-To: <Pine.BSF.3.96.980625222746.12068B-100000@duey.hs.wolves.k12.mo.us>


> I really hope -hackers is the best place for this... I didn't want to
> crosspost.
> 
> Within the next few months, I will need to set up a router for our
> internal network, tying together 7 networks, with some room to grow. I
> plan on buying a rather expensive chassis from Industrial Computer Source.
> It has an interesting partially-passive backplane with a PII-233 or faster
> and its chipset (LX or BX, I believe) mounted on the backplane itself,
> everything else on a daughtercard, and 9 PCI/8 ISA slots. Something like
> the model 7520K9-44H-B4 with redundant power supplies.

Cool.

> Basically my questions are:  
> 
> 1) Will there be any problems with using three or more host-to-PCI
> bridges? 

Maybe not in the kernel, but I'd start to worry about saturating your
buses. You are really bumping up against some I/O bottlenecks in my
estimation.
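
Back-of-the-envelope, assuming a single shared 32-bit/33 MHz PCI segment
and every port busy at once:

    8 x 100 Mbit/s, both directions      ~800 Mbit/s in + ~800 Mbit/s out
    each forwarded packet crosses PCI
    twice (card->RAM, then RAM->card)    ~1.6 Gbit/s of PCI transfers
    32-bit/33 MHz PCI theoretical peak   132 MB/s = ~1.05 Gbit/s (sustained
                                         throughput is a good deal less)

So a fully loaded box is already past what one PCI segment can move, and
the extra host-to-PCI bridges only buy you something if the heavy flows
really are split across them.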

> 2) Will there be any problems using up to 8 Intel EtherExpress PRO
> 10/100's?  If so, can I use a combination of those and some DEC
> 21[0,1]4[0,1] cards?

If the answer to question #1 is "No", then the same should be true for
question #2.
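
For what it's worth, those are the fxp and de drivers, and for PCI cards
one kernel config line per driver covers however many the probe turns up.
Sketching from a 2.2.x-style GENERIC (double-check against your own
config):

    controller  pci0
    device      fxp0    # Intel EtherExpress PRO/100B
    device      de0     # DEC 21040/21041/21140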

> 3) If I ever end up using natd for all of this, would there be any
> problems with it servicing those 7 networks (probably max 100 hosts per
> network)?

Dunno. Never used natd, but I would not _expect_ any difficulties.
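
From a quick read of natd(8), though, the basic recipe is just the daemon
plus one divert rule on the outside interface. A sketch, assuming fxp0
faces the T1 and a kernel built with IPFIREWALL and IPDIVERT:

    natd -interface fxp0
    ipfw add divert natd all from any to any via fxp0

The 7 inside networks route past natd untouched; only traffic crossing
fxp0 gets diverted, so the thing to watch is the cost of bouncing every
translated packet through a userland process, not the number of networks.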
 
> I initially thought of just getting a nice ATX rackmount case and a nice
> ASUS motherboard and using some of those ZNYX 4-port Fast Ethernet cards.
> I like the above idea better for several reasons: support for the Intel
> cards is apparently better, and replacing a bad NIC would be simple and
> inexpensive.  If I DO end up going the ZNYX route, are there any known
> problems with those 4-port cards?  I'd need two of them, of course, and
> the motherboard would most likely have an Intel card built into it as
> well.  Maybe I'll even eventually throw an ETInc sync serial card in
> there for my T1 and use our Cisco 2514 elsewhere.

Yow. I think you should diversify your services and spread the I/O and
interfaces over a couple of machines. You really don't want to put all
your eggs in one basket. Smaller, more digestible chunks would mean
cheaper hardware (to the point that your net cost would probably be
lower), failures that are less disastrous, and fewer bottlenecks tied to
the architecture (PCI, RAM, disk I/O, etc.).
 
> Other options I would have are either an 8-port or larger Cisco router
> (ugh, expensive), or a 3Com gigabit layer-3 IP switch (THAT would be
> nice, but the price tag is in the 5-digit range).  I would MUCH rather
> use a very nice FreeBSD system for this job.

Or two? :)

> By the way, anyone know of any place cheaper than ICS for the components I
> need?  Even just someplace that sells good ATX rackmount cases and power
> supplies (Jinco maybe)?

www.atipa.com :)

Regards,
Kevin




