From owner-freebsd-hardware Thu Jul 30 22:51:43 1998
Return-Path:
Received: (from majordom@localhost) by hub.freebsd.org (8.8.8/8.8.8)
	id WAA06060 for freebsd-hardware-outgoing;
	Thu, 30 Jul 1998 22:51:43 -0700 (PDT)
	(envelope-from owner-freebsd-hardware@FreeBSD.ORG)
Received: from cyclone.wspout.com (cyclone.waterspout.com [206.230.5.48])
	by hub.freebsd.org (8.8.8/8.8.8) with ESMTP id WAA06055
	for ; Thu, 30 Jul 1998 22:51:41 -0700 (PDT)
	(envelope-from csg@waterspout.com)
Received: from tsunami.waterspout.com (tsunami.waterspout.com [199.233.104.138])
	by cyclone.wspout.com (8.8.7/8.8.4) with ESMTP id AAA04139;
	Fri, 31 Jul 1998 00:51:37 -0500 (EST)
Received: from tsunami.waterspout.com (localhost [127.0.0.1])
	by tsunami.waterspout.com (8.9.1/8.9.1) with ESMTP id AAA13188;
	Fri, 31 Jul 1998 00:51:37 -0500 (EST)
Message-Id: <199807310551.AAA13188@tsunami.waterspout.com>
To: Richard Archer
cc: freebsd-hardware@FreeBSD.ORG
Subject: Re: Support for passive backplane chassis?
In-reply-to: Your message of "Fri, 31 Jul 1998 13:51:24 +1000."
Date: Fri, 31 Jul 1998 00:51:37 -0500
From: "C. Stephen Gunn"
Sender: owner-freebsd-hardware@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

In message , Richard Archer writes:
>I am thinking of using a passive backplane system with 16 PCI slots.
>This would allow each router to handle up to 64 ethernet segments.
>But I can't find much information about how these interact with FreeBSD.

Richard,

This would scare the heck out of me. I use a FreeBSD box at my day job
to route between 5 Ethernet interfaces. While it's a fast box and it all
works fine, I don't want to think about the bandwidth-aggregation
problems you might have with 64 ethernet cards on one machine.

At that level you're not looking for a CPU to make decisions on the
packets; you want a switch. I would check out Lucent's Cajun switch, or
some of the nicer Cisco 10/100 switches that can take a route processor.
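[Editorial sketch: for readers wondering what "a FreeBSD box routing
between 5 Ethernet interfaces" looks like in practice, the setup of
that era boils down to a few /etc/rc.conf lines. The interface names
(de0..de4, the DEC 21x4x driver common at the time) and addresses here
are hypothetical examples, not the author's actual configuration.]

```sh
# /etc/rc.conf -- hypothetical fragment for a small FreeBSD router
gateway_enable="YES"        # turn on IP forwarding between interfaces at boot
network_interfaces="de0 de1 de2 de3 de4 lo0"
ifconfig_de0="inet 192.168.0.1 netmask 255.255.255.0"
ifconfig_de1="inet 192.168.1.1 netmask 255.255.255.0"
ifconfig_de2="inet 192.168.2.1 netmask 255.255.255.0"
ifconfig_de3="inet 192.168.3.1 netmask 255.255.255.0"
ifconfig_de4="inet 192.168.4.1 netmask 255.255.255.0"
```

(At runtime, forwarding can also be toggled with
`sysctl net.inet.ip.forwarding=1`.)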
The Lucent one claims to be 10/100 on lots of ports (140 or so) and to
provide Layer-3 switching (basically routing) in hardware, at wire
speed. While you're looking at $25K or so, racks of BSD machines aren't
free either.

Don't get me wrong here, FreeBSD is great, but PCI isn't going to handle
what you want. At least not at high saturation levels for each subnet.

Just wondering, how does this building hook to the rest of the universe?

- Steve

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hardware" in the body of the message
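[Editorial note: the claim that PCI can't handle 64 saturated segments
follows from back-of-envelope arithmetic. Assuming a conventional
32-bit / 33 MHz PCI bus (the common 1998 configuration; a passive
backplane may bridge multiple bus segments, but the host connection is
similarly constrained), the theoretical peak is about 1 Gbit/s shared,
while 64 Fast Ethernet segments can offer 6.4 Gbit/s:]

```python
# Back-of-envelope check: offered load vs. PCI bus capacity.
PCI_BUS_BITS = 32                        # conventional PCI data width
PCI_CLOCK_HZ = 33_000_000                # 33 MHz bus clock
pci_peak_bps = PCI_BUS_BITS * PCI_CLOCK_HZ   # ~1.056 Gbit/s theoretical peak

SEGMENTS = 64
SEGMENT_BPS = 100_000_000                # 100 Mbit/s Fast Ethernet per segment
aggregate_bps = SEGMENTS * SEGMENT_BPS   # 6.4 Gbit/s aggregate offered load

oversubscription = aggregate_bps / pci_peak_bps
print(round(oversubscription, 1))        # -> 6.1 (roughly 6x oversubscribed)
```

And that peak is optimistic: real PCI throughput is lower once
arbitration and transaction overhead are accounted for, and every
routed packet crosses the bus twice (in and out).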