From owner-freebsd-questions Tue Dec 17 05:39:04 1996
Return-Path:
Received: (from root@localhost) by freefall.freebsd.org (8.8.4/8.8.4) id FAA23753 for questions-outgoing; Tue, 17 Dec 1996 05:39:04 -0800 (PST)
Received: from spoon.beta.com ([199.165.180.33]) by freefall.freebsd.org (8.8.4/8.8.4) with ESMTP id FAA23748 for ; Tue, 17 Dec 1996 05:39:00 -0800 (PST)
Received: from spoon.beta.com (localhost [127.0.0.1]) by spoon.beta.com (8.8.2/8.6.9) with ESMTP id IAA24038 for ; Tue, 17 Dec 1996 08:38:53 -0500 (EST)
Message-Id: <199612171338.IAA24038@spoon.beta.com>
To: questions@freebsd.org
Subject: Cyclades cards (hardware question)
Date: Tue, 17 Dec 1996 08:38:53 -0500
From: "Brian J. McGovern"
Sender: owner-questions@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

I want to thank everyone who recommended the async cards from Cyclades for high-density async communications. I've purchased two 32-port ISA cards to do some testing, and I'm pleased with what I'm seeing compared to other vendors.

However, when running 16 ports at 57600, I start getting buffer overruns in the neighborhood of about 1% of the total data size (i.e., I drop about 1K every 10 seconds, totalling about 1% of the file being transferred) while FTP'ing files across all 16 ports at the same time. Running top, the "interrupt" CPU time can approach 24% on a 586/133. The overruns seem to sync up with the disk writes from the FTP sessions.

Now, my thought is to switch to PCI cards with less density per card (two 16-port PCI cards, or possibly four 8-port PCI cards) to alleviate the problem. However, I don't want to drive the cost up by requiring more interface cards if it's not going to help the real problem, which may simply be that I shouldn't be hitting the hard disk while streaming these cards. At the same time, I'm thinking that if these cards do DMA to offload their data, the PCI bus will help immensely.
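[For reference, the overrun figures above work out roughly as follows. This is a back-of-the-envelope sketch, not from the original post; it assumes 8N1 framing, so 10 bits on the wire per payload byte, and takes "1K" as 1024 bytes.]

```python
# Back-of-the-envelope check of the reported overrun numbers.
# Assumptions (not stated in the post): 8N1 framing -> 10 bits/byte,
# the ~1K/10s drop figure is per port, and 1K = 1024 bytes.
BITS_PER_BYTE_8N1 = 10  # 1 start + 8 data + 1 stop bit

def port_byte_rate(baud):
    """Maximum payload bytes/second for one port at the given baud rate."""
    return baud / BITS_PER_BYTE_8N1

per_port = port_byte_rate(57600)       # 5760 bytes/s per port
aggregate = 16 * per_port              # ~92 KB/s across all 16 ports
loss_per_port = 1024 / 10              # ~102 bytes/s dropped
loss_pct = 100 * loss_per_port / per_port  # on the order of the ~1% reported

print(f"per port: {per_port:.0f} B/s, aggregate: {aggregate:.0f} B/s, "
      f"loss: ~{loss_pct:.1f}%")
```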
However, I worry that if I then double my port capacity to 32 per PC, I'll give back what I've gained and then some. Anyhow, I'd like to hear some serious insight from people who have worked with this type of density before. In the long run, since this is a project for Cisco Systems using FreeBSD, cost will be less of an issue than the fact that it works as promised.

To summarize: I'm looking to maximize async density on a PC to test an upcoming Cisco dial-router product. All ports must be able to stream at at least 57600 in order to drive 28.8, 33.6, and 56K modems. There may be an eventual need to drive the ports at 115.2K, but we can scale down the port density at that time. I need to know what people think we can load into a PC without killing it, and how they'd break the hardware down.

Thanks.
	-Brian
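[A rough bus-headroom comparison for the configurations mentioned above. This sketch is not from the original post; the ISA and PCI bandwidth figures are conventional ballpark numbers (ISA sustains on the order of 5 MB/s; 32-bit/33 MHz PCI peaks at 132 MB/s), and it again assumes 8N1 framing. The raw serial traffic is a tiny fraction of either bus, which suggests per-character interrupt overhead, not bus bandwidth, is the likely constraint.]

```python
# Compare aggregate serial throughput against rough bus bandwidth budgets.
# Assumptions: 8N1 framing (10 bits/byte); ISA sustained ~5 MB/s (ballpark),
# PCI 32-bit/33 MHz theoretical peak 132 MB/s.
def aggregate_bytes_per_sec(ports, baud):
    """Aggregate payload bytes/second for `ports` ports at `baud`."""
    return ports * baud / 10

ISA_BUDGET = 5_000_000      # conservative sustained ISA figure (assumption)
PCI_BUDGET = 132_000_000    # theoretical PCI peak

for ports, baud in [(16, 57600), (32, 57600), (32, 115200)]:
    need = aggregate_bytes_per_sec(ports, baud)
    print(f"{ports} ports @ {baud}: {need:.0f} B/s "
          f"({100 * need / ISA_BUDGET:.1f}% of ISA, "
          f"{100 * need / PCI_BUDGET:.2f}% of PCI)")
```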