From owner-freebsd-questions Tue Feb 27 11:37:51 1996
Return-Path: owner-questions
Received: (from root@localhost) by freefall.freebsd.org (8.7.3/8.7.3) id LAA16559 for questions-outgoing; Tue, 27 Feb 1996 11:37:51 -0800 (PST)
Received: from pelican.com (pelican.com [206.16.90.21]) by freefall.freebsd.org (8.7.3/8.7.3) with SMTP id LAA16552 for ; Tue, 27 Feb 1996 11:37:47 -0800 (PST)
Received: by pelican.com (Smail3.1.29.1 #10) id m0trVDP-0000SMC; Tue, 27 Feb 96 11:37 PST
Message-Id:
Date: Tue, 27 Feb 96 11:37 PST
From: pete@pelican.com (Pete Carah)
To: questions@freebsd.org
Subject: Re: Telnet Slowdown (fwd)
In-Reply-To: <199602230045.LAA20707@genesis.atrad.adelaide.edu.au>
Sender: owner-questions@freebsd.org
Precedence: bulk

In article <199602230045.LAA20707@genesis.atrad.adelaide.edu.au> msmith writes:
>
>Stephen Hovey stands accused of saying:
>> >
>> > > I'm not out to start a fight - I think FreeBSD has many wonderful features
>> > > and I use it for a few things even though the tcp/ip has troubles.
>> >
>> > I have seen no evidence to support this claim. Pony up.
>>
>> The basic symptom is a stall - as though the sockets weren't any good
>> anymore, without any error message.
>>
>> If my Trumpet users do not have Van Jacobson compression turned on, for
>> instance, they can connect to my FreeBSD news server but cannot
>> successfully pull over the entire active headers - it stops after a
>> couple of records.

If the Trumpet users are dialing in to the Annex and not directly to the FreeBSD machine, there is *NO DIFFERENCE* _except_for_timing_ in what the FreeBSD system sees.

You might see some similar problems talking to SGI systems on the same network. Their TCP implementation is the highest-performance I've seen on small workstations; it'll wipe out FreeBSD systems with small buffers on the ethernet card unless you set window sizes down... It works fine with the Elite Ultra or 8013 cards, though, with no mods to the configs.
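Setting the window down for a given path can be done with per-route metrics rather than system-wide; a sketch using route(8)'s -mtu/-sendpipe/-recvpipe modifiers (the destination network and the values here are made up for illustration, and the command needs root):

```shell
# Sketch only: 10.0.0.0/24 and the metric values are hypothetical.
# -sendpipe/-recvpipe cap the socket buffers (and hence the offered
# TCP window) for connections using this route; -mtu lowers the MTU.
route change -net 10.0.0.0 -netmask 255.255.255.0 \
      -mtu 576 -sendpipe 4096 -recvpipe 4096

# Check that the metrics took:
route -n get -net 10.0.0.0 -netmask 255.255.255.0
```

Same idea as what I do on the 56k link below, just scoped to one route.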
I do set the window and MTU down on my 56k serial link to get better interactive performance during the newsfeed (see 'man route').

The other problems you comment on could ALL be explained by your using ethernet cards with 2K buffers, or cards using ISA DMA. If so, get good cards... (I use SMC Elite Ultras for all ISA applications and the 21040/1 SMC card for all PCI ones. I have no such problems, and I haven't even upped NMBCLUSTERS (yet).)

Remember that ftp.cdrom.com is one of the busiest servers on the net next to wuarchive and simtel, and it runs a (very greatly increased config) FreeBSD... Maybe David can post the config size parms again; I normally use maxusers 200, open_max=child_max=200, and ttyhog=16384. To that (per DG) I'm adding nmbclusters=4096; is there anything else?

We use PM2e's and Wellfleet routers at both of my ISPs... There are various winsocks in use and 2 or 3 different PPPs on Macs. Even FreeBSD's pppd works fine WITH the TCP options enabled into the Livingston, and we don't have the latest firmware (ours won't route partial class-C groups :-().

The only router problem we have at the moment is getting the Wellfleets expanded enough to run BGP4 to the net at large; they are *very* finicky about the RAM you add... At least one ISP is using a FreeBSD system with multiple ET cards as a full-BGP router with no such problems...

I'm using FreeBSD systems (the secondary web server and another) as ethernet switch/routers in-house, just because the router doesn't have enough ports (we have a couple of in-building customers on ethernet connections); they work fine too, using Elite Ultra cards for the extra connections (and PCI SMC cards for the backbone).

Michael is also running more than one FreeBSD system, and there are LOTS of small ISPs using FreeBSD for servers, and you are the first I've heard complain this way since 1.1.5.1 days, when the system would run out of mbufs all by itself after being up for a month :-) (so would SGI before 4.0.5; this isn't a unique problem).
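For reference, those size parameters go in the kernel config file and take effect at the next build. A sketch of the relevant lines for a 2.1-era config, as I recall the option names (treat them as assumptions and check them against LINT; TTYHOG in particular may have to be changed in sys/tty.h rather than here):

```
# Sketch of the size parms discussed above; verify names against LINT.
maxusers	200
options		"NMBCLUSTERS=4096"	# network buffer clusters (per DG)
options		"CHILD_MAX=200"		# max processes per user
options		"OPEN_MAX=200"		# max open files per process
```

Then config, build, and boot the new kernel as usual.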
As far as I know, all mbuf leaks are fixed in 2.1 (DG may know otherwise?).

I see buffnet is connected via Sprint on a T1... Their routings have had problems over the last month or so, for a day or two at a time; this could have something to do with these problems too. (There were two simultaneous cable cuts in the link south from NYC, and the resulting routing problems lasted about a week...) It seems smooth today.

>> If I ftp to ftp.cdrom.com using a FreeBSD box on my ethernet ring to theirs,
>> I can connect OK, and things seem OK, unless I cd and ls too many times.
>> I can maybe do 10 or 15 commands, and then it stalls. I can issue ls and
>> it returns back like there are no files there or something. But I can cd
>> and ls till I'm blue with one of my SCOs connected to that same ftp server.

I have no problem with this, and I run an ISP with 6 FreeBSD servers, all with multiple virtual hosts (and another ISP with 5); I ftp a lot to various sites including cdrom.com, though now that I've mirrored 2.1R and the sup, I usually use my own mirror (also on a FreeBSD system :-). Mirror has a problem, but that one is known (memory leaks in perl...). The only TCP hang I ever see is a known one with rlogin/rlogind, which can sometimes hang when you hit ^C with lots of output queued.

>Again, this isn't a generally-observed symptom.
>If you're in a position to reduce the problem to the fewest required parts
>and document it in a repeatable fashion, I'm certain that something could
>be done to identify and resolve the problem.
>
>Meantime, nobody else can help because we don't see these problems.

Note again - LOTS of small ISPs use FreeBSD servers. It is famously hard to get reproducible trouble reports (amazing how things don't change - since the early 360 days, the universal operator complaint on failure has been 'lost console')...
Yes, I've been in the software business entirely too long :-)

Even the motherboards can matter if you have bus-master cards, though that tends to crash the system on failure, and you aren't complaining about *that* :-) The in-house systems at my place are on two different motherboards: ASUS Triton/Pentium and IBM Blue Lightning (non-FPU 486 clones). Both work great once we got rid of the ULSI coprocessors (which are a documented problem); now the BL boards have no coprocessor and run awk real slow, but who cares...

-- Pete