From owner-freebsd-hackers Thu Dec 7 12:07:56 1995
Return-Path: owner-hackers
Received: (from root@localhost) by freefall.freebsd.org (8.7.3/8.7.3)
	id MAA12824 for hackers-outgoing; Thu, 7 Dec 1995 12:07:56 -0800 (PST)
Received: from dream.demos.su (dream.demos.su [194.87.1.2]) by
	freefall.freebsd.org (8.7.3/8.7.3) with SMTP id MAA12798 for ;
	Thu, 7 Dec 1995 12:07:39 -0800 (PST)
Received: by dream.demos.su id XAA01214; (8.6.8/D) Thu, 7 Dec 1995 23:06:10 +0300
To: Luigi Rizzo, Terry Lambert
Cc: hackers@FreeBSD.ORG
References: <199512061932.UAA18137@labinfo.iet.unipi.it>
In-Reply-To: <199512061932.UAA18137@labinfo.iet.unipi.it>; from Luigi Rizzo
	at Wed, 6 Dec 1995 20:32:28 +0100 (MET)
Message-ID:
Organization: Demos, Moscow, Russia
Date: Thu, 7 Dec 1995 23:06:10 +0300
X-Mailer: Mail/@ [v2.22 FreeBSD]
From: apg@demos.net (Paul Antonov)
X-NCC-RegID: su.demos
Subject: Queueing (was: Re: How long are queues on a typical router?)
Lines: 50
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: owner-hackers@FreeBSD.ORG
Precedence: bulk

In message <199512061932.UAA18137@labinfo.iet.unipi.it> Luigi Rizzo writes:
>I know. but a colleague here said that Ciscos (at least some models)
>come with a default "pool size" of 40 slots (whatever is a slot) and he
>usually brings it up to 330.

In general, setting a proper queueing policy for an overloaded line is a
big challenge. Here in Europe we often have such badly congested lines
that without a proper queueing policy they wouldn't work at all.

From my experience, here are the main points:

1. Never make large queues - TCP adapts to packet drops better than to a
large RTT, and even _minor_ packet loss is acceptable (if your traffic
exceeds the bandwidth of the line, drops will occur anyway, and plain
drops are much better than saturation).

2.
Use a technique called "custom queueing", where you assign different
queues to different types of traffic, limiting both the number of packets
in a queue and the total byte length of a particular queue (with the
latter you get a slightly TDM-like effect). This will get your
interactive traffic through reasonably fast and will protect you from
nasty things like excessive pings. (More advanced setups separate
"established" TCP flows and TCP startup packets into different queues,
so SYNs etc. won't be lost.) If anybody is interested, I can send some
complicated examples ...

3. (For Ciscos) If you have excessive traffic, increase the _input_
queue lengths on fast interfaces like Ethernet, FDDI and HSSI to, say,
300. (With the default values you will sometimes see bursts of input
packet drops - that's funny :)

Cisco recently introduced a new technique called "weighted fair
queueing" that keeps track of transit TCP connections, sorts them into
"high-traffic" and "low-traffic" flows, and then gives the "low-traffic"
streams better priority. But it still doesn't work well for
over-congested lines, because there is no way to do any tuning and to
assign relative priorities and limits to traffic streams manually
(depending on TOS etc.).

>nor i can understand where the 20% comes from, how it relates to the
>loss rate measured by a (20-minutes long) sequence of pings, and if
>it is reasonable that this is a steady-state situation.

I think that we're working with lines overloaded by 200-300-400%, and
everything works relatively well (by "overload" I mean the sum of the
customers' channel bandwidths compared to the bandwidth of our
international lines).

PS. It's not quite a FreeBSD issue, but since many ISP folks are here .. ;)
--
Paul
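[Editor's note] The "custom queueing" idea in point 2 - one queue per
traffic class, a packet limit per queue, and a byte-count round-robin
between queues - can be sketched roughly as follows. This is a minimal
model, not Cisco's implementation; the class names, packet limits, and
byte quanta are assumptions chosen for illustration.

```python
from collections import deque

class CustomQueueing:
    """Rough model of byte-count round-robin "custom queueing":
    each traffic class gets its own bounded queue, and the scheduler
    drains up to a per-class byte quantum each round, giving a
    TDM-like split of the line's bandwidth between classes."""

    def __init__(self, classes):
        # classes: {name: (max_packets, byte_quantum)}
        self.queues = {name: deque() for name in classes}
        self.limits = classes

    def enqueue(self, cls, size):
        q = self.queues[cls]
        max_packets, _ = self.limits[cls]
        if len(q) >= max_packets:
            return False          # tail drop: this class's queue is full
        q.append(size)
        return True

    def service_round(self):
        """One round-robin pass; returns bytes sent per class."""
        sent = {}
        for cls, q in self.queues.items():
            _, quantum = self.limits[cls]
            bytes_out = 0
            # drain packets until this class has used up its byte quantum
            while q and bytes_out < quantum:
                bytes_out += q.popleft()
            sent[cls] = bytes_out
        return sent

# hypothetical setup: short queue for interactive traffic, longer for bulk
cq = CustomQueueing({"interactive": (20, 1500), "bulk": (40, 3000)})
for _ in range(10):
    cq.enqueue("interactive", 64)   # small telnet-sized packets
    cq.enqueue("bulk", 1500)        # full-sized ftp packets
print(cq.service_round())
```

The small per-class packet limits also model point 1: once a class's
queue is full, new packets are dropped rather than queued behind a long
backlog, so TCP backs off instead of seeing a huge RTT.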
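[Editor's note] The weighted fair-queueing behaviour described above -
track per-flow traffic, label flows "high-traffic" or "low-traffic", and
serve the low-traffic flows first - can be sketched like this. It is a
toy classifier, not Cisco's algorithm; the 10 kB cutoff and the 4-tuple
flow key are assumptions for illustration.

```python
from collections import defaultdict

HEAVY_BYTES = 10_000  # hypothetical cutoff between "low" and "high" traffic

class FairishScheduler:
    """Count bytes per flow; when draining, send packets belonging to
    light flows before packets belonging to heavy flows."""

    def __init__(self):
        # (src, dst, sport, dport) -> total bytes seen for that flow
        self.flow_bytes = defaultdict(int)
        self.pending = []  # (flow, size) packets awaiting service

    def enqueue(self, flow, size):
        self.flow_bytes[flow] += size
        self.pending.append((flow, size))

    def dequeue_order(self):
        """Pending packets, light flows first (stable within each group)."""
        return sorted(self.pending,
                      key=lambda p: self.flow_bytes[p[0]] >= HEAVY_BYTES)

sched = FairishScheduler()
telnet = ("10.0.0.1", "10.0.0.2", 1023, 23)
ftp = ("10.0.0.3", "10.0.0.4", 1024, 20)
sched.enqueue(telnet, 64)
for _ in range(20):
    sched.enqueue(ftp, 1460)   # ~29 kB: this flow becomes "high-traffic"
sched.enqueue(telnet, 64)
order = sched.dequeue_order()
print(order[0][0], order[1][0])  # both telnet packets come out first
```

Note what this sketch cannot do, which is exactly Paul's complaint: the
cutoff and the two-way split are fixed, so an operator cannot assign
relative priorities or limits per stream (by TOS etc.) by hand.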