Date:      Sun, 15 Jul 2001 01:13:11 -0700
From:      Julian Elischer <julian@elischer.org>
To:        Matt Dillon <dillon@earth.backplane.com>
Cc:        Leo Bicknell <bicknell@ufp.org>, Drew Eckhardt <drew@PoohSticks.ORG>, hackers@FreeBSD.ORG
Subject:   Re: Network performance tuning.
Message-ID:  <3B515097.6551A530@elischer.org>
References:  <200107130128.f6D1SFE59148@earth.backplane.com> <200107130217.f6D2HET67695@revolt.poohsticks.org> <20010712223042.A77503@ussenterprise.ufp.org> <200107131708.f6DH8ve65071@earth.backplane.com>

Matt Dillon wrote:

> 
>     I took a look at the paper Leo pointed out to me:
> 
>         http://www.psc.edu/networking/auto.html
> 
>     It's a very interesting paper, and the graphs do in fact show the type
>     of instability that can occur.  The code is a mess, though.  I think it
>     is possible to generate a much less noisy patch set by taking a higher
>     level approach to solving the problem.

There are a couple of problems that can occur. Imagine the following scenario:

A machine (with a fixed buffer size) is transmitting at N bps on average,
but occasionally cannot send because the window is less than that needed
for continuous sending. Because of that, an intermediate queue does not
overflow.

Now we add adjustable buffer sizes, and suddenly we are overflowing the
intermediate queue and dropping packets. Since we don't have SACK we are
resending lots of data and dropping back the window size at regular
intervals. Thus it is possible that in some situations the adjustable
buffer size may result in WORSE throughput.
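
To put rough numbers on that (all the values below are invented, just to
illustrate the arithmetic, not measurements of any real path):

/*
 * Back-of-the-envelope sketch of the scenario above.  The numbers are
 * made up; "router_queue" is the buffer at the bottleneck hop, which
 * the end host can't actually see.
 */
#include <stdio.h>

int
main(void)
{
	long bottleneck_bps = 1500000;		/* 1.5 Mbps bottleneck link */
	double rtt = 0.100;			/* 100 ms round trip */
	long bdp = (long)(bottleneck_bps / 8 * rtt);	/* ~18750 bytes in flight */
	long router_queue = 16 * 1024;		/* bottleneck can buffer 16 KB */

	long fixed_win = 16 * 1024;		/* old fixed sockbuf: under the BDP */
	long auto_win = 64 * 1024;		/* auto-tuned sockbuf: well over it */
	long standing;

	/* Whatever the window holds beyond the BDP sits in the router's queue. */
	standing = fixed_win - bdp;
	printf("fixed window: standing queue %ld bytes -> %s\n",
	    standing > 0 ? standing : 0,
	    standing > router_queue ? "DROPS" : "no drops");

	standing = auto_win - bdp;
	printf("auto  window: standing queue %ld bytes -> %s\n",
	    standing > 0 ? standing : 0,
	    standing > router_queue ? "DROPS" : "no drops");
	return (0);
}

With the small fixed buffer the sender can never quite fill the pipe, so
the bottleneck queue stays short; with the auto-grown buffer the excess
sits in the bottleneck queue until it overflows and packets are dropped.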
That brings up one thing I never liked about current TCP, which is that we
need to keep probing the upper window size to ensure that we notice if the
available bandwidth increases. Unfortunately the only way we can do this is
by increasing the window size until we lose a packet (again).
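
In other words the steady state is the usual additive-increase /
multiplicative-decrease cycle, something like the sketch below (simplified
for illustration, not the actual code in sys/netinet/tcp_input.c):

/*
 * Per-ACK form of the usual "grow by about one MSS per round trip"
 * rule: creep the window up until a drop says we went too far, then
 * cut back and start creeping again.
 */
void
probe_on_ack(unsigned long *cwnd, unsigned long mss)
{
	/* additive increase: roughly one MSS per RTT in aggregate */
	*cwnd += mss * mss / *cwnd;
}

void
probe_on_loss(unsigned long *cwnd, unsigned long mss)
{
	/* multiplicative decrease: halve, then resume probing */
	*cwnd /= 2;
	if (*cwnd < 2 * mss)
		*cwnd = 2 * mss;
}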

There was an interesting paper that explored loss-avoidance techniques.
These included noticing the increased latency that occurs when an
intermediate node starts to become overloaded. Unfortunately, we are
usually not the ones overloading it, so our backing off doesn't help a lot
in many cases. I did some work at Whistle trying to predict and control
remote congestion, but it was mostly useful when the slowest link was your
local loop and didn't help much if the bottleneck link was further away.
Still, it did allow interactive sessions to run in parallel with bulk
sessions and still get reasonable reaction times. Basically I metered out
the ACKs going the other way (out) in order to minimise the incoming queue
size at the remote end of the incoming link. :-)
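
The idea was roughly as follows (a sketch only; the names and structure
here are invented for illustration, not the actual Whistle code): space
the outgoing ACKs so that the data each one releases can drain down the
local loop before the next ACK goes out.

/*
 * Hold each outbound ACK long enough that the data it will release
 * from the far-end sender can drain the inbound local loop, so the
 * queue at the remote end of the link never builds up.
 */
struct ack_meter {
	long	loop_bps;	/* inbound local-loop speed */
	double	next_release;	/* earliest time the next ACK may go out */
};

/* Return how long to hold this ACK before forwarding it. */
double
ack_delay(struct ack_meter *m, long acked, double now)
{
	/* time the loop needs to drain the data this ACK releases */
	double spacing = (double)acked / (m->loop_bps / 8.0);
	double release;

	if (m->next_release < now)
		m->next_release = now;
	release = m->next_release;		/* when this ACK may go out */
	m->next_release = release + spacing;	/* slot for the following ACK */
	return (release - now);
}

The effect is that the far-end queue on the incoming link stays short, so
interactive traffic sharing the link doesn't sit behind a long bulk-transfer
queue.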

This is all getting a bit far from the original topic, but I do worry that
we may increase our packet loss with variable buffers and thus reduce
throughput in the cases where the fixed buffer was getting 80% or so of the
theoretical throughput.

julian


> 
>                                                 -Matt
> 

-- 
+------------------------------------+       ______ _  __
|   __--_|\  Julian Elischer         |       \     U \/ / hard at work in 
|  /       \ julian@elischer.org     +------>x   USA    \ a very strange
| (   OZ    )                                \___   ___ | country !
+- X_.---._/    presently in San Francisco       \_/   \\
          v

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message



