Date:      Sun, 15 Jul 2001 12:29:48 -0700 (PDT)
From:      Matt Dillon <dillon@earth.backplane.com>
To:        Julian Elischer <julian@elischer.org>
Cc:        Leo Bicknell <bicknell@ufp.org>, Drew Eckhardt <drew@PoohSticks.ORG>, hackers@FreeBSD.ORG
Subject:   Re: Network performance tuning.
Message-ID:  <200107151929.f6FJTme08965@earth.backplane.com>
References:  <200107130128.f6D1SFE59148@earth.backplane.com> <200107130217.f6D2HET67695@revolt.poohsticks.org> <20010712223042.A77503@ussenterprise.ufp.org> <200107131708.f6DH8ve65071@earth.backplane.com> <3B515097.6551A530@elischer.org>


:Now we add adjustable queue sizes... and suddenly we are overflowing the
:intermediate queue and dropping packets.  Since we don't have SACK we are
:resending lots of data and dropping back the window size at regular
:intervals, so it is possible that under some situations the adjustable
:buffer size may result in WORSE throughput.
:That brings up one thing I never liked about the current TCP, which is
:that we need to keep testing the upper window size to ensure that we
:notice if the bandwidth increases.  Unfortunately the only way we can do
:this is by increasing the window size until we lose a packet (again).
:
:There was an interesting paper that explored loss-avoidance techniques.
:These included noticing the increased latency that can occur when an
:intermediate node starts to become overloaded.  Unfortunately, usually
:we are not the ones overloading it, so our backing off doesn't help a
:lot in many cases.  I did some work at Whistle trying to predict and
:control remote congestion, but it was mostly useful when the slowest
:link was your local loop and didn't help much if the link was farther
:away.
:Still, it did allow interactive sessions to run in parallel with bulk
:sessions and still get reasonable reaction times.  Basically I metered
:out the ACKs going the other way (out) in order to minimise the
:incoming queue size at the remote end of the incoming link. :-)
:
:This is all getting a bit far from the original topic, but I do worry
:that we may increase our packet loss with variable buffers and thus
:reduce throughput in the cases where the fixed buffer was getting 80%
:or so of the theoretical throughput.
:
:julian

    Well, it can't be worse than it is now... right now it increases the
    window size until it hits the sendspace limit or hits packet loss.

    I tried both mechanisms... checking for the bandwidth to plateau
    while increasing the window size, which didn't work very well,
    and looking for the increased latency, which worked quite nicely.
    When decreasing the window size, checking for the latency to bottom
    out didn't work very well, but checking for the bandwidth to start
    to drop did.  The algorithm as posted is still not very stable - I had
    to use 5% hysteresis to get anything approaching a reasonable result,
    but it shouldn't go off into the weeds either (I hope).
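
    In rough terms the rule works like this (a simplified sketch only,
    not the posted patch; the names, the rtt/bandwidth sampling, and the
    step sizes are invented for illustration, with the 5% hysteresis
    taken from the text above):

	/*
	 * Hypothetical sketch of the grow/shrink rule with 5% hysteresis.
	 * Grow the window until latency rises noticeably above its floor,
	 * shrink it until measured bandwidth starts to fall off, then
	 * probe upward again.
	 */
	#include <stdint.h>

	#define HYSTERESIS_PCT	5

	struct tune_state {
		uint32_t window;	/* current window (bytes) */
		uint32_t base_rtt;	/* lowest rtt seen (ms) */
		uint32_t prev_bw;	/* bandwidth at last sample (bytes/sec) */
		int	 growing;	/* nonzero while probing upward */
	};

	static void
	adjust_window(struct tune_state *ts, uint32_t rtt, uint32_t bw,
	    uint32_t step, uint32_t win_min, uint32_t win_max)
	{
		if (rtt < ts->base_rtt)
			ts->base_rtt = rtt;

		if (ts->growing) {
			/* Stop growing once latency rises 5% above its floor. */
			if (rtt > ts->base_rtt + ts->base_rtt * HYSTERESIS_PCT / 100)
				ts->growing = 0;
			else if (ts->window + step <= win_max)
				ts->window += step;
		} else {
			/* Probe upward again once bandwidth drops 5%. */
			if (bw + bw * HYSTERESIS_PCT / 100 < ts->prev_bw)
				ts->growing = 1;
			else if (ts->window >= win_min + step)
				ts->window -= step;
		}
		ts->prev_bw = bw;
	}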

    The method definitely works best when the constriction is near either
    end of the pipe, i.e. your DSL line, T1, or modem, or the
    destination's DSL line, T1, or modem, or whatever.  When the
    constriction is in the middle of the network I completely agree with
    you... the algorithm breaks down.  You can still figure it out
    statistically, but it takes far too long to remove the noise from the
    measurements.

    On the other hand, if the routers were able to insert a feedback
    metric in the packet (e.g. like ttl but measuring something else),
    I think the middle-of-the-network problem could be solved.
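
    Purely as an illustration of that idea (hypothetical; no such field
    exists in the IP header, though ECN's congestion-experienced mark is
    a related, single-bit form of router feedback): suppose each router
    on the path bumped a per-packet congestion counter the way it
    decrements ttl, and the receiver echoed it back.  The sender could
    then back off in proportion to the number of congested hops instead
    of waiting for a drop:

	/* Invented header layout, for illustration only -- not struct ip. */
	#include <stdint.h>

	struct fb_hdr {
		uint8_t ttl;
		uint8_t congestion;	/* bumped by each congested router */
	};

	static uint32_t
	scale_window(uint32_t window, const struct fb_hdr *hdr)
	{
		/* Back the window off by 1/8 per congested hop reported. */
		uint32_t hops = hdr->congestion;

		if (hops > 7)
			hops = 7;
		return (window - (window * hops) / 8);
	}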

						-Matt






