Date:      Sun, 5 Nov 2000 17:29:14 -0500 (EST)
From:      "Richard A. Steenbergen" <ras@e-gerbil.net>
To:        David Greenman <dg@root.com>
Cc:        freebsd-net@freebsd.org
Subject:   Re: tcp sendspace/recvspace
Message-ID:  <Pine.BSF.4.21.0011051715520.306-100000@overlord.e-gerbil.net>
In-Reply-To: <200011052202.OAA24207@implode.root.com>

On Sun, 5 Nov 2000, David Greenman wrote:

>    I've been messing around with the net.inet.tcp.sendspace and 
> net.inet.tcp.recvspace parameters on ftp.freesoftware.com and have found
> that there is a significant performance improvement when increasing these
> to 32768 bytes. Apparently there are enough systems out there with higher
> window maxes that it really does make a difference. By significant 
> improvement, I mean an average increase of about 20% in Mbps per user,
> and this was just the change over a 30 minute period with lots of connections
> still using the old 16K values.
>    Any objections to increasing the defaults in FreeBSD to 32K?

One of the projects I've been working on in my spare time is an
implementation of auto-tuning the socket buffers based on feedback from
the tcp congestion window. Remember that these numbers don't actually
allocate any memory and there are no pools; they merely set an allocation
limit. Any situation where an artificially small advertised TCP window,
based on a non-existent memory limitation, keeps the number of packets
allowed in flight below what the cwnd would permit is probably a bad thing
for performance, at least on high-latency, high-bandwidth connections.

The only time you will see memory actually allocated in these buffers is
during packet loss recovery, when data in flight is being buffered while
awaiting a retransmission. It is relatively straightforward to instead
set a fixed limit on the amount of memory that can be allocated for this
task, on a system-wide and per-user basis, and then intelligently share it
among the tcp connections in question. I believe this will be a much
better system in the long run.

BTW a blanket 32k in both directions, while not an outright bad idea, is
not optimal and probably wasteful. In most cases you can achieve the
increased throughput by setting the recv buffer higher without needing
to make the sendbuf match, and the numbers you're looking for are probably
closer to 65535 without rfc1323 window scaling, or at least 256k with it,
in order to get optimal throughput. You can obviously see the problems
coming from this. Among other things it's just plain stupid: not every
connection needs the memory, it just needs the potential for the memory
at any given time, and by trying to achieve that with blanket numbers you
open yourself to mbuf exhaustion and various forms of attack.

But while sticking with the existing system, turning up the socket buffers
in applications like ftp w/setsockopt() is not a bad idea. :P

-- 
Richard A Steenbergen <ras@e-gerbil.net>   http://www.e-gerbil.net/humble
PGP Key ID: 0x138EA177  (67 29 D7 BC E8 18 3E DA  B2 46 B3 D8 14 36 FE B6)






