Date:      Wed, 7 Feb 2007 21:14:51 +0000 (GMT)
From:      Robert Watson <rwatson@FreeBSD.org>
To:        Andre Oppermann <andre@FreeBSD.org>
Cc:        cvs-src@FreeBSD.org, src-committers@FreeBSD.org, cvs-all@FreeBSD.org
Subject:   Re: cvs commit: src/sys/netinet tcp_input.c tcp_output.c tcp_usrreq.c tcp_var.h
Message-ID:  <20070207211116.J23167@fledge.watson.org>
In-Reply-To: <200702011832.l11IWEGu090482@repoman.freebsd.org>
References:  <200702011832.l11IWEGu090482@repoman.freebsd.org>

On Thu, 1 Feb 2007, Andre Oppermann wrote:

> andre       2007-02-01 18:32:14 UTC
>
>  FreeBSD src repository
>
>  Modified files:
>    sys/netinet          tcp_input.c tcp_output.c tcp_usrreq.c
>                         tcp_var.h
>  Log:
>  Auto sizing TCP socket buffers.
>
>  Normally the socket buffers are static (either derived from global
>  defaults or set with setsockopt) and do not adapt to real network
>  conditions. Two things happen: a) your socket buffers are too small
>  and you can't reach the full potential of the network between both
>  hosts; b) your socket buffers are too big and you waste a lot of
>  kernel memory for data just sitting around.
>
>  With automatic TCP send and receive socket buffers we can start with a
>  small buffer and quickly grow it in parallel with the TCP congestion
>  window to match real network conditions.
>
>  FreeBSD has a default 32K send socket buffer. This supports a maximal
>  transfer rate of only slightly more than 2Mbit/s on a 100ms RTT
>  trans-continental link. Or at 200ms just above 1Mbit/s. With TCP send
>  buffer auto scaling and the default values below it supports 20Mbit/s
>  at 100ms and 10Mbit/s at 200ms. That's an improvement of factor 10, or
>  1000%. For the receive side it looks slightly better with a default of
>  64K buffer size.

After a rather busy couple of months, I've recently gotten my performance 
testing environment at the CL up and running.  Running simple TCP benchmarks 
with netperf, I see a marginal performance improvement on the send side, but 
on the receive side throughput drops from about 1.4Gb/s to 1.2Gb/s (roughly a 
14% loss).  Do you have any suggestions for how I could further diagnose what 
is going on?  This is with the test Neterion driver and a 1500-byte MTU; the 
driver does not currently have TSO support.

Thanks,

Robert N M Watson
Computer Laboratory
University of Cambridge

>
>  New sysctls are:
>    net.inet.tcp.sendbuf_auto=1 (enabled)
>    net.inet.tcp.sendbuf_inc=8192 (8K, step size)
>    net.inet.tcp.sendbuf_max=262144 (256K, growth limit)
>    net.inet.tcp.recvbuf_auto=1 (enabled)
>    net.inet.tcp.recvbuf_inc=16384 (16K, step size)
>    net.inet.tcp.recvbuf_max=262144 (256K, growth limit)
>
>  Tested by:      many (on HEAD and RELENG_6)
>  Approved by:    re
>  MFC after:      1 month
>
>  Revision  Changes    Path
>  1.312     +81 -3     src/sys/netinet/tcp_input.c
>  1.122     +70 -4     src/sys/netinet/tcp_output.c
>  1.144     +2 -0      src/sys/netinet/tcp_usrreq.c
>  1.138     +2 -0      src/sys/netinet/tcp_var.h
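[One point worth keeping in mind when benchmarking this change: as the commit message notes, buffers set explicitly with setsockopt remain static, so a benchmark that sets socket buffer sizes itself (netperf's test-specific -s/-S options, for example) bypasses the auto-sizing entirely. A minimal illustration, in Python for brevity; the socket options are the standard SO_SNDBUF/SO_RCVBUF:]

```python
# Illustrative only: explicitly setting a socket buffer size with
# setsockopt() pins it, so (per the commit message above) a buffer the
# application has chosen itself is not auto-sized by the kernel.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Pin the send buffer to 64K; this opts the socket out of auto-sizing.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)

# Read back the effective size.  Note some kernels (e.g. Linux) report
# double the requested value to account for bookkeeping overhead.
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))

s.close()
```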
>


