Date:      Wed, 13 Jul 2011 07:57:18 -0400
From:      John Baldwin <jhb@freebsd.org>
To:        freebsd-net@freebsd.org
Cc:        Vladimir Budnev <vladimir.budnev@gmail.com>
Subject:   Re: (TCP/IP) Server side sends RST after 3-way handshake.Syn flood defense or queue overflow?
Message-ID:  <201107130757.19178.jhb@freebsd.org>
In-Reply-To: <CAAvRK97hwamb8mpu6G6FEbkYATQ3BWNZoFYbsvmgKDwHNXFsLA@mail.gmail.com>
References:  <CAAvRK97hwamb8mpu6G6FEbkYATQ3BWNZoFYbsvmgKDwHNXFsLA@mail.gmail.com>

On Wednesday, July 13, 2011 6:37:26 am Vladimir Budnev wrote:
> Hello.
> 
> I've run into a problem and don't quite understand the mechanics of why it
> occurs.
> First of all, I'd like to note that my FreeBSD knowledge and experience are
> limited, especially in such "strange" cases.
> 
> In detail (the code example will be at the end):
> 
> System:
> # uname -spr
> FreeBSD 7.2-RELEASE amd64
> 
> We have simple TCP client and server.
> 
> The server is very simple: it listens on a port in a while->select->accept
> loop.
> 
> The client connects to the server port, sends some data, and closes the
> socket.
> It was originally designed to send data 3-5 times per minute. But during
> testing I noticed that if the client sends data very fast (more precisely,
> if it opens and closes the socket to the server very fast), the call to
> connect() fails with "54 - Connection reset by peer" (on the client side,
> of course).
> 
> An illustration from tcpdump
> (the test is on localhost; the server side is port 10002; I've cut other
> fields for simplicity):
> <...>
> 13:48:58.491229 IP 127.0.0.1.56677 > 127.0.0.1.10002: S
> 13:48:58.491255 IP 127.0.0.1.10002 > 127.0.0.1.56677: S  ack
> 13:48:58.491266 IP 127.0.0.1.56677 > 127.0.0.1.10002: . ack
> 13:48:58.491300 IP 127.0.0.1.56677 > 127.0.0.1.10002: P
> 13:48:58.491346 IP 127.0.0.1.56677 > 127.0.0.1.10002: F
> 13:48:58.491365 IP 127.0.0.1.10002 > 127.0.0.1.56677: . ack
> 
> 13:48:58.491466 IP 127.0.0.1.55238 > 127.0.0.1.10002: S        //
> 13:48:58.491490 IP 127.0.0.1.10002 > 127.0.0.1.55238: S ack // handshake
> 13:48:58.491503 IP 127.0.0.1.55238 > 127.0.0.1.10002: . ack  //
> 13:48:58.491536 IP 127.0.0.1.55238 > 127.0.0.1.10002: P       <--data
> 13:48:58.491580 IP 127.0.0.1.55238 > 127.0.0.1.10002: F       <-- client
> closes session
> 13:48:58.491599 IP 127.0.0.1.10002 > 127.0.0.1.55238: . ack  // OK
> 
> 13:48:58.491701 IP 127.0.0.1.60212 > 127.0.0.1.10002: S
> 13:48:58.491726 IP 127.0.0.1.10002 > 127.0.0.1.60212: S ack
> 13:48:58.491738 IP 127.0.0.1.60212 > 127.0.0.1.10002: . ack
> 13:48:58.491745 IP 127.0.0.1.10002 > 127.0.0.1.60212: R    <-- this is a
> strange answer. Why?
> 
> 13:48:58.491887 IP 127.0.0.1.60804 > 127.0.0.1.10002: S
> 13:48:58.491914 IP 127.0.0.1.10002 > 127.0.0.1.60804: S ack
> 13:48:58.491924 IP 127.0.0.1.60804 > 127.0.0.1.10002: . ack
> 13:48:58.491931 IP 127.0.0.1.10002 > 127.0.0.1.60804: R
> <...>
> 
> Some connections were OK, but then the server application begins to send RST
> right after the handshake.
> Tuning the listen backlog parameter doesn't help much, BUT what really
> solves the "problem" is increasing the time interval between client
> requests, e.g. usleep(10000) makes the "problem" go away entirely.
> 
> I've looked through the man pages for some tunables like the syncache and
> syncookies but found nothing useful, or I missed something. But here they
> are:
> 
> # sysctl -a | grep syncache
> net.inet.tcp.syncache.rst_on_sock_fail: 1
> net.inet.tcp.syncache.rexmtlimit: 3
> net.inet.tcp.syncache.hashsize: 512
> net.inet.tcp.syncache.count: 0
> net.inet.tcp.syncache.cachelimit: 15360
> net.inet.tcp.syncache.bucketlimit: 30
> 
> 
> ipfw rules allows any from any, nothing special.
> 
> QUESTION:
> So the question is: why is this happening? Is this some FreeBSD defense
> against SYN floods? (To be honest, I don't think so, because there seems to
> be no fixed "allowed SYN/SYN-ACK/ACK per second" limit.) It looks more like
> some queue overflow, but I don't know which queue or how it can be enlarged.
> 
> If I can provide more info to clarify any aspect, I will!
> 
> Thanks in advance, Vladimir.

Do you have any of these in your netstat -s -p tcp output:

        6186 embryonic connections dropped
        63889 syncache entries added
                0 retransmitted
                0 dupsyn
                0 dropped
                63889 completed
                0 bucket overflow
                0 cache overflow
                0 reset
                0 stale
                0 aborted
                0 badack
                0 unreach
                0 zone failures
        63889 cookies sent
        0 cookies received

It is normal for syncache entries added == completed == cookies sent; I'm 
mostly curious about anything else besides that.  It is possible when using 
the syncache to have the network stack decide it can't create a connection 
until it gets to the end of the 3-way handshake due to resource limits, etc.  
In that case the end of the 3-way handshake will get a RST in response.  
However, if your app just sends data and calls close() without doing any 
reads, it might close() successfully while the data is in flight before the 
client machine sees the RST, so the client app will not see any errors.  If 
the RST arrives before you finish calling write() and close() then you will 
get ECONNRESET errors from write() and close().

You can try turning off the syncache and syncookies as a test.  This will 
probably trigger more ECONNRESET errors in connect() (which your app will need 
to retry on).  However, the better fix is to track down what is causing your 
connections to be dropped in the first place, e.g. if you are hitting the 
limit on inpcbs (look for failures in vmstat -z output) and fix that.
 
-- 
John Baldwin


