Date:      Fri, 30 May 2008 13:34:13 -0400
From:      Robert Blayzor <rblayzor.bulk@inoc.net>
To:        Matthew Dillon <dillon@apollo.backplane.com>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: Sockets stuck in FIN_WAIT_1
Message-ID:  <69B2392D-E349-4E29-B028-900C8D1693A8@inoc.net>
In-Reply-To: <200805301643.m4UGhSa0033918@apollo.backplane.com>
References:  <B42F9BDF-1E00-45FF-BD88-5A07B5B553DC@inoc.net>	<1A19ABA2-61CD-4D92-A08D-5D9650D69768@mac.com>	<23C02C8B-281A-4ABD-8144-3E25E36EDAB4@inoc.net>	<483DE2E0.90003@FreeBSD.org>	<B775700E-7494-42C1-A9B2-A600CE176ACB@inoc.net>	<483E36CE.3060400@FreeBSD.org>	<483E3C26.3060103@paradise.net.nz>	<483E4657.9060906@FreeBSD.org>	<483EA513.4070409@earthlink.net>	<96AFE8D3-7EAC-4A4A-8EFF-35A5DCEC6426@inoc.net>	<483EAED1.2050404@FreeBSD.org>	<200805291912.m4TJCG56025525@apollo.backplane.com>	<14DA211A-A9C5-483A-8CB9-886E5B19A840@inoc.net>	<200805291930.m4TJUeGX025815@apollo.backplane.com>	<0C827F66-09CE-476D-86E9-146AB255926B@inoc.net>	<200805292132.m4TLWhCv026720@apollo.backplane.com>	<CCBAEE3E-35A5-4BF8-A0B7-321272533B62@inoc.net>	<200805300055.m4U0tkqx027965@apollo.backplane.com> <EB975E1A-7995-4214-A2CC-AE2D789B19AB@inoc.net> <483F6F66.4050909@FreeBSD.org> <C1CC6D9D-6584-43BD-8675-021A0495FDA3@inoc.net> <200805301643.m4UGhSa0033918@apollo.backplane.com>

On May 30, 2008, at 12:43 PM, Matthew Dillon wrote:
>    I would be very careful with any type of ruleset (IPFW or PF) which
>    relies on keep-state.  You can wind up causing legitimate connections
>    to drop if it isn't carefully tuned.

Thanks again Matt...

I do agree about the firewall keep-state and scaling issue.  It wasn't
the magic bullet I thought it might have been; the stuck connections
only dropped off because the load fell overnight.  The bandaid I have
now is the tcpdrop hack that was posted here, which does seem to clear
all of the stuck sessions (a rough sketch of that approach is below).
While it's probably not the best thing to do, it at least protects the
server.  I don't know what more to do at this point.  These may be
broken-client issues, but they're breaking the server.  I don't know
whether it makes sense to push something upstream to see if some kind
of knob can be added to the network stack to force-close/drop these
connections, or to just let it go and deal with it as-is.  I have a
message in to the clamav-devel list asking whether this is a problem
with the freshclam client and the way it handles closing or broken
connections.  It's quite possible something in freshclam fails to
handle a network failure properly.
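
For the archives, the tcpdrop workaround boils down to walking the
output of "netstat -n -p tcp" for sockets sitting in FIN_WAIT_1 and
handing each local/foreign endpoint pair to tcpdrop(8).  Below is a
rough Python sketch of that idea -- not the exact script that was
posted here, and it assumes the usual FreeBSD netstat layout where the
port follows the last dot of each address:

  #!/usr/bin/env python
  # Sketch: drop every TCP session stuck in FIN_WAIT_1 by passing its
  # endpoints to tcpdrop(8).  Assumes FreeBSD "netstat -n -p tcp"
  # output, where addresses look like 10.0.0.1.25 (port after last dot).
  import subprocess

  def split_addr(addr):
      # "10.0.0.1.25" -> ("10.0.0.1", "25")
      host, _, port = addr.rpartition('.')
      return host, port

  out = subprocess.check_output(['netstat', '-n', '-p', 'tcp']).decode()
  for line in out.splitlines():
      fields = line.split()
      # Data lines: Proto Recv-Q Send-Q Local-Addr Foreign-Addr (state)
      if len(fields) >= 6 and fields[-1] == 'FIN_WAIT_1':
          laddr, lport = split_addr(fields[3])
          faddr, fport = split_addr(fields[4])
          subprocess.call(['tcpdrop', laddr, lport, faddr, fport])

Run from cron every few minutes it keeps the stuck sessions from piling
up, though it's obviously a sledgehammer rather than a fix.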

-- 
Robert Blayzor, BOFH
INOC, LLC
rblayzor@inoc.net
http://www.inoc.net/~rblayzor/
