From owner-freebsd-pf@FreeBSD.ORG Sat Jul 8 08:44:01 2006
Date: Sat, 8 Jul 2006 10:43:43 +0200
From: Daniel Hartmeier
To: "Douglas K. Rand"
Cc: mcbride@openbsd.org, freebsd-pf@freebsd.org
Subject: Re: pfsync & carp problems
Message-ID: <20060708084343.GA32262@insomnia.benzedrine.cx>
References: <87ejwx1edf.wl%rand@meridian-enviro.com> <87zmfl466d.fsf@delta.meridian-enviro.com>
In-Reply-To: <87zmfl466d.fsf@delta.meridian-enviro.com>
List-Id: "Technical discussion and general questions about packet filter (pf)"

On Fri, Jul 07, 2006 at 01:32:26PM -0500, Douglas K. Rand wrote:

> Some more information after I discovered the -x loud option to
> pfctl.
> When the master firewall goes down and the already established
> TCP session hangs, I get these messages on the slave:
>
> pf: BAD state: TCP 67.134.74.224:52173 67.134.74.224:52173 204.152.184.134:80 [lo=2943781408 high=2943846943 win=33304 modulator=0 wscale=1] [lo=3255565389 high=3255629101 win=65535 modulator=0 wscale=0] 4:4 A seq=3255634893 ack=2943781408 len=1448 ackskew=0 pkts=21109:24835 dir=in,rev
> pf: State failure on: 1

This means the web server is trying to send data to the client that is
out of (what pf thinks is legal for) its window.

The last ACK from the client that pf's state saw was 3255562493
(advertising th_win 33304, wscale factor 2^1), hence the upper boundary
of what the client accepts is 3255562493 + 2*33304 == seqhi 3255629101.

The packet's end, th_seq 3255634893 + len 1448 == 3255636341, is larger
than the client's seqhi 3255629101 (by 7240, which is 5*1448). Hence it
is blocked.

The fact that the server retransmits the same segment over and over
without going back to older segments probably means that it has gotten
an ACK from the client for 3255634893.

So how can the server have received an ACK up to 3255634893 when pf's
state has only seen an ACK for 3255562493? I guess this depends on how
you shut down the master in the first place. For instance, if its
kernel would, for a brief period of time, continue to forward packets
while pf is no longer seeing packets, this would be possible.

Also, there's a certain latency between pf updating its state entry
based on a passing packet and pfsync actually transmitting that update
to the slave. If an update was lost because the box was shutting down
precisely in that moment, I guess there is a chance for such a race.

How are you disconnecting the master? Does this occur when you
physically disconnect the ethernet cable towards the server first?
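For what it's worth, the arithmetic above can be reproduced with a small
sketch (a simplified illustration of the window check, not pf's actual
code; the function and parameter names are made up):

```python
def seq_window_check(last_ack, th_win, wscale, th_seq, seg_len):
    """Simplified pf-style check of a segment against the peer's window.

    seqhi is the highest sequence number the state will accept, derived
    from the last ACK seen plus the scaled advertised window.
    """
    seqhi = last_ack + (th_win << wscale)  # upper boundary of the window
    seg_end = th_seq + seg_len             # end of the arriving segment
    return seqhi, seg_end, seg_end > seqhi

# Values from the "BAD state" message above:
seqhi, seg_end, blocked = seq_window_check(
    last_ack=3255562493, th_win=33304, wscale=1,
    th_seq=3255634893, seg_len=1448)

print(seqhi)            # 3255629101, matching high= in the state entry
print(seg_end)          # 3255636341
print(seg_end - seqhi)  # 7240, i.e. 5 * 1448
print(blocked)          # True, so the segment is dropped
```

Running this confirms the segment ends 7240 bytes (five full segments)
past what the slave's state believes the client will accept.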
I'm not sure if there's any code that should try to prevent this
scenario in a normal shutdown/reboot case (like disabling forwarding
or taking down interfaces in a certain order first).

Ryan, do we address this, or is it just a rare but expected case that
this might occur? Or did I miss anything and this shouldn't occur for
some reason?

Daniel