Date:      Wed, 04 Aug 2010 14:10:11 +0600
From:      "Rushan R. Shaymardanov" <rush@clink.ru>
To:        Daniel Hartmeier <daniel@benzedrine.cx>
Cc:        freebsd-pf@freebsd.org
Subject:   Re: Keeping state of tcp connections
Message-ID:  <4C592063.7090605@clink.ru>
In-Reply-To: <20100804074915.GB3834@insomnia.benzedrine.cx>
References:  <4C58D456.5010701@clink.ru> <20100804062907.GA3834@insomnia.benzedrine.cx> <4C591915.7050807@clink.ru> <20100804074915.GB3834@insomnia.benzedrine.cx>

>
> Are you using adaptive timeouts?
>
> # pfctl -st | grep adaptive
Yes (they are used by default):

# pfctl -st | grep adaptive
adaptive.start             6000 states
adaptive.end              12000 states


>
> What's your state limit?
>
> # pfctl -sm | grep states

# pfctl -sm | grep states
states        hard limit   131072

>
> When the problem occurs, how many states do you have?
>
> # pfctl -si | grep current

# pfctl -si | grep current
current entries                   120600
>
> If this value is higher than the adaptive.start value,
> timeout values get scaled down, which could possibly explain
> what you see. If so, try increasing the state limit and/or
> the adaptive thresholds:
>
>   set limit states 50000
>   set timeout { adaptive.start 50000 adaptive.end 60000 }
>

That was the problem. I had increased the state limit, but adaptive.start and
adaptive.end remained at their defaults. Now I have switched adaptive timeouts
off with set timeout { adaptive.start 0 adaptive.end 0 }
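
If I read pf.conf(5) correctly, that also explains the symptoms: above
adaptive.start the timeouts are scaled by roughly

  (adaptive.end - number of states) / (adaptive.end - adaptive.start)

so with adaptive.end at the default 12000 and about 120600 current entries
the factor bottoms out at zero, and established TCP states expire almost
immediately.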

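For reference, the relevant pf.conf lines here now look roughly like this
(the limit value is the hard limit shown by pfctl -sm above):

  set limit states 131072
  set timeout { adaptive.start 0 adaptive.end 0 }
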
Thank you very much!

Shaymardanov Rushan

> Other causes: do you use pfsync to synchronize states between
> multiple pf machines? If so, are their clocks synchronized and
> accurate?
>
> Did you change any (kernel) settings related to time, like HZ
> or such? Is your time synchronized in a special way, i.e. not
> just by ntpd?
>
> Daniel


