From: Max Laier <max@love2party.net>
Organization: FreeBSD
To: Stefan Lambrev
Cc: freebsd-current@freebsd.org
Date: Sun, 27 Jan 2008 15:49:46 +0100
Subject: Re: FreeBSD 7, bridge, PF and syn flood = very bad performance
Message-Id: <200801271549.52791.max@love2party.net>
In-Reply-To: <479C953C.1010304@moneybookers.com>

On Sunday 27 January 2008, Stefan Lambrev wrote:
> Greetings,
>
> Max Laier wrote:
> > -cut-
>
> >> Well, I think the interesting lines from this experiment are:
> >>
> >>     max     total  wait_total    count     avg  wait_avg  cnt_hold  cnt_lock  name
> >>      39  25328476    70950955  9015860       2         7   5854948   6309848  /usr/src/sys/contrib/pf/net/pf.c:6729 (sleep mutex:pf task mtx)
> >>  936935  10645209         350       50  212904         7       110        47  /usr/src/sys/contrib/pf/net/pf.c:980 (sleep mutex:pf task mtx)
> >
> > Yeah, those two are mostly the culprit, but a quick fix is not
> > really available. You can try to "set timeout interval" to something
> > bigger (e.g. 60 seconds), which will decrease the average hold time
> > of the second lock instance at the cost of increased peak memory
> > usage.
>
> I'll try this. At least memory doesn't seem to be a problem :)
>
> > I have ideas for how to fix this, but it will take much, much more
> > time than I currently have for FreeBSD :-\  In general this requires
> > a bottom-up redesign of pf locking and some of the data structures
> > involved in the state tree handling.
> >
> > The first (= main) lock instance is also far from optimal (i.e. pf
> > is a congestion point in the bridge forwarding path). For this I
> > also have a plan for how to make at least state table lookups run
> > in parallel to some extent, but again the lack of free time to
> > spend coding prevents me from doing it at the moment :-\
>
> Well, now we know where the issue is. The same problem seems to
> affect synproxy state, btw.
> Can I expect better performance with IPFW's dynamic rules?

Not significantly better, I'd predict. IPFW's dynamic rules are also
protected by a single mutex, leading to congestion problems similar to
pf's. There should be a measurable constant improvement, as IPFW does
far fewer sanity checks, i.e. better performance at the expense of
less security. It really depends on your needs which one is better
suited for your setup.
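To illustrate what that comparison is about, a minimal stateful IPFW
ruleset would look something like this (untested sketch; the rule
numbers and the port are made up for the example):

    # match packets against existing dynamic states first
    ipfw add 100 check-state
    # let new TCP connections to the host in, creating a dynamic
    # state for each of them
    ipfw add 200 allow tcp from any to me dst-port 80 setup keep-state
    # drop everything else
    ipfw add 65000 deny ip from any to any

Every check-state/keep-state lookup goes through the dynamic rule
table, which -- much like pf's state tree -- sits behind a single lock
in 7.x, hence my prediction of similar scaling.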
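For reference, the pf tuning I suggested above is a one-liner in
pf.conf; the state limit below is only an example value, pick one to
match your memory budget (see pf.conf(5) for the defaults):

    # purge expired states every 60 seconds instead of the default 10;
    # fewer purge runs take the pf task mutex less often, but expired
    # states linger longer, so peak state table usage grows
    set timeout interval 60
    # example value: give the state table headroom for the longer
    # purge interval
    set limit states 200000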
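Regarding synproxy: a rule like the following (with $ext_if and
$web_ip as hypothetical macros) makes pf complete the TCP handshake
itself before any state is created towards the server, but the proxied
states still live in the same state table under the same pf task
mutex, so it cannot dodge this congestion:

    # pf answers the three-way handshake on behalf of $web_ip; only
    # fully established connections are passed on to the server
    pass in on $ext_if proto tcp from any to $web_ip port 80 \
        flags S/SA synproxy state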
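BTW, for anyone who wants to reproduce the numbers quoted at the top:
they come from the kernel's lock profiling framework, roughly like
this (from memory; see LOCK_PROFILING(9) for the details):

    # the kernel has to be built with: options LOCK_PROFILING
    sysctl debug.lock.prof.reset=1    # clear any old counters
    sysctl debug.lock.prof.enable=1   # start collecting
    # ... run the syn flood test ...
    sysctl debug.lock.prof.enable=0   # stop collecting
    sysctl debug.lock.prof.stats      # dump the per-lock-site table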
> I wonder how one can protect himself on a gigabit network and
> service more than 500 pps.
> For example, in my test lab I see ~400k incoming packets per second,
> but if I activate PF, I see only 130-140k packets per second. Is
> this expected behavior, if PF cannot handle so many packets?

As you can see from the hwpmc trace that started this thread, we don't
spend that much time in pf itself. The culprit is the pf task mutex,
which forces serialization in pf and congests the whole forwarding
path. Under different circumstances pf can handle more pps.

> The missing 250k+ packets are not listed as discarded or other
> errors, which is weird.

As you slow down the forwarding, protocols like TCP will automatically
slow down. Unless you have UDP bombs blasting at your network, this is
quite usual behavior.

-- 
/"\  Best regards,                      | mlaier@freebsd.org
\ /  Max Laier                          | ICQ #67774661
 X   http://pf4freebsd.love2party.net/  | mlaier@EFnet
/ \  ASCII Ribbon Campaign              | Against HTML Mail and News