From: arc_gabriel <m.pagulayan@auckland.ac.nz>
To: freebsd-current@freebsd.org
Date: Mon, 16 Jun 2008 14:44:09 -0700 (PDT)
Subject: Re: FreeBSD 7, bridge, PF and syn flood = very bad performance

Hi Guys,

I am in the same boat as you are.  I am using pf on FreeBSD 7.0-RELEASE.

Hardware:   IBM x3655
Interfaces: em0 and em1 (Intel 1G cards)
FW setup:   bridge

I get very high CPU load whenever the em0 taskq and em1 taskq threads hit their peaks.  I notice this once traffic goes above roughly 17 MB/s, and I also see drops on em1 (the external interface) when it exceeds about 14K packets/s, which is rather slow for this new hardware.

The other thing we have set up with PF is ALTQ: we have queues on both the em0 (internal) and em1 (external) interfaces.  Could this be another bottleneck?  (An illustrative sketch of this kind of queue layout follows below.)

I was just wondering how you remedied this problem.  I am badly in need of a solution: our internet has been poor for the past week, because when the CPU load goes high the firewall drops heaps of packets.

Best Regards,
Mark
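For illustration only -- a minimal pf.conf ALTQ sketch of the kind of setup described above.  The interface roles (em0 internal, em1 external) are taken from this thread; the queueing discipline, queue names and bandwidth figures are made-up placeholders, not the actual configuration:

    # shape traffic leaving the internal interface
    altq on em0 cbq bandwidth 1Gb queue { int_std, int_bulk }
    queue int_std  bandwidth 70% cbq(default)
    queue int_bulk bandwidth 30% cbq(borrow)

    # shape traffic leaving the external interface
    altq on em1 cbq bandwidth 1Gb queue { ext_std, ext_bulk }
    queue ext_std  bandwidth 70% cbq(default)
    queue ext_bulk bandwidth 30% cbq(borrow)

    # packets are assigned to a queue by the filter rules, e.g.:
    pass out on em1 inet proto tcp from any to any port 80 queue ext_bulk

ALTQ queues packets on output, so on a bridge putting queues on both member interfaces (em0 and em1) covers traffic in both directions.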
Max Laier wrote:
>
> On Sunday 27 January 2008, Stefan Lambrev wrote:
>> Greetings,
>>
>> Max Laier wrote:
>>
>> -cut-
>>
>> Well, I think the interesting lines from this experiment are:
>>
>>     max     total  wait_total    count     avg  wait_avg  cnt_hold  cnt_lock  name
>>      39  25328476    70950955  9015860       2         7   5854948   6309848  /usr/src/sys/contrib/pf/net/pf.c:6729 (sleep mutex:pf task mtx)
>>  936935  10645209         350       50  212904         7       110        47  /usr/src/sys/contrib/pf/net/pf.c:980 (sleep mutex:pf task mtx)
>>
>> > Yeah, those two are mostly the culprit, but a quick fix is not really
>> > available.  You can try setting "set timeout interval" to something
>> > bigger (e.g. 60 seconds), which will decrease the average hold time of
>> > the second lock instance at the cost of increased peak memory usage.
>>
>> I'll try this.
>> At least memory doesn't seem to be a problem :)
>>
>> > I have ideas how to fix this, but it will take much, much more time
>> > than I currently have for FreeBSD :-\  In general this requires a
>> > bottom-up redesign of pf locking and some of the data structures
>> > involved in the state tree handling.
>> >
>> > The first (= main) lock instance is also far from optimal (i.e. pf is
>> > a congestion point in the bridge forwarding path).  For this I also
>> > have a plan to make at least state table lookups run in parallel to
>> > some extent, but again the lack of free time to spend coding prevents
>> > me from doing it at the moment :-\
>>
>> Well, now we know where the issue is.  The same problem seems to affect
>> synproxy state, btw.
>> Can I expect better performance with IPFW's dynamic rules?
>
> Not significantly better, I'd predict.  IPFW's dynamic rules are also
> protected by a single mutex, leading to similar congestion problems as
> in pf.  There should be a measurable constant improvement, since IPFW
> does far fewer sanity checks, i.e. better performance at the expense of
> less security.  It really depends on your needs which is better suited
> for your setup.
>
>> I wonder how one can protect himself on a gigabit network and service
>> more than 500pps.
>> For example, in my test lab I see incoming ~400k packets per second,
>> but if I activate PF I see only 130-140k packets per second.  Is this
>> expected behavior, if PF cannot handle so many packets?
>
> As you can see from the hwpmc trace starting this thread, we don't
> spend that much time in pf.  The culprit is the pf task mutex, which
> forces serialization in pf, congesting the whole forwarding path.
> Under different circumstances pf can handle more pps.
>
>> The missing 250k+ packets are not listed as discarded or other errors,
>> which is weird.
>
> As you slow down the forwarding, protocols like TCP will automatically
> slow down.  Unless you have UDP bombs blasting at your network, this is
> quite usual behavior.
>
> --
> /"\  Best regards,                      | mlaier@freebsd.org
> \ /  Max Laier                          | ICQ #67774661
>  X   http://pf4freebsd.love2party.net/  | mlaier@EFnet
> / \  ASCII Ribbon Campaign              | Against HTML Mail and News

--
View this message in context: http://www.nabble.com/FreeBSD-7%2C-bridge%2C-PF-and-syn-flood-%3D-very-bad-performance-tp15093449p17873869.html
Sent from the freebsd-current mailing list archive at Nabble.com.
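For reference, the "set timeout interval" workaround Max suggests above would look roughly like this in pf.conf.  The 60-second value is the example from the thread; the state-limit line is an illustrative extra (the 100000 figure is made up) to absorb the states that now linger longer between purges:

    # purge expired states less often (the default interval is 10 seconds);
    # peak state and memory usage will go up
    set timeout interval 60

    # optional: raise the state limit accordingly (illustrative value)
    set limit states 100000

After reloading with "pfctl -f /etc/pf.conf", the state-table size and memory limits can be checked with "pfctl -si" and "pfctl -sm".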