From owner-freebsd-isp Mon Aug 27 16:00:09 2001
Delivered-To: freebsd-isp@freebsd.org
Received: from aries.ai.net (aries.ai.net [205.134.163.4]) by hub.freebsd.org (Postfix) with ESMTP id A607A37B405; Mon, 27 Aug 2001 15:59:57 -0700 (PDT) (envelope-from deepak@ai.net)
Received: from blood (pool-138-88-46-58.res.east.verizon.net [138.88.46.58]) by aries.ai.net (8.9.3/8.9.3) with SMTP id TAA02891; Mon, 27 Aug 2001 19:04:45 -0400 (EDT) (envelope-from deepak@ai.net)
From: "Deepak Jain"
To: "FreeBSD-Questions", "freebsd-isp@FreeBSD.ORG"
Subject: Interesting Router Question
Date: Mon, 27 Aug 2001 19:03:59 -0400
Sender: owner-freebsd-isp@FreeBSD.ORG

We've got a customer running a FreeBSD router with 2 x 1GE interfaces [ti0 and ti1]. At no point was bandwidth an issue.
The router was under some kind of ICMP attack. For about 30 minutes:

icmp-response bandwidth limit 96304/200 pps
icmp-response bandwidth limit 97801/200 pps
icmp-response bandwidth limit 97936/200 pps
icmp-response bandwidth limit 97966/200 pps
icmp-response bandwidth limit 98230/200 pps
icmp-response bandwidth limit 97998/200 pps
icmp-response bandwidth limit 98132/200 pps
icmp-response bandwidth limit 98326/200 pps
icmp-response bandwidth limit 98091/200 pps
icmp-response bandwidth limit 87236/200 pps
icmp-response bandwidth limit 85108/200 pps
icmp-response bandwidth limit 84609/200 pps
icmp-response bandwidth limit 86915/200 pps
icmp-response bandwidth limit 88917/200 pps
icmp-response bandwidth limit 88218/200 pps
icmp-response bandwidth limit 72871/20000 pps
icmp-response bandwidth limit 74934/20000 pps
icmp-response bandwidth limit 74507/20000 pps
icmp-response bandwidth limit 82928/20000 pps
icmp-response bandwidth limit 75657/20000 pps

The router is a dual 600 MHz PIII and had a peak load average of about 0.2 during the entire event, but it was running out of buffer space: a ping would return "No buffer space available". Performance became atrocious, with high packet loss and latency, but it was entirely buffer-related.
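(The "icmp-response bandwidth limit N/M pps" messages come from the kernel's ICMP response rate limiter; the denominator is the net.inet.icmp.icmplim sysctl, which the log suggests was raised from 200 to 20000 mid-event. A minimal tuning sketch, assuming FreeBSD 4.x; values are illustrative only:)

```shell
# Inspect the current ICMP response rate limit (packets per second).
sysctl net.inet.icmp.icmplim

# Raise it at runtime; 20000 matches the denominator in the later log
# lines above. Note that icmplim only caps how many ICMP *responses*
# the kernel sends -- it does not reduce the cost of receiving and
# discarding the inbound flood itself.
sysctl -w net.inet.icmp.icmplim=20000
```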
The mbuf settings are as follows:

1235/2640/67584 mbufs in use (current/peak/max):
        1195 mbufs allocated to data
        40 mbufs allocated to packet headers
592/1054/16896 mbuf clusters in use (current/peak/max)
2768 Kbytes allocated to network (5% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

sysctl settings:

net.inet.ip.redirect: 0
net.local.stream.sendspace: 255360
net.local.stream.recvspace: 8192
net.inet.icmp.drop_redirect: 1
net.inet.icmp.log_redirect: 1
net.inet.icmp.bmcastecho: 0
net.inet.tcp.sendspace: 524288
net.inet.tcp.recvspace: 524288
net.inet.udp.recvspace: 524288

What settings need to be tweaked to allow more ICMP-related buffers, so that the system's CPU can discard packets normally? ipfw neither helped nor hurt performance [i.e., blocking ICMP or not gave the same result]. The solution was to install an ICMP filter on the Cisco feeding this customer.

Under normal circumstances, this is what a netstat -i 1 returns:

            input        (Total)           output
   packets  errs      bytes    packets  errs      bytes colls
     43001     0   12845737      42965     0   12715776     0
     42589     0   12426503      42624     0   12299112     0
     42485     0   12804047      42409     0   12675087     0
     42059     0   12324347      42060     0   12197342     0
     42989     0   13004977      42985     0   12875017     0
     42331     0   12608670      42353     0   12481620     0
     42327     0   12941571      42252     0   12815136     0
     42435     0   12414956      42451     0   12288774     0
     43408     0   13065007      43369     0   12932819     0
     42849     0   12649420      42853     0   12521309     0
     42328     0   12918886      42349     0   12788549     0
     44085     0   13469072      44009     0   13337215     0
     47849     0   14434350      47686     0   14272423     0

Thanks for any assistance,

Deepak Jain
AiNET
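(On the question of buffers: "No buffer space available" usually points at mbuf cluster exhaustion, and on FreeBSD 4.x the cluster pool ceiling is fixed at boot, not runtime-adjustable. A sketch of the usual knobs, assuming a 4.x system; the value below is illustrative, not a recommendation:)

```shell
# /boot/loader.conf -- kern.ipc.nmbclusters is a boot-time tunable on
# FreeBSD 4.x; the value is illustrative only.
kern.ipc.nmbclusters="32768"

# Equivalently, set it in the kernel configuration file and rebuild:
#   options NMBCLUSTERS=32768

# After rebooting, verify the new ceiling (the "/max" column):
#   netstat -m
```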