From: "Vector" <freebsd@itpsg.com>
To: freebsd-ipfw@freebsd.org
Date: Wed, 26 Nov 2003 13:42:31 -0700
Subject: multiple pipes cause slowdown

I've got a FreeBSD system set up and I'm using dummynet to manage bandwidth.
Here is what I am seeing:

We are communicating with a server on a 100Mbit ethernet segment attached to
the FreeBSD box as fxp0, and the 11Mbit wireless clients on wi0 are getting
throttled with ipfw pipes. If I add two pipes limiting my two clients A and B
to 1Mbit each, here is what happens:

  Client A does a transfer to/from the server and gets 1Mbps up and 1Mbps down.
  Client B does a transfer to/from the server and gets 1Mbps up and 1Mbps down.
  Clients A & B do simultaneous transfers to the server and each get between
  670 and 850Kbps.

If I delete the pipes and the firewall rules, they behave like regular 11Mbit
unthrottled clients sharing the available wireless bandwidth (although not
necessarily equally).

It gets worse when I start doing 3 or 4 clients at 1Mbit each. I've also tried
setting up 4 clients at 512Kbps and the performance does the same thing: it
essentially gets cut more and more the more pipes we have.

Here are the rules I'm using:

  ipfw add 100 pipe 100 all from any to 192.168.1.50 xmit wi0
  ipfw add 100 pipe 5100 all from 192.168.1.50 to any recv wi0
  ipfw pipe 100 config bw 1024Kbits/s
  ipfw pipe 5100 config bw 1024Kbits/s

  ipfw add 101 pipe 101 all from any to 192.168.1.51 xmit wi0
  ipfw add 101 pipe 5101 all from 192.168.1.51 to any recv wi0
  ipfw pipe 101 config bw 1024Kbits/s
  ipfw pipe 5101 config bw 1024Kbits/s

I've played with using in/out instead of recv/xmit, and even with not
specifying a direction at all (which makes traffic to the client get cut in
half, while traffic from the client remains as high as when I specify which
interface to throttle on). "ipfw pipe list" shows no dropped packets and looks
like it's behaving normally, other than the slowdown for multiple clients. I'm
not specifying a delay, and latency does not seem abnormally high.

I am using 5.0-RELEASE and I have HZ=1000 compiled into the kernel.
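For reference, the same per-client rules can be generated with a small sh loop
like the sketch below. The address list, the interface, and the "rule + 5000"
numbering for the return pipes are just assumptions mirroring the eight
commands above; it is not a different configuration, only a compact restatement
that is easier to extend to more clients.

  #!/bin/sh
  # Sketch: build one xmit pipe and one recv pipe per wireless client,
  # mirroring the rule/pipe numbering above (100/5100, 101/5101, ...).
  IF=wi0
  BW=1024Kbits/s
  rule=100
  for ip in 192.168.1.50 192.168.1.51; do
      ipfw add ${rule} pipe ${rule} all from any to ${ip} xmit ${IF}
      ipfw add ${rule} pipe $((rule + 5000)) all from ${ip} to any recv ${IF}
      ipfw pipe ${rule} config bw ${BW}
      ipfw pipe $((rule + 5000)) config bw ${BW}
      rule=$((rule + 1))
  done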
Here are my sysctl vars:

  net.inet.ip.fw.enable: 1
  net.inet.ip.fw.autoinc_step: 100
  net.inet.ip.fw.one_pass: 0
  net.inet.ip.fw.debug: 0
  net.inet.ip.fw.verbose: 0
  net.inet.ip.fw.verbose_limit: 1
  net.inet.ip.fw.dyn_buckets: 256
  net.inet.ip.fw.curr_dyn_buckets: 256
  net.inet.ip.fw.dyn_count: 2
  net.inet.ip.fw.dyn_max: 4096
  net.inet.ip.fw.static_count: 72
  net.inet.ip.fw.dyn_ack_lifetime: 300
  net.inet.ip.fw.dyn_syn_lifetime: 20
  net.inet.ip.fw.dyn_fin_lifetime: 1
  net.inet.ip.fw.dyn_rst_lifetime: 1
  net.inet.ip.fw.dyn_udp_lifetime: 10
  net.inet.ip.fw.dyn_short_lifetime: 5
  net.inet.ip.fw.dyn_keepalive: 1
  net.link.ether.bridge_ipfw: 0
  net.link.ether.bridge_ipfw_drop: 0
  net.link.ether.bridge_ipfw_collisions: 0
  net.link.ether.bdg_fw_avg: 0
  net.link.ether.bdg_fw_ticks: 0
  net.link.ether.bdg_fw_count: 0
  net.link.ether.ipfw: 0
  net.inet6.ip6.fw.enable: 0
  net.inet6.ip6.fw.debug: 0
  net.inet6.ip6.fw.verbose: 0
  net.inet6.ip6.fw.verbose_limit: 1
  net.inet.ip.dummynet.hash_size: 64
  net.inet.ip.dummynet.curr_time: 99067502
  net.inet.ip.dummynet.ready_heap: 16
  net.inet.ip.dummynet.extract_heap: 16
  net.inet.ip.dummynet.searches: 0
  net.inet.ip.dummynet.search_steps: 0
  net.inet.ip.dummynet.expire: 1
  net.inet.ip.dummynet.max_chain_len: 16
  net.inet.ip.dummynet.red_lookup_depth: 256
  net.inet.ip.dummynet.red_avg_pkt_size: 512
  net.inet.ip.dummynet.red_max_pkt_size: 1500

Am I just doing something stupid, or does the dummynet/QoS implementation in
FreeBSD need some work? If so, I may be able to help and contribute.

Thanks,
vec