Date: Thu, 4 Dec 2003 02:10:24 +0100
From: Sten Daniel Sørsdal <sten.daniel.sorsdal@wan.no>
To: "Vector" <freebsd@itpsg.com>, <freebsd-ipfw@freebsd.org>
Subject: RE: multiple pipes cause slowdown
Message-ID: <0AF1BBDF1218F14E9B4CCE414744E70F07DF7A@exchange.wanglobal.net>
I read somewhere that dummynet was designed to simulate different network connections. So dummynet is not at fault here; the effect congestion has on TCP is. Use queues to give the small ACKs higher priority through the pipes. The effect you describe for the wireless is the same thing, only with a few more variables (packet loss, retransmission, etc.).

> -----Original Message-----
> From: Vector [mailto:freebsd@itpsg.com]
> Sent: 26. november 2003 21:43
> To: freebsd-ipfw@freebsd.org
> Subject: multiple pipes cause slowdown
>
> I've got a FreeBSD system set up and I'm using dummynet to manage
> bandwidth. Here is what I am seeing:
>
> We are communicating with a server on a 100Mbit ethernet segment
> (fxp0 in the FreeBSD box) and an 11Mbit wireless client that is
> getting throttled with ipfw pipes. If I add two pipes limiting my
> two clients A and B to 1Mbit each, then here is what happens:
>
> Client A does a transfer to/from the server and gets 1Mbps up and
> 1Mbps down.
> Client B does a transfer to/from the server and gets 1Mbps up and
> 1Mbps down.
> Clients A & B do simultaneous transfers to the server and each get
> between 670 and 850 Kbps.
>
> If I delete the pipes and the firewall rules, they behave like
> regular 11Mbit unthrottled clients sharing the available wireless
> bandwidth (although not necessarily equally).
>
> It gets worse when I start doing 3 or 4 clients each at 1Mbit. I've
> also tried setting up 4 clients at 512Kbps and the performance does
> the same thing; essentially throughput gets cut significantly the
> more pipes we have.
> Here are the rules I'm using:
>
> ipfw add 100 pipe 100 all from any to 192.168.1.50 xmit wi0
> ipfw add 100 pipe 5100 all from 192.168.1.50 to any recv wi0
> ipfw pipe 100 config bw 1024Kbits/s
> ipfw pipe 5100 config bw 1024Kbits/s
>
> ipfw add 101 pipe 101 all from any to 192.168.1.51 xmit wi0
> ipfw add 101 pipe 5101 all from 192.168.1.51 to any recv wi0
> ipfw pipe 101 config bw 1024Kbits/s
> ipfw pipe 5101 config bw 1024Kbits/s
>
> I've played with using in/out instead of recv/xmit, and even not
> specifying a direction at all (which makes traffic to the client get
> cut in half, while traffic from the client remains as high as when I
> specify which interface to throttle on). ipfw pipe list shows no
> dropped packets and looks like it's behaving normally, other than
> the slowdown for multiple clients. I'm not specifying a delay, and
> latency does not seem abnormally high.
>
> I am using 5.0-RELEASE and I have HZ=1000 compiled in the kernel.
> Here are my sysctl vars:
> net.inet.ip.fw.enable: 1
> net.inet.ip.fw.autoinc_step: 100
> net.inet.ip.fw.one_pass: 0
> net.inet.ip.fw.debug: 0
> net.inet.ip.fw.verbose: 0
> net.inet.ip.fw.verbose_limit: 1
> net.inet.ip.fw.dyn_buckets: 256
> net.inet.ip.fw.curr_dyn_buckets: 256
> net.inet.ip.fw.dyn_count: 2
> net.inet.ip.fw.dyn_max: 4096
> net.inet.ip.fw.static_count: 72
> net.inet.ip.fw.dyn_ack_lifetime: 300
> net.inet.ip.fw.dyn_syn_lifetime: 20
> net.inet.ip.fw.dyn_fin_lifetime: 1
> net.inet.ip.fw.dyn_rst_lifetime: 1
> net.inet.ip.fw.dyn_udp_lifetime: 10
> net.inet.ip.fw.dyn_short_lifetime: 5
> net.inet.ip.fw.dyn_keepalive: 1
> net.link.ether.bridge_ipfw: 0
> net.link.ether.bridge_ipfw_drop: 0
> net.link.ether.bridge_ipfw_collisions: 0
> net.link.ether.bdg_fw_avg: 0
> net.link.ether.bdg_fw_ticks: 0
> net.link.ether.bdg_fw_count: 0
> net.link.ether.ipfw: 0
> net.inet6.ip6.fw.enable: 0
> net.inet6.ip6.fw.debug: 0
> net.inet6.ip6.fw.verbose: 0
> net.inet6.ip6.fw.verbose_limit: 1
> net.inet.ip.dummynet.hash_size: 64
> net.inet.ip.dummynet.curr_time: 99067502
> net.inet.ip.dummynet.ready_heap: 16
> net.inet.ip.dummynet.extract_heap: 16
> net.inet.ip.dummynet.searches: 0
> net.inet.ip.dummynet.search_steps: 0
> net.inet.ip.dummynet.expire: 1
> net.inet.ip.dummynet.max_chain_len: 16
> net.inet.ip.dummynet.red_lookup_depth: 256
> net.inet.ip.dummynet.red_avg_pkt_size: 512
> net.inet.ip.dummynet.red_max_pkt_size: 1500
>
> Am I just doing something stupid, or does the dummynet/QoS
> implementation in FreeBSD need some work? If so, I may be able to
> help and contribute.
> Thanks,
>
> vec
>
> _______________________________________________
> freebsd-ipfw@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw
> To unsubscribe, send any mail to "freebsd-ipfw-unsubscribe@freebsd.org"
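To make the queue suggestion above concrete, here is a minimal sketch for client A's upstream pipe (pipe 5100), which is where the ACKs for its downloads travel. The rule numbers, addresses, and wi0 interface follow the original post; the two-queue split and the weights are illustrative assumptions, not a tested configuration:

```shell
# Keep the 1Mbit upstream pipe from the original rules.
ipfw pipe 5100 config bw 1024Kbits/s

# Attach two WF2Q+ queues to that pipe. dummynet shares the pipe's
# bandwidth in proportion to the weights, so queue 10 (ACKs) is
# serviced ahead of queue 20 (bulk data) when both are backlogged.
# The 90/10 split is an illustrative assumption.
ipfw queue 10 config pipe 5100 weight 90
ipfw queue 20 config pipe 5100 weight 10

# Classify upstream TCP segments with the ACK flag set into the
# high-weight queue. (This also matches data segments carrying ACKs;
# a stricter classifier could additionally test packet length.)
ipfw add 200 queue 10 tcp from 192.168.1.50 to any tcpflags ack recv wi0
# Everything else from the client goes through the low-weight queue.
ipfw add 210 queue 20 all from 192.168.1.50 to any recv wi0
```

With one_pass set to 0, as in the sysctl output above, packets re-enter the ruleset after the queue, so the remaining rules still apply.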