Date:      Fri, 28 Nov 2003 17:44:36 -0500
From:      Haesu <haesu@towardex.com>
To:        Vector <freebsd@itpsg.com>, freebsd-ipfw@freebsd.org
Subject:   Re: multiple pipes cause slowdown
Message-ID:  <20031128224436.GA97746@scylla.towardex.com>
In-Reply-To: <054c01c3b45d$d0cc8b50$fe3d10ac@VECTOR>
References:  <054c01c3b45d$d0cc8b50$fe3d10ac@VECTOR>

Try src-port 0xFFFF?
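
i.e. as a dummynet pipe mask, something like:

    ipfw pipe 100 config mask src-port 0xffff bw 1024Kbit/s

so each flow (one per source port) gets its own dynamic queue within
the pipe.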

-hc

-- 
Haesu C.
TowardEX Technologies, Inc.
Consulting, colocation, web hosting, network design and implementation
http://www.towardex.com | haesu@towardex.com
Cell: (978)394-2867     | Office: (978)263-3399 Ext. 170
Fax: (978)263-0033      | POC: HAESU-ARIN

On Wed, Nov 26, 2003 at 01:42:31PM -0700, Vector wrote:
> I've got a FreeBSD system set up and I'm using dummynet to manage bandwidth.
> Here is what I am seeing:
> 
> The server is on a 100Mbit Ethernet segment (fxp0 on the FreeBSD box), and
> the clients are on an 11Mbit wireless segment (wi0) that is being throttled
> with ipfw pipes.
> If I add pipes limiting my two clients A and B to 1Mbit each, here is what
> happens:
> 
> Client A does a transfer to/from the server and gets 1Mbps up and 1Mbps down
> Client B does a transfer to/from the server and gets 1Mbps up and 1Mbps down
> Clients A & B do simultaneous transfers to the server and each gets between
> 670 and 850 Kbps
> 
> If I delete the pipes and the firewall rules, they behave like regular
> 11Mbit unthrottled clients sharing the available wireless bandwidth
> (although not necessarily equally).
> 
> It gets worse when I run 3 or 4 clients at 1Mbit each. I've also tried
> setting up 4 clients at 512Kbps and the performance does the same thing:
> throughput gets cut significantly the more pipes we have. Here are the
> rules I'm using:
> 
> ipfw add 100 pipe 100 all from any to 192.168.1.50 xmit wi0
> ipfw add 100 pipe 5100 all from 192.168.1.50 to any recv wi0
> ipfw pipe 100 config bw 1024Kbits/s
> ipfw pipe 5100 config bw 1024Kbits/s
> 
> ipfw add 101 pipe 101 all from any to 192.168.1.51 xmit wi0
> ipfw add 101 pipe 5101 all from 192.168.1.51 to any recv wi0
> ipfw pipe 101 config bw 1024Kbits/s
> ipfw pipe 5101 config bw 1024Kbits/s
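> 
> I've also seen dummynet's mask feature suggested as the usual way to do
> per-client limits with a single pipe definition (each masked address gets
> its own dynamic queue at the configured bandwidth). I haven't tried it
> here, but as I understand the syntax it would look something like:
> 
> ipfw add 100 pipe 1 all from any to 192.168.1.0/24 xmit wi0
> ipfw add 100 pipe 2 all from 192.168.1.0/24 to any recv wi0
> ipfw pipe 1 config mask dst-ip 0x000000ff bw 1024Kbit/s
> ipfw pipe 2 config mask src-ip 0x000000ff bw 1024Kbit/s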
> 
> I've played with using in/out instead of recv/xmit (see the example below),
> and even with not specifying a direction at all (which cuts traffic to the
> client in half, while traffic from the client stays as high as when I
> specify which interface to throttle on). ipfw pipe list shows no dropped
> packets and everything looks normal apart from the slowdown with multiple
> clients. I'm not specifying a delay, and latency does not seem abnormally
> high.
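> 
> The in/out variant was along these lines (reconstructed from memory, so
> the exact syntax may be slightly off):
> 
> ipfw add 100 pipe 100 all from any to 192.168.1.50 out via wi0
> ipfw add 100 pipe 5100 all from 192.168.1.50 to any in via wi0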
> 
> I am using 5.0-RELEASE and I have HZ=1000 compiled into the kernel.
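> (That is, the standard kernel config option
> 
>     options HZ=1000
> 
> followed by a kernel rebuild.)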
> Here are my sysctl vars:
> net.inet.ip.fw.enable: 1
> net.inet.ip.fw.autoinc_step: 100
> net.inet.ip.fw.one_pass: 0
> net.inet.ip.fw.debug: 0
> net.inet.ip.fw.verbose: 0
> net.inet.ip.fw.verbose_limit: 1
> net.inet.ip.fw.dyn_buckets: 256
> net.inet.ip.fw.curr_dyn_buckets: 256
> net.inet.ip.fw.dyn_count: 2
> net.inet.ip.fw.dyn_max: 4096
> net.inet.ip.fw.static_count: 72
> net.inet.ip.fw.dyn_ack_lifetime: 300
> net.inet.ip.fw.dyn_syn_lifetime: 20
> net.inet.ip.fw.dyn_fin_lifetime: 1
> net.inet.ip.fw.dyn_rst_lifetime: 1
> net.inet.ip.fw.dyn_udp_lifetime: 10
> net.inet.ip.fw.dyn_short_lifetime: 5
> net.inet.ip.fw.dyn_keepalive: 1
> net.link.ether.bridge_ipfw: 0
> net.link.ether.bridge_ipfw_drop: 0
> net.link.ether.bridge_ipfw_collisions: 0
> net.link.ether.bdg_fw_avg: 0
> net.link.ether.bdg_fw_ticks: 0
> net.link.ether.bdg_fw_count: 0
> net.link.ether.ipfw: 0
> net.inet6.ip6.fw.enable: 0
> net.inet6.ip6.fw.debug: 0
> net.inet6.ip6.fw.verbose: 0
> net.inet6.ip6.fw.verbose_limit: 1
> 
> 
> net.inet.ip.dummynet.hash_size: 64
> net.inet.ip.dummynet.curr_time: 99067502
> net.inet.ip.dummynet.ready_heap: 16
> net.inet.ip.dummynet.extract_heap: 16
> net.inet.ip.dummynet.searches: 0
> net.inet.ip.dummynet.search_steps: 0
> net.inet.ip.dummynet.expire: 1
> net.inet.ip.dummynet.max_chain_len: 16
> net.inet.ip.dummynet.red_lookup_depth: 256
> net.inet.ip.dummynet.red_avg_pkt_size: 512
> net.inet.ip.dummynet.red_max_pkt_size: 1500
> 
> Am I just doing something stupid, or does the dummynet/QoS implementation in
> FreeBSD need some work? If so, I may be able to help and contribute.
> Thanks,
> 
> vec