Date:      Fri, 28 May 2010 12:52:38 +0200
From:      'Luigi Rizzo' <rizzo@iet.unipi.it>
To:        Nuno Diogo <nuno@diogonet.com>
Cc:        freebsd-ipfw@freebsd.org
Subject:   Re: Performance issue with new pipe profile feature in FreeBSD 8.0 RELEASE
Message-ID:  <20100528105238.GC19972@onelab2.iet.unipi.it>
In-Reply-To: <004b01caf8f0$82f78720$88e69560$@com>
References:  <005a01caf6a4$e8cf9c70$ba6ed550$@com> <AANLkTikfs5K4soO5G_WpkHrDCfArGRkwWmh8ZGEJ4mUI@mail.gmail.com> <20100521073601.GA58353@onelab2.iet.unipi.it> <004b01caf8f0$82f78720$88e69560$@com>

On Fri, May 21, 2010 at 10:18:33AM -0400, Nuno Diogo wrote:
> Thank you for the breakdown; I get it now, I hope.
> Delay is applied AFTER the bandwidth bottleneck, therefore emulating other
> hops the packet may have to traverse.
> Profile 'delay' is applied IN the bandwidth bottleneck, emulating overhead
> and unavailability for that one hop.
> You are right, that parameter should be called something besides 'delay'.
> Also, the diagram in "Dummynet Revisited", page three, shows "delay" being
> applied within the "bw" bottleneck instead of after, so that threw me off as
> well.
> 
> So unfortunately utilizing the profile delay distribution to emulate a
> typical internet connection's fluctuating latency such as my ping to yahoo
> below will not achieve accurate throughput emulation.
> Since you already have the code that varies the overhead based on empirical
> curve, how hard would it be to extend that mechanism to the delay so that
> these fluctuating latencies can be emulated with dummynet?

the correct way to emulate such latencies would be to add
an additional traffic source that sends packets to the same
pipe, and as a result causes queueing and delays to fluctuate.
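For instance, one could classify a second, bursty flow into the same pipe so
that its queueing perturbs the latency seen by the flow under test. A rough
sketch (rule numbers and the cross-traffic host 192.168.100.20 are hypothetical;
only the server address comes from the tests below):

```shell
# Both the flow under test and the cross-traffic share pipe 1, so bursts
# from the generator make the queueing delay fluctuate.
ipfw add 100 pipe 1 ip from any to 10.168.0.99 out
ipfw add 110 pipe 1 udp from 192.168.100.20 to 10.168.0.99 out  # cross-traffic
ipfw pipe 1 config bw 10Mbit/s queue 50

# On 192.168.100.20, generate on/off UDP bursts, e.g. with iperf:
# iperf -c 10.168.0.99 -u -b 2M -t 2   (repeated at random intervals)
```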

You can tweak the code in ip_dn_io.c::serve_sched()

                /* a regular mbuf received */
                done++;
                len_scaled = (bw == 0) ? 0 : hz *
                        (m->m_pkthdr.len * 8 + extra_bits(m, s));
                si->credit -= len_scaled;
                /* Move packet in the delay line */
---->           dn_tag_get(m)->output_time += s->link.delay ;
                mq_append(&si->dline.mq, m);

where the line marked adds the delay. However the change is
not entirely trivial because you should:
1. make sure that those variable delays do not cause packet reordering
   or other odd effects such as packets coming out of the link
   at a rate much higher than the nominal rate;
2. implement some mechanism to push into the kernel the parameters
   that control the delay variations;
3. make the configuration backward compatible.  

My estimate is that #2 and #3 require a fair amount of time
to design the thing correctly, and perhaps a bit of code refactoring
to reuse the existing mechanisms used to handle 'profiles'.
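To address caveat #1, the computed output time can be clamped so it never
precedes that of the previously enqueued packet. Below is a minimal userspace
sketch of that idea (the pkt struct, tick units, and the random jitter sampler
are invented for illustration; this is not the actual kernel change):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for dn_tag_get(m)->output_time (in ticks). */
struct pkt { long output_time; };

/*
 * Add a base link delay plus a variable extra delay to a packet,
 * clamping output times to be non-decreasing so that a small delay
 * drawn after a large one cannot reorder packets (caveat #1).
 */
static long schedule_pkt(struct pkt *p, long now, long base_delay,
                         long extra_delay, long *last_out)
{
    long t = now + base_delay + extra_delay;
    if (t < *last_out)          /* would overtake the previous packet */
        t = *last_out;
    *last_out = p->output_time = t;
    return t;
}

/* Returns 1 if 100 jittered packets come out in order, 0 otherwise. */
int check_no_reordering(void)
{
    struct pkt p[100];
    long last_out = 0, prev = 0;
    srandom(42);
    for (int i = 0; i < 100; i++) {
        long extra = random() % 50;   /* jitter sample, 0..49 ticks */
        long t = schedule_pkt(&p[i], i, 25, extra, &last_out);
        if (t < prev)
            return 0;                 /* reordering detected */
        prev = t;
    }
    return 1;
}
```

Pushing the jitter distribution into the kernel (caveats #2 and #3) would
still need the configuration plumbing discussed above.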

Unfortunate as it is, in dummynet and ipfw, 1/3 of the code is
related to packet processing, and 2/3 is for managing configuration.

> wc dummynet2/ip_dn_io.c  dummynet2/ip_dummynet.c 
     929    3781   26667 dummynet2/ip_dn_io.c
    2335    8534   60659 dummynet2/ip_dummynet.c
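As a back-of-the-envelope cross-check of the throughput figures quoted below
(my addition, not part of the original math): with a profile, each packet
occupies the link for len/bw plus the extra air time, so the achievable rate
for 1470-byte packets can be computed as:

```c
#include <assert.h>

/* Expected rate in kbit/s when each packet holds the link for
 * len/bw plus the profile's extra air time. */
static double profile_kbps(double len_bytes, double bw_mbps, double extra_ms)
{
    double tx_ms = len_bytes * 8.0 / (bw_mbps * 1000.0); /* len/bw, in ms */
    return len_bytes * 8.0 / (tx_ms + extra_ms);         /* bits/ms = kbit/s */
}
```

profile_kbps(1470, 10, 10) gives about 1052 kbit/s and profile_kbps(1470, 10, 2)
about 3703 kbit/s, which matches the ~1 Mbit/s and ~3.6 Mbit/s iperf results
reported in the quoted tests.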


cheers
luigi

> Can you point me to the source code that handles that?  I'm not a developer
> by any stretch of the imagination but maybe I can learn something while
> trying to hack at it?
> 
> Thank you for your reply and your time.
> 
> C:\Users\nuno>ping www.yahoo.com -t -l 1470
> 
> Pinging any-fp.wa1.b.yahoo.com [69.147.125.65] with 1470 bytes of data:
> Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=48ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=44ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=46ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=42ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=50ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=43ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=43ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=72ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=46ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=44ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=46ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=59ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=43ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=43ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49
> Reply from 69.147.125.65: bytes=1470 time=42ms TTL=49
> 
> Ping statistics for 69.147.125.65:
>     Packets: Sent = 22, Received = 22, Lost = 0 (0% loss),
> Approximate round trip times in milli-seconds:
>     Minimum = 42ms, Maximum = 72ms, Average = 46ms
> 
> _______________________________________________________________________________
> Nuno Diogo
> 
> 
> -----Original Message-----
> From: Luigi Rizzo [mailto:rizzo@iet.unipi.it] 
> Sent: Friday, May 21, 2010 3:36 AM
> To: Nuno Diogo
> Cc: freebsd-ipfw@freebsd.org
> Subject: Re: Performance issue with new pipe profile feature in FreeBSD 8.0 RELEASE
> 
> top post for convenience:
> 
> you are making a common mistake -- "delay" and "profile" are
> not the same thing.
> + With "delay" you set the propagation delay of the link: once a
>   packet is outside of the bottleneck, it takes some extra time to
>   reach its destination. However, during this time, other traffic
>   will flow through the bottleneck;
> + with "profile" you specify a distribution of the extra time that
>   the packet will take to go through the bottleneck link (e.g.
>   due to preambles, crc, framing and other stuff). The bottleneck
>   is effectively unavailable for other traffic during this time.
> 
> So the throughput you measure with a "profile" of X ms is usually
> much lower than the one you see with a "delay" of X ms.
> 
> cheers
> luigi
> 
> On Thu, May 20, 2010 at 06:56:41PM -0400, Nuno Diogo wrote:
> > Hi all,
> > Sorry to spam the list with this issue, but I do believe that this is not
> > working as intended so I performed some more testing in a controlled
> > environment.
> > Using a dedicated FreeBSD 8-RELEASE-p2 i386 with GENERIC kernel + the
> > following additions:
> > 
> >    - options HZ=2000
> >    - device if_bridge
> >    - options IPFIREWALL
> >    - options IPFIREWALL_DEFAULTS_TO_ACCEPT
> >    - options DUMMYNET
> > 
> > Routing between VR0 and EM0 interfaces.
> > Iperf TCP transfers between a Win 7 laptop and a Linux virtual server.
> > Only one variable changed at a time:
> > 
> > #So let's start with your typical pipe rule using bandwidth and delay
> > statement:
> > 
> > *Test 1 with 10Mbps 10ms:*
> > 
> > #Only one rule pushing packets to PIPE 1 if they're passing between these
> > two specific interfaces
> > FreeBSD-Test# ipfw list
> > 0100 pipe 1 ip from any to any recv em0 xmit vr0
> > 65535 allow ip from any to any
> > 
> > #Pipe configured with 10M bandwidth, 10ms delay and 50 slot queue
> > FreeBSD-Test# ipfw pipe 1 show
> > 00001:  10.000 Mbit/s   10 ms   50 sl. 1 queues (1 buckets) droptail
> >          burst: 0 Byte
> >     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
> >   0 icmp  192.168.100.10/0         10.168.0.99/0     112431 154127874  0    0  168
> > 
> >  #Traceroute from laptop to server showing just that one hop
> > C:\Users\nuno>tracert -d 10.168.0.99
> > Tracing route to 10.168.0.99 over a maximum of 30 hops
> >   1    <1 ms    <1 ms    <1 ms  192.168.100.1
> >   2    10 ms    10 ms    10 ms  10.168.0.99
> > Trace complete.
> > 
> > #Ping result for 1470 byte packet
> > C:\Users\nuno>ping 10.168.0.99 -t -l 1470
> > 
> > 
> > 
> > Pinging 10.168.0.99 with 1470 bytes of data:
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > 
> > #Iperf performance; as we can see, it utilizes the entire emulated pipe
> > 
> > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000
> > 
> > ------------------------------------------------------------
> > 
> > Client connecting to 10.168.0.99, TCP port 5001
> > 
> > TCP window size: 63.0 KByte (default)
> > 
> > ------------------------------------------------------------
> > 
> > [148] local 192.168.100.10 port 49225 connected with 10.168.0.99 port 5001
> > 
> > [ ID] Interval       Transfer     Bandwidth
> > 
> > [148]  0.0- 1.0 sec  1392 KBytes  11403 Kbits/sec
> > 
> > [148]  1.0- 2.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  2.0- 3.0 sec  1192 KBytes  9765 Kbits/sec
> > 
> > [148]  3.0- 4.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  4.0- 5.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  5.0- 6.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  6.0- 7.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  7.0- 8.0 sec  1176 KBytes  9634 Kbits/sec
> > 
> > [148]  8.0- 9.0 sec  1192 KBytes  9765 Kbits/sec
> > 
> > [148]  9.0-10.0 sec  1200 KBytes  9830 Kbits/sec
> > 
> > [148] 10.0-11.0 sec  1120 KBytes  9175 Kbits/sec
> > 
> > [148] 11.0-12.0 sec  1248 KBytes  10224 Kbits/sec
> > 
> > [148] 12.0-13.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 13.0-14.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 14.0-15.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 15.0-16.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 16.0-17.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 17.0-18.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 18.0-19.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 19.0-20.0 sec  1192 KBytes  9765 Kbits/sec
> > 
> > 
> > 
> > #Now let's configure the same emulation (from my understanding) but with a
> > profile
> > 
> > FreeBSD-Test# cat ./profile
> > 
> > name Test
> > 
> > samples 100
> > 
> > bw 10M
> > 
> > loss-level 1.0
> > 
> > prob delay
> > 
> > 0.00 10
> > 
> > 1.00 10
> > 
> > 
> > #Pipe 1 configured with the above profile file and no additional bandwidth
> > or delay parameters
> > 
> > FreeBSD-Test# ipfw pipe 1 show
> > 
> > 00001:  10.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
> > 
> >          burst: 0 Byte
> > 
> >          profile: name "Test" loss 1.000000 samples 100
> > 
> >     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> > 
> > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
> >
> >   0 icmp  192.168.100.10/0         10.168.0.99/0     131225 181884981  0    0  211
> > 
> > 
> > #Ping time for a 1470 byte packet remains the same
> > 
> > C:\Users\nuno>ping 10.168.0.99 -t -l 1470
> > 
> > 
> > 
> > Pinging 10.168.0.99 with 1470 bytes of data:
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=14ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=11ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
> > 
> > #Iperf transfer however drops considerably!
> > 
> > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000
> > 
> > ------------------------------------------------------------
> > 
> > Client connecting to 10.168.0.99, TCP port 5001
> > 
> > TCP window size: 63.0 KByte (default)
> > 
> > ------------------------------------------------------------
> > 
> > [148] local 192.168.100.10 port 49226 connected with 10.168.0.99 port 5001
> > 
> > [ ID] Interval       Transfer     Bandwidth
> > 
> > [148]  0.0- 1.0 sec   248 KBytes  2032 Kbits/sec
> > 
> > [148]  1.0- 2.0 sec  56.0 KBytes   459 Kbits/sec
> > 
> > [148]  2.0- 3.0 sec   176 KBytes  1442 Kbits/sec
> > 
> > [148]  3.0- 4.0 sec   128 KBytes  1049 Kbits/sec
> > 
> > [148]  4.0- 5.0 sec   120 KBytes   983 Kbits/sec
> > 
> > [148]  5.0- 6.0 sec   128 KBytes  1049 Kbits/sec
> > 
> > [148]  6.0- 7.0 sec   128 KBytes  1049 Kbits/sec
> > 
> > [148]  7.0- 8.0 sec  96.0 KBytes   786 Kbits/sec
> > 
> > [148]  8.0- 9.0 sec   144 KBytes  1180 Kbits/sec
> > 
> > [148]  9.0-10.0 sec   128 KBytes  1049 Kbits/sec
> > 
> > [148] 10.0-11.0 sec   128 KBytes  1049 Kbits/sec
> > 
> > [148] 11.0-12.0 sec   120 KBytes   983 Kbits/sec
> > 
> > [148] 12.0-13.0 sec   120 KBytes   983 Kbits/sec
> > 
> > [148] 13.0-14.0 sec   128 KBytes  1049 Kbits/sec
> > 
> > [148] 14.0-15.0 sec   120 KBytes   983 Kbits/sec
> > 
> > [148] 15.0-16.0 sec   128 KBytes  1049 Kbits/sec
> > 
> > [148] 16.0-17.0 sec   120 KBytes   983 Kbits/sec
> > 
> > [148] 17.0-18.0 sec   120 KBytes   983 Kbits/sec
> > 
> > [148] 18.0-19.0 sec   128 KBytes  1049 Kbits/sec
> > 
> > [148] 19.0-20.0 sec  64.0 KBytes   524 Kbits/sec
> > 
> > 
> > Let's do the same thing, but this time reducing the emulated latency down to
> > just 2ms.
> > *Test 2 with 10Mbps 2ms:*
> > #Pipe 1 configured for 10Mbps bandwidth, 2ms latency and 50 slot queue
> > 
> > FreeBSD-Test# ipfw pipe 1 show
> > 
> > 00001:  10.000 Mbit/s    2 ms   50 sl. 1 queues (1 buckets) droptail
> > 
> >          burst: 0 Byte
> > 
> >     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> > 
> > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
> >
> >   0 icmp  192.168.100.10/0         10.168.0.99/0     21020 19358074  0    0  123
> > 
> > 
> > #Ping time from laptop to server
> > 
> > C:\Users\nuno>ping 10.168.0.99 -t -l 1470
> > 
> > 
> > 
> > Pinging 10.168.0.99 with 1470 bytes of data:
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > 
> > #Iperf throughput; again we can use all of the emulated bandwidth
> > 
> > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000
> > 
> > ------------------------------------------------------------
> > 
> > Client connecting to 10.168.0.99, TCP port 5001
> > 
> > TCP window size: 63.0 KByte (default)
> > 
> > ------------------------------------------------------------
> > 
> > [148] local 192.168.100.10 port 49196 connected with 10.168.0.99 port 5001
> > 
> > [ ID] Interval       Transfer     Bandwidth
> > 
> > [148]  0.0- 1.0 sec  1264 KBytes  10355 Kbits/sec
> > 
> > [148]  1.0- 2.0 sec  1192 KBytes  9765 Kbits/sec
> > 
> > [148]  2.0- 3.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  3.0- 4.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  4.0- 5.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  5.0- 6.0 sec  1192 KBytes  9765 Kbits/sec
> > 
> > [148]  6.0- 7.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  7.0- 8.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  8.0- 9.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148]  9.0-10.0 sec  1152 KBytes  9437 Kbits/sec
> > 
> > [148] 10.0-11.0 sec  1240 KBytes  10158 Kbits/sec
> > 
> > [148] 11.0-12.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 12.0-13.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 13.0-14.0 sec  1176 KBytes  9634 Kbits/sec
> > 
> > [148] 14.0-15.0 sec   984 KBytes  8061 Kbits/sec
> > 
> > [148] 15.0-16.0 sec  1192 KBytes  9765 Kbits/sec
> > 
> > [148] 16.0-17.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 17.0-18.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 18.0-19.0 sec  1184 KBytes  9699 Kbits/sec
> > 
> > [148] 19.0-20.0 sec  1208 KBytes  9896 Kbits/sec
> > 
> > 
> > #Now let's configure the profile file to emulate 10Mbps and 2ms of added
> > overhead
> > 
> > FreeBSD-Test# cat ./profile
> > 
> > name Test
> > 
> > samples 100
> > 
> > bw 10M
> > 
> > loss-level 1.0
> > 
> > prob delay
> > 
> > 0.00 2
> > 1.00 2
> > 
> > 
> > 
> > #Pipe 1 configured with the above profile file and no additional bandwidth
> > or delay parameters
> > 
> > FreeBSD-Test# ipfw pipe 1 show
> > 
> > 00001:  10.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
> > 
> >          burst: 0 Byte
> > 
> >          profile: name "Test" loss 1.000000 samples 100
> > 
> >     mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> > 
> > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
> >   0 icmp  192.168.100.10/0         10.168.0.99/0     39570 46750171  0    0  186
> > 
> > #Again, ping remains constant with this configuration
> > 
> > C:\Users\nuno>ping 10.168.0.99 -t -l 1470
> > 
> > 
> > 
> > Pinging 10.168.0.99 with 1470 bytes of data:
> > 
> > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
> > 
> > 
> > #Iperf throughput again takes a big hit, although not as much as when we're
> > adding 10ms of overhead
> > 
> > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000
> > 
> > ------------------------------------------------------------
> > 
> > Client connecting to 10.168.0.99, TCP port 5001
> > 
> > TCP window size: 63.0 KByte (default)
> > 
> > ------------------------------------------------------------
> > 
> > [148] local 192.168.100.10 port 49197 connected with 10.168.0.99 port 5001
> > 
> > [ ID] Interval       Transfer     Bandwidth
> > 
> > [148]  0.0- 1.0 sec   544 KBytes  4456 Kbits/sec
> > 
> > [148]  1.0- 2.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148]  2.0- 3.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148]  3.0- 4.0 sec   432 KBytes  3539 Kbits/sec
> > 
> > [148]  4.0- 5.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148]  5.0- 6.0 sec   448 KBytes  3670 Kbits/sec
> > 
> > [148]  6.0- 7.0 sec   432 KBytes  3539 Kbits/sec
> > 
> > [148]  7.0- 8.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148]  8.0- 9.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148]  9.0-10.0 sec   448 KBytes  3670 Kbits/sec
> > 
> > [148] 10.0-11.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148] 11.0-12.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148] 12.0-13.0 sec   392 KBytes  3211 Kbits/sec
> > 
> > [148] 13.0-14.0 sec   488 KBytes  3998 Kbits/sec
> > 
> > [148] 14.0-15.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148] 15.0-16.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148] 16.0-17.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148] 17.0-18.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148] 18.0-19.0 sec   440 KBytes  3604 Kbits/sec
> > 
> > [148] 19.0-20.0 sec   448 KBytes  3670 Kbits/sec
> > 
> > 
> > From my understanding, since the emulated RTT of the link remains the same,
> > Iperf performance should also stay the same.
> > 
> > Regardless of how or why the RTT is present (geographically induced
> > latency, MAC overhead, congestion, etc.), the effects on a TCP transmission
> > should be the same (assuming, as in this test, no jitter and no packet loss).
> > 
> > 
> > On the first test we see throughput drop from ~9.7Mbps to 980Kbps-1050Kbps
> > with the addition of just 10ms of overhead in the profile!
> > 
> > On the second test we see throughput drop from ~9.7Mbps to ~3.6Mbps with the
> > addition of just 2ms of overhead in the profile!
> > 
> > So is this feature not working as intended or am I completely missing
> > something here?
> > 
> > 
> > I (and hopefully others) would highly appreciate any opinions, as this new
> > feature could really expand the use of dummynet as a WAN emulator, but it
> > seems that in its current implementation it does not allow for full
> > utilization of the emulated bandwidth, regardless of how little or how
> > static the extra delay is set to.
> > 
> > 
> > Sincerely,
> > 
> > Nuno Diogo
> > 
> > On Tue, May 18, 2010 at 12:12 PM, Nuno Diogo <nuno@diogonet.com> wrote:
> > 
> > >  Hi all,
> > >
> > > I'm encountering the same situation, and I'm not quite understanding
> > > Luigi's explanation.
> > >
> > > If a pipe is configured with 10Mbps bandwidth and 25ms delay, it will take
> > > approximately 26.2ms for a 1470 byte packet to pass through it, as per the
> > > below math.
> > >
> > > IPerf can fully utilize the available emulated bandwidth with that delay.
> > >
> > >
> > >
> > > If we configure a profile with the same characteristics, 10Mbps and 25ms
> > > overhead/extra-airtime/delay, isn't the end result the same?
> > >
> > > A 1470 byte packet should still take ~26.2ms to pass through the pipe and
> > > IPerf should still be able to fully utilize the emulated bandwidth, no?
> > >
> > >
> > >
> > > IPerf does not know how that delay is being emulated or configured, it just
> > > knows that it's taking ~26.2ms to get ACKs back etc., so I guess I'm missing
> > > something here?
> > >
> > >
> > >
> > > I use dummynet often for WAN acceleration testing, and have been trying to
> > > use the new profile method to try and emulate "jitter".
> > >
> > > With pings it works great, but when trying to use the full configured
> > > bandwidth, I get the same results as Charles.
> > >
> > > Regardless of delay/overhead/bandwidth configuration, IPerf can't push more
> > > than a fraction of the configured bandwidth, with lots of packets queuing
> > > and dropping.
> > >
> > >
> > >
> > > Your patience is appreciated.
> > >
> > >
> > >
> > > Sincerely,
> > >
> > >
> > >
> > >
> > >
> _______________________________________________________________________________
> > >
> > > Nuno Diogo
> > >
> > >
> > >
> > > Luigi Rizzo
> > > Tue, 24 Nov 2009 21:21:56 -0800
> > >
> > > Hi,
> > >
> > > there is no bug, the 'pipe profile' code is working correctly.
> > >
> > >
> > >
> > > In your mail below you are comparing two different things.
> > >
> > >
> > >
> > >    "pipe config bw 10Mbit/s delay 25ms"
> > >
> > >         means that _after shaping_ at 10Mbps, all traffic will
> > >
> > >         be subject to an additional delay of 25ms.
> > >
> > >         Each packet (1470 bytes) will take Length/Bandwidth sec
> > >
> > >         to come out, or 1470*8/10M = 1.176ms, but you won't
> > >
> > >         see them until you wait another 25ms (7500km at the speed
> > >
> > >         of light).
> > >
> > >
> > >
> > >    "pipe config bw 10Mbit/s profile "test" ..."
> > >
> > >         means that in addition to the Length/Bandwidth,
> > >
> > >         _each packet transmission_ will consume
> > >
> > >         some additional air-time as specified in the profile
> > >
> > >         (25ms in your case)
> > >
> > >
> > >
> > >         So, in your case with 1470 bytes/pkt each transmission
> > >
> > >         will take len/bw (1.176ms) + 25ms (extra air time) = 26.176ms
> > >
> > >         That is about 22 times more than the previous case and explains
> > >
> > >         the reduced bandwidth you see.
> > >
> > >
> > >
> > > The 'delay profile' is effectively extra air time used for each
> > >
> > > transmission. The name is probably confusing; I should have called
> > >
> > > it 'extra-time' or 'overhead' and not 'delay'.
> > >
> > >
> > >
> > > cheers
> > >
> > > luigi
> > >
> > >
> > >
> > > On Tue, Nov 24, 2009 at 12:40:31PM -0500, Charles Henri de Boysson wrote:
> > >
> > > > Hi,
> > >
> > > >
> > >
> > > > I have a simple setup with two computer connected via a FreeBSD bridge
> > >
> > > > running 8.0 RELEASE.
> > >
> > > > I am trying to use dummynet to simulate a wireless network between the
> > >
> > > > two and for that I wanted to use the pipe profile feature of FreeBSD
> > >
> > > > 8.0.
> > >
> > > > But as I was experimenting with the pipe profile feature I ran into some
> > >
> > > > issues.
> > >
> > > >
> > >
> > > > I have set up ipfw to send traffic coming from either interface of the
> > >
> > > > bridge to a respective pipe as follows:
> > >
> > > >
> > >
> > > > # ipfw show
> > >
> > > > 00100     0         0 allow ip from any to any via lo0
> > >
> > > > 00200     0         0 deny ip from any to 127.0.0.0/8
> > >
> > > > 00300     0         0 deny ip from 127.0.0.0/8 to any
> > >
> > > > 01000     0         0 pipe 1 ip from any to any via vr0 layer2
> > >
> > > > 01100     0         0 pipe 101 ip from any to any via vr4 layer2
> > >
> > > > 65000  7089    716987 allow ip from any to any
> > >
> > > > 65535     0         0 deny ip from any to any
> > >
> > > >
> > >
> > > > When I set up my pipes as follows:
> > >
> > > >
> > >
> > > > # ipfw pipe 1 config bw 10Mbit delay 25 mask proto 0
> > >
> > > > # ipfw pipe 101 config bw 10Mbit delay 25 mask proto 0
> > >
> > > > # ipfw pipe show
> > >
> > > >
> > >
> > > > 00001:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
> > >
> > > > burst: 0 Byte
> > >
> > > > 00101:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
> > >
> > > > burst: 0 Byte
> > >
> > > >
> > >
> > > > With this setup, when I try to pass traffic through the bridge with
> > >
> > > > iperf, I obtain the desired speed: iperf reports about 9.7Mbits/sec in
> > >
> > > > UDP mode and 9.5 in TCP mode (I copied and pasted the iperf runs at
> > >
> > > > the end of this email).
> > >
> > > >
> > >
> > > > The problem arises when I set up pipe 1 (the downlink) with an
> > >
> > > > equivalent profile (I tried to simplify it as much as possible).
> > >
> > > >
> > >
> > > > # ipfw pipe 1 config profile test.pipeconf   mask proto 0
> > >
> > > > # ipfw pipe show
> > >
> > > > 00001:  10.000 Mbit/s    0 ms   50 sl. 0 queues (1 buckets) droptail
> > >
> > > >        burst: 0 Byte
> > >
> > > >        profile: name "test" loss 1.000000 samples 2
> > >
> > > > 00101:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
> > >
> > > >        burst: 0 Byte
> > >
> > > >
> > >
> > > > # cat test.pipeconf
> > >
> > > > name        test
> > >
> > > > bw          10Mbit
> > >
> > > > loss-level  1.0
> > >
> > > > samples     2
> > >
> > > > prob        delay
> > >
> > > > 0.0         25
> > >
> > > > 1.0         25
> > >
> > > >
> > >
> > > > The same iperf TCP tests then collapse to about 500Kbit/s with the
> > >
> > > > same settings (I copied and pasted the output of iperf below)
> > >
> > > >
> > >
> > > > I can't figure out what is going on. There is no visible load on the bridge.
> > >
> > > > I have an unmodified GENERIC kernel with the following sysctl.
> > >
> > > >
> > >
> > > > net.link.bridge.ipfw: 1
> > >
> > > > kern.hz: 1000
> > >
> > > >
> > >
> > > > The bridge configuration is as follows:
> > >
> > > >
> > >
> > > > bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
> > >
> > > > ether 1a:1f:2e:42:74:8d
> > >
> > > > id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
> > >
> > > > maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200
> > >
> > > > root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
> > >
> > > > member: vr4 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
> > >
> > > >         ifmaxaddr 0 port 6 priority 128 path cost 200000
> > >
> > > > member: vr0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
> > >
> > > >         ifmaxaddr 0 port 2 priority 128 path cost 200000
> > >
> > > >
> > >
> > > >
> > >
> > > > iperf runs without the profile set:
> > >
> > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
> > >
> > > > ------------------------------------------------------------
> > >
> > > > Client connecting to 10.0.0.254, TCP port 5001
> > >
> > > > Binding to local address 10.1.0.1
> > >
> > > > TCP window size: 16.0 KByte (default)
> > >
> > > > ------------------------------------------------------------
> > >
> > > > [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> > >
> > > > [ ID] Interval       Transfer     Bandwidth
> > >
> > > > [  3]  0.0-15.0 sec  17.0 MBytes  9.49 Mbits/sec
> > >
> > > >
> > >
> > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
> > >
> > > > ------------------------------------------------------------
> > >
> > > > Client connecting to 10.0.0.254, UDP port 5001
> > >
> > > > Binding to local address 10.1.0.1
> > >
> > > > Sending 1470 byte datagrams
> > >
> > > > UDP buffer size:   110 KByte (default)
> > >
> > > > ------------------------------------------------------------
> > >
> > > > [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> > >
> > > > [ ID] Interval       Transfer     Bandwidth
> > >
> > > > [  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
> > >
> > > > [  3] Sent 13382 datagrams
> > >
> > > > [  3] Server Report:
> > >
> > > > [  3]  0.0-15.1 sec  17.4 MBytes  9.72 Mbits/sec  0.822 ms  934/13381 (7%)
> > >
> > > > [  3]  0.0-15.1 sec  1 datagrams received out-of-order
> > >
> > > >
> > >
> > > >
> > >
> > > > iperf runs with the profile set:
> > >
> > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
> > >
> > > > ------------------------------------------------------------
> > >
> > > > Client connecting to 10.0.0.254, TCP port 5001
> > >
> > > > Binding to local address 10.1.0.1
> > >
> > > > TCP window size: 16.0 KByte (default)
> > >
> > > > ------------------------------------------------------------
> > >
> > > > [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> > >
> > > > [ ID] Interval       Transfer     Bandwidth
> > >
> > > > [  3]  0.0-15.7 sec    968 KBytes    505 Kbits/sec
> > >
> > > >
> > >
> > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
> > >
> > > > ------------------------------------------------------------
> > >
> > > > Client connecting to 10.0.0.254, UDP port 5001
> > >
> > > > Binding to local address 10.1.0.1
> > >
> > > > Sending 1470 byte datagrams
> > >
> > > > UDP buffer size:   110 KByte (default)
> > >
> > > > ------------------------------------------------------------
> > >
> > > > [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> > >
> > > > [ ID] Interval       Transfer     Bandwidth
> > >
> > > > [  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
> > >
> > > > [  3] Sent 13382 datagrams
> > >
> > > > [  3] Server Report:
> > >
> > > > [  3]  0.0-16.3 sec    893 KBytes    449 Kbits/sec  1.810 ms  12757/13379 (95%)
> > >
> > > >
> > >
> > > >
> > >
> > > > Let me know what other information you would need to help me debug this.
> > >
> > > > In advance, thank you for your help
> > >
> > > >
> > >
> > > > --
> > >
> > > > Charles-Henri de Boysson
> > >
> > > > _______________________________________________
> > >
> > > > freebsd-ipfw@freebsd.org mailing list
> > >
> > > > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw
> > >
> > > > To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org"
> > >
> > >
> > >
> > >
> > 
> > 
> > 
> > -- 
> >
> -------------------------------------------------------------------------------------------------
> > 
> > Nuno Diogo


