Date:      Wed, 28 Jan 2009 22:36:14 +0200
From:      KES <kes-kes@yandex.ru>
To:        Ian Smith <smithi@nimnet.asn.au>
Cc:        Sebastian Mellmann <sebastian.mellmann@net.t-labs.tu-berlin.de>, freebsd-questions@freebsd.org
Subject:   Re[3]: IPFW DUMMYNET: Several pipes after each other
Message-ID:  <804368613.20090128223614@yandex.ru>
In-Reply-To: <20090129014910.V86094@sola.nimnet.asn.au>
References:  <58305.62.206.221.107.1233071856.squirrel@anubis.getmyip.com> <20090128183250.O86094@sola.nimnet.asn.au> <5510133465.20090128101516@yandex.ru> <20090129014910.V86094@sola.nimnet.asn.au>

Hello, Ian.

You wrote on 28 January 2009 at 18:01:45:

IS> On Wed, 28 Jan 2009, KES wrote:

 >> Hello, Ian.
 >> 
 >> Maybe this will be useful for you

IS> Yes, but I need to read it more times :)  Nicely answers the question 
IS> about stats per flow/queue anyway, not too hard to parse for logging.

 >> #1. ping -D -S 10.10.16.16 -s 1472 -i 0.01 10.0.16.1
 >> #2. ping -S 10.10.16.17 10.0.16.1

IS> Results suggest that #1 was -S 10.10.16.19 ?  A script running the same
IS> number of #2 before killing #1 (or such) would make comparisons between
IS> different runs easier to follow maybe?

IS> Thanks, lots of useful info; hoping to try some weighted queueing soon.

Yes, you are right, it was -S 10.10.16.19.
Both pings were run simultaneously.

I have experimented with pipes after pipes. With dummynet it is possible
to do the following: put two flows into a 512 Kbit/s pipe and the pipe will be
divided into equal parts of 256 Kbit/s each; if only one flow is active, it
will get 320 Kbit/s (the limit of the smaller pipe).

pipe 1 bw 512kbit
queue 1 pipe 1
pipe 2 bw 320kbit

ipfw add 1 pipe 2 all from any to any
ipfw add 2 queue 1 all from any to any
First, packets are piped to 320 Kbit/s, then they are queued into the
512 Kbit/s pipe. Because a single 320 Kbit/s flow is less than 512 Kbit/s,
its packets leave the queue at 320 Kbit/s. With two flows, both are piped
to 320 Kbit/s, which is 640 Kbit/s in total; because the queue's bandwidth
is only 512 Kbit/s, some packets are dropped, which brings each flow down
to 256 Kbit/s.
NOTICE:
A) you must create a separate pipe for each flow, so you must use mask 0xFFFFFFFF.
I use:  pipe 1 config bw 512k mask src-ip 0xffffffff gred 0.002/10/30/0.1
        pipe 2 config bw 320k mask src-ip 0xffffffff gred 0.002/10/30/0.1
B) you must put all flows into one queue so that they share the available
bandwidth, so you must use mask 0x00000000.
I use:  queue 1 config pipe 1 mask src-ip 0x00000000 gred 0.002/10/30/0.1
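
For clarity, here is the whole setup collected in one place. This is just a
sketch of what I describe above; I assume net.inet.ip.fw.one_pass=0 so that a
packet leaving dummynet continues with the next rule and also hits the queue:

# per-source-IP dynamic pipes (hard limits) and one shared queue on pipe 1
ipfw pipe 1 config bw 512Kbit/s mask src-ip 0xffffffff gred 0.002/10/30/0.1
ipfw pipe 2 config bw 320Kbit/s mask src-ip 0xffffffff gred 0.002/10/30/0.1
ipfw queue 1 config pipe 1 mask src-ip 0x00000000 gred 0.002/10/30/0.1

# first the per-flow 320Kbit/s limit, then the shared 512Kbit/s queue
ipfw add 1 pipe 2 all from any to any
ipfw add 2 queue 1 all from any to any

# needed so that a packet coming out of pipe 2 is matched against rule 2
sysctl net.inet.ip.fw.one_pass=0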

Keep in mind what ipfw(8) says:
     In practice, pipes can be used to set hard limits to the bandwidth that a
     flow can use, whereas queues can be used to determine how different flows
     share the available bandwidth.

So when you give a user a dedicated amount of bandwidth, put that user's flow
into a pipe. If several users should share some bandwidth, put their flows
into a queue.
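
For example, a sketch of both approaches (the pipe/queue numbers and the
10.10.16.0/24 network are made up for illustration):

# hard limit: each source IP gets its own 256Kbit/s dynamic pipe
ipfw pipe 10 config bw 256Kbit/s mask src-ip 0xffffffff
ipfw add 100 pipe 10 all from 10.10.16.0/24 to any

# sharing: all sources share one 1Mbit/s pipe, divided between the
# per-source dynamic queues according to their weights
ipfw pipe 20 config bw 1Mbit/s
ipfw queue 20 config pipe 20 weight 10 mask src-ip 0xffffffff
ipfw add 200 queue 20 all from 10.10.16.0/24 to any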



Suggestion: what does a queue inherit from its pipe?
It seems a queue inherits only the bandwidth parameter from the pipe. If so,
it is tedious to create an otherwise useless pipe just to inherit the bw
parameter. It would be handy to write this parameter directly in the queue
and remove the (in my opinion deprecated) 'pipe' option from queue config.
In any case, how a pipe is coupled with a queue is a "black box"; this
section of the man page is unclear.



Also I noticed the following BUG (quoting ipfw(8) on fast mode):
     There are two modes of dummynet operation: normal and fast.  Normal mode
     tries to emulate real link: dummynet scheduler ensures packet will not
     leave pipe faster than it would be on real link with given bandwidth.
     Fast mode allows certain packets to bypass dummynet scheduler (if packet
     flow does not exceed pipe's bandwidth). Thus fast mode requires less cpu
     cycles per packet (in average) but packet latency can be significantly
     lower comparing to real link with same bandwidth. Default is normal mode,
     fast mode can be enabled by setting net.inet.ip.dummynet.io_fast
     sysctl(8) variable to non-zero value.


kes# ping 10.10.16.18
PING 10.10.16.18 (10.10.16.18): 56 data bytes
64 bytes from 10.10.16.18: icmp_seq=0 ttl=128 time=18.441 ms
64 bytes from 10.10.16.18: icmp_seq=1 ttl=128 time=11.501 ms
64 bytes from 10.10.16.18: icmp_seq=2 ttl=128 time=11.516 ms
64 bytes from 10.10.16.18: icmp_seq=3 ttl=128 time=11.557 ms
64 bytes from 10.10.16.18: icmp_seq=4 ttl=128 time=11.534 ms
^C
--- 10.10.16.18 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 11.501/12.910/18.441/2.766 ms
#ipfw pipe 1 show
00001:  65.536 Kbit/s    0 ms    5 sl. 12 queues (64 buckets)
          GRED w_q 0.001999 min_th 10 max_th 30 max_p 0.099991
    mask: 0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
kes# ipfw add 1 pipe 1 all from 10.10.16.1 to 10.10.16.18
00001 pipe 1 ip from 10.10.16.1 to 10.10.16.18
kes# ping -s 1472 10.10.16.18
PING 10.10.16.18 (10.10.16.18): 1472 data bytes
1480 bytes from 10.10.16.18: icmp_seq=0 ttl=128 time=192.354 ms
1480 bytes from 10.10.16.18: icmp_seq=1 ttl=128 time=184.393 ms
1480 bytes from 10.10.16.18: icmp_seq=2 ttl=128 time=184.614 ms
1480 bytes from 10.10.16.18: icmp_seq=3 ttl=128 time=184.217 ms
1480 bytes from 10.10.16.18: icmp_seq=4 ttl=128 time=184.402 ms
^C
--- 10.10.16.18 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 184.217/185.996/192.354/3.181 ms

As I described earlier:
the bandwidth of this pipe is about 8 Kbytes/s, i.e. 8 Kbytes are transferred
per second, so a 1500-byte packet takes about 1500/8000 ≈ 0.187 s to transfer.
You can see time≈184 ms in the ping results. All is OK.
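
A quick check of that number with bc, using the 65.536 Kbit/s figure that
"ipfw pipe 1 show" reports above:

# serialization time of one 1500-byte packet through a 65.536 Kbit/s pipe
echo "scale=3; 1500 * 8 / 65536" | bc    # -> .183 seconds, matching the ~184 ms above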

Now when I enable fast mode:
kes# sysctl net.inet.ip.dummynet.io_fast=1
net.inet.ip.dummynet.io_fast: 0 -> 1
kes# ping -s 1472 10.10.16.18
PING 10.10.16.18 (10.10.16.18): 1472 data bytes
1480 bytes from 10.10.16.18: icmp_seq=0 ttl=128 time=191.224 ms
1480 bytes from 10.10.16.18: icmp_seq=1 ttl=128 time=183.568 ms
1480 bytes from 10.10.16.18: icmp_seq=2 ttl=128 time=183.595 ms
^C
--- 10.10.16.18 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 183.568/186.129/191.224/3.603 ms

As you can see, there is no difference from the previous test ping.
BUT!!!
The "packet flow does not exceed pipe's bandwidth": it is 1500 bytes out of 8000.
So I EXPECT that "packets bypass dummynet scheduler" and "packet latency
can be significantly lower".

The real link speed is 100 Mbit/s, so the EXPECTED time is the same as without a pipe:
kes# ping -s 1472 10.10.16.18
PING 10.10.16.18 (10.10.16.18): 1472 data bytes
1480 bytes from 10.10.16.18: icmp_seq=0 ttl=128 time=1.255 ms
1480 bytes from 10.10.16.18: icmp_seq=1 ttl=128 time=0.620 ms
1480 bytes from 10.10.16.18: icmp_seq=2 ttl=128 time=0.624 ms
^C
--- 10.10.16.18 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.620/0.833/1.255/0.298 ms


So if anyone knows, please answer:
1. "certain packets to bypass dummynet scheduler"
   Which packets will bypass the scheduler?
2. In my case the "packet flow does not exceed pipe's bandwidth", so
   "latency MUST be significantly lower", but it IS NOT. Why?


-- 
Best regards,
 KES                          mailto:kes-kes@yandex.ru



