Date:      Mon, 05 Oct 2009 10:25:56 -0700
From:      Julian Elischer <julian@elischer.org>
To:        rihad <rihad@mail.ru>
Cc:        freebsd-net@freebsd.org, Luigi Rizzo <rizzo@iet.unipi.it>
Subject:   Re: dummynet dropping too many packets
Message-ID:  <4ACA2C24.2060205@elischer.org>
In-Reply-To: <4AC9E29B.6080908@mail.ru>
References:  <4AC8A76B.3050502@mail.ru>	<20091005025521.GA52702@svzserv.kemerovo.su>	<20091005061025.GB55845@onelab2.iet.unipi.it>	<4AC9B400.9020400@mail.ru>	<20091005090102.GA70430@svzserv.kemerovo.su>	<4AC9BC5A.50902@mail.ru>	<20091005095600.GA73335@svzserv.kemerovo.su>	<4AC9CFF7.3090208@mail.ru>	<20091005110726.GA62598@onelab2.iet.unipi.it>	<4AC9D87E.7000005@mail.ru>	<20091005120418.GA63131@onelab2.iet.unipi.it> <4AC9E29B.6080908@mail.ru>

rihad wrote:
> Luigi Rizzo wrote:
>> On Mon, Oct 05, 2009 at 04:29:02PM +0500, rihad wrote:
>>> Luigi Rizzo wrote:
>> ...
>>>> you keep omitting the important info, i.e. whether individual
>>>> pipes have drops, significant queue lengths and so on.
>>>>
>>> Sorry. Almost every one has 0 in the last Drp column, but some have
>>> above zero. I'm just not sure how this can be helpful to anyone.
>>
>> because you were complaining about 'dummynet causing drops and
>> waste of bandwidth'.
>> Now, drops could be due to either
>> 1) some saturation in the dummynet machine (memory shortage, cpu
>>    shortage, etc.) which causes unwanted drops;
>>
> I too think the box is hitting some other global limit and dropping 
> packets. If not, then how come there isn't a single drop between 
> 4 a.m. and 10 a.m., when the traffic load is 250-330 mbit/s?
> 
>> 2) intentional drops introduced by dummynet because a flow exceeds
>>    its queue size. These drops are those shown in the 'Drop'
>>    column in 'ipfw pipe show' (they are cumulative, so you
>>    should do an 'ipfw pipe delete; ipfw pipe 5120 config ...'
>>    whenever you want to re-run the stats, or compute the
>>    differences between subsequent reads, to figure out what
>>    happens).
>>
>> If all drops you are seeing are of type 2, then there is nothing
>> you can do to remove them: you set a bandwidth limit, the
>> client is sending faster than it should, perhaps with UDP
>> so even RED/GRED won't help you, and you see the drops
>> once the queue starts to fill up.
>> Examples below: the entries in buckets 4 and 44
>>
> Then I guess I'm left with increasing the slots and seeing how it 
> goes. Currently it's set to 10000 for each pipe. Thanks for your and 
> Eugene's efforts, I appreciate it.
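
a rough sketch of the reset-and-diff approach described above, if you
want per-interval drop figures (the pipe number and slot count are the
ones from this thread; the bandwidth value is just a placeholder):

   # zero the cumulative counters for one pipe by recreating it
   ipfw pipe 5120 delete
   ipfw pipe 5120 config bw 512Kbit/s queue 10000  # bw is a placeholder
   # or sample twice and compare, to see drops per interval
   ipfw pipe show > /tmp/pipes.before
   sleep 60
   ipfw pipe show > /tmp/pipes.after
   diff /tmp/pipes.before /tmp/pipes.after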
> 
>> If you are seeing drops that are not listed in 'pipe show'
>> then you need to investigate where the packets are lost,
>> again it could be on the output queue of the interface
>> (due to the burstiness introduced by dummynet), or shortage
>> of mbufs (but this did not seem to be the case from your
>> previous stats) or something else.
>>
> This indeed is not a problem, as proved by the fact that, like I said, 
> short-circuiting with "ipfw allow ip from any to any" before the 
> dummynet pipe rules instantly eliminates all drops, and the bce0 and 
> bce1 load evens out (bce0 is used for input, and bce1 for output).
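
(for reference, the bypass test being described looks something like
the following; the rule number is arbitrary, anything that sorts before
the pipe rules works:

   # temporarily skip the dummynet pipes for all traffic
   ipfw add 100 allow ip from any to any
   # remove it again once the test is done
   ipfw delete 100
)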

No, it could be a problem, because dummynet releases all the packets 
that are due to go out in a given tick at once, instead of spreading 
them out through the tick.  Also, it does one pipe at a time, which 
means that related packets arrive at once, followed by packets from 
other sessions.  This may produce differences in some cases.
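
if you want to check whether that burstiness is what's biting you, two
standard things to look at (the hz value below is only an example):

   netstat -id      # the Drop column shows output-queue drops per interface
   sysctl kern.hz   # dummynet releases queued packets once per tick
   # a larger hz makes each per-tick burst smaller; it is a loader
   # tunable, set in /boot/loader.conf, e.g.
   # kern.hz="2000"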

> 
>> It's all up to you to run measurements, possibly
>> without omitting potentially significant data
>> (e.g. sysctl -a net.inet.ip)
>> or making assumptions (e.g. you have configured
>> 5000 slots per queue, but with only 50k mbufs in total
>> there is no chance to guarantee 5000 slots to each
>> queue -- all you will achieve is give a lot of slots
>> to the greedy nodes, and very little to the other ones)
>>
> Well, I've been monitoring this stuff. It has never gone above 20000 
> mbufs (111111 is the current limit).
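
for the record, the usual way to keep an eye on that (nothing
dummynet-specific here):

   netstat -m                    # current/peak mbuf and cluster usage, denied requests
   sysctl kern.ipc.nmbclusters   # the global cluster limit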
> 



