Date:      Sun, 08 Mar 1998 17:56:56 +0000
From:      Brian Somers <brian@Awfulhak.org>
To:        Luigi Rizzo <luigi@labinfo.iet.unipi.it>
Cc:        brian@Awfulhak.org (Brian Somers), hackers@FreeBSD.ORG
Subject:   Re: weird problem (lost packets) in iijppp 
Message-ID:  <199803081756.RAA01605@awfulhak.org>
In-Reply-To: Your message of "Sun, 08 Mar 1998 05:09:21 +0100." <199803080409.FAA04388@labinfo.iet.unipi.it> 

> > Would you be able to try this with the ppp from -current, -stable or 
> > http://www.FreeBSD.org/~brian ?
> 
> downloading the files right now... in any case i did some more tests
> yesterday night, and every time i have a lost reply the "miss" or
> "uncompress" (depending on the direction) counts below increase:
> 
>     PPP ON prova> show compress
>     Out:  780 (compress) / 909 (total)  14 (miss) / 300 (search)
>     In:  819 (compress), 107 (uncompress)  0 (error),  0 (tossed)
> 
> I have no idea if this only happens with ICMP packets or also with
> regular traffic.

This sounds like a compression dictionary problem.  I haven't read 
the Predictor rfc myself (yet), but certainly the DEFLATE rfc 
requires that incoming packets that are uncompressed are passed 
through the dictionary to keep it in sync with the sender.  pred.c 
doesn't do anything with these packets.

I got your other mail that says it doesn't happen with the latest 
version.  Can you try the latest version with `deny deflate' and 
`disable deflate' ?  This will force pred1 compression again :-)

> Speaking of ppp, i was wondering if you are also looking at the memory
> allocation used in the program. It seems to do a few malloc()s on each
> packet (one for the header, one for the payload...) and this appears
> kind of useless since the queues are short anyway, and using a fixed
> array would probably be much more efficient.

This area needs looking at.  The two mallocs (one for the mbuf and 
one for the data) can be consolidated into one, but the queues may 
get quite long.  It's possible that packets may be sent that are 
blocked by the dial filter, but not by the output filter.  These 
packets must sit in the output queue 'till the link is brought up.  
This isn't working correctly in the MP branch (where development is 
currently being done).  I probably also need to time-out some of 
these queued packets....

> Also, would you like to help in implementing the 'preemption'
> feature that i had in mind ? The basic idea would be to define some
> negotiable mechanism (e.g. HDLC_ESC+something) to suspend/resume
> transmission of a packet when there is a higher-priority one. This
> would be used to improve interactive response when you also have
> background bulk traffic. Of course one has to be careful in the
> interaction with the "pred1" compression.

Currently, there are two output queue sets, one IP queue set and one 
modem queue set.  Each queue set consists of two queues, a fast one 
(interactive traffic in the IP set, non-NCP traffic in the modem set) 
and a slow one.  Stuff is compressed just before moving from the IP 
set into the slow modem queue.

Traffic from the slow modem queue is moved into the fast modem queue 
just before a CCP RESET (and should probably just be dropped), but 
otherwise is always sent directly from the slow queue.  When stuff is 
read from the modem queue set, the packet is written in one go to the 
device.  Short writes just cause the written bit of the mbuf to 
be removed.  This `in progress' mbuf is the `out' part of `struct 
physical' in the MP branch.

This is my idea of what you're talking about:

If we want to be able to split `in progress' packets, AFAICT we'd 
have to do shorter writes to the device instead of mbuf::cnt bytes.  
We'd also never be able to compress this interactive traffic, without 
having a separate compression stream for interactive traffic.  We'd 
have to agree with the other side to use some escape sequence to say 
"here's a fast packet" (probably using a new LCP option), and the other 
side would have to be willing to do the right thing with this packet 
with respect to compression.

Is this what you have in mind ?  Wouldn't we be better off suffering 
the overhead of a smaller MTU ?  After all, with VJ compression, 
there isn't that much overhead in sending out shorter packets.

> This mechanism could also be useful (assuming it's not already there)
> with parallel ppp connections, since it would allow you to spread a
> packet on a number of parallel links.

This, I reckon, will be a whole programming exercise in itself.  I 
haven't finished cleaning up all the global/static stuff in ppp yet.  
When that's done, I have to implement the MP LCP stuff and plug in 
the logical link layer (shouldn't actually be that difficult).

The problem with re-sequencing stuff will be the same though 
(slightly compounded by the fact that there's a sequence number 
involved too).

> 	cheers
> 	luigi
> -----------------------------+--------------------------------------
> Luigi Rizzo                  |  Dip. di Ingegneria dell'Informazione
> email: luigi@iet.unipi.it    |  Universita' di Pisa
> tel: +39-50-568533           |  via Diotisalvi 2, 56126 PISA (Italy)
> fax: +39-50-568522           |  http://www.iet.unipi.it/~luigi/
> _____________________________|______________________________________

-- 
Brian <brian@Awfulhak.org>, <brian@FreeBSD.org>, <brian@OpenBSD.org>
      <http://www.Awfulhak.org>
Don't _EVER_ lose your sense of humour....


