Date:      Sun, 07 May 2000 23:21:54 +0100
From:      Brian Somers <brian@Awfulhak.org>
To:        "Rodney W. Grimes" <freebsd@gndrsh.dnsmgr.net>
Cc:        brian@Awfulhak.org (Brian Somers), brian@FreeBSD.org (Brian Somers), cvs-committers@FreeBSD.org, cvs-all@FreeBSD.org, brian@hak.lan.Awfulhak.org
Subject:   Re: cvs commit: src/usr.sbin/ppp mbuf.c 
Message-ID:  <200005072221.XAA51544@hak.lan.Awfulhak.org>
In-Reply-To: Message from "Rodney W. Grimes" <freebsd@gndrsh.dnsmgr.net>  of "Sun, 07 May 2000 12:04:06 PDT." <200005071904.MAA22237@gndrsh.dnsmgr.net> 

> > > > brian       2000/05/07 03:09:26 PDT
> > > > 
> > > >   Modified files:        (Branch: RELENG_4)
> > > >     usr.sbin/ppp         mbuf.c 
> > > >   Log:
> > > >   MFC: Correct a bad bug in m_prepend()
> > > 
> > > Can you describe the ``bug''?
> > 
> > Errum, it was m_append() :-/  The bug was in the user-land mbuf code 
> > (nothing to do with anything but ppp(8)) and meant that if the mbuf 
> > had more than one segment, the last segment in the chain was returned 
> > as the new head segment.
> > 
> > I'm not entirely sure that multi-segment mbufs were ever actually 
> > passed to m_append(), but I suspect they were.
> > 
> > The result of this bug was that the packet would be pushed into the 
> > tun device and promptly dropped as garbage.  I believe that people 
> > with ipfw configured in the kernel would see a message something like:
> > 
> > ipfw: -1 Refuse UDP 194.242.139.171 213.1.151.12 in via tun1 Fragment = 185
> > 
> > but I'm not 100% sure about that either !
> > 
> > I figured it was pretty important to get the fix in asap rather than 
> > waiting till I had time to prove the consequences.
> 
> Thank you for the clarification!!  (I am keeping a keen eye on anything
> ppp-related right now, as every few days we are having a very strange
> problem where data rates go to hell in a handbasket on a 4B-channel
> ISDN MPPP setup: you can ping, you can get some data through, but other
> connections go at a snail's pace.  Unfortunately it is a production link,
> so I have to put it back up asap and can't do much debugging on it, and
> trying to duplicate it in the lab has turned up zilch :-(.
> 
> Killing off and restarting ppp clears it right up.  We started out with 3.4
> and are now at 4.0-stable and still seeing it.  I am starting to suspect
> the equipment on the other end (Max TNT TOS 7.2.3).

It would be interesting to know if ppp's memory footprint was 
growing...  If this bug was actually being triggered, the orphaned 
segments at the front of each chain would eventually add up to enough 
(probably fragmented) leakage to have a noticeable effect.

However, as m_append() is only called from nat_cmd.c, you'd need to 
be using NAT to be affected by this in the first place.
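
To make the failure mode concrete, here's roughly what I mean - this 
is a cut-down sketch, *not* the actual mbuf.c (the struct and field 
names here are simplified stand-ins for ppp's userland mbuf):

  #include <stdlib.h>
  #include <string.h>

  struct mbuf {                  /* cut-down stand-in for ppp's mbuf */
      unsigned char *m_data;     /* segment payload */
      size_t m_len;              /* bytes used in this segment */
      struct mbuf *m_next;       /* next segment in the chain */
  };

  static struct mbuf *
  seg_alloc(const void *v, size_t sz)
  {
      struct mbuf *tail = malloc(sizeof *tail);

      tail->m_data = malloc(sz);
      memcpy(tail->m_data, v, sz);
      tail->m_len = sz;
      tail->m_next = NULL;
      return tail;
  }

  /*
   * Broken: the only copy of the head pointer is walked forward, so
   * the *last* segment comes back as the new "head".  A caller doing
   * m = m_append(m, v, sz) then leaks every earlier segment and hands
   * a truncated packet on to the tun device.
   */
  struct mbuf *
  m_append_broken(struct mbuf *m, const void *v, size_t sz)
  {
      while (m->m_next != NULL)
          m = m->m_next;
      m->m_next = seg_alloc(v, sz);
      return m;                  /* oops: last segment, not the head */
  }

  /* Fixed: walk with a separate cursor and return the real head. */
  struct mbuf *
  m_append_fixed(struct mbuf *head, const void *v, size_t sz)
  {
      struct mbuf *m;

      for (m = head; m->m_next != NULL; m = m->m_next)
          ;
      m->m_next = seg_alloc(v, sz);
      return head;
  }

Note the problem only shows with multi-segment chains; with a single 
segment the head and the tail are the same mbuf, which is why it 
could go unnoticed for a while.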

FWIW, I've now got a ppp-over-ppp setup with the top ppp being a 
permanent (-ddial) link, and I don't see any slowdown - *BUT*, I 
re-open the link twice a day when the transport-level ppp switches 
ISP.  It may be interesting to just ``open lcp'' next time you see 
the slow-down.  If that solves the problem temporarily, it indicates 
that it's probably not anything local that's going wrong.
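
If you've got ppp's diagnostic server enabled (``set server'' in 
ppp.conf), you can send that to the running process without killing 
it - something like this, where the socket path is just an example:

  pppctl /var/run/ppp.tun1 "open lcp"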

> -- 
> Rod Grimes - KD7CAX @ CN85sl - (RWG25)               rgrimes@gndrsh.dnsmgr.net

-- 
Brian <brian@Awfulhak.org>                        <brian@[uk.]FreeBSD.org>
      <http://www.Awfulhak.org>                    <brian@[uk.]OpenBSD.org>
Don't _EVER_ lose your sense of humour !
