Date:      Fri, 8 Mar 1996 11:32:48 -0800
From:      Matthew Dillon <dillon@backplane.com>
To:        "Garrett A. Wollman" <wollman@lcs.mit.edu>
Cc:        bugs@FreeBSD.ORG
Subject:   Re: bug in netinet/tcp_input.c
Message-ID:  <199603081932.LAA02322@apollo.backplane.com>

:
:As you can see, the MSS option that we send is not supposed to be
:related to the Path MTU for packets that we are sending; after all,
:the MSS option is about what THEY send back TO US, and their path back
:to us is often different.  We really should take the maximum over all
:interfaces as suggested in the RFC, but I didn't get around to doing
:that.
:
:RFC 1191 goes on to say:
:
:          Note: At the moment, we see no reason to send an MSS greater
:          than the maximum MTU of the connected networks, and we
:          recommend that hosts do not use 65495.  It is quite possible
:          that some IP implementations have sign-bit bugs that would be
:          tickled by unnecessary use of such a large MSS.
:
:In other words, the current implementation is operating according to
:specification.
:
:-GAWollman
:
:--
:Garrett A. Wollman   | Shashish is simple, it's discreet, it's brief. ... 
:wollman@lcs.mit.edu  | Shashish is the bonding of hearts in spite of distance.
:Opinions not those of| It is a bond more powerful than absence.  We like people
:MIT, LCS, ANA, or NSA| who like Shashish.  - Claude McKenzie + Florent Vollant

    This may be true, but it makes all those new route table fields
    completely useless, because they wind up being applied asymmetrically.

    The whole point of the 'mtu' field in the route entry, as I understand
    it, is to limit the maximum segment size for TCP connections from *and*
    TO the destination.  If you do not *use* that field when calculating 
    the mss you send to the other side for outgoing connections, what
    use is it?
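
    To make the point concrete, here is a rough user-space sketch of the
    calculation I would expect (the names and numbers are mine, purely
    for illustration -- this is not the actual tcp_input.c code): when a
    per-route mtu has been set, the mss we advertise should be clamped
    to that mtu minus the 40-byte TCP/IP header, rather than being
    derived only from the outgoing interface's MTU.

        #include <stdio.h>

        #define TCPIP_HDRS      40      /* IP header + TCP header, no options */

        /*
         * if_mtu:    MTU of the outgoing interface
         * route_mtu: per-route mtu metric, 0 if none has been set
         */
        static unsigned long
        advertised_mss(unsigned long if_mtu, unsigned long route_mtu)
        {
                unsigned long mss = if_mtu - TCPIP_HDRS;

                if (route_mtu != 0 && route_mtu - TCPIP_HDRS < mss)
                        mss = route_mtu - TCPIP_HDRS;
                return mss;
        }

        int
        main(void)
        {
                /* ethernet interface, no per-route mtu: prints 1460 */
                printf("%lu\n", advertised_mss(1500, 0));
                /* same interface, route mtu set to 576: prints 536 */
                printf("%lu\n", advertised_mss(1500, 576));
                return 0;
        }

    As things stand, the mss we advertise only reflects the interface
    MTU (the first call above), which is exactly the asymmetry I am
    complaining about.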

    For example, if you attempt to streamline TCP operation to a given
    destination by, say, setting the mtu to 296 and setting the recvpipe
    and sendpipe to, say, 768, the expected result only works in one
    direction... you get the desired effect for data you send, but not
    for data you receive.
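
    (In route(8) terms, the tuning I have in mind is roughly the
    following -- the host name is made up:

        route change -host slip-peer -mtu 296 -sendpipe 768 -recvpipe 768

    and the complaint is that only the data we send toward that host
    ends up honoring the -mtu; the mss we advertise to it does not.)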

    Since reception of data causes greater buffering and latency problems
    than transmission, especially over SLIP and PPP links, not setting
    the advertised mss based on the route table entry basically breaks the
    whole mechanism and makes it useless for any kind of tuning whatsoever.
    You might as well remove the -sendpipe, -recvpipe, and -mtu options
    entirely.

						-Matt

    Matthew Dillon   Engineering, BEST Internet Communications, Inc.
		     <dillon@apollo.backplane.com>
    [always include a portion of the original email in any response!]


