Date:      Tue, 15 Dec 1998 12:27:57 GMT
From:      Michael Robinson <robinson@netrinsics.com>
To:        mike@smith.net.au, robinson@netrinsics.com
Cc:        freebsd-stable@FreeBSD.ORG
Subject:   Re: MLEN < write length < MINCLSIZE "bug"
Message-ID:  <199812151227.MAA06983@netrinsics.com>
In-Reply-To: <199812150247.SAA02006@dingo.cdrom.com>

Mike Smith <mike@smith.net.au> writes:
>>   4. One solution is to make MINCLSIZE a kernel config option.  This is ugly,
>>      but simple to implement and relatively non-intrusive.
>
>It should be a sysctl variable, not a kernel option, in this case.  But 
>that's certainly the simplest way to go.  Want to implement this?

Will do.  Will patches against the 2.2.7-RELEASE CVS repository be acceptable?
If there hasn't been too much drift, they should easily merge into -STABLE,
and maybe even -CURRENT.

>>      With a socket option, applications that wanted low latency (at the
>>      expense of more memory usage) could specify that on a per-socket basis.
>>      This is less ugly, but requires extensive changes to documentation, 
>>      header files, and application software.
>
>Do you think you could come up with a heuristic that would be able to 
>detect when the current behaviour was losing, reliably?  If so, you 
>could use this to switch the option...

By adding one or two bookkeeping fields to the socket structure, it would be
possible to implement such a heuristic.  However, there are three reasons I
don't think that is such a good idea:

 1. Protocol inefficiencies from sending multiple packets are, by their 
    nature, protocol dependent.  Ergo, we would want different heuristics for
    different families (LOCAL, INET, ISO, etc.) and types (STREAM, DGRAM,
    etc.) of socket.  Which gets really ugly really fast.

 2. There is no method to determine how much any given application cares
    about protocol inefficiencies on any given socket.  The heuristic would
    be making perhaps unwarranted assumptions about what performance
    characteristics were desirable in a particular instance.  Which is how
    we got into this mess in the first place.

 3. It would be non-trivial extra code in a pretty performance-sensitive
    part of the kernel (the innermost loop of sosend).

For these reasons, I think a socket option makes more sense than trying to 
resolve the problem automagically in the kernel.

Perhaps a more tractable solution would be a system-wide heuristic, such as
is used with filesystem tuning.  I.e., if there are mbuf clusters to burn,
burn 'em, baby.  Otherwise, send multiple packets.

	-Michael Robinson





