Date:      Fri, 2 Mar 2007 11:14:51 -0500 (EST)
From:      Andrew Gallatin <gallatin@cs.duke.edu>
To:        Andre Oppermann <andre@freebsd.org>
Cc:        freebsd-net@freebsd.org, freebsd-current@freebsd.org, rwatson@freebsd.org, kmacy@freebsd.org
Subject:   Re: New optimized soreceive_stream() for TCP sockets, proof of concept
Message-ID:  <17896.19835.258246.284397@grasshopper.cs.duke.edu>
In-Reply-To: <45E8276D.60105@freebsd.org>
References:  <45E8276D.60105@freebsd.org>


Andre Oppermann writes:
 > Instead of the unlock-lock dance, soreceive_stream() pulls a properly sized
 > chunk (relative to the receive system call's buffer space) off the socket
 > buffer, drops the lock, and gives copyout as much time as it needs.  In the
 > meantime the lower half can happily add as many new packets as it wants
 > without having to wait for a lock.  It also allows the upper and lower
 > halves to run on different CPUs without much interference.  There is an
 > unsolved nasty race condition in the patch, though.

Excellent.  This sounds very exciting!

 > Any testing, especially on 10Gig cards, and feedback appreciated.

I'll try to test sometime soon, but possibly not until next week.  Is
there any particular config you're interested in?  If not, I'll just
compare the pre/post-patch performance of a fast (Linux) sender to an
SMP (FreeBSD) receiver, using the default "out of the box" settings
for a jumbo and standard MTU.

Drew


