Date:      Fri, 31 Jan 2014 01:18:31 -0500 (EST)
From:      wollman@freebsd.org
To:        j.david.lists@gmail.com
Cc:        freebsd-net@freebsd.org
Subject:   Re: Terrible NFS performance under 9.2-RELEASE?
Message-ID:  <201401310618.s0V6IVJv027167@hergotha.csail.mit.edu>
In-Reply-To: <CABXB=RTx9_gE=0G9UAzwJ3LuYv8fy=sAOZp1e2D7cJ6_=kgd9A@mail.gmail.com>
References:  <CABXB=RR1eDvdUAaZd73Vv99EJR=DFzwRvMTw3WFER3aQ+2+2zQ@mail.gmail.com> <87942875.478893.1391121843834.JavaMail.root@uoguelph.ca>

In article
<CABXB=RTx9_gE=0G9UAzwJ3LuYv8fy=sAOZp1e2D7cJ6_=kgd9A@mail.gmail.com>,
J David writes:

>The process of TCP segmentation, whether offloaded or not, is
>performed on a single TCP packet.  It operates by reusing that
>packet's header over and over for each segment with slight
>modifications.  Consequently the maximum size that can be offloaded is
>the maximum size that can be segmented: one packet.

This is almost entirely wrong in its description of the non-offload
case.  A segment is a PDU at the transport layer.  In normal
operation, TCP figures out how much it can send, constructs a header,
and copies an mbuf chain referencing one segment's worth of data out
of the socket's transmit buffer.  tcp_output() repeats this process
(possibly using the same mbuf cluster multiple times, if it's larger
than the receiver's or the path's maximum segment size) until it
either runs out of stuff to send, or runs out of transmit window to
send into.  THAT IS WHY TSO IS A WIN: as you describe, the packet
headers are mostly identical, and (if the transmit window allows) it's
much cheaper to build the header and do the DMA setup once, then let
the NIC take over from there, rather than having to DMA a different
(but nearly identical) header for every individual segment.

>NFS is not sending packets to the TCP stack, it is sending stream
>data.  With TCP_NODELAY it should be possible to engineer a one send =
>one packet correlation, but that's true if and only if that send is
>less than the max packet size.

Yes and no.  NFS constructs a chain of mbufs and calls the socket's
sosend() routine.  This ultimately results in a call to tcp_output(),
and in the normal case where there is no data awaiting transmission,
that mbuf chain will be shallow-copied (bumping all the mbuf cluster
reference counts) up to the limit of what the transmit window allows,
and Ethernet, IP, and TCP headers will be prepended (possibly in a
separate mbuf).  The whole mess is then passed on to the hardware for
offload, if it fits.  RPC responses will only get smushed together if
tcp_output() wasn't able to schedule the transmit immediately, and if
the network is working properly, that will only happen if there's more
than one client-side-receive-window's-worth of data to be transmitted.

This shallow-copy behavior, by the way, is why the drivers need
m_defrag() rather than m_collapse(): M_WRITABLE is never true for
clusters coming out of tcp_output(), because the refcount will never
be less than 2 (one for the socket buffer and at least one for the
interface's transmit queue, depending on how many segments include
some data from the cluster).  But it's also part of why having a
"gigantic" cluster (e.g., 128k) would be a big win for NFS.

-GAWollman
