Date:      Thu, 30 Jan 2014 15:30:16 -0500
From:      J David <j.david.lists@gmail.com>
To:        Rick Macklem <rmacklem@uoguelph.ca>
Cc:        Bryan Venteicher <bryanv@freebsd.org>, Garrett Wollman <wollman@freebsd.org>, freebsd-net@freebsd.org
Subject:   Re: Terrible NFS performance under 9.2-RELEASE?
Message-ID:  <CABXB=RR1eDvdUAaZd73Vv99EJR=DFzwRvMTw3WFER3aQ+2+2zQ@mail.gmail.com>
In-Reply-To: <1879662319.18746958.1391052668182.JavaMail.root@uoguelph.ca>
References:  <CAGaYwLcDVMA3=1x4hXXVvRojCBewWFZUyZfdiup=jo685+51+A@mail.gmail.com> <1879662319.18746958.1391052668182.JavaMail.root@uoguelph.ca>

On Wed, Jan 29, 2014 at 10:31 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>> I've been busy the last few days, and won't be able to get to any
>> code
>> until the weekend.

Is there likely to be more to it than just cranking the MAX_TX_SEGS
value and recompiling?  If so, is it something I could take on?

> Well, NFS hands TCP a list of 34 mbufs. If TCP only adds one, then
> increasing it from 34 to 35 would be all it takes. However, see below.

One thing I don't want to miss here is that an NFS block size of
65,536 is really suboptimal.  The largest TCP/IP datagram (and hence
the largest TSO packet) is 65,535 bytes.  So by the time NFS adds its
overhead and the total amount of data to be sent winds up in that
~65k range, every such operation is guaranteed to be split into at
least two TCP packets, one max-size and one tiny one.  That doubles a
lot of the network stack overhead, regardless of whether the packet
ends up being segmented into tiny bits further down the road or not.
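
To put rough numbers on it (the exact RPC overhead varies by
operation and credentials, so treat these as ballpark figures):

      65,536  bytes of write data
    +   ~200  bytes of RPC record mark plus RPC/NFS headers
    --------
     ~65,700  bytes handed to the socket for a single 64k write

versus roughly 65,535 bytes (less TCP/IP header space) as the most
that tcp_output will emit as a single TSO packet, so the ~65.7k send
can never fit in one packet.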

If NFS could be modified to respect the actual size of a TCP packet,
generating a steady stream of 63.9k (or thereabouts) writes instead
of the current 64k-1k-64k-1k pattern, performance would likely see
another significant boost.  That would nearly double the average
payload per packet, which would help with both network latency and
CPU load.

It's also not 100% clear, but it seems that in some cases the
existing behavior causes the TCP stack to park on the "leftover" bit
and wait for more data, which arrives as another >64k chunk; from
there on out there's no correlation between TCP packets and NFS
operations, so an operation no longer begins on a packet boundary.
That continues as long as load keeps up.  It's probably not good for
performance either, and it certainly confuses the heck out of
tcpdump.

Probably 60k would be the next most reasonable size, since it's the
largest page-size multiple that fits into a TCP packet while still
leaving room for overhead.
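
Roughly:

    15 * 4,096 = 61,440 bytes (60k) of data
    65,535 - 61,440 = 4,095 bytes left for the RPC record mark and
                      the RPC/NFS/TCP/IP headers

whereas 16 pages (65,536 bytes) already overflows the 65,535-byte
limit before any headers are added at all.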

Since the maximum size of a TCP packet is not an area where there's
really any flexibility, what would have to happen to NFS to make that
size (or arbitrary sizes) perform at its best within that constraint?

It's apparent from even trivial testing that performance suffers
dramatically if the "use a power of two for NFS rsize/wsize"
recommendation isn't followed, but what is the origin of that
recommendation?  Is it something that could be changed?

> I don't think that m_collapse() is more likely to fail, since it
> only copies data to the previous mbuf when the entire mbuf that
> follows will fit and it's allowed. I'd assume that a ref count
> copied mbuf cluster doesn't allow this copy or things would be
> badly broken.)

m_collapse() checks M_WRITABLE, which appears to cover the ref count
case.  (It's a dense macro, but it seems to require a ref count of 1
whenever a cluster is used.)
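
(Paraphrasing from memory, and the exact field names have moved
around between releases, so take this as a sketch of what the test
amounts to rather than the macro itself:

    /* Assumes <sys/param.h> and <sys/mbuf.h>. */
    static __inline int
    mbuf_writable(struct mbuf *m)
    {
            /* Read-only mbufs are never writable. */
            if (m->m_flags & M_RDONLY)
                    return (0);
            /* A shared external cluster (ref count > 1) isn't either. */
            if ((m->m_flags & M_EXT) && *m->m_ext.ref_cnt != 1)
                    return (0);
            return (1);
    }

which is why a ref-counted shared cluster gets left alone.)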

The cases where m_collapse() can succeed are pretty slim.  It pretty
much requires two consecutive underutilized mbufs, which probably
explains why it fails so often in this code path.  Since one of its
two methods outright skips the packet header mbuf (to avoid the risk
of moving it), possibly the only case where it succeeds is when the
last data mbuf is short enough that whatever NFS trailers are being
appended can fit alongside it.

> Bottom line, I think calling either m_collapse() or m_defrag()
> should be considered a "last resort".

It definitely seems designed more for the case where 8 different
stack layers each put their own little header/trailer fingerprint on
the packet, and that's not what's happening here.
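
If I'm reading the driver pattern right, the usual shape of that
"last resort" is roughly the sketch below (MAX_TX_SEGS, the function
name and the tag/map plumbing are stand-ins, not the actual vtnet
code):

    /* Assumes <sys/param.h>, <sys/mbuf.h>, <machine/bus.h>. */
    #define MAX_TX_SEGS     35      /* whatever the hardware limit is */

    static int
    tx_load_mbuf(bus_dma_tag_t tag, bus_dmamap_t map, struct mbuf **mp,
        bus_dma_segment_t *segs, int *nsegs)
    {
            struct mbuf *m;
            int error;

            error = bus_dmamap_load_mbuf_sg(tag, map, *mp, segs, nsegs,
                BUS_DMA_NOWAIT);
            if (error != EFBIG)
                    return (error);

            /* Too many segments: try the cheap copy-forward pass. */
            m = m_collapse(*mp, M_NOWAIT, MAX_TX_SEGS);
            if (m == NULL) {
                    /* Collapse couldn't help: copy into fresh clusters. */
                    m = m_defrag(*mp, M_NOWAIT);
                    if (m == NULL)
                            return (ENOBUFS);
            }
            *mp = m;
            return (bus_dmamap_load_mbuf_sg(tag, map, *mp, segs, nsegs,
                BUS_DMA_NOWAIT));
    }

i.e. the copying only happens after the DMA load has already failed
with EFBIG, which fits the "last resort" framing.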

> Maybe the driver could reduce the size of if_hw_tsomax whenever
> it finds it needs to call one of these functions, to try and avoid
> a re-occurrence?

Since the issue here is one of segment count (the length of the mbuf
chain) rather than total packet length, this seems risky.  If one of
those touched-by-everybody packets goes by, it may not be that large,
but it would risk permanently (until reboot) capping the throughput
of that interface.
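
(For concreteness, my reading of that suggestion is something along
these lines, run from the transmit path right after the fallback
above has fired; ifp and the surrounding condition are placeholders:

    /* If a TSO packet needed m_collapse()/m_defrag(), shrink the
     * advertised TSO maximum so TCP builds smaller bursts. */
    if ((m->m_pkthdr.csum_flags & CSUM_TSO) != 0) {
            u_int new_max = m->m_pkthdr.len;

            if (ifp->if_hw_tsomax == 0 || ifp->if_hw_tsomax > new_max)
                    ifp->if_hw_tsomax = new_max;
    }

and since nothing ever raises the value again, one unlucky packet
leaves the interface capped for good.)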

Thanks!


