Date:      Tue, 19 Mar 2013 11:15:13 -0400 (EDT)
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        Garrett Wollman <wollman@freebsd.org>
Cc:        freebsd-net@freebsd.org, andre@freebsd.org, Ivan Voras <ivoras@freebsd.org>
Subject:   Re: Limits on jumbo mbuf cluster allocation
Message-ID:  <1807531371.4052865.1363706113648.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <1784154272.4050462.1363703851697.JavaMail.root@erie.cs.uoguelph.ca>

I wrote:
> Garrett Wollman wrote:
> > <<On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem
> > <rmacklem@uoguelph.ca> said:
> >
> > > I've attached a patch that has assorted changes.
> >
> > So I've done some preliminary testing on a slightly modified form
> > of this patch, and it appears to have no major issues. However, I'm
> > still waiting for my user with 500 VMs to have enough free to be
> > able to run some real stress tests for me.
> >
> > I was able to get about 2.5 Gbit/s throughput for a single
> > streaming client over local 10G interfaces with jumbo frames
> > (through a single switch and with LACP on both sides -- how well
> > does lagg(4) interact with TSO and checksum offload?) This is a
> > little bit disappointing (considering that the filesystem can do
> > 14 Gbit/s locally) but still pretty decent for one single-threaded
> > client. This obviously does not implicate the DRC changes at all,
> > but does suggest that there is room for more performance
> > improvement. (In previous tests last year, I was able to get a
> > sustained 8 Gbit/s when using multiple clients.) I also found that
> > one of our 10G switches is reordering TCP segments in a way that
> > causes poor performance.
> >
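> On the reordering: one quick way to see whether it shows up at the
> TCP layer on the receiver is the stock netstat(1) counters, e.g.:
> 
>   # out-of-order segments seen by the receiving TCP
>   netstat -sp tcp | grep -i 'out-of-order'
>   # and, to rule out a TSO interaction on the lagg, try disabling it
>   # on the members ("ix0" is just an example interface name):
>   ifconfig ix0 -tso
> 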
> If the server for this test isn't doing anything else yet, you could
> try a test run with a single nfsd thread and see if that improves
> performance.
> 
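> One way to set that up, in case it helps -- assuming the server is
> started via the stock rc.conf knobs ("-n 1" caps nfsd at a single
> service thread):
> 
>   # in /etc/rc.conf on the server
>   nfs_server_enable="YES"
>   nfs_server_flags="-u -t -n 1"
>   # then restart it: service nfsd restart
> 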
> ken@ emailed yesterday mentioning that out-of-order reads were
> resulting in poor performance with ZFS and that a single nfsd thread
> improved that for his test.
> 
> Although a single nfsd thread isn't practical, it suggests that the
> nfsd thread affinity code, which I had forgotten about and which has
> never been ported to the new server, might be needed for this. (I'm
> not sure how to do the affinity stuff for NFSv4, but it should at
> least be easy to port the code so that it works for NFSv3 mounts.)
> 
Oh, and don't hesitate to play with the rsize and readahead options on
the client mount. It is not obvious what the optimal settings are for
a given LAN/server config. (I think the Linux client has a readahead
option?)
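
For example, on a FreeBSD client, something like this (the numbers are
just a starting point to experiment with, not a recommendation, and
server:/export is a placeholder):

  mount -t nfs -o nfsv3,rsize=65536,wsize=65536,readahead=4 \
      server:/export /mnt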

rick

> rick
> ps: For a couple of years I had assumed that Isilon would be doing
> this, but they are no longer working on the FreeBSD NFS server, so
> the affinity stuff slipped through the cracks.
> 
> > I'll hopefully have some proper testing results later in the week.
> >
> > -GAWollman


