Date:      Tue, 19 Mar 2013 00:29:57 -0400
From:      Garrett Wollman <wollman@freebsd.org>
To:        Rick Macklem <rmacklem@uoguelph.ca>
Cc:        freebsd-net@freebsd.org, andre@freebsd.org, Ivan Voras <ivoras@freebsd.org>
Subject:   Re: Limits on jumbo mbuf cluster allocation
Message-ID:  <20807.59845.764047.618551@hergotha.csail.mit.edu>
In-Reply-To: <75232221.3844453.1363146480616.JavaMail.root@erie.cs.uoguelph.ca>
References:  <20798.44871.601547.24628@hergotha.csail.mit.edu> <75232221.3844453.1363146480616.JavaMail.root@erie.cs.uoguelph.ca>

<<On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem <rmacklem@uoguelph.ca> said:

> I've attached a patch that has assorted changes.

So I've done some preliminary testing on a slightly modified form of
this patch, and it appears to have no major issues.  However, I'm
still waiting for my user with 500 VMs to have enough of them free to
run some real stress tests for me.

I was able to get about 2.5 Gbit/s throughput for a single streaming
client over local 10G interfaces with jumbo frames (through a single
switch and with LACP on both sides -- how well does lagg(4) interact
with TSO and checksum offload?).  This is a bit disappointing
(considering that the filesystem can do 14 Gbit/s locally) but still
pretty decent for one single-threaded client.  This obviously does not
implicate the DRC changes at all, but does suggest that there is room
for more performance improvement.  (In previous tests last year, I
was able to get a sustained 8 Gbit/s when using multiple clients.)  I
also found that one of our 10G switches is reordering TCP segments in
a way that causes poor performance.
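
As an aside, one quick way to see whether TSO and checksum offload are
actually enabled on the lagg(4) interface itself (and not just on the
member ports) is to query the interface capabilities with SIOCGIFCAP.
Here is a minimal sketch, assuming a FreeBSD box and a lagg0 interface
name (both are just examples, not anything from the patch under
discussion):

/*
 * Hypothetical helper: print which offload capabilities are currently
 * enabled on an interface.  "lagg0" is only an example name; pass a
 * different interface as argv[1].
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <net/if.h>

#include <err.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	const char *ifname = (argc > 1) ? argv[1] : "lagg0";
	struct ifreq ifr;
	int s;

	if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
		err(1, "socket");

	memset(&ifr, 0, sizeof(ifr));
	strlcpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));
	if (ioctl(s, SIOCGIFCAP, &ifr) == -1)
		err(1, "SIOCGIFCAP %s", ifname);

	/* ifr_curcap holds the capabilities currently enabled. */
	printf("%s: TSO4 %s, TXCSUM %s, RXCSUM %s\n", ifname,
	    (ifr.ifr_curcap & IFCAP_TSO4) ? "on" : "off",
	    (ifr.ifr_curcap & IFCAP_TXCSUM) ? "on" : "off",
	    (ifr.ifr_curcap & IFCAP_RXCSUM) ? "on" : "off");

	close(s);
	return (0);
}

Of course the same information shows up in the options= line of
"ifconfig lagg0", and TSO can be toggled with "ifconfig lagg0 tso" /
"-tso" to see whether it changes the single-stream numbers.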

I'll hopefully have some proper testing results later in the week.

-GAWollman


