Date:      Mon, 19 Sep 2016 15:59:52 -0500
From:      "Dean E. Weimer" <dweimer@dweimer.net>
To:        Lyndon Nerenberg <lyndon@orthanc.ca>
Cc:        FreeBSD Stable <freebsd-stable@freebsd.org>
Subject:   Re: LAGG and Jumbo Frames
Message-ID:  <04c9065ee4a780c6f8986d1b204c4198@dweimer.net>
In-Reply-To: <alpine.BSF.2.20.1609191326280.93154@orthanc.ca>
References:  <48926c6013f938af832c17e4ad10b232@dweimer.net> <alpine.BSF.2.20.1609191326280.93154@orthanc.ca>

On 2016-09-19 3:28 pm, Lyndon Nerenberg wrote:
> This is almost certainly a PMTUd issue.
> 
> Unless your end-to-end paths to everything you talk to have
> jumboframes configured, there's no benefit to setting them up on the
> lagg.  Just go with the default MTU.
> 
> --lyndon

Everything on physical Ethernet supports it, including the LAN 
interface of the firewall, and talks to it just fine over a single 
interface with jumbo frames enabled. It was only when I introduced the 
LAGG interface that other devices with jumbo frames enabled stopped 
talking. I was trying to speed up my backups (Bacula runs in one of the 
jails; NAT reflection isn't used for the Bacula services), which take 
about 7.5 hours over a single interface to complete on the weekly 
fulls. I have two simultaneous jobs running at the start, and I was 
hoping the LAGG would speed them up, but I suspect losing jumbo frames 
on the transfer would be slower than the single interface. It's also 
possible it won't have an impact either way and the disk writes are the 
bottleneck. The 930G written during the backup is the only network load 
I have that pushes the network anywhere close to a heavy load.
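For reference, a minimal rc.conf sketch of the kind of setup being described here: an LACP lagg with a jumbo MTU, where the MTU has to be set on each member interface as well as on the lagg itself. The interface names (igb0/igb1), the address, and the 9000-byte MTU are assumptions for illustration, not the poster's actual configuration.

```shell
# Hypothetical /etc/rc.conf fragment: LACP lagg with jumbo frames.
# igb0/igb1, the address, and mtu 9000 are assumed values.
ifconfig_igb0="up mtu 9000"
ifconfig_igb1="up mtu 9000"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 192.168.1.10/24 mtu 9000"
```

If the members come up at the default 1500 MTU before being attached, the lagg can end up negotiating the smaller size, which matches the symptom of jumbo-frame peers going quiet once the lagg is introduced.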

FYI I do have net.inet.tcp.pmtud_blackhole_detection enabled on the 
server.
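A quick way to check that sysctl and to test whether jumbo frames actually survive the path through the lagg (the peer address below is an assumption):

```shell
# Confirm PMTU blackhole detection is on (FreeBSD).
sysctl net.inet.tcp.pmtud_blackhole_detection

# Probe the jumbo path with don't-fragment set:
# 8972 bytes of payload + 8 ICMP + 20 IP header = 9000 on the wire.
ping -D -s 8972 192.168.1.1
```

If the 8972-byte ping fails while a default-size ping works, some hop (or the lagg itself) is not passing 9000-byte frames.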

-- 
Thanks,
    Dean E. Weimer
    http://www.dweimer.net/


