Date:      Fri, 30 Jan 2009 01:11:41 +1100 (EST)
From:      Bruce Evans <brde@optusnet.com.au>
To:        Brent Jones <brent@servuhome.net>
Cc:        pathiaki2@yahoo.com, freebsd-performance@FreeBSD.org, freebsd-stable@FreeBSD.org
Subject:   Re: ZFS, NFS and Network tuning
Message-ID:  <20090129234158.B46285@delplex.bde.org>
In-Reply-To: <ee9f3b480901290043v54a547bk678458ed36887ec2@mail.gmail.com>
References:  <ee9f3b480901282321h5b57fa49ud39265c7523e0cdf@mail.gmail.com> <ee9f3b480901290043v54a547bk678458ed36887ec2@mail.gmail.com>

On Thu, 29 Jan 2009, Brent Jones wrote:

> On Wed, Jan 28, 2009 at 11:21 PM, Brent Jones <brent@servuhome.net> wrote:

>> ...
>> The issue I am seeing, is that for certain file types, the FreeBSD NFS
>> client will either issue an ASYNC write, or an FSYNC.
>> However, NFSv3 and v4 both support "safe" ASYNC writes in the TCP
>> versions of the protocol, so that should be the default.
>> Issuing FSYNCs for every complete block transmitted adds substantial
>> overhead and slows everything down.

I use some patches by Bjorn Gronwall (mainly for nfs write clustering
on the server) and some local fixes (mainly for vfs write clustering
on the server, turning off excessive nfs[io]d daemons which get in
each other's way due to poor scheduling, and things that only help for
lots of small files), and see reasonable performance in all cases:
~90% of disk bandwidth with all-async mounts, and half that with the
client mounted noasync on an old version of FreeBSD (the client in
-current is faster).  Writing is actually faster than reading here.

>> ...
>> My NFS mount command lines I have tried to get all data to ASYNC write:
>>
>> $ mount_nfs -3T -o async 192.168.0.19:/pdxfilu01/obsmtp /mnt/obsmtp/
>> $ mount_nfs -3T 192.168.0.19:/pdxfilu01/obsmtp /mnt/obsmtp/
>> $ mount_nfs -4TL 192.168.0.19:/pdxfilu01/obsmtp /mnt/obsmtp/

Also try -r16384 -w16384, and udp, and async on the server.  I think
block sizes default to 8K for udp and 32K for tcp.  8K is too small,
and 32K may be too large (it increases latency for little benefit
if the server fs block size is 16K).  udp gives lower latency.  async
on the server makes little difference provided the server block size
is not too small.
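
For example, the commands above with explicit block sizes might look
like this (untested here, and option spellings vary a little between
FreeBSD versions; the second line drops -T to fall back to udp):

$ mount_nfs -3T -o async -r16384 -w16384 192.168.0.19:/pdxfilu01/obsmtp /mnt/obsmtp/
$ mount_nfs -3 -o async -r16384 -w16384 192.168.0.19:/pdxfilu01/obsmtp /mnt/obsmtp/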

> I have found a 4 year old bug, which may be related to this. cp uses
> mmap for small files (and I imagine lots of things use mmap for file
> operations) and causes slowdowns via NFS, due to the fsync data
> provided above.
>
> http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/87792

mmap apparently breaks the async mount preference in the following
code from vnode_pager.c:

% 	/*
% 	 * pageouts are already clustered, use IO_ASYNC to force a bawrite()
% 	 * rather then a bdwrite() to prevent paging I/O from saturating 
% 	 * the buffer cache.  Dummy-up the sequential heuristic to cause
% 	 * large ranges to cluster.  If neither IO_SYNC or IO_ASYNC is set,
% 	 * the system decides how to cluster.
% 	 */
% 	ioflags = IO_VMIO;
% 	if (flags & (VM_PAGER_PUT_SYNC | VM_PAGER_PUT_INVAL))
% 		ioflags |= IO_SYNC;

This apparently gives lots of sync writes.  (Sync writes are the default for
nfs, but we mount with async to try to get async writes.)

% 	else if ((flags & VM_PAGER_CLUSTER_OK) == 0)
% 		ioflags |= IO_ASYNC;

nfs doesn't even support this flag.  In fact, ffs is the only file
system that supports it, and here is the only place that sets it.  This
might explain some slowness.
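
To illustrate, "supporting" IO_ASYNC in a local file system's write
path essentially means choosing between the three buffer-cache write
primitives.  A schematic sketch only (not the actual ffs code, which
also handles clustering and partial blocks):

	if (ioflag & IO_SYNC)
		bwrite(bp);	/* write now and wait for completion */
	else if (ioflag & IO_ASYNC)
		bawrite(bp);	/* start the write now, don't wait for it */
	else
		bdwrite(bp);	/* just mark dirty; write some time later */

Since nfs never looks at IO_ASYNC, the vnode pager's request for
bawrite()-style behaviour is simply lost there.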

One of the vfs clustering bugs that I don't have (thanks to my local
fixes) is related to this.  IIRC, mounting the server with -o async
doesn't work as well as it should because the buffer cache becomes
congested with i/o that should have been sent to the disk.  Some
writes must be done async as explained above, but one place in
vfs_cache.c is too aggressive in delaying async writes for file
systems that are mounted async.  This problem is more noticeable for
nfs, at least with networks not much faster than disks, since it
results in the client and server taking turns waiting for each other.
(The names here are very confusing: the async mount flag normally
delays both sync and async writes for as long as possible, except
that for nfs it doesn't affect delays but asks for async writes
instead of sync writes on the server, while the IO_ASYNC flag asks
for async writes and thus often has the opposite sense to the async
mount flag.)

% 	ioflags |= (flags & VM_PAGER_PUT_INVAL) ? IO_INVAL: 0;
% 	ioflags |= IO_SEQMAX << IO_SEQSHIFT;

Bruce


