Date: Mon, 21 Dec 2009 00:47:55 -0500
From: Zaphod Beeblebrox <zbeeble@gmail.com>
To: Dan Nelson <dnelson@allantgroup.com>
Cc: freebsd-hackers@freebsd.org
Subject: Re: scp more perfectly fills the pipe than NFS/TCP
Message-ID: <5f67a8c40912202147t9d9b64al88060bd8a73c28b0@mail.gmail.com>
In-Reply-To: <20091220052703.GA98917@dan.emsphone.com>
References: <5f67a8c40912182147t1adc158ew9fd3d94c4c4c955f@mail.gmail.com> <20091220052703.GA98917@dan.emsphone.com>
On Sun, Dec 20, 2009 at 12:27 AM, Dan Nelson <dnelson@allantgroup.com> wrote:
> In the last episode (Dec 19), Zaphod Beeblebrox said:
>> Here's an interesting conundrum.  I don't know what's different between
>> the TCP that scp uses and the TCP that NFS uses, but given the same two
>> FreeBSD machines, scp fills the pipe with packets better.
>>
>> Examine the following graphic:
>> http://www.eicat.ca/~dgilbert/example-mrtg.png
>>
>> The system doing the scp and the NFS server is FreeBSD-7.2-p1.  The
>> system receiving the scp and the NFS client is FreeBSD-8.0-p1.
>>
>> The scp transfer is the left-hand side of the graph and the NFS transfer
>> is on the right.
>>
>> The NFS is mounted with "-3 -T -b -l -i" and no other options.  Files are
>> being moved over NFS with the system "mv" command.  The files in each case
>> are large (50 to 500 meg files).
>
> If you increase the NFS blocksize (-r 32768, for example) you will get
> slightly better performance, but you will likely never match the scp
> results.  They're doing two different things under the hood: scp is
> streaming the entire file in one operation, while NFS is performing many
> "read 8k at offset 0", "read 8k at offset 8k", etc. requests one after
> another, so a high-latency connection will take a performance hit due to
> the latency in issuing each command.  According to the mount_nfs manpage,
> it looks like there is some prefetching that can be enabled with the
> "-a ##" option.  It doesn't say what the default is, though.

While the link is slow, it is really directly connected, with a latency of
10 ms or so.  Isn't mv mmap()'ing large enough regions to cause there to be
a reasonable queue to transfer?
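Dan's point about per-request latency can be sketched numerically.  This is
a simplified model, not a measurement of the machines in the thread: it
assumes one READ is in flight at a time (no read-ahead), a 10 ms round trip
as mentioned above, and a hypothetical 100 Mbit/s link.

```python
# Toy model: effective throughput of strictly sequential NFS-style
# reads, where each block costs one round trip plus its serialization
# time on the wire.  All parameters below are illustrative assumptions.

def seq_read_throughput(block_bytes, rtt_s, wire_bytes_per_s):
    """Bytes/s achieved when each read of block_bytes is issued only
    after the previous reply has fully arrived."""
    per_block_s = rtt_s + block_bytes / wire_bytes_per_s
    return block_bytes / per_block_s

RTT = 0.010            # ~10 ms latency, as stated in the thread
WIRE = 100e6 / 8       # assumed 100 Mbit/s link = 12.5 MB/s raw

for bs in (8192, 32768, 131072):
    mbit = seq_read_throughput(bs, RTT, WIRE) * 8 / 1e6
    print(f"{bs:>6}-byte blocks: {mbit:5.1f} Mbit/s effective")
```

With these assumptions, 8 KiB blocks leave most of the pipe idle waiting
for replies, and larger blocks (or read-ahead, which keeps several requests
in flight) recover much of the lost bandwidth, which is consistent with the
suggestion to raise -r or enable -a prefetching.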