Date:      Tue, 24 Jan 95 10:06:51 MST
From:      terry@cs.weber.edu (Terry Lambert)
To:        alan@picard.isocor.ie (Alan Byrne)
Cc:        questions@FreeBSD.org, alan@buster.internet-eireann.ie
Subject:   Re: Slow ftp transfer times on Ethernet
Message-ID:  <9501241706.AA12331@cs.weber.edu>
In-Reply-To: <Pine.PTX.3.91.950124154639.15526C@picard.isocor.ie> from "Alan Byrne" at Jan 24, 95 04:09:56 pm

> I have just finished installing FreeBSD 1.1.5.1 on a Pentium P90. I 
> intend using it as an NFS fileserver for our engineering group. Just 
> after I installed the bindist, I configured the network card (SMC) and 
> started to transfer files via ftp across our ethernet. 
> The results were very disappointing. 
> From other servers (Sequent, SUN, SCO) to the FreeBSD system it was very 
> slow - around 30K to 100K/sec transfer rate.
> From the FreeBSD server to any other system the transfer rates ranged 
> from 400K to 800K/sec, somewhat more normal.
> Is this a problem with my hardware or with ftp itself? (I think it's the 
> hardware/configuration myself.)
> During the slow ftp transfers, it seems to grab around 40-80K then 
> pauses, the disk access light flashes, grabs some more, etc....

This is because NFS writes are synchronous; each write turns into an
apparent request/response exchange, and the client is not permitted to
send the next packet until the server has acknowledged the previous one.

The current Sun and SVR4 NFS servers are configured by default to use
async writes, and are apparently faster because of this, at a trade-off
in reliability (crash your server, and your client believes he has
written data that actually did not get to disk).

You can make the writes async (I forget how -- anyone?  I need code in
front of me to answer this one, I don't have the NFS code memorized), but
clients using O_WRITESYNC are going to be screwed by thinking they are
doing something that they are not... just like clients of Sun/Sequent/SCO
are being screwed by default.
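For concreteness, here is a small sketch of that sync/async distinction
using the POSIX O_SYNC flag on a local file (my illustration, not part of
the original discussion; the path and contents are made up):

```python
import os
import tempfile

# POSIX O_SYNC makes each write() block until the data reaches stable
# storage -- the same guarantee a synchronous NFS server gives clients.
path = os.path.join(tempfile.mkdtemp(), "sync_demo")

# Synchronous: write() does not return until the data is on disk.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
os.write(fd, b"committed to stable storage before write() returns\n")
os.close(fd)

# Ordinary buffered I/O: write() returns once the kernel has the data.
# Crash before writeback and it is lost -- the writer still believes
# the data got to disk, which is exactly the async-server trade-off.
fd = os.open(path, os.O_WRONLY | os.O_APPEND, 0o644)
os.write(fd, b"may sit in the buffer cache until writeback\n")
os.close(fd)
```

A client that opened its file with O_WRITESYNC expects the first behavior;
an async server silently gives it the second.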

With async writes, the packet propagation and data write latency are
counted once over a run of n packets.  With a request/response protocol
architecture, the latency is counted once per packet.  This is the
reason protocols like SMB (LanMan) and NCP (Novell) really suck out.
Novell helps a little in the case of sequential I/O with "packet burst",
which cranks the average up; for instance, NetWare for UNIX gets better
numbers than Native NetWare when using packet burst, but worse without
because of the stack latency in UnixWare (I got the cache prefetch for
sequential reads down, and the actual system I/O on cached data is on
the order of 200uS if the file is mmapped).
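The per-packet versus per-run latency accounting can be made concrete with
a toy model (the numbers below are mine, chosen only to show the shape of
the effect, not measured from any real stack):

```python
# Request/response pays one round-trip latency per packet; async/burst
# writes pay it once per run of `burst` packets.

def throughput_kbs(packet_kb, latency_s, wire_kbs, burst=1):
    """Effective throughput: `burst` packets go out back to back,
    then one round-trip latency is paid before the next run starts."""
    run_kb = packet_kb * burst
    return run_kb / (latency_s + run_kb / wire_kbs)

# Assumed figures: 8 KB packets, 5 ms round trip, 1000 KB/s wire rate.
sync_rate  = throughput_kbs(8, 0.005, 1000.0, burst=1)   # latency every packet
burst_rate = throughput_kbs(8, 0.005, 1000.0, burst=16)  # latency every 16

print(f"sync:  {sync_rate:.0f} KB/s")
print(f"burst: {burst_rate:.0f} KB/s")
```

The burst case approaches the wire rate as the run length grows, which is
why "packet burst" cranks NetWare's sequential numbers up.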

The same issues are also present when attempting to use a box as a router
between high speed networks (currently, the UofU uses dedicated RS/6000's
as T3->T1 fan-out-units, solving the problem by throwing compute power
at it).

It would actually be of benefit to BSD (or any OS, for that matter) to
improve this benchmark, both to reduce the latency of internal operations
and for the sake of things like Samba, which are, and will remain, defined
as request/response protocols.
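A minimal version of such a latency measurement might look like this
(a hypothetical sketch of mine using a local socket pair; a real NFS or
SMB number would also include network and disk time, but the shape of the
measurement is the same):

```python
import socket
import time

def mean_rtt(rounds=1000, payload=b"x" * 64):
    """Average round-trip time of a request/response exchange over a
    local socket pair, in seconds per round trip."""
    a, b = socket.socketpair()
    start = time.perf_counter()
    for _ in range(rounds):
        a.sendall(payload)    # "request"
        b.recv(len(payload))
        b.sendall(payload)    # "response"
        a.recv(len(payload))
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    return elapsed / rounds

print(f"mean round trip: {mean_rtt() * 1e6:.1f} us")
```

In a per-packet-latency protocol, that round-trip figure bounds throughput
directly, so shaving it pays off linearly.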


					Terry Lambert
					terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.


