Date:      Tue, 21 Nov 2006 14:47:23 +0000
From:      Dieter <freebsd@sopwith.solgatos.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: TCP parameters and interpreting tcpdump output 
Message-ID:  <200611212247.WAA29357@sopwith.solgatos.com>
In-Reply-To: Your message of "Mon, 20 Nov 2006 11:19:52 EST." <20061120111952.4213dacb.wmoran@collaborativefusion.com> 

In message <20061120111952.4213dacb.wmoran@collaborativefusion.com>, Bill Moran writes:

> > But... if I do something like copy a large file from one disk to another,
> > and then do something that needs to read from a third disk, the new process
> > may hang for a very very long time.  If I suspend (^Z) the copy process for
> > a moment, the new process gets its data.  I suspect that the kernel is
> > letting the copy process kick everything else out of memory.  To some extent
> > that makes sense.  It is caching the most recently accessed data.  What I
> > haven't figured out is why the new process is allowed to hang for so long.
> 
> I'm surprised that you're seeing that much of a "hang".  Even if the disks
> are busy, the system should slow down all disk processes equally, so no
> one process "blocks", but they're all a little slower.

That is what I would expect, but that is not what I get.

> > I had thought of putting in a circular buffer, but figured that it should
> > be unnecessary since the normal Unix write-behind should buffer the
> > writes from the disk I/O for me.  I'll give it a try, maybe it will help.
> 
> First, use the /dev/null test to verify whether or not the disks really
> are the problem.  You don't want to waste a lot of time on something
> that may be unrelated.

It works much better with /dev/null.  With a circular buffer, the transfer
seems to work unless I generate a lot of disk I/O on the same disk it is
writing to.  I can do a "cat big_file > /dev/null" and the transfer keeps up
if big_file is on a different disk, but with the extra I/O going to the same
disk it fails.  Seeks are slow, so that isn't surprising.  Despite the
seeking, if I do a "cat big_file > /dev/null" (with big_file on the same
drive) I still get about 45-50 MiB/s according to systat -vmstat.
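
For the record, here is roughly the shape of the reader/writer loop I mean.
This is just a sketch, not my actual code; the 8 MiB size, the names, and
the select() layout are all illustrative:

	/* Ring buffer between the network socket and stdout (sketch only). */
	#include <sys/select.h>
	#include <unistd.h>

	#define RING_SIZE (8u * 1024 * 1024)    /* 8 MiB, arbitrary */

	static char   ring[RING_SIZE];
	static size_t head, tail, used;  /* head: next byte in, tail: next out */

	/* Copy socket -> ring -> stdout until the sender closes and the
	 * ring drains. */
	int pump(int sock)
	{
	    int eof = 0;

	    while (!eof || used > 0) {
	        fd_set rfds, wfds;
	        int nfds = (sock > STDOUT_FILENO ? sock : STDOUT_FILENO) + 1;

	        FD_ZERO(&rfds);
	        FD_ZERO(&wfds);
	        if (!eof && used < RING_SIZE)
	            FD_SET(sock, &rfds);
	        if (used > 0)
	            FD_SET(STDOUT_FILENO, &wfds);
	        if (select(nfds, &rfds, &wfds, NULL, NULL) < 0)
	            return -1;

	        if (FD_ISSET(sock, &rfds)) {
	            size_t n = RING_SIZE - head;     /* contiguous free space */
	            if (n > RING_SIZE - used)
	                n = RING_SIZE - used;
	            ssize_t r = read(sock, ring + head, n);  /* the read that dies */
	            if (r < 0)
	                return -1;
	            if (r == 0)
	                eof = 1;
	            head = (head + (size_t)r) % RING_SIZE;
	            used += (size_t)r;
	        }
	        if (FD_ISSET(STDOUT_FILENO, &wfds)) {
	            size_t n = RING_SIZE - tail;     /* contiguous data */
	            if (n > used)
	                n = used;
	            ssize_t w = write(STDOUT_FILENO, ring + tail, n);
	            if (w < 0)
	                return -1;
	            tail = (tail + (size_t)w) % RING_SIZE;
	            used -= (size_t)w;
	        }
	    }
	    return 0;
	}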

The weird thing is *how* it fails.  The write doesn't fail or return
a short count.  The buffer never fills up.  It is the read from the
socket that fails:

	reading 1316 bytes from port
	read() from socket failed: Connection reset by peer
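
(Those messages come from a check along these lines around the socket read;
this is paraphrased, not the exact source:)

	#include <errno.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	/* Read from the socket, reporting errors the way the log above shows. */
	ssize_t read_from_port(int sock, char *buf, size_t want)
	{
	    ssize_t r;

	    fprintf(stderr, "reading %zu bytes from port\n", want);
	    r = read(sock, buf, want);
	    if (r < 0) {
	        fprintf(stderr, "read() from socket failed: %s\n",
	            strerror(errno));
	        exit(1);
	    }
	    return r;
	}

"Connection reset by peer" is strerror(ECONNRESET), so the errno really is
coming from the socket read; the disk-side write() never reports anything.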

SWAG: 
I wonder if having stdout be non-blocking means the kernel is doing
some sort of behind-the-scenes locking of the memory passed to the
write() call?  If so, the buffer really is filling up, it just isn't
visible to me.  Then, when the read pointer catches up to the locked
memory, the kernel would block the read call, and the src machine would
eventually get tired of waiting and drop the connection.
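
For reference, the non-blocking setup on stdout is just the usual fcntl()
dance (sketch below, error handling trimmed).  As far as I know, on
descriptors where O_NONBLOCK actually means something (pipes, sockets), a
write() that cannot proceed should come back short or fail with EAGAIN
rather than quietly pinning memory, which is what makes the symptom above
so puzzling:

	#include <fcntl.h>
	#include <unistd.h>

	/* Put a descriptor (stdout here) into non-blocking mode. */
	int set_nonblocking(int fd)
	{
	    int flags = fcntl(fd, F_GETFL, 0);

	    if (flags < 0)
	        return -1;
	    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
	}

	/* e.g. set_nonblocking(STDOUT_FILENO); */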


