Date:      Mon, 3 Dec 2007 16:12:08 +1030
From:      Ian West <ian@niw.com.au>
To:        freebsd-stable@freebsd.org
Subject:   Swapping caused by very large (regular) file size
Message-ID:  <20071203054207.GA1153@aleph.niw.com.au>

Hello, I have noticed while benchmarking a system with a fair bit of RAM
(3G usable of 4G installed) that using a very large file (3G upwards) in
a simple benchmark causes the system to swap, even though top shows the
process itself is not using much memory. As soon as the swapping starts,
throughput degrades dramatically. The 'inactive' RAM shown in top
increases rapidly and 'free' RAM shrinks; that much seems fair and
sensible, but then paging out, possibly to the same spindle/array the
benchmark is writing to, seems like a bad idea.
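For what it's worth, the effect is easy to watch from another terminal
while the test runs; just stock vmstat/sysctl, nothing exotic, shows the
inactive queue growing and free pages falling until the pager kicks in:

    # watch the page queues once a second while dd runs
    vmstat -w 1

    # or poll the raw page counters directly
    sysctl vm.stats.vm.v_active_count vm.stats.vm.v_inactive_count \
           vm.stats.vm.v_free_count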

I have tested this on a 4.11 system with 512M of RAM as well as a
RELENG_6 system with an Areca RAID controller; both behave the same
way. Once the file grows past a certain size the system starts paging.
Is there any way to tune this behaviour?
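I am only guessing at knobs here (untested on my side), but the ones
that look relevant are the limits on outstanding write I/O in the
buffer cache, which throttle writers before dirty data piles up:

    # current write-clustering limits (bytes of I/O kept in flight)
    sysctl vfs.hirunningspace vfs.lorunningspace

    # guess: lowering hirunningspace might throttle dd earlier,
    # before the page queues blow out
    sysctl vfs.hirunningspace=1048576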

The test I have been doing is just generating a big file full of nulls,
but bonnie++ causes the same behaviour with very large file sizes.

The following seems to trigger it quite reliably on every box I have
tested:

    dd if=/dev/zero bs=32768 of=junkfile count=100000
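To make runs comparable I have been wrapping it like this (plain sh,
nothing clever), recording swap use before and after:

    #!/bin/sh
    # run the write test and record swap use before and after
    swapinfo
    time dd if=/dev/zero bs=32768 of=junkfile count=100000
    swapinfo
    # cumulative pageout counters since boot
    vmstat -s | grep -i 'paged out'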

Using cp to copy the file doesn't appear to cause the problem.

Any thoughts or suggestions would be much appreciated.




