Date:      Fri, 18 Nov 2005 19:24:17 -0500
From:      Mikhail Teterin <mi+mx@aldan.algebra.com>
To:        questions@freebsd.org
Subject:   throttling NFS writes
Message-ID:  <200511181924.17282.mi%2Bmx@aldan.algebra.com>

Hi!

We have an unusual problem with NFS writes being _too fast_ for our good.

The system accepts database dumps from NFS clients and begins compressing 
each dump as soon as it begins arriving (waiting for more data via kevent, if 
needed).
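The "wait for more via kevent" part can be sketched roughly as below. This is only an illustration of the idea, not our actual script: the file name handling, the use of zlib for compression, and the lack of an end-of-dump signal are all simplifications.

```c
/* Sketch: follow a still-growing dump file and compress it
 * incrementally, blocking in kevent() whenever we reach the
 * current end of the file.  FreeBSD-specific (kqueue). */
#include <sys/types.h>
#include <sys/event.h>
#include <fcntl.h>
#include <unistd.h>
#include <zlib.h>

int
compress_as_it_arrives(const char *path, gzFile out)
{
	char	buf[64 * 1024];
	struct kevent ev;
	ssize_t	n;
	int	fd, kq;

	if ((fd = open(path, O_RDONLY)) < 0)
		return (-1);
	if ((kq = kqueue()) < 0) {
		close(fd);
		return (-1);
	}

	/* EVFILT_READ on a vnode fires when there is data past
	 * our current read offset. */
	EV_SET(&ev, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
	if (kevent(kq, &ev, 1, NULL, 0, NULL) < 0)
		goto out;

	for (;;) {
		n = read(fd, buf, sizeof(buf));
		if (n > 0) {
			gzwrite(out, buf, (unsigned)n);
			continue;
		}
		if (n < 0)
			break;		/* read error */
		/* At the (current) EOF: sleep until the NFS client
		 * appends more.  A real version also needs some way
		 * to learn that the dump is complete, e.g. a timeout
		 * or a sentinel file -- omitted here. */
		if (kevent(kq, NULL, 0, &ev, 1, NULL) < 0)
			break;
	}
out:
	close(kq);
	close(fd);
	return (0);
}
```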

The NFS clients (database servers) run on slow Sparc processors and cannot be 
bothered to compress their data...

The setup works quite well if the to-be-compressed data is still in memory 
when the compressor gets to it.

"Unfortunately", those Sparc systems have rather fast I/O and manage to 
write their dumps faster than the compressor can compress them. When this 
happens, the overall performance of the backup script goes through the 
floor :-(, because the disk is forced to read from the middle of a file (for 
compression) while data keeps arriving (from the NFS client) at the end of 
it...

So we'd like to stall the clients' dumping, so that the compressor can keep 
up. Short of limiting NFS bandwidth via ipfw, is there a way to throttle NFS 
speed dynamically?
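(For reference, the static ipfw approach we'd rather avoid would look 
something like the following -- a dummynet pipe capping inbound traffic to 
the NFS port; the 2 Mbit/s figure is just a placeholder:

```shell
# Cap inbound NFS (port 2049) traffic through a dummynet pipe.
ipfw pipe 1 config bw 2Mbit/s
ipfw add 100 pipe 1 ip from any to me 2049 in
```

The problem with this is that the limit is fixed, while we'd want it to 
track how far behind the compressor is.)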

The uncompressed dumps are _huge_, although they compress very well. So we 
cannot just accept all of them first and then start compressing -- we don't 
have enough room. There is enough space to keep about three full dumps' worth 
of compressed data, but even a single uncompressed full dump would not fit...

	-mi


