Date:      Fri, 14 Jul 2000 02:01:50 -0700
From:      Doug Barton <DougB@gorean.org>
To:        "Eric J. Schwertfeger" <ejs@bfd.com>
Cc:        questions@freebsd.org
Subject:   Re: any faster way to rm -rf huge directory?
Message-ID:  <396ED6FE.74BA8123@gorean.org>
References:  <Pine.BSF.4.21.0007131356240.65575-100000@harlie.bfd.com>

"Eric J. Schwertfeger" wrote:
> 
> Thanks to a programmer-induced glitch in a data-pushing Perl script, we
> are the proud owners of a directory with (currently) a quarter million
> files in it.  The directory is on a vinum partition striped across three
> Seagate Barracudas, with soft updates enabled.
> 
> I did "rm -rf <dirname>" to clean up the directory, and that was
> Monday.  At the current rate of deletions (just under 10/minute), it's
> going to be a week or two before it gets done, as it should get faster
> once the directory gets smaller.

	It's only deleting 10 files per minute? How big are the files? As for
speeding up 'rm -r directory' itself, I don't see a way to do that, but how
is the system load? I would go into the directory and do 'rm a* & rm b* &
rm c* & rm d* ...', running as many in parallel as the machine can stand
without the load climbing past whatever safety margin you're comfortable
with.
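
	A minimal sketch of that approach, assuming the file names are
spread fairly evenly across the alphabet (the prefixes and the batch
size here are just illustrative):

    cd <dirname>
    # Fire off one rm per alphabetical slice in the background, then
    # wait for the whole batch to finish. With a quarter million files
    # a single glob may exceed the kernel's argument-length limit;
    # splitting into narrower prefixes (aa*, ab*, ...) avoids that.
    for prefix in a b c d; do
        rm -f "$prefix"* &
    done
    wait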

Doug

