Date: Thu, 13 Jul 2000 14:08:10 -0700 (PDT)
From: "Eric J. Schwertfeger" <ejs@bfd.com>
To: questions@freebsd.org
Subject: any faster way to rm -rf huge directory?
Message-ID: <Pine.BSF.4.21.0007131356240.65575-100000@harlie.bfd.com>
Thanks to a programmer-induced glitch in a data-pushing Perl script, we are the proud owners of a directory with (currently) a quarter million files in it. The directory is on a vinum partition striped across three Seagate Barracudas, with soft updates enabled.

I ran "rm -rf <dirname>" to clean up the directory, and that was Monday. At the current rate of deletion (just under 10 files/minute), it's going to be a week or two before it finishes, though it should speed up as the directory shrinks. I understand at a technical level why it is going so slowly, so I'm not complaining (I'm the one who insists that any directory with over 10,000 files be split up).

My question is, short of backing up the rest of the disk, newfs, and restore (not an option, since this is the main partition of a live server), is there a faster way to do this? It's not a critical issue; we have plenty of room, and despite the fact that all the drive lights are flickering madly nonstop, the system's performance isn't off too much. So it's more a matter of curiosity.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-questions" in the body of the message
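For illustration only (this sketch is not part of the original message): a small-scale reproduction of the cleanup described above, using the `rm -rf` approach from the post alongside a commonly suggested alternative, `find ... -delete`, which unlinks each entry as it walks the directory. The path `/tmp/hugedir` and the file count are made up for the demo; note that the real bottleneck here is the linear directory lookup per unlink on a huge FFS directory, so neither command is guaranteed to be dramatically faster than the other.

```shell
#!/bin/sh
# Reproduce a (much smaller) version of the problem directory.
# /tmp/hugedir is a hypothetical path chosen for this demo.
mkdir -p /tmp/hugedir
i=1
while [ "$i" -le 1000 ]; do
    : > "/tmp/hugedir/file$i"
    i=$((i + 1))
done

# The approach from the message (commented out here so the
# alternative below has something left to delete):
#   rm -rf /tmp/hugedir

# Commonly suggested alternative: find unlinks files as it
# traverses, then the emptied directory is removed.
find /tmp/hugedir -type f -delete
rmdir /tmp/hugedir
```

Whether this helps in practice depends on where the time is going; with a quarter-million entries, each unlink still pays for a scan of the (still large) directory, which is why the deletions speed up as the directory shrinks.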