Date: Thu, 20 Jan 2005 15:47:07 +0100
From: David Landgren <david@landgren.net>
To: freebsd-stable@FreeBSD.ORG
Subject: Re: Very large directory
Message-ID: <41EFC46B.6070909@landgren.net>
In-Reply-To: <200501201130.j0KBUKMZ066099@lurza.secnetix.de>
References: <200501201130.j0KBUKMZ066099@lurza.secnetix.de>
Oliver Fromme wrote:
> Peter Jeremy <PeterJeremy@optushome.com.au> wrote:
> > On Wed, 2005-Jan-19 21:30:53 -0600, Phillip Salzman wrote:
> > > They've been running for a little while now - and recently we've
> > > noticed a lot of disk space disappearing.  Shortly after that, a
> > > simple du into our /var/spool returned a not so nice error:
> > >
> > >    du: fts_read: Cannot allocate memory
> > >
> > > No matter what command I run on that directory, I just don't seem
> > > to have enough available resources to show the files let alone
> > > delete them (echo *, ls, find, rm -rf, etc.)
> >
> > I suspect you will need to write something that uses dirent(3) to
> > scan the offending directory and delete (or whatever) the files
> > one by one.
> >
> > Skeleton code (in perl) would look like:
> > [...]
>
> I would suggest trying this simple hack:
>
>    cd /var/spool/directory ; cat . | strings | xargs rm -f
>
> It's a dirty hack, but might work, if the file names in
> that directory aren't too strange (no spaces etc.).

Why suggest a dirty hack that might not work, when the proposed Perl
script would have worked perfectly?

David