Date:      Wed, 17 Dec 2008 13:19:34 -0500
From:      John Almberg <jalmberg@identry.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: How to find files that are eating up disk space
Message-ID:  <5D4D65A5-2E5A-47D6-8829-CC69BA137C86@identry.com>
In-Reply-To: <18761.15838.256303.685029@jerusalem.litteratus.org>
References:  <283ACBF4-8227-4A24-9E17-80A17CA2A098@identry.com> <7B241EE7-10A4-4BAA-9ABC-8DA5D4C1048B@identry.com> <18761.15838.256303.685029@jerusalem.litteratus.org>

>
>>> Is there a command line tool that will help me figure out where the
>>> problem is?
>>
>>  I should probably have mentioned that what I currently do is run
>>
>>  	du -h -d0 /
>>
>>  and gradually work my way down the tree, until I find the
>>  directory that is hogging disk space. This works, but is not
>>  exactly efficient.
>
> 	"-d0" limits the search to the indicated directory; i.e. what
> you can see by doing "ls -al /".  Not superior to "ls -al /" and
> using the Mark I eyeball.

sorry... I meant du -h -d1 <directory>
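
(To spell it out: I start at the top and keep descending into whichever
directory looks biggest, e.g.

	du -h -d1 /
	du -h -d1 /usr
	du -h -d1 /usr/local

-- the directories there are just an example -- which works, but gets
tedious fast.)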

> 	What (I think) you want is "du -x -h /": infinite depth, but do
> not cross filesystem mount-points.  This is still broken in that it
> returns a list where the numbers are in a fixed-width field and
> are visually distinguished only by the last letter.
> 	Try this:
>
> 	du -x /
>
> 	and run the results through "sort":
>
> 	sort -nr
>
> 	and those results through "head":
>
> 	head -n 20

Thanks to everyone who suggested this. A much faster way to find the
big offenders.
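
For the record, the whole suggestion collapses into one pipeline (I've
used / here, but point it at whatever filesystem you're chasing):

	du -x / | sort -nr | head -n 20

i.e. per-directory usage, largest first, top twenty only.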

>
>
> 	I have a cron job which does this for /usr and e-mails me the
> output every morning.  After a few days, weeks at most, I know what
> should be on that list ... and what shouldn't and needs
> investigating.
>

And this is a great proactive measure. Thanks.
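
For anyone reading this later: a crontab entry along these lines should
do the same thing (the 06:00 schedule is just my guess, adjust to taste;
cron mails the job's output to the crontab owner by default):

	# daily report of the 20 biggest directories under /usr
	0 6 * * *	du -x /usr | sort -nr | head -n 20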

-- John



