Date:      Thu, 28 Aug 2008 11:50:05 -0400
From:      "Bob Johnson" <fbsdlists@gmail.com>
To:        prad <prad@towardsfreedom.com>
Cc:        "freebsd-questions@freebsd.org" <freebsd-questions@freebsd.org>
Subject:   Re: defrag
Message-ID:  <54db43990808280850o29352e83me250d067f0c76717@mail.gmail.com>
In-Reply-To: <20080827172946.5a1d4103@gom.home>
References:  <20080827172946.5a1d4103@gom.home>

On 8/27/08, prad <prad@towardsfreedom.com> wrote:
> something that has puzzled me for years (but i've never got around to
> asking) is how does *nix get away without regular defrag as with
> windoze.
>

Essentially, the UFS filesystem (and its close relatives) is
intentionally fragmented in a controlled way as files are written, so
that the effect of the fragmentation is limited. Files are written at
sort-of-random locations all over the disk (UFS divides the disk into
regions called cylinder groups and spreads files among them), rather
than starting at one end and working toward the other, and there is a
limit to how much sequential space a single file can occupy in any one
group, so a large file essentially gets broken up and stored as if it
were a collection of smaller files. The result is that as long as a
reasonable amount of empty disk space is available, it will be
possible to find space to store a new file efficiently. This is why
the filesystem wants at least 8% empty space. If free space drops
below that threshold, it switches from the speed-optimizing mode I
just described to one that packs data into the remaining space as
tightly as possible, at the cost of speed. FreeBSD also by default
reserves some disk space for administrative use that is not available
to normal users.
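
If you want to see how your own filesystem is set up, tunefs can print
the current tuning values (the device name here is just an example;
substitute the right one for your system):

  # tunefs -p /dev/ad0s1a

Among other things, that shows the minimum free space percentage
(minfree, 8% by default) and whether the filesystem is currently set
to optimize for time or for space.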

One result of this scheme (and other issues) is that access time for
large files suffers a bit (though not as much as it would if they were
heavily fragmented). If you are setting up a volume mainly for storing
large files, you can adjust some of the parameters (e.g. using
tunefs(8)) so the filesystem handles them more efficiently, at the
expense of wasting space on small files.
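
For example, tunefs has a knob for the maximum number of blocks a
single file may allocate within one cylinder group before it is forced
to move on to the next one (-e, maxbpg); raising it lets big files
stay more contiguous, at some cost to the layout of everything else.
The value and device below are just examples, and the filesystem
should be unmounted (or mounted read-only) when you change it:

  # tunefs -e 4096 /dev/ad0s1a

Block and fragment sizes, on the other hand, can only be chosen at
newfs time, so for a dedicated large-file volume it's worth thinking
about those before you create it.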

> fsck is equivalent to scandisk, right?

Pretty much. It looks for errors and tries to fix them. It does not
attempt to defragment the disk. Unless the disk is almost full,
defragmenting probably wouldn't improve things enough to matter.
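
If you just want the report without any changes, fsck will run in
no-write mode with -n, answering "no" to every repair prompt (the
device name is just an example):

  # fsck -n /dev/ad0s1a

The summary line it prints at the end includes a fragmentation
percentage, which on a UFS volume with a healthy amount of free space
is typically very small.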

>
> so when you delete files and start getting 'holes', how does *nix deal
> with it?
>

The process of scattering files across the disk intentionally leaves
holes everywhere (that's what I mean by controlled fragmentation).
When you add and delete files, those holes grow and shrink, and merge
or split apart, but until the disk gets very full there should always
be holes big enough to store new files efficiently. The difference
between this and what happens in a FAT filesystem is that the
allocation policy is designed so there is a statistically high
likelihood that the holes it produces will be large enough to be
reused efficiently.
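
If you're curious, dumpfs(8) will print the superblock and
per-cylinder-group details for a UFS filesystem, including how the
free space is distributed among the groups (again, the device name is
just an example):

  # dumpfs /dev/ad0s1a | less

It's verbose, but it gives you a concrete picture of the "holes" I'm
describing.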

> --
> In friendship,
> prad

I hope that helps. And as usual, if I got any of that wrong, someone
please correct me. If I answer this question often enough, I will
eventually get it right, then perhaps we can make it a FAQ ;-)

- Bob


