Date:      Fri, 10 Dec 1999 08:56:47 -0500 (EST)
From:      Zhihui Zhang <zzhang@cs.binghamton.edu>
To:        freebsd-hackers@freebsd.org, freebsd-fs@freebsd.org
Subject:   Why VMIO directory is a bad idea?
Message-ID:  <Pine.GSO.3.96.991210084142.9306A-100000@sol.cs.binghamton.edu>


I read some postings on the Linux-Archive complaining about the slowness of
lookups in large directories.  Some claim that since directory files are
typically small, advanced techniques such as B+ trees (and, I would add,
hashing) are unnecessary: we can simply preallocate the directory file
contiguously and achieve good performance.
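
To make the linear-scan argument concrete, here is a minimal userland
sketch of a lookup over a flat, contiguous directory image.  The record
layout is only loosely modeled on FFS's struct direct, and this is not
the real ufs_lookup() code:

    #include <stdint.h>
    #include <string.h>

    /* Simplified directory record: fixed header plus a name padded
     * to a 4-byte boundary (d_reclen covers the padding). */
    struct dent {
            uint32_t d_ino;         /* inode number, 0 == unused slot */
            uint16_t d_reclen;      /* total record length */
            uint8_t  d_namlen;      /* name length, NUL excluded */
            char     d_name[1];     /* name follows the header */
    };

    /* Linear scan of a directory image held contiguously in memory.
     * O(n) in the directory size: cheap exactly as long as the
     * "directories are small" assumption above holds. */
    uint32_t
    dir_lookup(const char *buf, size_t len, const char *name)
    {
            size_t nlen = strlen(name);
            size_t off = 0;

            while (off + sizeof(struct dent) <= len) {
                    const struct dent *dp =
                        (const struct dent *)(buf + off);

                    if (dp->d_reclen == 0)
                            break;          /* corrupt image */
                    if (dp->d_ino != 0 && dp->d_namlen == nlen &&
                        memcmp(dp->d_name, name, nlen) == 0)
                            return (dp->d_ino);
                    off += dp->d_reclen;
            }
            return (0);                     /* not found */
    }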

This makes me wonder whether we could read a directory file into memory and
keep it there as long as possible to get good performance.  I remember a
discussion of VMIO directories early this year, and only now am I beginning
to understand the idea.

(1) If a directory file is smaller than one page, memory is wasted to
internal fragmentation.  Why don't we set a threshold, say one page, below
which we do not use VMIO for a directory?
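
For instance (should_vmio_dir() and the tunable below are made up; I am
only illustrating where such a policy check could sit):

    #include <sys/types.h>

    #define VMIO_MIN_DIRSIZE 4096   /* one page; hypothetical tunable */

    /*
     * Hypothetical decision point: only back a directory with VM
     * pages once it is at least a page long, so that many sub-page
     * directories do not each pin a mostly-empty page.
     */
    int
    should_vmio_dir(off_t dirsize)
    {
            return (dirsize >= VMIO_MIN_DIRSIZE);
    }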

(2) If VMIO directories are undesirable for some reason, how about bumping
up the use count of the buffers backing a directory file, so that they stay
on the buffer queue longer?
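
Something like the sketch below; the fields only mimic struct buf, and
the numbers are arbitrary -- this is a standalone illustration of the
policy, not kernel code:

    /* Toy buffer-aging policy: on every cache hit, directory
     * buffers earn a larger use-count bump than ordinary file
     * buffers, so the aging scan must pass over them unused more
     * times before they become candidates for reuse. */
    struct xbuf {
            int b_usecount;         /* decayed by the aging scan */
            int b_isdir;            /* backing vnode is a directory */
    };

    #define MAX_USECOUNT    16
    #define DIR_BONUS       4       /* extra credit for directories */

    void
    buf_hit(struct xbuf *bp)
    {
            bp->b_usecount += bp->b_isdir ? DIR_BONUS : 1;
            if (bp->b_usecount > MAX_USECOUNT)
                    bp->b_usecount = MAX_USECOUNT;
    }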

(3) Or maybe we could add a parameter to the filesystem telling it to
preallocate contiguous disk space for all directory files.  I would guess
that the cost per bit on disk is lower than the cost per bit in memory.
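
The hook might look like this at mkdir time; alloc_contig_blocks() is
imaginary, standing in for whatever the block allocator could offer:

    struct inode;                   /* opaque here */
    int alloc_contig_blocks(struct inode *dp, int nblks);

    #define DIR_PREALLOC_BLKS 8     /* arbitrary reservation size */

    /*
     * If the filesystem was mounted with a (hypothetical)
     * "dirprealloc" option, reserve a contiguous run of blocks for
     * each new directory, trading a little disk space for
     * unfragmented directory reads later.
     */
    int
    mkdir_prealloc(struct inode *dp, int dirprealloc)
    {
            if (!dirprealloc)
                    return (0);
            return (alloc_contig_blocks(dp, DIR_PREALLOC_BLKS));
    }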

Can anyone give me an idea of how big directories get in some environments?
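
For scale, here is a back-of-the-envelope figure using the FFS DIRSIZ()
rounding (an 8-byte header plus the name, NUL included, rounded up to a
4-byte boundary); the entry count and name length are just examples:

    #include <stdio.h>

    /* Bytes consumed by one FFS directory entry (DIRSIZ). */
    static int
    dirsiz(int namlen)
    {
            return (8 + ((namlen + 1 + 3) & ~3));
    }

    int
    main(void)
    {
            long n = 100000;        /* entries */
            long per = dirsiz(12);  /* 12-character names -> 24 bytes */

            printf("%ld entries * %ld bytes = %.1f MB\n",
                n, per, n * (double)per / (1024 * 1024));
            return (0);
    }

So even an unusually large directory of 100,000 entries is only a couple
of megabytes, while typical directories fit in a page or two.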

Any comments or ideas are appreciated.

-Zhihui
