Date:      Fri, 26 Aug 2005 20:14:36 +1000
From:      Peter Jeremy <PeterJeremy@optushome.com.au>
To:        Marian Hettwer <MH@kernel32.de>
Cc:        freebsd-current@freebsd.org
Subject:   Re: filesystem performance with lots of small files
Message-ID:  <20050826101436.GJ37107@cirb503493.alcatel.com.au>
In-Reply-To: <430E06AA.2000907@kernel32.de>
References:  <430E06AA.2000907@kernel32.de>

On Thu, 2005-Aug-25 19:58:02 +0200, Marian Hettwer wrote:
>Back to the topic. I have a directory with several thousands (800k and 
>more) small files. UFS2 shows a pretty low performance.

Is your problem lots of small files or lots of files in a single
directory?  These are totally different problems.  And what do you
mean by "pretty low performance"?  What are you measuring?

Unix filesystems traditionally use linear searching of directories.  UFS
with UFS_DIRHASH offers some performance improvements, but at some
point you still need to scan the entire directory to determine whether
a filename is or is not present.  The solution is to avoid having lots
of files in a single UFS directory: either use a directory tree (as
squid and some inn options do) or use an inode filesystem (which I
thought had been committed but I can't see it in NOTES).
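The directory-tree approach can be sketched as follows.  This is a
hypothetical illustration (the root path, fan-out width, and hash choice
are my assumptions, not anything squid or inn actually mandates): each
filename is hashed, and the leading hex digits of the digest pick a
nested subdirectory, so no single directory ever holds more than a tiny
fraction of the files.

```python
import hashlib
import os

def spread_path(root, name, levels=2, width=2):
    """Map a filename to a nested subdirectory chosen by hashing the
    name, so no single directory grows beyond a manageable size.
    With levels=2 and width=2 hex digits, files fan out over
    256 * 256 = 65536 directories."""
    digest = hashlib.md5(name.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, name)

# Example: 800k files land in ~65536 directories, roughly a dozen each.
# "/var/data" and the filename are made-up values for illustration.
path = spread_path("/var/data", "message-000123.txt")
```

The same scheme underlies squid's two-level cache_dir layout; any stable
hash works, since the only goal is an even spread, not cryptographic
strength.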

For "lots of small files", any filesystem is going to show relatively
low I/O performance, because the overhead involved in accessing the
first block of a file is fixed, and you get no benefit from the
large-block sequential read-ahead that makes reading 64K-128K not much
slower than reading 1K.
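A back-of-the-envelope model shows why.  The seek time and transfer
rate below are illustrative assumptions for a disk of that era, not
measurements: every file pays a fixed cost before any data moves, so
effective throughput collapses as files shrink.

```python
# Assumed figures, chosen only to illustrate the fixed-overhead effect.
SEEK_MS = 8.0          # average seek + rotational delay per file
TRANSFER_MB_S = 50.0   # sequential transfer rate once positioned

def read_time_ms(size_kb):
    """Time to read one file: fixed per-file overhead plus transfer."""
    return SEEK_MS + (size_kb / 1024.0) / TRANSFER_MB_S * 1000.0

def throughput_mb_s(size_kb):
    """Effective throughput for a stream of files of this size."""
    return (size_kb / 1024.0) / (read_time_ms(size_kb) / 1000.0)

for kb in (1, 64, 128):
    print(f"{kb:4d} KB files: {throughput_mb_s(kb):6.2f} MB/s effective")
```

Under these assumptions a 1K file costs ~8.02 ms but only ~0.02 ms of
that is transfer, so effective throughput is a tiny fraction of the
disk's rate, while 64K files recover most of it.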

-- 
Peter Jeremy


