Date:      Tue, 29 May 2012 01:33:21 -0700
From:      Doug Barton <dougb@FreeBSD.org>
To:        Daniel Kalchev <daniel@digsys.bg>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Millions of small files: best filesystem / best options
Message-ID:  <4FC489D1.6070609@FreeBSD.org>
In-Reply-To: <4FC48729.5050302@digsys.bg>
References:  <1490568508.7110.1338224468089.JavaMail.root@zimbra.interconnessioni.it> <4FC457F7.9000800@FreeBSD.org> <20120529161802.N975@besplex.bde.org> <20120529175504.K1291@besplex.bde.org> <4FC48729.5050302@digsys.bg>

On 5/29/2012 1:22 AM, Daniel Kalchev wrote:
> But how big the entire filesystem is going to be, anyway?

Your math is good, but the problem isn't how big the data is going to be
on disk; it's how to get some kind of reasonable performance out of it.
Just because you can jam something onto a disk doesn't mean you can get
it back off again in any kind of timely manner. :) This is even more true
if you have a large data set combined with a highly random access
pattern that doesn't repeat often enough to benefit from the cache.
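(Not from the original thread, but a rough sketch of what I mean: the
little Python script below creates a pile of small files and then reads
them back once in creation order and once in a shuffled order. The paths
and counts are made up for illustration; scale NFILES toward millions and
the random pass falls off a cliff once the working set stops fitting in
the cache.)

#!/usr/bin/env python3
# Hypothetical micro-benchmark: many small files, sequential vs. random reads.
import os
import random
import time

BASE = "/tmp/smallfile-test"   # scratch directory (assumption, pick your own)
NFILES = 100_000               # scale this up to see the cache stop helping
FSIZE = 2048                   # ~2 KB per file

def create_files():
    os.makedirs(BASE, exist_ok=True)
    payload = b"x" * FSIZE
    for i in range(NFILES):
        with open(os.path.join(BASE, f"f{i:07d}"), "wb") as f:
            f.write(payload)

def read_files(order):
    # Open and read each file in the given order, return elapsed seconds.
    t0 = time.time()
    for i in order:
        with open(os.path.join(BASE, f"f{i:07d}"), "rb") as f:
            f.read()
    return time.time() - t0

if __name__ == "__main__":
    create_files()
    seq = list(range(NFILES))
    rnd = seq[:]
    random.shuffle(rnd)
    print("sequential order: %.1fs" % read_files(seq))
    print("random order:     %.1fs" % read_files(rnd))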

Doug

-- 

    This .signature sanitized for your protection


