Date:      Tue, 15 Feb 2000 21:22:08 -0600 (CST)
From:      Joe Greco <jgreco@ns.sol.net>
To:        dillon@apollo.backplane.com (Matthew Dillon)
Cc:        hackers@freebsd.org
Subject:   Re: Filesystem size limit?
Message-ID:  <200002160322.VAA02109@aurora.sol.net>
In-Reply-To: <200002160012.QAA46218@apollo.backplane.com> from Matthew Dillon at "Feb 15, 2000  4:12:36 pm"

>     Personally I think going to 64 bit block numbers is overkill.  32 bits
>     is plenty (for the next few decades) and, generally, people running 
>     filesystems that large tend to be in the 'fewer larger files' category
>     rather than the 'billions of tiny files' category, so using a large 
>     block size is reasonable.   At the moment the filesystem block size is 
>     the kernel's minimum disk I/O (at least when accessing portions backed
>     by full blocks), but it is far more likely that we change the kernel 
>     to do less than full block reads than it is that we bump up the block
>     number to 64 bits.
> 
>     Given a kernel modified to not have to read full blocks, the filesystem
>     block size becomes more of a 'reservation size' and in multi-terabyte
>     filesystems it would not be unreasonable to make this something really
>     big, like a megabyte (a fragment would then be 128K).  With a blocksize
>     of a megabyte filesystems up to 2048 TB would be possible with 31 bit
>     block numbers.
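[Editor's note: a quick sanity check of the arithmetic in the quoted passage, as a sketch. With 31-bit block numbers the maximum filesystem size is 2^31 blocks times the block size; the helper name and constants below are illustrative, not from any FreeBSD source.]

```python
# With n-bit block numbers, a filesystem can address 2**n blocks,
# so its maximum size is 2**n * block_size bytes.
MAX_BLOCKS_31BIT = 2**31  # 31 usable bits of a signed 32-bit block number

def max_fs_bytes(block_size_bytes: int, max_blocks: int = MAX_BLOCKS_31BIT) -> int:
    """Maximum addressable filesystem size for a given block size."""
    return max_blocks * block_size_bytes

KB, MB, TB, PB = 2**10, 2**20, 2**40, 2**50

# 1 MB blocks with 31-bit block numbers: 2**31 * 2**20 = 2**51 bytes.
print(max_fs_bytes(MB) // TB)  # 2048 TB
print(max_fs_bytes(MB) // PB)  # i.e. 2 PB

# For comparison, the then-common 8K block size tops out far lower:
print(max_fs_bytes(8 * KB) // TB)  # 16 TB
```

So "2048 TB" and "2 PB" in the discussion below are the same limit stated in different units.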

After reflecting on this for a few hours:

In 1990, I considered a gigabyte to be a lot of space.

In 2000, I consider a terabyte to be a lot of space.

I'm wondering if "32 bits is plenty (for the next few decades)" is a
reasonable statement.  I'd extrapolate that:

In 2010, I may consider a petabyte to be a lot of space.

Since the limit of 2048TB is actually 2PB, I don't know if I'd consider
it plenty for more than a decade.

It's probably correct to say that it wouldn't be a serious issue for a 
decade.  But more general statements would seem Gates-ian in nature.

I do know that I'd really like to be able to use larger block sizes and
have it work right, though, regardless of any partial block optimizations
put in the kernel.  For my uses, right now, I can either be smart enough
to optimize or I can know I don't need to worry about the extra baggage
of reading extra blocks.  Given the growth in Usenet binaries, I'm forced
to keep growing the storage, and it is quite possible that within the
year I will be building individual filesystems with half-terabyte capacity
or more.

In 2020, I may consider an exabyte (EB) to be a lot of space.
In 2030, I may consider a zettabyte (ZB) to be a lot of space.
In 2040, I may consider a yottabyte (YB) to be a lot of space.

That's a lottabytes.  But I'll probably be too old to care.

... Joe

-------------------------------------------------------------------------------
Joe Greco - Systems Administrator			      jgreco@ns.sol.net
Solaria Public Access UNIX - Milwaukee, WI			   414/342-4847


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message



