Date:      Sat, 19 Feb 2000 11:10:42 +1030
From:      Greg Lehey <grog@lemis.com>
To:        John Milford <jwm@CSUA.Berkeley.EDU>
Cc:        Joe Greco <jgreco@ns.sol.net>, Brooks Davis <brooks@one-eyed-alien.net>, peter@netplex.com.au, hackers@FreeBSD.ORG
Subject:   Re: Filesystem size limit?
Message-ID:  <20000219111042.I41278@freebie.lemis.com>
In-Reply-To: <200002160425.UAA02729@soda.csua.Berkeley.edu>
References:  <jgreco@ns.sol.net> <200002160425.UAA02729@soda.csua.Berkeley.edu>

On Tuesday, 15 February 2000 at 20:25:50 -0800, John Milford wrote:
> Joe Greco <jgreco@ns.sol.net>  wrote:
>
>>>
>>> Joe seems to want one.  This size is certainly within the reach of an
>>> ISP now, and disks just keep getting bigger.  My administrative bias is
>>> that partitioning for a reason other than policy should be avoided, and
>>> thus I'd love to see filesystem size support keep ahead of volume sizes
>>> where possible.  That said, unless someone gives me a very substantial
>>> amount of money to build a cluster at work, I'm not going to be building
>>> any TB file systems for a few more years.
>>
>> Well, I just wanted the thrill of it.
>>
>> I should be building additional machines throughout the year.  If anyone is
>> seriously interested in work on terabyte filesystem issues, I may be able
>> to shanghai one for a month or two and provide access to it.  I may even be
>> able to push it over the 2TB mark (barely).  I do not have the
>> qualifications or need to be doing this myself, though, alas.
>>
>> 72GB disks will be available later this year.  Expect 2.6TB servers.  :-)
>
> 	I will assert that it is insanity to build and use a 1TB UFS
> for small files (~ 2.5e8 inodes or 32GB), at least with the current
> technology.  Maybe I am wrong; if anyone thinks so, feel free to tell
> me.  Having said this, I think that Matt's idea of increasing the
> effective sector size may be the way to go.

The "effective sector size" is really the block size.  Frags don't
count, since there's a maximum of one of them in any file.  We already
have the facility to create large blocks, but in the current
implementation that drags up the size of the frags too, which is not
desirable.  The real issue is that ufs measures data in (real)
sectors, and they have been fixed at (approximately) 512 bytes for a
long time.  As Matt and I discussed, I think it's more sensible to
count in bytes.
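
To put some numbers on that, here's a rough back-of-the-envelope
sketch (illustrative C only, not anything from the tree; it assumes a
signed 32-bit sector number and 512-byte sectors, which is my reading
of the discussion above):

/*
 * Rough limits for a filesystem that addresses storage in fixed-size
 * units with a signed 32-bit number (2^31 addressable units), versus
 * counting in bytes with a signed 64-bit offset.  Illustrative
 * arithmetic only.
 */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	const uint64_t max_units = (uint64_t)1 << 31;	/* signed 32-bit */
	const uint64_t unit_sizes[] = { 512, 4096, 65536, 1048576 };
	size_t i;

	for (i = 0; i < sizeof(unit_sizes) / sizeof(unit_sizes[0]); i++) {
		uint64_t bytes = max_units * unit_sizes[i];
		printf("%7ju-byte units -> max fs %8ju GB\n",
		    (uintmax_t)unit_sizes[i], (uintmax_t)(bytes >> 30));
	}

	/* Counting in bytes with a signed 64-bit offset instead */
	printf("64-bit byte offsets -> max fs 2^63 bytes (8 exabytes)\n");
	return (0);
}

With 512-byte units that comes out at 1TB, which is exactly the wall
being discussed; raising the unit size raises the limit, but as noted
above it also drags the frag size up with it.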

> 	Correct me if I am wrong, but the sector size is what has to
> change, and not just the block size.  This being true, it would seem
> that if we wanted 2048TB in a FS, the minimum fragment size would
> be 1MB (the virtual sector size), as there would be no way of
> addressing anything smaller.

Only if you stuck to 32 bit block numbers, and I don't think that's a
good idea.
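
For the record, here's the arithmetic behind the 1MB figure, and what
64-bit block numbers would buy instead (again just an illustrative
sketch, assuming the block number is signed):

/*
 * 2048TB with only 2^31 addressable blocks forces a 1MB minimum
 * addressable unit; with 64-bit block numbers the unit can stay
 * small.  Illustrative arithmetic only.
 */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	const uint64_t target = (uint64_t)2048 << 40;	/* 2048 TB */
	const uint64_t blocks31 = (uint64_t)1 << 31;	/* signed 32-bit */

	/* Minimum addressable unit to reach 2048TB with 2^31 blocks */
	printf("min unit for 2048TB at 2^31 blocks: %ju MB\n",
	    (uintmax_t)(target / blocks31 >> 20));

	/*
	 * With signed 64-bit block numbers and 512-byte units the
	 * limit is 2^63 * 2^9 = 2^72 bytes, far beyond any disk
	 * we'll see.
	 */
	printf("max fs at 2^63 blocks of 512 bytes: 2^72 bytes\n");
	return (0);
}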

Greg
--
Finger grog@lemis.com for PGP public key
See complete headers for address and phone numbers

