Date:      Sun, 25 Feb 2001 17:44:13 -0500 (EST)
From:      David Gilbert <dgilbert@velocet.ca>
To:        Matt Dillon <dillon@earth.backplane.com>
Cc:        David Gilbert <dgilbert@velocet.ca>, Bernd Walter <ticso@cicely5.cicely.de>, freebsd-hackers@FreeBSD.ORG
Subject:   Re: [hackers] Re: Large MFS on NFS-swap?
Message-ID:  <15001.35517.468307.915125@trooper.velocet.net>
In-Reply-To: <200102251913.f1PJDAc15495@earth.backplane.com>
References:  <15000.8884.6165.759008@trooper.velocet.net> <20010225042933.A508@cicely5.cicely.de> <200102250644.f1P6iuL12016@earth.backplane.com> <15001.21129.307283.198917@trooper.velocet.net> <200102251913.f1PJDAc15495@earth.backplane.com>

>>>>> "Matt" == Matt Dillon <dillon@earth.backplane.com> writes:

[... my newfs bomb deleted ...]

Matt>     Heh heh.  Yes, newfs has some overflows inside it when you
Matt> get that big.  Also, you'll probably run out of swap just
Matt> newfs'ing the metadata; you need to use a larger block size, a
Matt> large -c value, and a large bytes/inode (-i) value.  But then,
Matt> of course, you are likely to run out of swap trying to write out
Matt> a large file even if you do manage to newfs it.

Matt>     I had a set of patches for newfs a year or two ago but never
Matt> incorporated them.  We'd have to do a run-through on newfs to
Matt> get it to newfs a swap-backed (i.e. 4K/sector) 1TB filesystem.

Matt>     Actually, this brings up a good point.  Drive storage is
Matt> beginning to reach the limitations of FFS and our internal (512
Matt> byte/block) block numbering scheme.  IBM is almost certain to
Matt> come out with their 500GB hard drive sometime this year.  We
Matt> should probably do a bit of cleanup work to make sure that we
Matt> can at least handle FFS's theoretical limitations for real.
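
(For what it's worth: assuming the block numbers really are signed
32-bit quantities, the ceiling works out to 2^31 blocks * 512
bytes/block = 1099511627776 bytes -- right around the 1TB mark we're
talking about.)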

That, and the availability of vinum and other RAID solutions.  You
can always carve storage up into multiple partitions for no reason
other than filesystem limitations, but we were planning to put
together a 1TB filesystem next month.  From what you're telling me,
I'd need larger block sizes to make that work?
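
Something along these lines, presumably?  (The flags are the usual
newfs(8)/mount_mfs(8) ones; the sizes and the device are pure
illustration on my part -- I haven't actually run this.)

    # ~900GB swap-backed MFS: -s is in 512-byte sectors, with 16K
    # blocks, 2K frags, one inode per 64K of data, and a -c well
    # above the default.
    mount_mfs -s 1900000000 -b 16384 -f 2048 -i 65536 -c 100 \
        /dev/da0s1b /bigfs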

IMHO, we might want to reconsider that approach.  With SAN-type
designs, you're probably going to find that the distribution of file
sizes on multi-terabyte filesystems shared by hundreds of computers
is roughly the same as the file-size distribution on today's
filesystems.

Making a run for larger block sizes puts us in the same league as
DOS.  While it will stave off the wolves, it will only work for so
long given Moore's law.

Dave.

-- 
============================================================================
|David Gilbert, Velocet Communications.       | Two things can only be     |
|Mail:       dgilbert@velocet.net             |  equal if and only if they |
|http://www.velocet.net/~dgilbert             |   are precisely opposite.  |
=========================================================GLO================

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message



