Date:      Sun, 20 Feb 2005 03:05:26 +0100
From:      Bernd Walter <ticso@cicely12.cicely.de>
To:        Eric Anderson <anderson@centtech.com>
Cc:        Matthew Jacob <lydianconcepts@gmail.com>
Subject:   Re: newfs limits? 10TB filesystem max?
Message-ID:  <20050220020525.GV14312@cicely12.cicely.de>
In-Reply-To: <4217DC90.5090202@centtech.com>
References:  <20050216224825.39102.qmail@web26807.mail.ukl.yahoo.com> <4213D046.4080001@centtech.com> <7579f7fb05021715344d661662@mail.gmail.com> <421605D0.80302@centtech.com> <20050218172831.GA9944@freebie.xs4all.nl> <4217BAC5.10504@centtech.com> <20050220000418.GU14312@cicely12.cicely.de> <4217DC90.5090202@centtech.com>

On Sat, Feb 19, 2005 at 06:40:48PM -0600, Eric Anderson wrote:
> Bernd Walter wrote:
> >Creating sparse files, e.g. by using dd, is pretty much unix basics.
> >And via md(4) you can get a disk type device from a file.
> 
> Sorry - I understand how to make a file with dd, but 5000TB filesystem 
> means to me someone has 5PB of space to put the filesystem on.. I had not 
> heard anyone call a file a 'sparse file' with regards to dd before this, 
> and the man page info for dd and sparse isn't all that telling. 

dd is just a tool to write a single block at a file offset.
You can also do this with truncate - I usually use dd because truncate
is not available e.g. under Solaris.
It is a UFS feature to not allocate space for file ranges that have
never been written.
A file without continuous space allocation is called a sparse file.
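
For example, the same kind of file with truncate instead of dd - a
minimal sketch (the path is just an illustration):

truncate -s 1g /tmp/testdisk

The result is a file whose logical size is 1GB but which allocates
next to no blocks on disk.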

> >testdisk=/tmp/testdisk
> >dd if=/dev/zero bs=512 count=1 oseek=2m of=${testdisk}
> >mdev=`mdconfig -a -t vnode -f ${testdisk}`
> >
> >I don't know if md(4) works with such large disks, but it's very likely
> >that it does.
> 
> I see that running the command gives a 1GB file that takes very little 
> disk space.  I must have missed this option in the dd man pages, or never 
> looked for it.  
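
You can verify the sparseness by comparing the logical size with the
blocks actually allocated - e.g. (path from the example above):

ls -l /tmp/testdisk   # logical size: about 1GB
du -h /tmp/testdisk   # allocated space: only a few KB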

However - you need your backing filesystem set up to support such large
files.  That means large fragments, so that the allocation chains can
address more data per fragment.
In my case I was limited to 128T, and since I don't want to newfs the
backing filesystem, that's my limit for now without concatenating
multiple such files.
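
If you can newfs the backing filesystem yourself, larger block and
fragment sizes raise that limit - a sketch, the values and the device
name are just an illustration (see newfs(8) for the current limits):

newfs -b 65536 -f 8192 /dev/da0s1d

And to go beyond a single file, several md devices can be concatenated,
e.g. with gconcat(8) (device names assumed):

gconcat label big md0 md1
newfs /dev/concat/big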

-- 
B.Walter                   BWCT                http://www.bwct.de
bernd@bwct.de                                  info@bwct.de


