Date:      Thu, 29 Nov 2001 10:46:27 -0500
From:      Bill Vermillion <bv@wjv.com>
To:        Marc Blanchet <Marc.Blanchet@viagenie.qc.ca>
Cc:        freebsd-fs@FreeBSD.ORG
Subject:   Re: tgz filesystem
Message-ID:  <20011129104627.A82474@wjv.com>
In-Reply-To: <278520000.1007047945@classic>; from Marc.Blanchet@viagenie.qc.ca on Thu, Nov 29, 2001 at 10:32:26AM -0500
References:  <224030000.1006998892@classic> <20011128200416.Q46769@elvis.mu.org> <278520000.1007047945@classic>

On Thu, Nov 29, 2001 at 10:32:26AM -0500, Marc Blanchet thus spoke:
> 

> -- Wednesday, November 28, 2001 20:04:16 -0600 Alfred Perlstein 
> <bright@mu.org> wrote:

> > * Marc Blanchet <Marc.Blanchet@viagenie.qc.ca> [011128 19:52] wrote:

> >> - I would like to use a tar-gzip filesystem that mounts a tgz
> >> file. The

> > This would be nearly useless: a .tgz file is a tar file (which
> > has no TOC) inside a gzipped stream, meaning that browsing it is
> > pretty much impossible without extracting the whole thing
> > anyhow.

> good point. well taken.

A few years ago, when drives were smaller, SCO implemented a
file system called DTFS - the DeskTop File System.  It was a
compressing file system, so the space available for storage was
about twice the real space.  It never went very far as drives got
bigger, but it worked well for things such as laptops with 200MB
drives. 

> however, the extraction would only be done at mount time (with
> the impact of a significant mount delay before the fs is ready if
> the file is big and the processor is slow). But if the fs is not
> mounted, then the space of the temporary copy is not used, and it
> also keeps the data in the compressed format.


The problem is you want to mount a file, and it's file SYSTEMS that
we mount.
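
For what it's worth, the extract-at-"mount", discard-at-"umount"
behaviour described above can be roughly approximated in userland
without writing any filesystem code.  A minimal sketch in sh - the
helper names tgzopen/tgzclose are made up for illustration:

  # "mount": unpack the archive into a scratch directory, print its path
  tgzopen () {
      dir=`mktemp -d /tmp/tgz.XXXXXX` || return 1
      tar xzf "$1" -C "$dir" && echo "$dir"
  }

  # "umount": throw the temporary copy away
  tgzclose () {
      rm -rf "$1"
  }

Usage would be something like d=`tgzopen archive.tgz`, work inside
$d, then tgzclose "$d".  You pay the full decompression cost up
front, exactly as a real mount-time extraction would.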

> Think about the intent: right now, I have archives of data in
> the popular tgz format. To actually search through them and work
> on the data without knowing exactly in advance which file I'm
> looking for, I essentially have to decompress the whole thing and
> then work on it. One can argue that you could do some kind of tar
> | grep | .... However, this is always a one-shot and limited in
> terms of searching. In fact, I do that often, and I often need to
> run the tar|grep many times before finding the right thing; so
> instead, I decompress the whole thing and then work on it.
> Afterwards, I have to rm -rf it all.

Ah.  Now I see what you want to do.  And there is an easier way. 

> - mounting the file as a fs would do the trick more cleanly, more
> easily.

But how about doing it this way - assuming you have a wildcard
match for files and want to search for names in the archive, you
would just do this:

zcat <filematch-wildcard> | tar tvf - | grep <searchname> | less -e
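
For example, with a hypothetical archive called logs-2001.tgz (the
name is only for illustration), listing the entries whose names
contain "november" would be:

zcat logs-2001.tgz | tar tvf - | grep november | less -e

And if it's the file *contents* you need to search rather than the
names, GNU tar's -O (extract to stdout) flag lets you grep through
them without unpacking anything to disk - assuming your tar is GNU
tar - at the cost of losing track of which file each match came
from:

zcat logs-2001.tgz | tar xOf - | grep pattern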

> I might not be convincing anybody, but this would be for me at
> least a good academic exercise.... ;-)))

> Could I get at least some advice on how to start programming this
> (references, code examples, ...)? Is the stackable fs software
> (FiST) a good starting point? Are the null/umap fs a good
> starting point instead?

Files aren't going to look much like file systems.  An analogy
might be that a book doesn't look much like a library, where the
book is a file and the library is the file system.

Bill

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-fs" in the body of the message



