Date:      Thu, 1 May 2003 17:03:18 -0500
From:      Dan Nelson <dnelson@allantgroup.com>
To:        "Jacques A. Vidrine" <nectar@FreeBSD.org>, freebsd-questions@FreeBSD.org
Subject:   Re: huge file, spanning multiple tapes
Message-ID:  <20030501220317.GA58262@dan.emsphone.com>
In-Reply-To: <20030501213842.GA58002@madman.celabo.org>
References:  <20030501213842.GA58002@madman.celabo.org>

In the last episode (May 01), Jacques A. Vidrine said:
> I thought this would be an easy problem to solve, but I haven't found
> a decent solution after a bit of looking.
> 
> I have a huge file, say 73GB.  I need to dump it to tapes that have a
> capacity of about 33GB each (but there is hardware compression, so
> there is no exact capacity).
> 
> If the file would fit on a single tape, I'd just use good 'ole dd(1).
> 
> I figured, heck, somebody is _bound_ to have written a simple utility
> that, say, reads from a pipe and writes to tape (or vice versa), and
> prompts for a tape switch when it hits end-of-tape.  But alas, I
> cannot find such a beast.
> 
> Using dump, cpio, etc is not really an option.  I need to be able to
> later read the data from tape into a pipe without it hitting disk.
> 
> Anybody have bright ideas?

How about tar?  GNU tar lets you specify a change-volume script, so it
can prompt you to swap tapes while writing; when you read the archive
back later, the --to-stdout flag sends the data down a pipe instead of
writing it to disk.
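
Something like this (untested; assumes GNU tar, which is what
/usr/bin/tar is on FreeBSD, a tape at /dev/nsa0, and a next-tape.sh
script of your choosing that just waits for the operator to swap
tapes):

# write a multi-volume archive; tar runs next-tape.sh at each end-of-tape
tar -cMf /dev/nsa0 -F ./next-tape.sh hugefile

# read it back later, straight down a pipe (your-consumer is a stand-in)
tar -xMOf /dev/nsa0 hugefile | your-consumer

Without -F, tar just prompts on the terminal at each volume change.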

If you can't have any headers, you can always tell dd to write only
~33GB per tape (note that dd's count counts reads, and reads from a
pipe can come up short, so each tape may actually get somewhat less):

gzip < file |
(
  # one dd pass per tape, ~33GB each (515625 64k blocks); without the
  # loop, everything after the first tape's worth would be dropped
  while :; do
    dd of=/dev/ersa0 bs=64k count=$((33000000/64))
    echo "please insert next tape and press enter (^D once the stream is done)"
    read dummy < /dev/tty || break
  done
)
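
Reading it back is the same dance in reverse (again untested;
your-consumer is a stand-in for whatever needs the stream):

(
  while :; do
    dd if=/dev/ersa0 bs=64k
    echo "please insert next tape and press enter (^D after the last one)" > /dev/tty
    read dummy < /dev/tty || break
  done
) | gunzip | your-consumer

Note the prompt goes to /dev/tty so it doesn't end up in the data
stream.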

-- 
	Dan Nelson
	dnelson@allantgroup.com
