Date:      Tue, 19 May 2009 22:26:48 -0500
From:      Dan Nelson <dnelson@allantgroup.com>
To:        Ruben de Groot <mail25@bzerk.org>, Paul Wootton <paul@fletchermoorland.co.uk>, Max Laier <max@love2party.net>, freebsd-current@freebsd.org
Subject:   Re: discrepancies in used space after cpio
Message-ID:  <20090520032647.GF52703@dan.emsphone.com>
In-Reply-To: <20090519103559.GA15608@ei.bzerk.org>
References:  <4A1123C5.3070507@fletchermoorland.co.uk> <4A122C23.40603@freebsd.org> <200905190637.03323.max@love2party.net> <4A128822.9030709@fletchermoorland.co.uk> <20090519103559.GA15608@ei.bzerk.org>

In the last episode (May 19), Ruben de Groot said:
> On Tue, May 19, 2009 at 11:21:22AM +0100, Paul Wootton typed:
> > Yes /DemoPool is a raidz pool that is going to replace my single disk
> > pool.  Dmitry was right about sparse files
> > demophon# pwd
> > /var/tmp/kdecache-paul/kpc
> > demophon# du -hA .
> > 1.2G    .
> > demophon# du -h .
> > 8.9M    .
> > 
> > Is there a better way than using cpio for moving an entire
> > filesystem from a single-disk zfs pool to a raidz zfs pool?  Or does
> > turning a sparse file into a non-sparse file just consume more disk
> > space, with no other side effects?
> 
> zfs send/recv ?
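
[Editor's sketch, not part of the original reply: the send/recv migration
Ruben suggests looks roughly like the following. The pool names are taken
from the thread ("DemoPool" for the new raidz pool; "oldpool" here stands
in for the unnamed single-disk pool), and the -R/-d flags assume a
reasonably recent zfs(8).]

```shell
# Snapshot the whole source pool recursively, then replicate it,
# preserving datasets and properties, into the new raidz pool.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -d DemoPool
```

Unlike a file-level copy, send/recv works at the block level, so holes in
sparse files are carried across unchanged.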

cpio has a --sparse option that might recreate the sparse files on the
destination filesystem.  Another solution would be to enable compression on
your pool: "zfs set compression=on DemoPool".  The default compression
algorithm (lzjb) consumes very little CPU and compresses zeros well :)
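
[Editor's note: the apparent-size vs. on-disk gap Paul saw with du -hA can
be reproduced without ZFS at all. A minimal sketch, assuming a GNU/Linux
userland (the FreeBSD spelling of the apparent-size flag is `du -A`); the
temp file path is illustrative.]

```shell
# Create a 100 MB sparse file: large apparent size, almost no blocks on disk.
f=$(mktemp)
truncate -s 100M "$f"

# Apparent size counts the file length; plain du counts allocated blocks.
apparent_kb=$(du -k --apparent-size "$f" | cut -f1)
actual_kb=$(du -k "$f" | cut -f1)
echo "apparent=${apparent_kb}KB actual=${actual_kb}KB"

rm -f "$f"
```

A naive byte-for-byte copy (or a cpio run without --sparse) writes those
holes out as real zero blocks, which is why the copy "grows" on the
destination.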

-- 
	Dan Nelson
	dnelson@allantgroup.com


