Date:      Mon, 19 Jan 2015 18:04:42 +0100
From:      Fabian Keil <>
Subject:   Re: ZFS and sparse file backed md devices
Message-ID:  <>
In-Reply-To: <>
References:  <>


Steve O'Hara-Smith <> wrote:

> 	I tried to follow the suggestions for converting a ZFS mirror (mine
> was a three way mirror) to a RAIDZ (or in my case a RAIDZ2) when tight on
> discs by creating a pool using sparse file backed md devices to stand in
> for the missing discs. Fortunately I experimented with a dry run using
> nothing but sparse file backed md devices first.
> 	I'm using FreeBSD 10.1-RELEASE-p3.
> 	The first surprise was when I created four 2TB sparse file backed
> md devices using truncate and mdconfig and then tried to make a zfs pool
> out of them. The sparse files became not sparse - or at least tried to but
> of course there wasn't 8TB of space to use in /tmp so it filled up and it
> took a reboot to kill the zpool create run. Next experiment was more
> modest, four 128MB sparse files, sure enough once the zpool create finished
> they were four 128MB files and not sparse. Creating a pool on real discs
> certainly doesn't write on all the blocks - so why did my sparse files get
> filled in ?

My first suspect would be vdev trimming. On recent FreeBSD releases it's
enabled by default, even if none of the disks actually support trimming.

I set vfs.zfs.trim.enabled=0 on all my systems where it doesn't
work (all of them, as I use geli below ZFS).
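For reference, a sketch of how that setting can be checked and persisted on
FreeBSD 10.x (as far as I know it is a boot-time tunable, so it belongs in
/boot/loader.conf rather than /etc/sysctl.conf):

```shell
# Check whether ZFS vdev trimming is currently enabled (1 = on).
sysctl vfs.zfs.trim.enabled

# Disable it at the next boot; on FreeBSD 10.x this is a loader
# tunable, so append it to /boot/loader.conf (needs root).
echo 'vfs.zfs.trim.enabled=0' >> /boot/loader.conf
```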

> 	A little more experimenting revealed that I could offline the 128MB
> md devices one by one, destroy the device, truncate the file up to 2TB,
> recreate the device, wipe the ZFS meta data and replace the offlined
> device without filling in the sparse file. All was well until I did this to
> the fourth device and the pool tried to autoexpand - after a few seconds
> the box locked up and became completely unresponsive to everything except
> pings. Anybody have any idea why ?

See above.
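
For what it's worth, the replace cycle you describe would look roughly like
this (an untested sketch needing root; the pool name "tank", unit number 1
and backing file path are made up for illustration):

```shell
# Hypothetical pool "tank" with an md-backed vdev md1 on /tmp/disk1.
zpool offline tank md1          # take the md-backed vdev offline
mdconfig -d -u 1                # destroy the md device
truncate -s 2T /tmp/disk1       # grow the sparse backing file
mdconfig -f /tmp/disk1 -u 1     # recreate the device
zpool labelclear -f /dev/md1    # wipe the old ZFS metadata
zpool replace tank md1          # resilver onto the "new" device
```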
> 	At this point I decided that the sparse file method was a
> non-starter and rebuilt my pool using four 1TB partitions out of the two
> available drives, copied the data, and then replaced the partitions one by
> one with whole drives[1], eventually winding up where I wanted to be with a
> three drive mirror converted to a four drive RAIDZ2. Still I am puzzled as
> to why the sparse file md device route no longer works.

I'm frequently using sparse files for testing and at least for me it works
as expected:

fk@r500 ~ $sudo mdconfig -f /tank/scratch/testfile
fk@r500 ~ $zogftw import
2015-01-19 17:52:18 zogftw: No pool name specified. Trying all unattached labels: test
2015-01-19 17:52:18 zogftw: No geli keyfile found at /home/fk/.config/zogftw/geli/keyfiles/test.key. Not using any.
2015-01-19 17:52:21 zogftw: 'test' attached
2015-01-19 17:52:23 zogftw: 'test' imported
fk@r500 ~ $zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank   228G   196G  32.0G         -    51%    85%  1.00x  ONLINE  -
test  1016P   218K  1016P         -     0%     0%  1.00x  ONLINE  -
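
A quick way to confirm a backing file actually stays sparse is to compare
its apparent size with the space actually allocated (a minimal sketch; the
path is arbitrary):

```shell
# Create a nominally 1 GB file that occupies (almost) no blocks.
truncate -s 1G /tmp/sparse_test

# Apparent size: 1 GB.
ls -l /tmp/sparse_test

# Blocks actually allocated, in KB: near zero while the file is sparse.
du -k /tmp/sparse_test

rm /tmp/sparse_test
```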

