Date:      Sat, 26 Jun 2010 14:10:38 +0200
From:      Fabian Keil <freebsd-listen@fabiankeil.de>
To:        freebsd-fs@freebsd.org
Subject:   Re: mdconfig on ZFS leaks disk space
Message-ID:  <20100626141038.0d9f488a@r500.local>
In-Reply-To: <20100625231708.GB29793@server.vk2pj.dyndns.org>
References:  <20100625231708.GB29793@server.vk2pj.dyndns.org>


Peter Jeremy <peterjeremy@acm.org> wrote:

> I recently did a quick experiment to create an 8TB UFS filesystem
> via mdconfig and after destroying the md and deleting the file,
> the disk space used by the md was not returned - even after a
> reboot.  Has anyone else seen this?
>
> I was using a 8.1-prelease/amd64 with everything on ZFS v14 and did:
>
> # truncate -s 8T /tmp/space
> # mdconfig -a -t vnode -f /tmp/space
> # newfs /dev/md0
> /dev/md0: 8388608.0MB (17179869184 sectors) block size 16384, fragment size 2048
>         using 45661 cylinder groups of 183.72MB, 11758 blks, 23552 inodes.
>
> This occupied ~450MB on /tmp which uses lzjb compression.
>
> # fsck -t ufs /dev/md0
> needed ~550MB VSZ and had ~530MB resident by the end.
>
> # mount /dev/md0 /mnt
> # df -k /mnt
> /dev/md0  8319620678  4 7654051020 0%  2 1075407868    0%   /mnt
>
> I then copied a random collection of files into /mnt, boosting the
> size of /tmp/space to ~880MB.
>
> # umount /mnt
> # fsck -t ufs /dev/md0
> # mdconfig -d -u 0
> # rm /tmp/space
>
> At this point, 'df' on /tmp reported 881MB used whilst 'du' on /tmp
> reported 1MB used.  lsof showed no references to the space.  Whilst
> there were snapshots of /tmp, none had been taken since /tmp/space
> was created.  I deleted them anyway, to no effect.

I can't reproduce this with Martin Matuska's ZFS v16 patch:

fk@r500 /tank/sparse-file-test $df -h ./
Filesystem               Size    Used   Avail Capacity  Mounted on
tank/sparse-file-test     62G    932M     61G     1%    /tank/sparse-file-test
fk@r500 /tank/sparse-file-test $sudo rm space
fk@r500 /tank/sparse-file-test $df -h ./
Filesystem               Size    Used   Avail Capacity  Mounted on
tank/sparse-file-test     62G     96K     62G     0%    /tank/sparse-file-test

The pool is still v14.

I thought I remembered reports on zfs-discuss@ about a known bug with
leaked disk space after deleting sparse files that was supposed to be
fixed in later ZFS versions, but so far I have only found reports about
a similar problem with sparse volumes, so maybe I'm mistaken.
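The symptom above boils down to df still counting blocks that du no
longer sees. In case it helps anyone narrow this down, here is a minimal
sketch for checking how many blocks a sparse file actually allocates
before and after writing to it (GNU truncate/stat syntax assumed; on
FreeBSD the equivalent stat invocation would be `stat -f '%z %b'`, and
the scratch path is just an example):

```shell
#!/bin/sh
# Sketch: create a sparse file and compare its apparent size with the
# number of blocks the filesystem actually allocated for it.
set -e
f=$(mktemp)                          # hypothetical scratch file
truncate -s 1G "$f"                  # sparse: 1 GB apparent size, no data written
apparent=$(stat -c %s "$f")          # apparent size in bytes
blocks=$(stat -c %b "$f")            # 512-byte blocks actually allocated
echo "apparent=$apparent allocated_bytes=$((blocks * 512))"
rm "$f"                              # after this, df's used count should drop
                                     # by the allocated bytes, not the apparent size
```

If the used space reported by df does not drop back after the rm, the
blocks have leaked somewhere below the POSIX layer, which is what the
original report describes.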

Fabian



