Date:      Sat, 26 Jun 2010 18:29:41 +0200
From:      Mickaël Maillot <mickael.maillot@gmail.com>
To:        Fabian Keil <freebsd-listen@fabiankeil.de>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: mdconfig on ZFS leaks disk space
Message-ID:  <AANLkTikRMiGw9X6hqn49F2abtFzEcwTRPxNEO53Bt1ht@mail.gmail.com>
In-Reply-To: <20100626141038.0d9f488a@r500.local>
References:  <20100625231708.GB29793@server.vk2pj.dyndns.org> <20100626141038.0d9f488a@r500.local>

What is your svn revision? r208869 ("Fix freeing space after deleting
large files with holes", committed Sun Jun  6 13:08:36 2010) sounds
relevant.
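As a side note, whether a file is actually sparse can be checked by comparing its apparent size against the blocks it occupies on disk; a minimal sketch (file name illustrative, exact figures depend on the filesystem):

```shell
# Create a 1 MB sparse file and compare apparent size vs. allocated blocks.
# A sparse file reports its full apparent size but near-zero allocated blocks.
f=/tmp/sparse-check.$$
truncate -s 1M "$f"                         # extends the file without writing data
apparent=$(wc -c < "$f")                    # apparent size in bytes
allocated=$(du -k "$f" | awk '{print $1}')  # KB actually allocated on disk
echo "apparent=${apparent} bytes, allocated=${allocated} KB"
rm -f "$f"
```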


2010/6/26 Fabian Keil <freebsd-listen@fabiankeil.de>:
> Peter Jeremy <peterjeremy@acm.org> wrote:
>
>> I recently did a quick experiment to create an 8TB UFS filesystem
>> via mdconfig and after destroying the md and deleting the file,
>> the disk space used by the md was not returned - even after a
>> reboot.  Has anyone else seen this?
>>
>> I was using a 8.1-prelease/amd64 with everything on ZFS v14 and did:
>>
>> # truncate -s 8T /tmp/space
>> # mdconfig -a -t vnode -f /tmp/space
>> # newfs /dev/md0
>> /dev/md0: 8388608.0MB (17179869184 sectors) block size 16384, fragment size 2048
>>         using 45661 cylinder groups of 183.72MB, 11758 blks, 23552 inodes.
>>
>> This occupied ~450MB on /tmp which uses lzjb compression.
>>
>> # fsck -t ufs /dev/md0
>> needed ~550MB VSZ and had ~530MB resident by the end.
>>
>> # mount /dev/md0 /mnt
>> # df -k /mnt
>> /dev/md0  8319620678  4  7654051020  0%  2  1075407868  0%  /mnt
>>
>> I then copied a random collection of files into /mnt, boosting the
>> size of /tmp/space to ~880MB.
>>
>> # umount /mnt
>> # fsck -t ufs /dev/md0
>> # mdconfig -d -u 0
>> # rm /tmp/space
>>
>> At this point, 'df' on /tmp reported 881MB used whilst 'du' on /tmp
>> reported 1MB used.  lsof showed no references to the space.  Whilst
>> there were snapshots of /tmp, none had been taken since /tmp/space
>> was created.  I deleted them anyway to no effect.
>
> I can't reproduce this with Martin Matuska's ZFS v16 patch:
>
> fk@r500 /tank/sparse-file-test $df -h ./
> Filesystem               Size    Used   Avail Capacity  Mounted on
> tank/sparse-file-test     62G    932M     61G     1%    /tank/sparse-file-test
> fk@r500 /tank/sparse-file-test $sudo rm space
> fk@r500 /tank/sparse-file-test $df -h ./
> Filesystem               Size    Used   Avail Capacity  Mounted on
> tank/sparse-file-test     62G     96K     62G     0%    /tank/sparse-file-test
>
> The pool is still v14.
>
> I thought I remembered reports on zfs-discuss@ about a known bug with
> leaked disk space after deleting sparse files that's supposed to be
> fixed in later ZFS versions, but so far I have only found reports about
> a similar problem with sparse volumes, so maybe I'm mistaken.
>
> Fabian
>
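The mismatch Peter observed is a disagreement between df (the filesystem's own space accounting) and du (a sum over visible files). That comparison can be scripted roughly as follows (mount point illustrative; on a healthy filesystem with no snapshots and no open-but-unlinked files the two figures should be close):

```shell
# Compare df's used-space figure with du's per-file sum for one mount point.
# A large gap, with no snapshots and no open unlinked files, suggests a leak.
mnt=/tmp                                            # illustrative mount point
df_used=$(df -Pk "$mnt" | awk 'NR==2 {print $3}')   # KB used per the filesystem
du_used=$(du -skx "$mnt" 2>/dev/null | awk '{print $1}')  # KB summed over files
echo "df: ${df_used} KB used; du: ${du_used} KB summed"
```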


