Date:      Sun, 27 Mar 2011 09:13:32 +0100
From:      Dr Josef Karthauser <josef.karthauser@unitedlane.com>
To:        Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS Problem - full disk, can't recover space :(.
Message-ID:  <E70F2E76-5253-4DB9-B05B-AEF3C6F4237E@unitedlane.com>
In-Reply-To: <20110327075814.GA71131@icarus.home.lan>
References:  <9CF23177-92D6-40C5-8C68-B7E2F88236E6@unitedlane.com> <20110326225430.00006a76@unknown> <3BBB1E36-8E09-4D07-B49E-ACA8548B0B44@unitedlane.com> <20110327075814.GA71131@icarus.home.lan>

On 27 Mar 2011, at 08:58, Jeremy Chadwick wrote:

> On Sun, Mar 27, 2011 at 08:13:44AM +0100, Dr Josef Karthauser wrote:
>> On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:
>>>> Any idea on where the 23G has gone, or how I persuade the zpool to
>>>> return it? Why is the filesystem referencing storage that isn't being
>>>> used?
>>>
>>> I suggest a
>>> zfs list -r -t all void/store
>>> to make really sure we/you see what we want to see.
>>>
>>> Can it be that an application has the 23G still open?
>>>
>>>> p.s. this is FreeBSD 8.2 with ZFS pool version
>>>> 15.
>>>
>>> The default for whether snapshots are listed changed at some point. As
>>> long as you didn't configure the pool to show snapshots (zpool get
>>> listsnapshots <pool>), they are not shown by default.
>>
>> Definitely no snapshots:
>>
>> infinity# zfs list -tall
>> NAME                           USED  AVAIL  REFER  MOUNTPOINT
>> void                          99.1G  24.8G  2.60G  legacy
>> void/home                     33.5K  24.8G  33.5K  /home
>> void/j                        87.5G  24.8G    54K  /j
>> void/j/buttsby                 136M  9.87G  2.40M  /j/buttsby
>> void/j/buttsby/home           34.5K  9.87G  34.5K  /j/buttsby/home
>> void/j/buttsby/local           130M  9.87G   130M  /j/buttsby/local
>> void/j/buttsby/tmp             159K  9.87G   159K  /j/buttsby/tmp
>> void/j/buttsby/var            3.97M  9.87G   104K  /j/buttsby/var
>> void/j/buttsby/var/db         2.40M  9.87G  1.55M  /j/buttsby/var/db
>> void/j/buttsby/var/db/pkg      866K  9.87G   866K  /j/buttsby/var/db/pkg
>> void/j/buttsby/var/empty        21K  9.87G    21K  /j/buttsby/var/empty
>> void/j/buttsby/var/log         838K  9.87G   838K  /j/buttsby/var/log
>> void/j/buttsby/var/mail        592K  9.87G   592K  /j/buttsby/var/mail
>> void/j/buttsby/var/run        30.5K  9.87G  30.5K  /j/buttsby/var/run
>> void/j/buttsby/var/tmp          23K  9.87G    23K  /j/buttsby/var/tmp
>> void/j/legacy-alpha           56.6G  3.41G  56.6G  /j/legacy-alpha
>> void/j/legacy-brightstorm     29.2G  10.8G  29.2G  /j/legacy-brightstorm
>> void/j/legacy-obleo           1.29G  1.71G  1.29G  /j/legacy-obleo
>> void/j/mesh                    310M  3.70G  2.40M  /j/mesh
>> void/j/mesh/home                21K  3.70G    21K  /j/mesh/home
>> void/j/mesh/local              305M  3.70G   305M  /j/mesh/local
>> void/j/mesh/tmp                 26K  3.70G    26K  /j/mesh/tmp
>> void/j/mesh/var               2.91M  3.70G   104K  /j/mesh/var
>> void/j/mesh/var/db            2.63M  3.70G  1.56M  /j/mesh/var/db
>> void/j/mesh/var/db/pkg        1.07M  3.70G  1.07M  /j/mesh/var/db/pkg
>> void/j/mesh/var/empty           21K  3.70G    21K  /j/mesh/var/empty
>> void/j/mesh/var/log             85K  3.70G    85K  /j/mesh/var/log
>> void/j/mesh/var/mail            24K  3.70G    24K  /j/mesh/var/mail
>> void/j/mesh/var/run           28.5K  3.70G  28.5K  /j/mesh/var/run
>> void/j/mesh/var/tmp             23K  3.70G    23K  /j/mesh/var/tmp
>> void/local                     282M  1.72G   282M  /local
>> void/mysql                      22K    78K    22K  /mysql
>> void/tmp                        55K  2.00G    55K  /tmp
>> void/usr                      1.81G  2.19G   275M  /usr
>> void/usr/obj                   976M  2.19G   976M  /usr/obj
>> void/usr/ports                 289M  2.19G   234M  /usr/ports
>> void/usr/ports/distfiles      54.8M  2.19G  54.8M  /usr/ports/distfiles
>> void/usr/ports/packages         21K  2.19G    21K  /usr/ports/packages
>> void/usr/src                   311M  2.19G   311M  /usr/src
>> void/var                      6.86G  3.14G   130K  /var
>> void/var/crash                22.5K  3.14G  22.5K  /var/crash
>> void/var/db                   6.86G  3.14G  58.3M  /var/db
>> void/var/db/mysql             6.80G  3.14G  4.79G  /var/db/mysql
>> void/var/db/mysql/innodbdata  2.01G  3.14G  2.01G  /var/db/mysql/innodbdata
>> void/var/db/pkg               2.00M  3.14G  2.00M  /var/db/pkg
>> void/var/empty                  21K  3.14G    21K  /var/empty
>> void/var/log                   642K  3.14G   642K  /var/log
>> void/var/mail                  712K  3.14G   712K  /var/mail
>> void/var/run                  49.5K  3.14G  49.5K  /var/run
>> void/var/tmp                    27K  3.14G    27K  /var/tmp
>>
>> This is the problematic filesystem:
>>
>> void/j/legacy-alpha           56.6G  3.41G  56.6G  /j/legacy-alpha
>>
>> No chance that an application is holding any data - I rebooted and came
>> up in single-user mode to try to get this resolved, but no cookie.
>
> Are these filesystems using compression?  Do any of them have quota or
> reservation settings set?
>
> "zfs get all" might help, but it'll be a lot of data.  We don't mind.
>

Ok, here you are. ( http://www.josef-k.net/misc/zfsall.txt.bz2 )
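
For anyone following along, a narrower query than the full dump pulls out
just the properties Jeremy asked about. A sketch, using standard zfs(8)
property names (the usedby* breakdown arrived with pool version 13, so it
should be present on this v15 pool):

    # Space-related properties for every dataset in the pool.
    zfs get -r compression,quota,reservation,refreservation void

    # Break down where the problem dataset's "used" space is attributed.
    zfs get usedbydataset,usedbysnapshots,usedbychildren,usedbyrefreservation void/j/legacy-alpha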

I suspect that the problem is the same as the one reported here:
http://web.archiveorange.com/archive/v/Lmwutp4HZLFDEkQ1UlX5, namely that
there was a bug in the handling of sparse files on ZFS. The file that
triggered the problem is a Bayes database from SpamAssassin.
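
If it is the sparse-file bug, the discrepancy should be visible on the
file itself by comparing its allocated blocks with its apparent length.
A sketch, with a hypothetical path for the Bayes database (the real
location depends on the SpamAssassin setup):

    # ls -s prints allocated blocks (512-byte units by default); du -h
    # reports allocated space, while stat %z prints the apparent size.
    ls -ls /j/legacy-alpha/home/user/.spamassassin/bayes_toks
    du -h  /j/legacy-alpha/home/user/.spamassassin/bayes_toks
    stat -f "%z bytes apparent" /j/legacy-alpha/home/user/.spamassassin/bayes_toks

A sparse file normally allocates far fewer blocks than its apparent
length implies, so allocation near or above the apparent size would fit
the accounting problem described in that thread.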

Joe



