From: Volodymyr Kostyrko <c.kworr@gmail.com>
Date: Tue, 03 Apr 2012 11:44:08 +0300
To: Peter Maloney
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS v28 and free space leakage
Message-ID: <4F7AB858.3030709@gmail.com>
In-Reply-To: <4F75E05D.2060206@brockmann-consult.de>

Peter Maloney wrote:
> I think you ran zpool list... Does zfs list show the same?
> zfs list -rt all kohrah1

NAME      USED  AVAIL  REFER  MOUNTPOINT
kohrah1  22,5M   134G    31K  /kohrah1

> Do you have any snapshots or clones?

None.

> What sort of vdevs do you have?

  pool: kohrah1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Mar 30 17:25:16 2012
config:

        NAME        STATE     READ WRITE CKSUM
        kohrah1     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da0     ONLINE       0     0     0

errors: No known data errors

> Does creating an empty pool show 0 used? What about after adding more
> datasets?

As I have a mirrored pool, I'll split it for now and run some tests on
the other disk.
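Before that, one more check I haven't actually run on this pool (and the
exact verbosity flags may differ between zdb versions): zdb can traverse
all blocks and print per-object-type statistics, which should show what
those 22,5M are actually allocated to, and as far as I understand it will
also complain about leaked space if the space maps and the traversal
disagree:

# zdb -bb kohrah1

It's probably best done on an idle or exported pool, since zdb reads the
on-disk state directly. Anyway, the split: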
# zpool split kohrah1 kohrah1new
# zpool import kohrah1new
# zpool list
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1      136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new   136G  21,7M   136G     0%  1.00x  ONLINE  -
# zpool status
  pool: kohrah1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Mar 30 17:25:16 2012
config:

        NAME        STATE     READ WRITE CKSUM
        kohrah1     ONLINE       0     0     0
          da3       ONLINE       0     0     0

errors: No known data errors

  pool: kohrah1new
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Mar 30 17:25:16 2012
config:

        NAME        STATE     READ WRITE CKSUM
        kohrah1new  ONLINE       0     0     0
          da0       ONLINE       0     0     0

errors: No known data errors

# zpool destroy kohrah1new
# zpool create -O compression=on -O atime=off kohrah1new da0
# zpool list
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1      136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new   136G   110K   136G     0%  1.00x  ONLINE  -

Fine with me now, 110K seems reasonable.

> Do you have datasets? They might use some for metadata.

None, as shown above.

> Here begins the guessing and/or babbling...
>
> And I haven't tried this with zfs, but I know with ext on Linux, if you
> fill up a directory, and delete all the files in it, the directory takes
> more space than before it was filled (du will include this space when
> run). So be very thorough with how you calculate it. Maybe zfs did the
> same thing with metadata structures, and just left them allocated empty
> (just a guess).
>
> To prove there is a leak, you would need to fill up the disk, delete
> everything, and then fill it again to see if it fit less. If I did such
> a test and it was the same, I would just forget about the problem.

OK, throwing junk in:

# zpool list
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1      136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new   136G  2,29G   134G     1%  1.00x  ONLINE  -
# find /kohrah1new/ | wc -l
  150590
# rm -rf /kohrah1new/*
# find /kohrah1new/
/kohrah1new/
# zpool list
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1      136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new   136G   436K   136G     0%  1.00x  ONLINE  -

Not exactly the test you asked for (I'll sketch the full
fill/delete/refill run at the end of this mail), but it looks like ZFS
either leaks metadata or keeps it around for reuse. Repeating the same
steps results in:

# zpool list
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1      136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new   136G   336K   136G     0%  1.00x  ONLINE  -

So this feels like some leftover.

> Perhaps another interesting experiment would be to zfs send the pool to
> see if the destination pool ends up in the same state.

This one is interesting:

# zfs snapshot kohrah1@test
# zfs send kohrah1@test | zfs receive -F kohrah1new
# zpool list
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1      136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new   136G   248K   136G     0%  1.00x  ONLINE  -

So it frees up some space. If I do the same on a clean pool:

# zpool destroy kohrah1new
# zpool create -O compression=on -O atime=off kohrah1new da0
# zfs send kohrah1@test | zfs receive -F kohrah1new
# zpool list
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
kohrah1      136G  22,4M   136G     0%  1.00x  ONLINE  -
kohrah1new   136G  92,5K   136G     0%  1.00x  ONLINE  -

So the dump doesn't contain any of the leftover. However, the pool
counts this leftover as data and replicates it when resilvering:

# zpool destroy kohrah1new
# zpool attach kohrah1 da3 da0
# zpool status
  pool: kohrah1
 state: ONLINE
  scan: resilvered 22,4M in 0h0m with 0 errors on Tue Apr 3 11:34:31 2012
config:

        NAME        STATE     READ WRITE CKSUM
        kohrah1     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da0     ONLINE       0     0     0

errors: No known data errors

--
Sphinx of black quartz judge my vow.
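P.S. If I get around to the full fill/delete/refill comparison you
suggested, it would look roughly like this (an untested sketch;
/kohrah1new/fill is just a placeholder name, and /dev/urandom is used
only because the pool has compression=on, so compressible filler would
skew the numbers):

# zpool list kohrah1new
# dd if=/dev/urandom of=/kohrah1new/fill bs=1m
# zpool list kohrah1new
# rm /kohrah1new/fill
# zpool list kohrah1new
# dd if=/dev/urandom of=/kohrah1new/fill bs=1m
# zpool list kohrah1new

The first dd runs until it hits ENOSPC; note ALLOC, remove the file,
check whether ALLOC drops back to the earlier baseline, then fill again
and compare how much fits the second time. Deleting from a completely
full pool can itself fail on ZFS, so stopping a bit short of full (or
keeping a small file around to delete first) is probably safer.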