Date:      Tue, 14 Jun 2011 18:27:01 +0300
From:      Daniel Kalchev <daniel@digsys.bg>
To:        freebsd-fs@freebsd.org
Subject:   Re: Disk usage and ZFS deduplication
Message-ID:  <4DF77DC5.7030503@digsys.bg>
In-Reply-To: <F280A7D8-9847-47F7-A0D7-6DFD54F93102@itassistans.se>
References:  <9544F7B9-E286-4266-86E3-B4D1A667CBBD@itassistans.se>	<20110614150613.GB27199@DataIX.net>	<61335943-0172-4483-A221-5C77CD8BAEFB@itassistans.se>	<BANLkTinXojuA0ehFuBVBbveqRgCDGOb44g@mail.gmail.com> <F280A7D8-9847-47F7-A0D7-6DFD54F93102@itassistans.se>



On 14.06.11 18:17, Per von Zweigbergk wrote:
>
> But in this case it's not the entire file being hardlinked, rather just some parts of the file being deduplicated so it's not exactly the same. Or is it? This is why I asked on the mailing list. :-)
>
>
Consider: 'storage' is different from file allocation.

With ZFS dedup, the storage layer decides whether to store a new record 
or to link to an existing one. You have no control over this. If you ask 
how many blocks a file occupies in storage, the answer covers the entire 
file size. The fact that some of those blocks are shared with other 
files (or whatever) does not change how many blocks the file is counted 
as using.
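To make the accounting concrete, here is a toy sketch (plain Python, not ZFS code; record names and functions are made up for illustration). The "storage layer" keeps one copy of each unique record, yet per-file usage still counts every record the file references:

```python
# Toy model of dedup-style block accounting (illustration only).
# A file is a list of fixed-size records; the pool stores each
# unique record once, but per-file usage counts all of them.

def file_blocks(records):
    """Blocks the file 'uses': the full record count, shared or not."""
    return len(records)

def pool_blocks(files):
    """Blocks actually allocated in the pool: unique records only."""
    unique = set()
    for records in files.values():
        unique.update(records)
    return len(unique)

files = {
    "a.dat": ["r1", "r2", "r3"],
    "b.dat": ["r1", "r2", "r4"],  # shares two records with a.dat
}

print(file_blocks(files["b.dat"]))  # 3 -- per-file usage is unchanged
print(pool_blocks(files))           # 4 -- pool stores r1,r2,r3,r4 once each
```

The per-file number never shrinks from sharing; only the pool-wide total does, which is why dedup savings show up at the pool level and not in per-file usage.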

It is different with compression: there the record itself shrinks, so 
the savings do show up in the file's own allocated block count.
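A rough contrast, again as a toy sketch (not ZFS code; the 4 KiB allocation unit is an assumption for the example): with compression the stored record gets smaller, so the file's own block count drops, unlike with dedup.

```python
# Toy contrast (illustration only): compression shrinks the record,
# so the per-file allocated block count goes down.
import zlib

RECORD = b"A" * 128 * 1024   # one highly compressible 128 KiB record
BLOCK = 4096                 # assumed allocation unit for this toy

def blocks(nbytes):
    """Blocks needed for nbytes (ceiling division)."""
    return -(-nbytes // BLOCK)

raw = blocks(len(RECORD))
compressed = blocks(len(zlib.compress(RECORD)))
print(raw, compressed)  # the compressed file needs far fewer blocks
```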

Daniel


