Date:      Mon, 21 Sep 2015 13:50:38 +0100
From:      krad <kraduk@gmail.com>
To:        Quartz <quartz@sneakertech.com>
Cc:        FreeBSD FS <freebsd-fs@freebsd.org>
Subject:   Re: ZFS cpu requirements, with/out compression and/or dedup
Message-ID:  <CALfReyc1DcNaRjhhhx+4swF2hbfuAd2tWv2xpjWtfqcDoxHUBw@mail.gmail.com>
In-Reply-To: <55FD9A2B.8060207@sneakertech.com>
References:  <CAEW+ogbPswfOWQzbwNZR5qyMrCEfrcSP4Q7+y4zuKVVD=KNuUA@mail.gmail.com> <55FD9A2B.8060207@sneakertech.com>

"It's also 'permanent' in the sense that you have to turn it on with the
> creation of a dataset and can't disable it without nuking said dataset. "


This is completely untrue. The performance issues with dedup are limited to
writes only, because ZFS has to check the DDT (dedup table) for every write
to a filesystem with dedup enabled. Once the data is on disk there is no
overhead, and in many cases there is a performance boost, since less data on
disk means less head movement and the data is more likely to fit in any
available caches. If write performance does become an issue, you can turn
dedup off on that particular filesystem. That may leave you without enough
capacity on the pool, but pools are easily extended.
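
For example (a rough sketch; the pool and dataset names below are made up,
substitute your own), turning dedup off is just a property change and only
affects new writes, and growing the pool is a single zpool command:

    # check the current setting on the dataset in question
    zfs get dedup tank/data

    # disable dedup for future writes; blocks already written stay deduplicated
    zfs set dedup=off tank/data

    # if capacity gets tight afterwards, extend the pool with another vdev,
    # e.g. a mirror of two new disks
    zpool add tank mirror da2 da3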


