Date:      Mon, 21 Sep 2015 10:10:46 -0400
From:      Quartz <quartz@sneakertech.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS cpu requirements, with/out compression and/or dedup
Message-ID:  <56000FE6.3000306@sneakertech.com>
In-Reply-To: <55FF2115.8010209@sneakertech.com>
References:  <CAEW+ogbPswfOWQzbwNZR5qyMrCEfrcSP4Q7+y4zuKVVD=KNuUA@mail.gmail.com> <55FF111A.4040300@kateley.com> <55FF2115.8010209@sneakertech.com>

>> Any algorithm for TB's of storage and cpu/ram is usually wrong.
>
> dedup is kind of a special case though, because it has to keep the
> entire DDT in non-paged ram (assuming you want the machine to be usable).
>
> Of course, the rule of thumb is for USED space. 40TB of blank space
> won't need any ram obviously.

Also, just for reference: according to the specs, each entry in the
dedup table costs about 320 bytes of RAM per block on disk. This means
that AT BEST (assuming ZFS decides to use full 128K blocks in your
case) you'll need about 2.5GB of RAM per 1TB of used space just for
the DDT (1TB / 128K is roughly 8.4 million blocks, times 320 bytes
each), and that's not counting the ARC and everything else. Most
systems are probably not going to be lucky enough to get 128K blocks
across the board though, so in real-world terms you're looking at
several GB of RAM per TB of used disk, and in worst-case scenarios
(lots of small blocks) you might need a couple hundred GB... but at
that point you should be offloading the DDT onto a fast SSD L2ARC
instead.
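If you want to see where those numbers come from, here's a quick
back-of-envelope sketch (assuming the ~320 bytes/entry rule of thumb
and one DDT entry per unique block; real pools mix block sizes, so
treat this as an estimate, not gospel):

    # Rough estimate of core memory needed to hold the ZFS dedup
    # table. Assumes ~320 bytes of RAM per DDT entry and one entry
    # per unique block on disk.

    def ddt_ram_bytes(used_bytes, avg_block_bytes, bytes_per_entry=320):
        """Estimate RAM needed to keep the DDT entirely in core."""
        blocks = used_bytes / avg_block_bytes
        return blocks * bytes_per_entry

    TiB = 2**40
    KiB = 2**10

    for blk in (128 * KiB, 64 * KiB, 8 * KiB):
        gib = ddt_ram_bytes(1 * TiB, blk) / 2**30
        print("1 TiB used, %dK blocks: ~%.1f GiB for the DDT"
              % (blk // KiB, gib))

That prints 2.5, 5.0 and 40.0 GiB respectively; the 8K case is why
small-block workloads blow up so badly.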
