Date:      Wed, 26 Nov 2014 16:08:27 -0500
From:      Daniel Staal <DStaal@usa.net>
To:        freebsd-questions@freebsd.org
Subject:   Re: How much space does raidz2 'eat'?
Message-ID:  <54ACF708CD2ECEF5601D4B3B@[192.168.1.50]>
In-Reply-To: <20141123232623.39d46c80@falbala.rz1.convenimus.net>
References:  <20141123232623.39d46c80@falbala.rz1.convenimus.net>

--As of November 23, 2014 11:26:23 PM +0100, Christian Baer is alleged to 
have said:

> I just installed my first file server with zfs/zpool. Until now I only
> ever used UFS.
>
> My pool consists of 7 HDDs of the type WDC WD40EFRX-68WT0N0. smart
> tells me the user capacity is: 4,000,787,030,016 bytes.
>
> After creating a raidz2 pool, I get this:
>
> Filesystem     Size  Used  Avail Capacity iused ifree %iused  Mounted on
> /dev/ufs/root  992M  491M   421M      54%  2.2k  129k     2%  /
> devfs          1.0K  1.0K     0B     100%     0     0   100%  /dev
> /dev/ufs/var    34G  1.1G    30G       4%  2.1k  4.7M     0%  /var
> /dev/ufs/usr    58G  6.0G    47G      11%  269k  7.6M     3%  /usr
> arc1            16T  192K    16T       0%     7   35G     0%  /arc1
>
> Notes:
> #1 I did not use physical drives but geli providers. I want an
> encrypted pool.
> #2 This pool is mainly for cold storage. I do not need extremely high
> performance; I'd rather optimize it for space.
>
> Now I know that while WD counts in kB (factor 1000), FreeBSD counts
> in KiB (factor 1024). However, if I convert the drive size to TiB
> and multiply by 5 (the data drives), I get 18.19 TiB, while df gives
> me 16 TiB. Sure, there is some overhead, but certainly (hopefully)
> not 2 whole TiB! That would be more than 10%.
>
> Is this normal or am I missing something?

--As for the rest, it is mine.

RAIDZ2 has the same space efficiency as RAID6, which is 1 - 2/n, where 
'n' is the number of drives in the array.  For your array of 7 drives, 
that means you should expect to see ~71% of the raw drive space as 
'usable'.  In other words, about 29% of your drive space goes to parity 
overhead.  ;)  This is a tradeoff you've made - that 'lost' space means 
you can lose any two drives and still recover your data.  If you wanted 
to just stripe the drives together you could do that under ZFS, but 
then losing a single drive would take the whole pool with it.
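
To put rough numbers on that (a quick back-of-the-envelope sketch; the 
figures below are mine, computed from the drive size smart reported, 
not anything ZFS itself prints):

  % echo '4000787030016 * 5 / 1024^4' | bc -l    # 5 data drives, in TiB
  18.1934...
  % echo '1 - 2/7' | bc -l                       # usable fraction
  .71428571428571428571

That ~18.19 TiB is the ceiling before ZFS's own metadata, reserved 
space, and raidz allocation padding take a further bite, which (along 
with df's rounding) is where the rest of the gap down to 16T comes from.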

Turning on compression can win some of that space back, depending on 
what you are storing, and won't cost you any space.  (It might not even 
cost you any speed, depending on your situation: reading and writing 
less data can save more time on I/O than the compression itself takes.)
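
If you want to try it, it's one command; 'arc1' here is just your pool 
name from the df output above, and lz4 is the usual choice on anything 
reasonably recent:

  % zfs set compression=lz4 arc1
  % zfs get compression,compressratio arc1

One thing to keep in mind: compression only applies to blocks written 
after you turn it on; anything already on disk stays as it is.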

But as Matthew said, you should use the zpool and zfs commands to look 
at the size of things in your zpool.  ZFS gets into 'actual size' vs. 
'apparent size' in a lot of situations, and the ZFS commands can list 
both and show you better what is going on, where the standard Unix 
tools basically have to work with 'apparent size', which can vary 
depending on what you are doing.  (If you've got something like rolling 
snapshots happening, it can vary with the time of day!)
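
For instance (again using 'arc1'; the exact columns vary a little 
between versions, but these are the standard views):

  % zpool list arc1            # raw pool size, parity included
  % zfs list -o space arc1     # avail/used, broken out by snapshots etc.
  % zpool status arc1          # layout and health of the raidz2 vdev

'zpool list' counts all 7 drives, while 'zfs list' shows what you can 
actually store, so comparing the two makes the parity overhead visible.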

Daniel T. Staal

---------------------------------------------------------------
This email copyright the author.  Unless otherwise noted, you
are expressly allowed to retransmit, quote, or otherwise use
the contents for non-commercial purposes.  This copyright will
expire 5 years after the author's death, or in 30 years,
whichever is longer, unless such a period is in excess of
local copyright law.
---------------------------------------------------------------


