Date:      Mon, 30 Sep 2013 11:07:33 +0200
From:      Borja Marcos <borjam@sarenet.es>
To:        Attila Nagy <bra@fsn.hu>
Cc:        freebsd-fs@FreeBSD.org
Subject:   Re: zfs: the exponential file system from hell
Message-ID:  <77F6465C-4E76-4EE9-88B5-238FFB4E0161@sarenet.es>
In-Reply-To: <52457A32.2090105@fsn.hu>
References:  <52457A32.2090105@fsn.hu>


On Sep 27, 2013, at 2:29 PM, Attila Nagy wrote:

> Hi,
> 
> Did anyone try to fill a zpool with multiple zfs in it and graph the space accounted by df and zpool list?
> If not, here it is:
> https://picasaweb.google.com/104147045962330059540/FreeBSDZfsVsDf#5928271443977601554

There is a fundamental problem with "df" and ZFS. df is based on the assumption that each file system has a fixed maximum size (generally the size of the disk partition on which it resides).

ZFS is really different, though. Unless you assign them fixed sizes (quotas), it works much like a virtual memory system: there is one large pool shared by all the datasets, and *any* of them can grow to the maximum pool size. That is the figure "df" shows as available.
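To sketch why that misleads (the dataset names and sizes below are invented for illustration): every dataset repeats the shared pool free space in its Avail column, so naively summing df output double-counts it.

```shell
#!/bin/sh
# Hypothetical df-style output for two datasets on ONE pool with 1.1T free.
# Columns: filesystem size used avail. All values are made up.
df_sample='pool/ds1 1.2T 100G 1.1T
pool/ds2 1.3T 200G 1.1T'

# Summing the Avail column counts the shared free space once per dataset.
printf '%s\n' "$df_sample" | awk '
  { gsub(/T/, "", $4); sum += $4 }
  END { printf "naive free: %.1fT (the pool only has 1.1T free)\n", sum }'
```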

With virtual storage allocation, compression, and deduplication, you can no longer make the assumptions that held in the old days.

Anyway, on a system with variable-size datasets "df" is effectively meaningless; rely on "zpool list" instead, which gives you the real size, allocated space, free space, etc.


% zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pool  1.59T   500G  1.11T    30%  1.00x  ONLINE  -
%

Times change; embracing the satanic filesystem implies that you have to change your mindset (and your scripts!)  :)
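For the scripts, one possible shape (assuming your zpool supports the -H and -p flags of "zpool list", which print header-less, exact byte values; the pool name and numbers below are stand-ins roughly matching the 1.59T/500G/1.11T example, since the real command needs a live pool):

```shell
#!/bin/sh
# Stand-in for the output of: zpool list -Hp -o name,size,alloc,free pool
# (-H: no header, -p: exact bytes). Name and values are illustrative.
sample='pool 1748051689472 536870912000 1211180777472'

size=$(printf '%s\n' "$sample" | awk '{print $2}')
free=$(printf '%s\n' "$sample" | awk '{print $4}')

# Percentage of the pool still free -- this is the number worth alerting on,
# rather than any per-dataset figure from df.
pct_free=$((free * 100 / size))
echo "pool free: ${pct_free}%"
```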








Borja.




