Date:      Sun, 29 Mar 2020 23:26:03 +0200
From:      Peter Eriksson <pen@lysator.liu.se>
To:        FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   ZFS ARC Metadata "sizing" for datasets&snapshots
Message-ID:  <25DADCD5-085B-4E85-B8DA-B3115472CB2D@lysator.liu.se>
In-Reply-To: <CDB51790-ED6B-4670-B256-43CDF98BD26D@pk1048.com>
References:  <CFD0E4E5-EF2B-4789-BF14-F46AC569A191@lysator.liu.se> <66AB88C0-12E8-48A0-9CD7-75B30C15123A@pk1048.com> <E6171E44-F677-4926-9F55-775F538900E4@lysator.liu.se> <FE244C11-44CA-4DCC-8CD9-A8C7A7C5F059@pk1048.com> <982F9A21-FF1C-4DAB-98B3-610D70714ED3@lysator.liu.se> <CDB51790-ED6B-4670-B256-43CDF98BD26D@pk1048.com>

Hmm.. I wonder if anyone knows how much ZFS ARC metadata one should
expect for a given server.

Just for fun, I did some tests.

On a test server with 512 GB RAM, arc_max set to 384 GB, arc_meta_limit
set to 256 GB, 12866 filesystems (datasets) and 430610 snapshots (spread
out over those filesystems), and an uptime of one day doing basically
nothing (apart from taking some snapshots), I get these numbers:

anon_size: 1.0 M
arc_max: 412.3 G
arc_meta_limit: 274.9 G
arc_meta_max: 33.5 G
arc_meta_used: 33.5 G
compressed_size: 9.5 G
data_size: 7.4 G
hdr_size: 462.3 M
metadata_size: 30.2 G
mru_size: 16.7 G
other_size: 2.8 G
overhead_size: 28.2 G
size: 40.9 G
uncompressed_size: 45.9 G
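
(For reference, these figures come from the ZFS ARC kstats. A minimal
sketch along these lines, assuming the stock kstat.zfs.misc.arcstats
sysctl names on FreeBSD, prints the raw byte counters as decimal
gigabytes; the arc_max line above corresponds to the c_max kstat.)

  # minimal sketch, assuming the stock kstat.zfs.misc.arcstats sysctls;
  # the counters are raw bytes, printed here as decimal gigabytes
  for s in size anon_size data_size metadata_size hdr_size other_size \
           mru_size arc_meta_used arc_meta_limit arc_meta_max; do
      sysctl -n kstat.zfs.misc.arcstats.$s |
          awk -v n="$s" '{ printf "%s: %.1f G\n", n, $1 / 1e9 }'
  done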

I.e. ARC metadata_size is at 30 GB and arc_meta_used at 33.5 GB.

Doing a "zfs list -t all" takes ~100s and "zfs list" takes ~3s.
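
(Timed with something like this; FreeBSD's time(1) with -h prints a
human-friendly duration:)

  /usr/bin/time -h zfs list -t all > /dev/null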


On a production server (just booted) with just 256 GB RAM, "zfs list"
takes 0.4s but "zfs list -t all" takes a long time...

anon_size: 3.3 M
arc_max: 103.1 G
arc_meta_limit: 51.5 G
arc_meta_max: 2.2 G
arc_meta_used: 2.2 G
compressed_size: 1.3 G
data_size: 2.7 G
hdr_size: 17.9 M
metadata_size: 2.0 G
mru_size: 3.3 G
other_size: 180.5 M
overhead_size: 3.4 G
size: 4.9 G
uncompressed_size: 3.4 G

The "zfs list -t all" took 2542 seconds (~42 minutes) for 131256
datasets+snapshots (1600 filesystems). That is ~50 datasets/snapshots
listed per second.

After that command has executed, metadata_size has increased by ~5 GB,
and a new "zfs list -t all" takes just 37 seconds.

anon_size: 5.1 M
arc_max: 103.1 G
arc_meta_limit: 51.5 G
arc_meta_max: 7.9 G
arc_meta_used: 7.8 G
compressed_size: 2.5 G
data_size: 1.5 G
hdr_size: 98.0 M
metadata_size: 7.1 G
mru_size: 7.3 G
other_size: 660.9 M
overhead_size: 6.1 G
size: 9.4 G
uncompressed_size: 8.2 G

So... perhaps ~40 KB (5 GB / 131256) per dataset/snapshot on average.
(Yes, oversimplified, but anyway.)
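
(The back-of-the-envelope version, using the metadata_size delta
between the two listings above:)

  # ~5.1 GB of ARC metadata growth spread over 131256 objects
  $ echo "(7.1 - 2.0) * 1000000000 / 131256" | bc
  38855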

Hmm.. Perhaps one should regularly do a "zfs list -t all >/dev/null"
just to prime the ARC metadata cache (and keep it primed) :-)
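
(For example, a root crontab entry along these lines; the hourly
interval is just a guess at what would keep the cache warm:)

  # hypothetical root crontab entry: re-list everything hourly so the
  # dataset/snapshot metadata stays resident in the ARC
  0 * * * * /sbin/zfs list -t all > /dev/null 2>&1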

- Peter





