Date:      Thu, 2 May 2024 13:42:21 -0700
From:      Mark Millard <marklmi@yahoo.com>
To:        FreeBSD Mailing List <freebsd-ports@freebsd.org>, FreeBSD Toolchain <freebsd-toolchain@freebsd.org>
Cc:        Nuno Teixeira <eduardo@freebsd.org>
Subject:   Re: I've  started collecting tmpfs usage figures from a poudriere-devel bulk -a for later publishing some of the top ones (handy for TMPFS_BLACKLIST judgments)
Message-ID:  <E1A29267-998F-42DD-B015-1EC50D9B355C@yahoo.com>
In-Reply-To: <3BF173B4-E6A8-42B7-9A64-0E87E81B8597@yahoo.com>
References:  <97E1D1BA-CBA1-42C6-8202-B6D654941C28@yahoo.com> <3BF173B4-E6A8-42B7-9A64-0E87E81B8597@yahoo.com>


On Apr 30, 2024, at 19:08, Mark Millard <marklmi@yahoo.com> wrote:

> On Apr 28, 2024, at 18:48, Mark Millard <marklmi@yahoo.com> wrote:
>
>> I've modified my local poudriere-devel to have Success and Failure lines also
>> report the tmpfs size at that point. Using, say, script to log the output to
>> a file allows later sorting and listing of the TMPFS usage figures. (The
>> context is an amd64 one that has the resources to do a full bulk -a with
>> USE_TMPFS=all involved.) An example (from an in-progress bulk -a that has a
>> long way to go):
>>
>> # grep TMPFS: ~/mmjnk-bulk-a-output.txt | sort -n -r -k11
>>
>> [00:36:37] [18] [00:26:53] Finished  print/texlive-docs | texlive-docs-20230313: Success ending TMPFS: 10.67 GiB
>> [01:04:41] [07] [00:06:58] Finished  net-mgmt/telegraf | telegraf-1.30.1_1: Success ending TMPFS: 10.52 GiB
>> [01:03:32] [25] [00:06:09] Finished  security/trivy | trivy-0.50.1_1: Success ending TMPFS: 10.10 GiB
>> . . .
>> [01:15:56] [20] [00:00:54] Finished  databases/pg_tileserv | pg_tileserv-1.0.9_12: Failed: build TMPFS: 2.61 GiB
>> . . .
>>
>> Note that the output is designed so that sort with -k11 works for both
>> Success and Failure lines. (This is why "ending" is in place for Success.)
>> I choose to use poudriere -N (no coloring) for this kind of activity.
>>
>> This helps with figuring out what is appropriate to list in
>> TMPFS_BLACKLIST for a poudriere-devel configuration, to avoid tmpfs
>> competing too much for RAM+SWAP. (But appropriate free file system
>> space is then needed instead.)
>>
>>
>> "<" below is what is new, ">" is what was original, in
>> /usr/local/share/poudriere/common.sh :
>>
>> 5928,5934d5927
>> < tmpfs_at_end="$(env BLOCKSIZE=512 df -t tmpfs \
>> < ${MASTERMNTROOT}/${MY_JOBID}/ \
>> < ${MASTERMNTROOT}/${MY_JOBID}/.p/ \
>> < ${MASTERMNTROOT}/${MY_JOBID}/usr/local/ \
>> < 2>/dev/null | tail -3 \
>> < | awk '{ tmpfs_use += $3; } END { printf "%s %.2f %s", "TMPFS:", tmpfs_use*512/(1024**3), "GiB" }')"
>> <
>> 5942c5935
>> <    "Success${COLOR_RESET} ending ${tmpfs_at_end}"
>> ---
>>>   "Success"
>> 5968c5961
>> <    "Failed: ${COLOR_PHASE}${failed_phase}${COLOR_RESET} ${tmpfs_at_end}"
>> ---
>>>   "Failed: ${COLOR_PHASE}${failed_phase}"
>>
>>
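
For readability, here is what that diff adds, unwrapped and with comments
about what it measures (same logic, just reformatted):

    # Sum the Used column (in 512-byte blocks, forced via BLOCKSIZE=512)
    # across this builder's tmpfs mounts and report the total in GiB.
    # df -t tmpfs reports only the tmpfs mounts among the three paths;
    # tail -3 keeps at most the last three lines (dropping the df header
    # when all three paths are on tmpfs); awk sums column 3 (Used) and
    # converts 512-byte blocks to GiB.
    tmpfs_at_end="$(env BLOCKSIZE=512 df -t tmpfs \
        ${MASTERMNTROOT}/${MY_JOBID}/ \
        ${MASTERMNTROOT}/${MY_JOBID}/.p/ \
        ${MASTERMNTROOT}/${MY_JOBID}/usr/local/ \
        2>/dev/null | tail -3 \
        | awk '{ tmpfs_use += $3; } END { printf "%s %.2f %s", "TMPFS:", tmpfs_use*512/(1024**3), "GiB" }')"
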
>> The form of use that I've done also involves (over?) use of
>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES . The data is not as good
>> for this other use, but the same .txt file can be processed
>> with:
>>
>> # grep TMPFS: ~/mmjnk-bulk-a-output.txt | sort -r -k3 | more
>> [01:42:09] [04] [00:48:16] Finished  lang/erlang-runtime21 | erlang-runtime21-21.3.8.24_3: Success ending TMPFS: 1.92 GiB
>> [01:38:39] [28] [00:44:41] Finished  lang/erlang-runtime22 | erlang-runtime22-22.3.4.27: Success ending TMPFS: 1.92 GiB
>> [01:05:41] [02] [00:34:54] Finished  lang/erlang-runtime26 | erlang-runtime26-26.2.4: Success ending TMPFS: 2.02 GiB
>> . . .
>>
>> to find longer-running package builds. This is subject to
>> significant variation based on what other builders are
>> running in parallel at the time and what sort of load
>> averages are involved over the period in question. The
>> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES that I've used will
>> limit that to some extent. But the result still compares
>> builds that had no activity in parallel by other builders
>> against builds that had extensive parallel activity by
>> other builders (of not-huge packages).
>>
>> Note: durations of a day or more use the [1D: notation and
>> the like, and a plain sort on what -k3 compares does not
>> place those well in the overall list relative to hour-only
>> forms such as [20: . The day vs. hour notation is not a
>> straightforward thing to sort on directly.
>>
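
A rough workaround sketch (untested as shown, not something poudriere
provides): have awk convert the duration field to seconds and sort on that,
so the [1D:HH:MM:SS] and [HH:MM:SS] forms compare correctly. The field
position assumes the -N (no coloring) output format shown above:

# grep TMPFS: ~/mmjnk-bulk-a-output.txt | awk '
    { t = $3                          # 3rd field: the build duration
      gsub(/[][]/, "", t)             # "[1D:06:12:34]" -> "1D:06:12:34"
      n = split(t, p, ":")            # 4 parts with a day prefix, else 3
      d = 0
      if (n == 4) { sub(/D$/, "", p[1]); d = p[1] + 0 }
      secs = ((d*24 + p[n-2])*60 + p[n-1])*60 + p[n]
      printf "%8d %s\n", secs, $0     # prefix with seconds, then sort on that
    }' | sort -n -r | more
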
>> Hopefully in a few days I'll be able to list off example
>> top tmpfs usage for USE_TMPFS=all and top build times as
>> well (such as they are).
>>
>> Note:
>> This is from my personal environment. I've not tried to
>> simulate how FreeBSD's official package builders are set
>> up to operate.
>
> I've decided that the "build times" list is not reasonable
> for use: too much variability of context.
>
> But here is a list of 148 or so of the bigger USE_TMPFS=all
> tmpfs-usage builds from a poudriere-devel "-N bulk -c -a"
> run. It covers the > 7.66 GiByte tmpfs examples for how I
> build.
>
> # grep TMPFS: mmjnk-bulk-a-output.txt | sort -n -r -k11 | head -84
> [1D:10:04:32] [25] [02:14:20] Finished  www/chromium | chromium-124.0.6367.60: Success ending TMPFS: 31.76 GiB
. . .
> [1D:11:05:43] [04] [00:16:26] Finished  astro/kstars | kstars-3.6.6_1,1: Success ending TMPFS: 7.69 GiB
> . . .

Note: I had not put ttk into the TMPFS_BLACKLIST as I had intended.

> For reference: Queued: 34535 Built: 33990 Failed: 159   Skipped: 100   Ignored: 286   Fetched: 0
>
> After rebooting, I'm going to re-run a "-N bulk -c -a" based on
> a TMPFS_BLACKLIST that is hopefully a good approximation to
> matching the above list, and with TMPFS_BLACKLIST_TMPDIR also
> defined. Also, use of ALLOW_MAKE_JOBS=yes but no use of
> MUTUALLY_EXCLUSIVE_BUILD_PACKAGES :
> RAM+SWAP: 192 GiBytes + 512 GiBytes == 704 GiBytes.
> Media:    PCIe Optane 1.4T has the file system and the swap.
> System:   7950X3D based.
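
For the shape of that configuration, a minimal poudriere.conf sketch (the
option names are the standard ones from poudriere.conf's sample; the
blacklist patterns and the TMPDIR path shown here are illustrative, not my
exact settings):

USE_TMPFS=all
ALLOW_MAKE_JOBS=yes
# Package-name globs whose builds should not use tmpfs for their work
# directories (illustrative list):
TMPFS_BLACKLIST="chromium* iridium* electron* dotnet*"
# Where the blacklisted builds put their work directories instead; this
# needs appropriate free file system space (example path):
TMPFS_BLACKLIST_TMPDIR=${BASEFS}/data/cache/tmpfs_blacklist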

For the 32 hardware thread, 32 builder context, with USE_TMPFS=all
--but also using the TMPFS_BLACKLIST-- the result was:

[1D:16:49:31] Stopping 32 builders
. . .
[main-amd64-bulk_a-default] [2024-05-01_02h27m06s] [committing] Queued: 34535 Built: 33985 Failed: 162   Skipped: 101   Ignored: 287   Fetched: 0     Tobuild: 0      Time: 1D:16:50:42

So: a little under 41 hrs for a "-N bulk -c -a".

My adjusted version of top, which keeps track of and reports some
Maximum Observed figures (MaxObsYYY naming), reported Swap having
50674Mi MaxObsUsed. So SystemRAM+MaxObsSwapUsed was slightly
under 241.5 GiBytes (192 GiB + about 49.5 GiB).

So the system's SystemRAM+SystemSWAP of 704 GiBytes is a little
under 3 times that example observed usage (704 / 241.5 is about
2.9), giving margin for larger needs over time.

Note the build finish times and durations below, which indicate
the overlapping time frames for these parallel builds:

[1D:08:08:33] [15] [05:25:13] Finished  devel/electron27 | electron27-27.3.11: Success ending TMPFS: 5.15 GiB
[1D:08:24:30] [12] [05:37:16] Finished  devel/electron29 | electron29-29.3.1: Success ending TMPFS: 5.21 GiB
[1D:08:49:26] [04] [05:42:10] Finished  devel/electron25 | electron25-25.9.8_3: Success ending TMPFS: 4.91 GiB
[1D:11:53:32] [16] [07:50:11] Finished  www/ungoogled-chromium | ungoogled-chromium-123.0.6312.122: Success ending TMPFS: 8.08 GiB
[1D:11:54:22] [20] [07:51:01] Finished  www/iridium | iridium-browser-2024.01.120_1: Success ending TMPFS: 6.65 GiB
[1D:11:57:03] [07] [07:53:42] Finished  www/chromium | chromium-124.0.6367.60: Success ending TMPFS: 8.06 GiB
[1D:12:01:01] [05] [06:49:56] Finished  devel/electron28 | electron28-28.3.1: Success ending TMPFS: 5.13 GiB
[1D:13:01:04] [10] [07:23:00] Finished  lang/dotnet | dotnet-8.0.0: Success ending TMPFS: 4.65 GiB

But I'll note that all 32 builders were active over those periods
as well, all using ALLOW_MAKE_JOBS=yes without any use of the likes
of MAKE_JOBS_NUMBER_LIMIT. It is a high load average style of
building that basically almost always has work pending that can be
immediately scheduled on any hardware thread that is no longer doing
other work. Still, vastly less than 30hr+ for the 2 *chromium ports,
25hr+ for iridium, and 20hr+ for electron25. (electron27..29 are
blacklisted on the build server for main-amd64 and there is no
electron26.)
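
(For anyone who does want to bound the per-port build parallelism instead,
the usual knob is MAKE_JOBS_NUMBER_LIMIT in the make.conf that poudriere
uses; a sketch, with an arbitrary example value:)

# e.g. in /usr/local/etc/poudriere.d/make.conf :
MAKE_JOBS_NUMBER_LIMIT=8   # cap on the parallel make jobs for each port build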

Reminder: prior experimentation had already fetched everything,
thus the "Fetched: 0". Fetching would add to the elapsed time.

I make no claim that build servers should be pushed that hard on
any sort of sustained basis --but the above does indicate what a
7950X3D class 16-core/32-hwthread system can do for "bulk -c -a"
activity: somewhat under 2 days for such a "bulk -c -a" to build
33985 packages (plus 162 failures). (The PCIe 1.4T Optane media
helps as well.)

Based on the TMPFS_BLACKLIST that I used, the resulting largest
tmpfs usage examples were (ttk had been omitted from TMPFS_BLACKLIST):

[1D:13:21:23] [12] [00:14:48] Finished  science/dynare | dynare-5.4_7: Success ending TMPFS: 10.74 GiB
[1D:12:55:42] [07] [00:40:11] Finished  math/octave | octave-9.1.0: Success ending TMPFS: 10.27 GiB
[1D:16:34:13] [25] [00:05:04] Finished  x11/kde5 | kde5-5.27.11.23.08.5_2: Success ending TMPFS: 9.95 GiB
[1D:15:29:52] [12] [00:09:15] Finished  science/ttk | ttk-1.2.0_1: Success ending TMPFS: 9.59 GiB
[1D:16:30:49] [08] [00:02:04] Finished  biology/biostar-tools | biostar-tools-1.10.1_3: Success ending TMPFS: 9.56 GiB
[1D:16:01:32] [32] [00:25:56] Finished  science/code_saturne | code_saturne-8.1.1: Success ending TMPFS: 9.50 GiB
[1D:11:16:08] [15] [00:48:29] Finished  emulators/virtualbox-ose | virtualbox-ose-6.1.50_1: Success ending TMPFS: 9.48 GiB
[1D:13:41:14] [32] [00:19:20] Finished  deskutils/nextcloudclient | nextcloudclient-3.12.3: Success ending TMPFS: 9.21 GiB
[1D:12:14:10] [24] [00:16:20] Finished  print/tex-xetex | tex-xetex-0.99993_5: Success ending TMPFS: 9.15 GiB
[1D:15:20:33] [18] [01:14:02] Finished  science/paraview | paraview-5.12.0_2: Success ending TMPFS: 9.14 GiB
[1D:11:55:04] [32] [00:06:42] Finished  print/tex-dvipdfmx | tex-dvipdfmx-20230313_1: Success ending TMPFS: 9.14 GiB
[1D:14:04:21] [20] [00:01:13] Finished  print/texlive-full | texlive-full-20230313: Success ending TMPFS: 9.10 GiB
[1D:13:55:09] [20] [00:05:39] Finished  science/libghemical | libghemical-3.0.0_20: Success ending TMPFS: 8.98 GiB
[1D:11:21:28] [02] [00:53:57] Finished  print/tex-luatex | tex-luatex-1.16.0_1: Success ending TMPFS: 8.92 GiB
[1D:10:21:01] [31] [00:45:03] Finished  emulators/virtualbox-ose-legacy | virtualbox-ose-legacy-5.2.44_17: Success ending TMPFS: 8.90 GiB
[1D:14:26:10] [17] [01:13:36] Finished  graphics/qgis-ltr | qgis-ltr-3.34.6: Success ending TMPFS: 8.87 GiB
[1D:14:20:59] [06] [01:08:24] Finished  graphics/qgis | qgis-3.36.2: Success ending TMPFS: 8.87 GiB
[1D:16:13:36] [02] [00:41:13] Finished  math/octave-forge | octave-forge-20240403: Success ending TMPFS: 8.56 GiB
. . .



An additional note for *chromium/iridium/electron* (and dotnet):

I also previously had done a "-N bulk -a" with those and
more listed in MUTUALLY_EXCLUSIVE_BUILD_PACKAGES and no
TMPFS_BLACKLIST use (but ALLOW_MAKE_JOBS=yes in use):

[1D:01:50:13] [03] [00:33:53] Finished  lang/dotnet | dotnet-8.0.0: Success ending TMPFS: 21.23 GiB
[1D:10:04:32] [25] [02:14:20] Finished  www/chromium | chromium-124.0.6367.60: Success ending TMPFS: 31.76 GiB
[1D:12:02:49] [27] [01:57:58] Finished  devel/electron25 | electron25-25.9.8_3: Success ending TMPFS: 20.55 GiB
[1D:13:51:47] [11] [01:48:15] Finished  devel/electron27 | electron27-27.3.11: Success ending TMPFS: 22.51 GiB
[1D:14:53:51] [01] [00:57:16] Finished  devel/electron28 | electron28-28.3.1: Success ending TMPFS: 22.66 GiB
[1D:15:51:46] [02] [00:57:52] Finished  devel/electron29 | electron29-29.3.1: Success ending TMPFS: 22.49 GiB
[1D:17:19:08] [01] [01:14:40] Finished  www/iridium | iridium-browser-2024.01.120_1: Success ending TMPFS: 25.66 GiB
[1D:22:36:10] [02] [01:09:31] Finished  www/ungoogled-chromium | ungoogled-chromium-123.0.6312.122: Success ending TMPFS: 21.43 GiB


===
Mark Millard
marklmi at yahoo.com



