Date:      Thu, 15 Oct 2020 15:23:51 +0100
From:      Twingly Customer Support <>
Subject:   FreeBSD using swap even though there's a lot of free memory
Message-ID:  <5f885b772d622_95aa2adab2b9c5b41576495c3@sirportly-app-02.mail>


We have a server running FreeBSD 12.1-RELEASE-p10. We currently have a problem where FreeBSD starts to swap when running a ZFS scrub, even though we have ~70G of free memory.
This did not happen when we were running FreeBSD 11.3, for example. It started at approximately the time we upgraded from 12.1-RELEASE-p5 to 12.1-RELEASE-p6; whether the upgrade is the cause of the problem is unclear, but FreeBSD never swapped for us before that. "Laundry" memory was not something we saw before either; it started to appear at the same time FreeBSD started swapping.

Eventually, after scrubbing a few times, the swap becomes full and we start seeing "swap_pager_getswapspace(24): failed" etc. in dmesg.
This is the memory usage a while after scrubbing; note the values for Mem/Free and Swap:

% top | head -n 7
last pid:  8112;  load averages:  1.82,  1.77,  1.73  up 6+01:37:42    10:53:48
35 processes:  1 running, 34 sleeping
CPU:  4.9% user,  0.0% nice,  4.2% system,  0.2% interrupt, 90.7% idle
Mem: 110G Active, 27G Inact, 5413M Laundry, 39G Wired, 68G Free
ARC: 34G Total, 28G MFU, 4101M MRU, 53M Anon, 1317M Header, 225M Other
     30G Compressed, 53G Uncompressed, 1.77:1 Ratio
Swap: 8192M Total, 6434M Used, 1757M Free, 78% Inuse

We are running MySQL, which has been configured to use ~50% of the total amount of memory (using innodb_buffer_pool_size=127748M).
ZFS ARC has been configured to use ~25% of the total memory (using vfs.zfs.arc_max="63874M").
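As a sanity check, the two limits quoted above imply the same total amount of RAM (plain arithmetic on the stated values; the totals are derived here, not measured):

```python
# Both settings quoted above should point at the same total RAM.
buffer_pool_mib = 127748      # innodb_buffer_pool_size, said to be ~50% of RAM
arc_max_mib = 63874           # vfs.zfs.arc_max, said to be ~25% of RAM

total_from_mysql = buffer_pool_mib * 2   # 50% of total -> total = 2x
total_from_arc = arc_max_mib * 4         # 25% of total -> total = 4x

print(total_from_mysql, total_from_arc)  # 255496 255496, i.e. ~249.5 GiB
```

Both work out to 255496 MiB (~249.5 GiB), so the two settings are at least internally consistent.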

We have tried raising both vfs.zfs.arc_max and innodb_buffer_pool_size, but this did not change total memory usage: the free memory stays at around 70G and FreeBSD still starts swapping.
It's as if memory usage is capped at around 180G for some reason.
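The ~180G figure matches the sum of the non-free classes in the top(1) header quoted above (a rough back-of-the-envelope check; top rounds to whole gigabytes):

```python
# Sum of the non-free memory classes from the top(1) output above.
active = 110            # GiB
inact = 27              # GiB
laundry = 5413 / 1024   # 5413M converted to GiB
wired = 39              # GiB

in_use = active + inact + laundry + wired
print(round(in_use, 1))  # ~181.3 GiB, i.e. the "~180G cap" observed
```

Together with the 68G shown as Free, that adds up to roughly 249G, which agrees with the totals implied by the MySQL and ARC settings.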

Are there any configuration values that could cause FreeBSD to swap even though there's free memory? Are there any config values one could try to change in order to get FreeBSD to use the remaining ~70G of free memory instead of swapping?

Let me know if there are any more details you want me to provide and I'll attach them.


// Mattias
From  Thu Oct 15 15:30:15 2020
From: Christian Weisgerber <>
Newsgroups: list.freebsd.questions
Subject: Re: A couple of questions about SSDs
Date: Thu, 15 Oct 2020 15:26:32 -0000 (UTC)
Message-ID: <>
References: <>
User-Agent: slrn/1.0.3 (FreeBSD)

On 2020-10-14, Polytropon <> wrote:

>> What exactly makes you think that SSDs need gentle treatment?
> It's probably the limit on write cycles, but I'm not sure how
> this compares to general lifetime calculations compared to
> regular hard disks...

I don't remember when this started, but nowadays hard drives also
come with an explicitly specified workload rating, e.g.

"Ultrastar hard drives are designed with a workload rating up to 550TB
 per year"

Christian "naddy" Weisgerber                
