From owner-freebsd-current@freebsd.org Sun Sep 15 17:20:14 2019
Date: Sun, 15 Sep 2019 20:19:59 +0300
From: Konstantin Belousov <kostikbel@gmail.com>
To: Don Lewis
Cc: Mark Johnston, FreeBSD Current, kib@freebsd.org
Subject: Re: spurious out of swap kills
Message-ID: <20190915171959.GR2559@kib.kiev.ua>
References: <20190913000635.GG8397@raichu> <20190913055332.GN2559@kib.kiev.ua>

On Sat, Sep 14, 2019 at 06:17:25PM -0700, Don Lewis wrote:
> On 13 Sep, Konstantin Belousov wrote:
> > On Thu, Sep 12, 2019 at 05:42:00PM -0700, Don Lewis wrote:
> >> On 12 Sep, Mark Johnston wrote:
> >> > On Thu, Sep 12, 2019 at 04:00:17PM -0700, Don Lewis wrote:
> >> >> My poudriere machine is running 13.0-CURRENT and gets updated to the
> >> >> latest version of -CURRENT periodically.  At least in the last week
> >> >> or so, I've been seeing occasional port build failures when building
> >> >> my default set of ports, and I finally had some time to do some
> >> >> investigation.
> >> >>
> >> >> It's a 16-thread Ryzen machine, with 64 GB of RAM and 40 GB of swap.
> >> >> Poudriere is configured with
> >> >> USE_TMPFS="wrkdir data localbase"
> >> >> and I have
> >> >> .if ${.CURDIR:M*/www/chromium}
> >> >> MAKE_JOBS_NUMBER=16
> >> >> .else
> >> >> MAKE_JOBS_NUMBER=7
> >> >> .endif
> >> >> in /usr/local/etc/poudriere.d/make.conf, since this gives me the best
> >> >> overall build time for my set of ports.
> >> >> This hits memory pretty hard, especially when chromium, firefox,
> >> >> libreoffice, and both versions of openoffice are all building at the
> >> >> same time.  During this time, the amount of space consumed by tmpfs
> >> >> for /wrkdir gets large when building these large ports.  There is not
> >> >> enough RAM to hold it all, so some of the older data spills over to
> >> >> swap.  Swap usage peaks at about 10 GB, leaving about 30 GB of free
> >> >> swap.  Nevertheless, I see these errors, with rustc being the usual
> >> >> victim:
> >> >>
> >> >> Sep 11 23:21:43 zipper kernel: pid 16581 (rustc), jid 43, uid 65534, was killed: out of swap space
> >> >> Sep 12 02:48:23 zipper kernel: pid 1209 (rustc), jid 62, uid 65534, was killed: out of swap space
> >> >>
> >> >> Top shows the size of rustc being about 2 GB, so I doubt that it
> >> >> suddenly needs an additional 30 GB of swap.
> >> >>
> >> >> I'm wondering if there might be a transient kmem shortage that is
> >> >> causing a malloc(..., M_NOWAIT) failure in the swap allocation path
> >> >> that is the cause of the problem.
> >> >
> >> > Perhaps this is a consequence of r351114?  To confirm this, you might
> >> > try increasing the value of vm.pfault_oom_wait to a larger value, like
> >> > 20 or 30, and see if the OOM kills still occur.
> >>
> >> I wonder if increasing vm.pfault_oom_attempts might also be a good idea.
> > If you are sure that you cannot exhaust your swap space, set
> > attempts to -1 to disable this mechanism.
>
> I had success just by increasing vm.pfault_oom_attempts from 3 to 10.

I do not mind changing this, but could you do an experiment, please?
Set vm.pfault_oom_attempts to 1 and vm.pfault_oom_wait to 100.  In other
words, keep the product of attempts and wait the same, but do only a
single wait.  I am curious whether that is enough for your config, i.e.
whether this really is a pageout speed issue, as opposed to
vm_page_alloc() at fault time speeding up the pagedaemon several times
over the course of the allocation attempts.

>
> > Basically, the page fault handler waits for vm.pfault_oom_wait *
> > vm.pfault_oom_attempts seconds for a page allocation before killing
> > the process.  The default is 30 secs, and if you cannot get a page for
> > 30 secs, there is something very wrong with the machine.
>
> There is nothing really wrong with the machine.  The load is just high.
> Probably pretty bad for interactivity, but throughput is just fine, with
> CPU %idle pretty much pegged at zero the whole time.
>
> I kept an eye on the machine for a while during a run with the new
> tuning.  Most of the time, free memory bounced between 2 and 4 GB, with
> little pageout activity.  There were about 60 running processes, most
> of which were writing to 16 tmpfs filesystems.  Sometimes free memory
> dropped into the 1 to 2 GB range and pageouts spiked.  This condition
> could persist for 30 seconds or more, which is probably the reason for
> the OOM kills with the default tuning.  I sometimes saw free memory drop
> below 1 GB.  The lowest I saw was 470 MB.  I'm guessing that this code
> fails page allocation when free memory is below some threshold, to avoid
> potential deadlocks.

This should be vm.v_free_reserved as far as I remember.
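A rough way to see how close the machine gets to that threshold is to
compare it against the current free page count (OID names quoted from
memory here, so double-check them on your version):

  # free-page threshold the allocator tries to keep in reserve, in pages
  sysctl vm.v_free_reserved
  # current number of free pages; both are page counts, so multiply by the
  # 4 KB base page size to compare against the figures top reports
  sysctl vm.stats.vm.v_free_count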

>
> Swap on this machine consists of a gmirror pair of partitions on a pair
> of 1 TB WD Green drives that are now on their third computer.  The
> remainder of the space on the drives is used for the mirrored vdev for
> the system zpool.  Not terribly fast, even in the days when these drives
> were new, but mostly fast enough to keep all the CPU cores busy, other
> than during poudriere startup and wind-down when there isn't enough work
> to go around.  I could spend money on faster storage, but it really
> wouldn't decrease the poudriere run time much.  It is probably close
> enough to the limit that I would need to improve storage speed if I
> swapped the Ryzen for a Threadripper.

If you have enough swap, then indeed only the swap speed matters.
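For reference, the knobs discussed above can be poked along these lines
(a sketch only; as noted, the -1 setting is only safe when swap really
cannot be exhausted):

  # current values; the defaults of 3 attempts and a 10 second wait give
  # the 30 second total mentioned above
  sysctl vm.pfault_oom_attempts vm.pfault_oom_wait

  # the requested experiment: the same 100 second budget as 10 attempts
  # with the default wait, but taken as a single wait
  sysctl vm.pfault_oom_attempts=1
  sysctl vm.pfault_oom_wait=100

  # or disable the page-fault OOM kill entirely
  sysctl vm.pfault_oom_attempts=-1

  # the same name=value lines can go into /etc/sysctl.conf to persist
  # across reboots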