Date:      Mon, 4 Jun 2018 10:43:34 +0100
From:      Steve O'Hara-Smith <steve@sohara.org>
To:        Brennan Vincent <brennan@umanwizard.com>
Cc:        John Howie <john@thehowies.com>, freebsd-questions@freebsd.org
Subject:   Re: Is it normal that a user can take down the whole system by using too much memory?
Message-ID:  <20180604104334.cfa2f9b307e52afff34c39ec@sohara.org>
In-Reply-To: <1527981931.2670335.1394316280.09410FC9@webmail.messagingengine.com>
References:  <1527977770.2651378.1394286400.0806CC5C@webmail.messagingengine.com> <01EE7EEA-03AC-4D71-BA08-B0CEA97EE720@thehowies.com> <1527981931.2670335.1394316280.09410FC9@webmail.messagingengine.com>

On Sat, 02 Jun 2018 19:25:31 -0400
Brennan Vincent <brennan@umanwizard.com> wrote:

> Thanks John for the response -- this should help me solve my practical
> needs.
> 
> I'm also curious, however, to learn more from an OS design perspective.

	Best way to do that is look at the code.

> Why isn't it possible for the kernel to realize it should kill `eatmem`
> rather than make the system unusable?

	The code in vm_pageout.c that handles out-of-swap conditions tries
to kill the biggest process - by which it means the one estimated to release
the most memory. To that end it skips processes in various states - already
killed, protected, system, and non-running - and then picks the biggest of
the remaining processes to kill.
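
	A rough, self-contained sketch of that selection policy (the struct,
the helper names, and the numbers are all made up for illustration; the
real logic lives in sys/vm/vm_pageout.c and uses very different data
structures):

/*
 * Toy model of the "kill the biggest process" policy described above.
 * Everything here is illustrative, not the actual kernel code.
 */
#include <stdio.h>
#include <stddef.h>

struct fake_proc {
	const char *name;
	int killed;		/* already being killed */
	int prot;		/* protected from the OOM killer */
	int sys;		/* system process */
	int running;
	size_t est_pages;	/* estimate of pages freed by killing it */
};

static struct fake_proc *
pick_oom_victim(struct fake_proc *procs, size_t n)
{
	struct fake_proc *victim = NULL;
	size_t i;

	for (i = 0; i < n; i++) {
		struct fake_proc *p = &procs[i];

		/* Skip killed, protected, system and non-running processes. */
		if (p->killed || p->prot || p->sys || !p->running)
			continue;
		/* Of what remains, remember the biggest. */
		if (victim == NULL || p->est_pages > victim->est_pages)
			victim = p;
	}
	return (victim);
}

int
main(void)
{
	struct fake_proc procs[] = {
		{ "pagedaemon", 0, 0, 1, 1, 400 },
		{ "sshd",       0, 1, 0, 1, 2000 },
		{ "eatmem",     0, 0, 0, 1, 500000 },
		{ "shell",      0, 0, 0, 1, 1500 },
	};
	struct fake_proc *v = pick_oom_victim(procs, 4);

	printf("victim: %s\n", v != NULL ? v->name : "(none)");
	return (0);
}

Here "eatmem" is chosen because it is the biggest non-skipped process.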

	One thing that may cause the random chaos you see is that killing a
process takes time, especially if that process has pages swapped out. If
something else calls for memory before the first victim has finished dying,
then the next biggest process will get killed, even though the process
responsible has already been killed (after all, it might be locked up and
failing to die).
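
	Reusing the toy pick_oom_victim() from the sketch above, that race
looks roughly like this (again purely illustrative):

	struct fake_proc *v1 = pick_oom_victim(procs, 4);  /* "eatmem" */
	v1->killed = 1;	/* signalled, but its pages are not freed yet */

	/* A second shortage arrives before v1 has exited ... */
	struct fake_proc *v2 = pick_oom_victim(procs, 4);  /* "shell" */
	/* ... and an innocent process gets killed as well. */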

-- 
Steve O'Hara-Smith <steve@sohara.org>


