Date:      Fri, 16 Apr 1999 07:14:44 +0900
From:      "Daniel C. Sobral" <>
To:        David Schwartz <>
Cc:        chat@FreeBSD.ORG
Subject:   Re: swap-related problems
Message-ID:  <>
References:  <000101be8789$210927b0$>

David Schwartz wrote:
>         What the hell are you talking about?

I see you missed this...

>         If, for the vital process, you always reserve enough swap space to allow it
> to dirty all its pages, then that one process will never get killed because
> of overcommitting.
>         No. You reserve swap for the vital process.
>         I already did. You reserve enough swap to allow the vital process to dirty
> all its pages. It really is that simple.
>         I'm sorry, I don't understand at all. Please explain why reserving enough
> swap to allow the vital process to dirty all its pages is not a sufficient
> solution.

Main reason: if that process is the one filling most memory, that
process is the one that will get killed. The process which gets
killed *IS NOT* the one that tries to get memory when the memory is
already full. Unless it happens to be the biggest one.

Now, from the parts of the message I cut out of the quote, I infer
you don't get the other aspects of this problem, so I'll try to
explain them again.

First, all this discussion centers on a program P being able to
allocate all available memory. If a program doesn't try to do that,
it will never see malloc(3) return NULL, pre-allocation or not. Are
we agreed on this?

Second, run program P, using pre-allocation. At some point in the
execution of P, all memory will be allocated to it, and there will
be no free memory left. Correct?

Ok, let's assume that program P will *not* get killed under any
circumstances. That is not true, but let's suppose we changed
FreeBSD to make it so.

Well, we have a lot of programs running, none of them using
pre-allocation. Theoretically, it is possible that none of them will
ever demand any more memory. Please convince yourself that this is
very unlikely.

We can safely assume that some program is bound to request memory it
hasn't used before. At the very least, there are all sorts of
processes getting spawned automatically to do all sorts of things.
There is data being read from and written to disk, there are network
packets being received and transmitted. There are cron jobs running.
There is syslogd. Etc.

So, something needs to get killed. Is that something essential to
you? Is syslogd essential? inetd? Maybe the terminal login asking
for a password on your console, or your shell? Well, protect all
essential processes from getting killed.

So, what do we have now? Non-essential processes, which can get
killed. Kill all of them. The remaining memory may or may not be
enough. Maybe P was run in a particularly low-memory situation, and
there simply isn't enough space to run all essential processes.
Let's make this easy, though. Assume that killing the non-essential
processes frees _at least_ the exact amount of memory you need to
run without any essential process getting killed. Let's call these
non-essential processes N.

Is this a sufficient solution? Yes. But there is a catch.

You'll notice that you *need* to run N before running P. If N is not
running at the time you run P, then you'll end up killing essential
processes. If you can't see why, go back to the paragraph where I
defined N, think of possible alternate scenarios, and see where they
get you.

Well, you should realize by now that you need to discover a set of
processes N which do nothing essential to you, and which you must
run before running P. This is the same thing as defining a maximum
size for P, only harder.

Got it now?

Daniel C. Sobral			(8-DCS)

	"Well, Windows works, using a loose definition of 'works'..."
