Date:      Fri, 16 Apr 1999 08:49:20 +0900
From:      "Daniel C. Sobral" <dcs@newsguy.com>
To:        David Schwartz <davids@webmaster.com>
Cc:        chat@FreeBSD.ORG
Subject:   Re: swap-related problems
Message-ID:  <37167B00.3B40F1D7@newsguy.com>
References:  <000201be8791$31c7add0$021d85d1@whenever.youwant.to>

David Schwartz wrote:
> 
> > First, all this discussion centers on a program P being able to
> > allocate all available memory. If a program doesn't try to do that,
> > it will never run into a malloc(3) returning NULL, pre-allocate or
> > not. Are we agreed on this?
> 
>         What does malloc returning NULL have to do with anything? *sigh*

That would be the failed malloc(). It just happens to be the origin
and, as far as I knew, the subject of this thread. If it changed
along the way, sorry, I missed it.

>         We're talking about a well-behaved program, P, that gets killed because of
> overcommitting caused by some combination of its behavior and the behavior
> of other processes.

Sure, and then you say:

...
> an emergency pool. If the operating system returns NULL when we call malloc,
> we defer memory-intensive tasks for later. In extreme cases, we may refuse
...

and 

>         No. A process can handle a failed malloc or fork by doing something much
> less drastic than death. The kernel cannot.

So, are we talking about malloc() returning NULL or not?

If we are not setting a limit on a process's size, and a malloc
fails, then memory must be full. Thus, what I said in "first".

And let me repeat here: if this well-behaved program does not
allocate all memory, it stands to reason that no malloc it tried
could have failed. And if no malloc fails, it is moot what the
program would do if one did.

> > Second, run program P, using pre-allocation. At some point in the
> > execution of P, all memory will allocated to it, and there will be
> > no more free memory available. Correct?
> 
>         No. It's possible that P will allocate all the memory it needs at startup,
> and it will be a very small amount. What do you mean by "all memory will
> allocated to it"?

If it allocates a small amount of memory, the memory won't get full,
and thus no process will get killed. Unless you have something else
behaving badly. If you have something else behaving badly, limit the
datasize of *that*. If you don't want to limit the datasize of that,
then call that P, and go back to "first".
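
Limiting the datasize is the easy part; a sketch using setrlimit(2),
where both the function name and the 32 Mb figure are just examples:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    /* Cap the data segment before the real work starts. */
    static void
    cap_datasize(void)
    {
        struct rlimit rl;

        rl.rlim_cur = rl.rlim_max = 32UL * 1024 * 1024;
        if (setrlimit(RLIMIT_DATA, &rl) == -1)
            perror("setrlimit");
    }

Or skip the code entirely and use the shell's limit/ulimit builtin.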

As an aside, if P does not expand to fill all memory, but you are
still facing a situation where processes get killed because of
overcommit, it is simply that you don't have enough memory to run
what you are trying to run.

>         It is false that something needs to get killed. It's entirely possible that
> had the operating system anticipated this situation and handled it more
> smoothly, say by failing to fork or failing to malloc, other well-behaved
> programs would have reduced their memory load.
> 
>         The idea is to make it possible for well behaved programs to avoid this
> situation by anticipating it earlier, the moment the operating system began
> to overcommit.

Err, excuse me? Let's use a "numeric example", as we used to ask our
algebra professor (only to have him say "Given three numbers, a, b
and c, ..." :).

We have 64 Mb of memory. We start running processes. At some point,
they will try to allocate more than 64 Mb of memory.

If at this point we make the malloc calls fail, we are
pre-allocating everything. This doesn't work because your
applications get to use a lot less memory than they would otherwise.
A few very specialized systems have use for that. Unix is not among
these systems, so pre-allocating systems are not relevant to this
discussion.

Thus, we let the applications allocate more than 64 Mb of memory
before making mallocs fail. This is overcommitting. At this point it
is theoretically possible that all memory will get touched, causing
a process to die. In practice, that is unlikely.
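
The allocate-versus-touch distinction is easy to demonstrate; a
sketch, best run on a box whose swap you don't mind filling:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        size_t chunk = 1024 * 1024;     /* 1 Mb at a time */
        unsigned long mb = 0;
        char *p;

        for (;;) {
            p = malloc(chunk);
            if (p == NULL)      /* under overcommit, this arrives
                                   long after the 64 Mb mark */
                break;
            memset(p, 1, chunk);        /* touching is what consumes
                                           pages; past the real limit
                                           the pager, not malloc,
                                           delivers the bad news */
            printf("%lu Mb\n", ++mb);
        }
        return 0;
    }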

Ok, so let the applications grow some more. At some point the system
will get dangerously close to actually facing a memory starvation
situation. Before that happens, the system will start failing
mallocs, so the situation does not worsen. Of course, the system
must be smart enough to leave enough free memory so that normal
dirtying of pages already allocated won't cause the memory
starvation, even without any new mallocs.

It is simple to decide what point that is. When you finish this as
your Ph.D. thesis, please send us a reference to your algorithm.
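
To make the problem concrete, the check the kernel would need
amounts to something like this; pure pseudo-code, every name below
invented:

    /*
     * Hypothetical admission check at malloc/fork time:
     *
     *     if (committed + request >
     *         physical + swap - reserve_for_future_dirtying())
     *             fail with ENOMEM now, rather than kill later;
     *
     * The thesis topic is reserve_for_future_dirtying(): how much
     * of the already-promised but untouched memory will eventually
     * be dirtied?  The kernel has no way to know.
     */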

>         For example, one of the programs written by the company I work for has an
> internal memory tracker. We pre-allocate a few megabytes of memory to use as
> an emergency pool. If the operating system returns NULL when we call malloc,
> we defer memory-intensive tasks for later. In extreme cases, we may refuse
> to accept new incoming connections. Because of that, we can avoid running
> into the situation where something needs to be killed.

If the operating system returns NULL, it is either using the
algorithm you'll describe in your Ph.D. thesis, or the memory is
exhausted. If the latter, some other process, one not using
pre-allocated memory, might dirty a page and cause something to get
killed.
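
For reference, the pool trick as described would look more or less
like this; a sketch with invented names, not their actual code:

    #include <stdlib.h>
    #include <string.h>

    #define POOL_SIZE (4UL * 1024 * 1024)   /* "a few megabytes" */

    static void *emergency;

    static void
    enter_degraded_mode(void)
    {
        /* invented hook: defer heavy tasks, refuse connections */
    }

    void
    pool_init(void)
    {
        emergency = malloc(POOL_SIZE);
        if (emergency != NULL)
            memset(emergency, 0, POOL_SIZE);    /* see below */
    }

    void *
    careful_alloc(size_t n)
    {
        void *p = malloc(n);

        if (p == NULL && emergency != NULL) {
            free(emergency);            /* hand the reserve back */
            emergency = NULL;
            enter_degraded_mode();
            p = malloc(n);              /* retry once */
        }
        return p;
    }

The memset() in pool_init() is the part people forget: on an
overcommitting system an untouched pool is a promise, not pages, and
freeing a promise buys you nothing.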

>         The problem is, if the operating system incorrectly assumes that every
> malloc or fork is vital, it sets itself up for a situation later where a
> copy-on-write will result in a process needing to be killed. This need is
> solely the result of the operating system causing mallocs and forks to
> succeed in the past where their failure may not have been fatal to anything.

I agree. I eagerly await your thesis on how an OS can decide when to
stop.
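
And here is the copy-on-write case itself, as a sketch:

    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        size_t big = 32UL * 1024 * 1024;
        char *buf = malloc(big);

        if (buf == NULL)
            return 1;
        memset(buf, 1, big);    /* the parent really owns the pages */

        if (fork() == 0) {      /* fork() succeeded without reserving
                                   another 32 Mb for the child */
            memset(buf, 2, big);        /* every write now demands a
                                           fresh page; if none is
                                           left, something dies */
            _exit(0);
        }
        wait(NULL);
        return 0;
    }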

>         You are assuming that processes can do better than the kernel on a failed
> allocation. This is so obviously false that I can't believe that you are
> even advancing it.

Quite the contrary. I'm assuming the kernel cannot arbitrarily
choose some condition other than memory-full on which to fail a
malloc(), because it doesn't know anything about what the process
needs or doesn't.

>         No. A process can handle a failed malloc or fork by doing something much
> less drastic than death. The kernel cannot.

Well... ok, what would you have the kernel do? Please describe the
entire scenario, not just one process. How much real memory exists?
How much got allocated? When did the mallocs start to fail? How was
each process's memory allocated (pre-allocated or on-demand)?

--
Daniel C. Sobral			(8-DCS)
dcs@newsguy.com
dcs@freebsd.org

	"Well, Windows works, using a loose definition of 'works'..."


