Date:      Thu, 21 May 2009 10:52:26 -0700
From:      Yuri <yuri@rawbw.com>
To:        Nate Eldredge <neldredge@math.ucsd.edu>
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: Why kernel kills processes that run out of memory instead of just failing memory allocation system calls?
Message-ID:  <4A1594DA.2010707@rawbw.com>
In-Reply-To: <Pine.GSO.4.64.0905202344420.1483@zeno.ucsd.edu>
References:  <4A14F58F.8000801@rawbw.com> <Pine.GSO.4.64.0905202344420.1483@zeno.ucsd.edu>

Nate Eldredge wrote:
> Suppose we run this program on a machine with just over 1 GB of 
> memory. The fork() should give the child a private "copy" of the 1 GB 
> buffer, by setting it to copy-on-write.  In principle, after the 
> fork(), the child might want to rewrite the buffer, which would 
> require an additional 1GB to be available for the child's copy.  So 
> under a conservative allocation policy, the kernel would have to 
> reserve that extra 1 GB at the time of the fork(). Since it can't do 
> that on our hypothetical 1+ GB machine, the fork() must fail, and the 
> program won't work.
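
For concreteness, the scenario quoted above would look roughly like
the following. This is just my own minimal sketch, not the program
from the earlier mail:

/* Allocate ~1 GB, dirty it, then fork. Under overcommit the child
 * shares the pages copy-on-write and the fork() succeeds; a strictly
 * conservative kernel would have to reserve another 1 GB here and
 * fail with ENOMEM instead. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    size_t len = (size_t)1 << 30;       /* 1 GB */
    char *buf = malloc(len);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    memset(buf, 1, len);                /* make sure the pages are really backed */

    pid_t pid = fork();                 /* the allocation decision happens here */
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0)
        _exit(0);                       /* child exits without touching buf */
    return 0;
}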

I don't have a strong opinion for or against "memory overcommit". But I
can imagine one could argue that fork with the intent to exec is a faulty
scenario that is a relic from the past. It could be replaced by some
atomic method that spawns the child without overcommitting (see the
posix_spawn() sketch below).
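
POSIX already defines posix_spawn() roughly along those lines; a rough
sketch of what I mean (just an illustration, not a proposal for a
specific API):

/* Spawn a child directly instead of fork()+exec(). Since the parent's
 * address space is never duplicated, no copy-on-write reservation of
 * the parent's memory is needed for the new process. */
#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

int main(void) {
    pid_t pid;
    char *argv[] = { "ls", "-l", NULL };

    int err = posix_spawn(&pid, "/bin/ls", NULL, NULL, argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawn: %s\n", strerror(err));
        return 1;
    }
    waitpid(pid, NULL, 0);
    return 0;
}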

Are there any situations other than fork (and mmap/sbrk) that would
overcommit?
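
For mmap I'm thinking of something like a large anonymous mapping that
the kernel may grant without any backing store, e.g. (assuming a 64-bit
machine):

/* Map 8 GB of anonymous memory, likely more than RAM + swap. Under
 * overcommit the call succeeds; pages are only backed when first
 * touched, and touching too many may trigger the process killer. */
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = (size_t)8 << 30;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_ANON | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("mapped %zu bytes at %p\n", len, p);
    return 0;
}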

Yuri



