Date:      Tue, 6 Jan 1998 21:12:46 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        steve@visint.co.uk (Stephen Roome)
Cc:        joelh@gnu.org, freebsd-hackers@FreeBSD.ORG
Subject:   Re: Weird malloc problem.
Message-ID:  <199801062112.OAA10782@usr08.primenet.com>
In-Reply-To: <Pine.BSF.3.96.980106135429.18634A-100000@dylan.visint.co.uk> from "Stephen Roome" at Jan 6, 98 02:24:45 pm

> > IIRC (and I'm no expert), it is possible to sbrk your processes'
> > entire addressable memory space (all 2^32 bits), and never use it.
> 
> Not very logical though that I can allocate more memory than I have.
> To me at least.

Actually it's very reasonable.  There are many reasons why you
might want a discontinuous virtual address space, where not all
pages are backed by real pages.  For one, you might want to
implement statistical data protection (a one-in-2^20 chance
of someone guessing your page), etc..

This would be highly useful to avoid protection domain crossing
in an OS simulator, etc..

There are other reasons.  Maybe you want a "perfect" hash into
memory, but will store some finite (small) number of hashed
objects.
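The sparse-address-space idea above can be sketched with an anonymous
mmap (a Python sketch, assuming a Unix system; the size and offset are
arbitrary illustration values, not anything from the original mail):

```python
import mmap

# "Allocate" 256 MiB of virtual address space.  The kernel hands out
# addresses, but physical pages are only faulted in when touched.
SIZE = 1 << 28
region = mmap.mmap(-1, SIZE)

# Touch a single page deep inside the region; only that one page gets
# backed by real memory, even though the mapping is 256 MiB wide.
offset = 123 * 4096
region[offset:offset + 6] = b"backed"
print(region[offset:offset + 6])  # b'backed'
```

This is the same effect as sbrk'ing a huge arena for a sparse hash
table: the addresses exist, but unused slots never cost real pages.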

Maybe you are going to rfork(), and you want to be able to marshal
data between "kernel threads" using an IPC mechanism instead of
a heavyweight mechanism for reinstancing them (as Microsoft
screwed up and required by putting instanced OLE/ActiveX interfaces
in thread-local storage)... and to do that, you need to ensure
that the address space will not be private -- which you do
by preallocating it before the first rfork().
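rfork() is FreeBSD-specific, but the "preallocate before spawning"
trick has a portable analogue: a shared anonymous mapping created
before fork() is visible to both processes afterwards (a hedged Python
sketch; plain fork() stands in for rfork() here):

```python
import mmap
import os

# Create the shared region *before* forking, so both processes end up
# referring to the same physical pages (the analogue of preallocating
# the address range before the first rfork()).
shared = mmap.mmap(-1, 4096, flags=mmap.MAP_SHARED)

pid = os.fork()
if pid == 0:
    shared[0:5] = b"hello"   # child writes into the shared page
    os._exit(0)

os.waitpid(pid, 0)
print(shared[0:5])  # b'hello' -- the parent sees the child's write
```

Had the mapping been created after the fork instead, each process
would have gotten a private copy and the write would not be shared.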

There are lots of other examples that I won't bore you with; suffice
it to say, it *can* be a useful thing to do.


> Actually, as someone just pointed out, it's fine to set the limits to
> anything, but malloc should never think it succeeded in allocating virtual
> memory which clearly just doesn't exist!

This is a standard feature of memory overcommit.  Other standard
features are:

(1)	The process that can't get the new page dies, even though
	it may be the longest running process on your system.

(2)	When you are using an executable file as a swap store, and the
	image is on a remote FS, if the server goes down, your machine
	hangs in paging in via the vnode pager until the server comes
	back up.

(3)	When you are doing #2, the VEXEC bit can't be set on the NFS
	server, and therefore the image you are using as swap store
	can be overwritten by another NFS client, or by the server
	process, and your application can crash catastrophically
	(including doing such things as deleting all your files,
	if the new image's code is juuuuuuust right).

(4)	You can't do a system suspend/resume, unless you have a separate
	disk area for backing your primary RAM and restart information,
	in addition to the SWAP space.

Etc..
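The overcommit behavior being described can be observed directly: an
"allocation" grows the process's virtual size without consuming any
RAM until the pages are touched (a Linux-specific sketch, since it
reads /proc; the helper name is my own invention):

```python
import mmap

def vm_size_kb():
    # Linux-specific: read this process's virtual size from /proc.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmSize:"):
                return int(line.split()[1])

before = vm_size_kb()
region = mmap.mmap(-1, 1 << 28)   # "allocate" 256 MiB
after = vm_size_kb()
# The virtual size grew by ~256 MiB, yet no real memory was consumed;
# the bill only comes due (possibly as a killed process) on first touch.
print(after - before)
```

This is exactly why malloc() can "succeed" for memory that doesn't
exist yet: the kernel only promises addresses, not pages.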

Overall, with the exception of "committed" applications and remote
FS vagaries (which I think should be fixed, since they are fixable),
memory overcommit is worth the trouble it causes.

					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


