Date:      Fri, 7 Aug 1998 18:42:56 +1000
From:      "Andrew Reilly" <reilly@zeta.org.au>
To:        Terry Lambert <tlambert@primenet.com>, John Polstra <jdp@polstra.com>
Cc:        current@FreeBSD.ORG
Subject:   GC, was Re: Heads up on LFS
Message-ID:  <19980807184256.A27680@reilly.home>
In-Reply-To: <199808070157.SAA26484@usr06.primenet.com>; from Terry Lambert on Fri, Aug 07, 1998 at 01:57:37AM +0000
References:  <199808061603.JAA26197@austin.polstra.com> <199808070157.SAA26484@usr06.primenet.com>

On Fri, Aug 07, 1998 at 01:57:37AM +0000, Terry Lambert wrote:
> > > JAVA has a nasty tendency to leak like a sieve until the GC hits a
> > > steady state.  As does Modula 3.
> >                  ^^^^^^^^^^^^^^^^
> > 
> > I don't know what you base that statement on.  I have a lot of
> > experience writing and using Modula-3 programs, and I've never
> > observed the behaviour you describe.

Terry goes on to ask a number of rhetorical questions,
presumably intended to paint GC in a grim light, in the context
of embedded systems.  Knowing a thing or two about embedded
systems, and having recently become very interested in dynamic
memory allocation in general, and garbage collection systems in
particular, I thought I'd have a blast.

First, though, let's examine biases: Terry has used C++ and C
as examples of systems to be preferred, on the grounds of
predictability or efficiency.

> Q1:	Is there more than 0 bytes difference between memory
> 	allocated and in use?

This is not the right question.  The right question is: can future
demands for memory be met?  Embedded systems generally don't have
the luxury of being able to be re-booted.  For a long time, my
answer to this question was always "of course": I allocate everything
I need at compile time, so there will be NO future demands for
memory.  Dynamic memory has no place in embedded, real-time systems,
I thought.

Since then, I have built some more complicated systems that
(because of flexibility requirements) did use dynamic memory of
a sort.  I still could not afford even the possibility of
fragmentation, because the peak memory use was close to the
total memory available.  So I used a "strictly LIFO" stack-based
heap.
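
To make that concrete, here is a minimal sketch of what I mean by a
strictly LIFO heap.  The arena size and function names are made up
for illustration; they aren't taken from any real system of mine.

    #include <stddef.h>

    /* One fixed arena, sized at build time; the "heap" is just a
     * high-water mark that only ever moves in LIFO order. */
    #define ARENA_SIZE  (16 * 1024)

    static unsigned char arena[ARENA_SIZE];
    static size_t        top;              /* bytes currently in use */

    /* Allocate from the top; returns NULL when the arena is full. */
    void *stack_alloc(size_t n)
    {
        void *p;

        n = (n + 7) & ~(size_t)7;          /* keep blocks 8-byte aligned */
        if (n > ARENA_SIZE - top)
            return NULL;
        p = &arena[top];
        top += n;
        return p;
    }

    /* Remember the current top... */
    size_t stack_mark(void)
    {
        return top;
    }

    /* ...and release everything allocated since that mark.  Because
     * frees happen strictly in reverse order of allocation, no holes
     * can ever form: fragmentation is impossible by construction. */
    void stack_release(size_t mark)
    {
        top = mark;
    }

The price, of course, is that lifetimes must nest; anything with an
unpredictable lifetime has no place in such a heap.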

You see: even if your code has no memory leaks, and the difference
between "allocated" and "in use" is zero bytes, and there is (barely)
enough memory free to satisfy your future requirements, a straight
C or C++ application is almost certainly going to crash with an
out-of-memory exception caused by fragmentation at some point.
You can't generally move things around to coalesce free space,
because you have pointers that may or may not contain valid addresses
everywhere.
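
A contrived sketch of that failure mode (the sizes and the allocator
behaviour are illustrative assumptions, not a claim about any
particular malloc): interleave long- and short-lived blocks, free
the short-lived ones, and the survivors pin the free space into
holes that are individually too small for the next big request.

    #include <stdlib.h>

    int fragmentation_demo(void)
    {
        enum { N = 64 };
        void *keep[N];
        void *temp[N];
        void *big;
        int   i, failed;

        for (i = 0; i < N; i++) {
            keep[i] = malloc(1024);        /* long-lived: stays allocated */
            temp[i] = malloc(1024);        /* short-lived: freed below */
            if (keep[i] == NULL || temp[i] == NULL)
                return -1;
        }
        for (i = 0; i < N; i++)
            free(temp[i]);                 /* leaves N separate 1K holes */

        /* 64K is now free in total, but no single hole is 64K long,
         * so on a tight system with no compaction this can fail. */
        big = malloc(64 * 1024);
        failed = (big == NULL);

        free(big);
        for (i = 0; i < N; i++)
            free(keep[i]);
        return failed;
    }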

> Q2:	If so, how large can this number be under the worst
> 	possible conditions?

As long as the running configuration fits into the available memory,
what does it matter?  If you run out, then you'd better run the
collector again, and make a note that you'd better run it more
often in future.
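
In a collected system that policy can be as simple as the following
sketch.  gc_try_alloc() and gc_collect() are hypothetical names used
for illustration; the point is the retry-and-take-notes policy, not
any real collector's API.

    #include <stddef.h>

    /* Hypothetical collector interface, for illustration only. */
    extern void *gc_try_alloc(size_t n);   /* NULL when the heap is exhausted */
    extern void  gc_collect(void);         /* run a full collection */

    static unsigned long forced_collections;   /* the "note" mentioned above */

    void *gc_alloc(size_t n)
    {
        void *p = gc_try_alloc(n);

        if (p == NULL) {
            gc_collect();                  /* reclaim (and compact), then retry */
            forced_collections++;          /* hint to collect sooner next time */
            p = gc_try_alloc(n);
        }
        return p;   /* still NULL only if live data truly exceeds the arena */
    }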

> Q3:	Is this more memory than is typically found in a
> 	typical embedded system, such as the one found in a
> 	Microwave oven?

If you can find a microwave oven that feels the need to use
dynamic memory allocation, then it may well be better off using
GC.

> Q4:	What is the typical target platform for an RT OS, and what
> 	is one of the major target platforms for JAVA?

Not microwave ovens, that's for sure.  What's your point here?
(Perspective: one can now buy a 66 MHz, 32-bit processor with 2M
_bytes_ of DRAM on a single chip: the Mitsubishi M32000D4AFP.
You probably don't need one in your microwave oven, but your
launch costs line [below] looks a bit spurious.)

> Q5:	How provable is a system that depends on garbage collection?

How provable is one that depends on dynamic memory allocation at
all?  Not very (if time is a constraint).  At least, if you know
you've catered for the total, worst case requirement, then a GC
system might be able to guarantee that you can always get to the
bits that you need.  You can't say that about a manual system unless
you're prepared to stop and move things around yourself.

> My opinion is that RT OS's and portable devices need to run in
> (compared to most VAX programs) tiny memory footprints for them
> to be useful.  The launch costs alone on the extra memory on its
> way to Mars are very, very large.

The compute costs for memory allocation and reclamation can be
just as large and unpredictable for C++ new and delete as they
are for a GC system.  How do you know whether that tree node you're
deleting now is a leaf, or something that is going to trigger an
enormous cascade of subsequent deletions?  In reality, whether
you're using GC or manual reclamation, you simply DO NOT
ALLOCATE OR FREE ANYTHING while you are running the real-time part
of the code. (If you're game to try it though, at least the GC
systems generally allow you to run the collector in a separate,
non-real-time thread, whereas a manual system does its work
in the one-and-only thread.)
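
For example, freeing "one" node of a tree built the obvious way can
cost time proportional to the whole subtree beneath it.  A plain C
sketch, purely illustrative:

    #include <stdlib.h>

    struct node {
        struct node *left;
        struct node *right;
        /* payload ... */
    };

    /* Manual reclamation: the cost of releasing this one node is
     * proportional to the size of its subtree, which the caller may
     * not know.  Call it from a real-time path and your worst-case
     * latency is bounded by the whole tree, not by "one free". */
    void free_tree(struct node *n)
    {
        if (n == NULL)
            return;
        free_tree(n->left);
        free_tree(n->right);
        free(n);
    }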

> Feel free to disagree with me (I'm sure you already do... 8-)).

I have copies of a GC-faq by an author too modest even to put an
e-mail address into it, and an excellent survey article "Dynamic
Storage Allocation: A Survey and Critical Review" by Paul Wilson,
Mark Johnstone, Michael Neely and David Boles, somewhere on the
Web, but I regret that I've lost the original URLs.  If anyone can
supply these I would be very grateful.

-- 
Andrew
