Fog Creek Software
Discussion Board

Memory Management Question

What are the pros and cons of a garbage collector mechanism like in Java, versus explicit memory management by the programmer as in C, C++, and many other languages? Are there any languages other than Java which have some kind of garbage collection mechanism?

My case is this:
Having worked in Java for 1.5 years, I currently use a language called Objective-C (which is a superset of C and has its roots in Smalltalk) that uses a reference counting mechanism for memory management. For beginners, its memory management is not so simple to understand, especially for those coming from Java who have never used C or C++ before.

Is memory management really so hard, or is it just because of my Java background? Just interested to know your experiences with memory management in C or C++ or any other language in the beginning.

cocoa developer for mac
Sunday, June 1, 2003

Other languages with GC that come to mind: Perl (ref. counting), Lisp, Smalltalk.
In short, the advantage of GC is easier and more high-level development. The disadvantage is loss of control and care over memory management.
For example, when tuning Java applications I usually find that the #1 reason for bad performance is lots of temporary objects and, therefore, GC sessions.

Evgeny Goldin
Sunday, June 1, 2003

Is it hard? No, not really: you create objects and then you free them.

But, you have to remember to do that for *every single byte of memory you allocate*. That's what's hard.

It's mostly a matter of defining APIs in your application where it's clear whose responsibility it is to deallocate memory (you should make interfaces that are hard to misuse...). And that is hard to do.

I write some stuff in Java and some in C/C++ and friends. In my C apps I've taken to wrapping malloc() and free() with the following:

- the wrappers optionally write to a file detailing every allocation and deallocation, so that a Perl script can go through, match them up, and ensure that there are none left over.
- place a couple of bytes before and after the allocated memory, to detect overruns and such. Thanks to D. Richard Hipp for both of these ideas!
- my free() takes a pointer to a pointer to memory, so it can set it to NULL.

Now, much of what the above does can be done by tools like Valgrind, which I also make regular use of. However, Valgrind is *slow*, and my code is fast 8-}

And with all these tools and a little paranoia, I can manage to avoid memory leaks relatively successfully (you can never be *sure*, but I'm pretty close). So no, it's not hard, but it is tedious: you must be careful and have an attention to detail.

Mike Swieton
Sunday, June 1, 2003

I find GC to be tougher to use in any system that requires consistent performance.  Not FAST, but consistent: where long bursts of speed followed by occasional slowdowns are not acceptable.  There are of course incremental GCs, which can help, but I haven't gotten the same consistency of performance that C++ gives.  Many people building real-time systems in Java turn to object pools, but then you need to ask the pool for an object and return it later.  This is no different from having to do new/delete.

I do have to say that I believe resource management in C++ is actually easier and more straightforward.  I say this because I find the GC in Java to be rather asymmetrical.  Sure, it will automatically deallocate memory, but what about other resources?  Files, for instance, have to be manually closed when you are done with them (generally by using a finally construct), a nuisance similar to the one that deleting memory manually creates.  With the constructor/destructor paradigm, any resource can be automatically allocated and deallocated in a predictable manner.

Bjarne calls this Resource Acquisition Is Initialization; you can read more about it here:

I also believe that allocation using new in C++ (which must at some point be followed by a delete) is often premature optimization.  Allocating on the stack and copying objects, if they are small, can actually be faster.  The reason is that new requires a system call to allocate memory on the heap, whereas allocating on the stack is much speedier.

Otherwise you can take advantage of the Resource Acquisition Is Initialization technique by using auto pointers, which give you automatic cleanup of memory at a predictable time.  Some overhead is introduced, but the overhead is consistent.

All this being said, for sitting down and pumping out code quickly, I find that the GC is a very good thing.  It does not entirely get rid of memory leaks in actuality, because a referenced piece of memory that you forget about isn't going anywhere.  But it is one less thing (a very commonly used thing) to worry about.  Memory allocation on the heap is one place where Java soundly outperforms C++ (since Java preallocates a large chunk of memory).  To do this in C++ you would have to write a custom memory allocator, which most people will not want to do.

Oren Miller
Sunday, June 1, 2003


On the note that it's hard to preallocate a large chunk of memory in C++:

Under Linux, at least, the man page for the sbrk library call suggests it does just that: "sbrk increments the program's data space by increment bytes."

I've not used or tested it, but from that short description it seems it should do just that, without needing a custom allocator. Unless malloc() usually caches things stupidly?

Mike Swieton
Sunday, June 1, 2003


Well, I don't think it is a matter of anything being cached stupidly.  I'm sure the process has a certain amount of memory allocated to it; that is not the problem.  The problem is that malloc needs to make a system call to the OS to be granted usage of a piece of memory, and another system call to give it back.  This is where I believe the slowdown is.  I wrote a little test to demonstrate this to myself (under Linux) with the following structure:

struct integers
{
  integers( int value1, int value2, int value3, int value4, int value5 )
    : m_value1(value1), m_value2(value2), m_value3(value3),
      m_value4(value4), m_value5(value5) {}
  integers()
    { m_value1 = m_value2 = m_value3 = m_value4 = m_value5 = 0; }
  int m_value1, m_value2, m_value3, m_value4, m_value5;
};

So it consists of 5 integers.  I put different numbers into the constructor each time (based on the counter, with a constant-time operation) and turned off all optimizations to ensure no funny compiler magic.

I allocated the object 10,000,000 times on the stack, allowing scope resolution to clear up the object, and 10,000,000 on the heap, followed by a call to delete.

While the stack test took 0.287 seconds to complete, the heap test took 4.016 seconds, some 14 times slower.

In order to even up the score, I modified the stack test so that it would make copies of the allocated object.  I found that I needed to make 65 copies of each stack-allocated object (that's 650,000,000 copies) before the test took 4 seconds.

Let me know if you see something I don't, but these results are consistent with what I have seen in the past.  I believe it is because of the number of system calls, but if anyone knows another reason I'd like to hear it.  In any case, this is why I consider heap allocation to be a premature optimization.  As you can see, in this case I'd have to make 65 copies of this object before heap allocation starts to make sense.

Oren Miller
Sunday, June 1, 2003

In theory it's simple... everything you allocate must eventually be deallocated, must not be deallocated more than once, and must not be referenced after it's been deallocated. C++ artefacts such as destructors and smart pointers make it easier to do this consistently in C++ than in C.

The big problem IMO is that if a deallocation bug DOES get through your QC and into your large program, it's difficult to pinpoint where the bug is: a deallocation problem in one part of the program causes random weirdness and crashing in other parts of your program when they reference the corrupted heap. This problem is compounded if the nature of your program makes it impossible to use an instrumentation program such as BoundsChecker to identify deallocation bugs (for example, in our case running BoundsChecker made the program fail because it ran too slowly to drive its real-time external devices). I ended up writing code to hook and instrument the heap... I can sympathise with the notion that heap management is better built into the framework than left to the programmer.

Christopher Wells
Sunday, June 1, 2003

Note also that the initial question compared GC with ref-counting, rather than GC with the explicit new/delete/malloc/free approach.

In the Windows world, GC is also in C#.

I'm not aware of any language that uses ref-counting directly for memory management. Ref-counting is used in COM (ActiveX) for controlling an object's lifetime.

I would say ref-counting is more predictable than GC: memory can be freed as soon as you don't need it, depending on runtime support. Note that either of these methods can be implemented badly, e.g. not caching memory you've just deallocated and releasing it straight back to the system, or allocating every piece through a system allocator that only works in fixed-size allocations of 4K minimum.

Sunday, June 1, 2003

"I allocated the object 10,000,000 times on the stack, allowing scope resolution to clear up the object, and 10,000,000 on the heap, followed by a call to delete."

Please clarify: did you "allocate then delete" 10mil times, or did you allocate 10mil items, and then delete them?

See, when you stack "allocate" an object, a traditional memory allocation isn't required. If the object is 20 bytes (like your example, assuming 4-byte ints), then it simply adjusts the stack pointer by 20, and voila, instant object. A "new", on the other hand, has to go to the heap, find an appropriate block of memory, and remove it from the free-list. That's a LOT more work (yes, 14x as much work would be a good guess, actually).

Brad Wilson
Sunday, June 1, 2003

"I would say ref-counting is more predictable than GC. Memory could be freed as soon as you don't need it depending on runtime support."

Agreed, ref-counting is more predictable ("deterministic", as long as we aren't talking about real-time requirements). But it cannot solve the circular reference issue very easily, which is a source of a lot of frustration and bugs when people use COM.

Brad Wilson
Sunday, June 1, 2003

There are two big problems with reference counting as a general memory-management solution:

1) Circular references: A -> B -> A. This is actually quite common; many object models have collections of child objects, where the child objects have a parent pointer.

2) Robustness.

By robustness I mean this: reference counting relies on everyone doing every allocation EXACTLY right EVERY TIME. One developer makes a small slip-up in an obscure library and suddenly you've got memory and resource leaks all over the place. I once spent almost two weeks tracking down just such a bug in IE 6.

GC doesn't have this problem - the system takes care of freeing stuff. It's much more robust in the face of programmer error. In fact, it eliminates an entire CLASS of programmer errors.

Chris Tavares
Monday, June 2, 2003

One C++ technique which can be really useful is overriding operators new() and delete() when you have a lot of small objects that you are using regularly.

The idea is quite simple. Your class (say Foo) has a stack of pointers to objects (i.e. a stack of Foo*), created statically.
Operator new() just pops the top pointer off the stack if it is not empty, and operator delete() just pushes the returned pointer onto the stack.

If the stack is empty then operator new() takes a chunk of memory from the heap (enough for say 1000 objects), and puts pointers to the start of each object onto the stack.

This technique is detailed in Stroustrup, 2nd edition, section 5.5.6. I haven't found it in the 3rd edition, but the STL does make it easier.

Using this sort of technique also makes it easy to catch memory leaks, because it is very easy to add debugging statements to it (like writing to a file, as was suggested above).


Monday, June 2, 2003

"By robustness I mean this: reference counting relies on everyone doing every allocation EXACTLY right EVERY TIME."

One would assume we're talking about a memory manager that does the reference counting under the hood, without any explicit steps on the part of the user.

Brad Wilson
Monday, June 2, 2003

The original question was about automatic memory management as opposed to manual management, so no, I don't think we can make that assumption.

I've used automatic reference counting systems before (in Python before version 2.1), and in general my robustness observation still held. The Python code did the refcounting correctly, but C extension modules often had refcounting errors that caused all kinds of subtle and hard-to-fix bugs.

Chris Tavares
Monday, June 2, 2003
