Memory is one of the resources your programs will use. Many languages ease memory management by being 'Garbage Collected', which means that rather than the software author having to keep track of memory and release it when they are finished with it, the language runtime keeps track and releases it on their behalf. Garbage collection schemes are both old (the original LISP had GC) and the subject of modern research.

There are many schemes for garbage collection of memory, from simple but effective ones such as Perl 5's reference counting, or Python's combination of reference counting and cycle cleanup, through to the wider field of tracing garbage collectors.
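As a rough illustration of the Python half of that (reference counting, plus a separate collector for the cycles that reference counting alone cannot free), here is a small CPython sketch; the Node class and the weakref probe are just scaffolding for the demonstration:

    import gc
    import sys
    import weakref

    class Node:
        pass

    # Plain reference counting: an object is freed the moment its last
    # reference goes away, with no collector pass needed.
    n = Node()
    print(sys.getrefcount(n))   # at least 2: 'n' plus getrefcount's own argument

    # A reference cycle defeats pure reference counting...
    a, b = Node(), Node()
    a.other, b.other = b, a
    probe = weakref.ref(a)      # lets us observe when 'a' is really gone
    del a, b                    # the counts never reach zero because of the cycle
    print(probe() is None)      # False: refcounting alone can't free it

    # ...so CPython's separate cycle collector has to step in.
    gc.collect()
    print(probe() is None)      # True: the cycle has been reclaimed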

Many programming languages include one or more garbage collection approaches in their memory management runtime; indeed, almost every language except C/C++/Obj-C includes some form of automatic memory management by default.

However, garbage collection is not a panacea. Indeed, it can introduce problems to your software which you would not normally encounter if you were managing everything explicitly yourself. For example, depending on the scheme in use, your program may pause for unpredictable periods at unpredictable intervals. If you need your program to always respond smoothly to external events, this kind of behaviour can be a show-stopper. Of course, there are plenty of mechanisms in place to mitigate this kind of problem, and your chosen language will have interfaces to the garbage collector that let you tell it when you want it to do work; but still, it can be a pain if you're not expecting it.
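In CPython, for instance, the gc module exposes exactly this kind of interface. A minimal sketch of the idea, where handle_events and process are hypothetical stand-ins for your own latency-sensitive work:

    import gc

    def process(event):
        # Stand-in for real per-event work.
        return event * 2

    def handle_events(events):
        # Latency-sensitive section: don't let the cycle collector kick in
        # at some arbitrary point in the middle of it.
        gc.disable()            # stop automatic (threshold-driven) collections
        try:
            return [process(e) for e in events]
        finally:
            gc.enable()
            gc.collect()        # do the deferred work at a moment we chose

    print(handle_events(range(5)))

Note that this only defers the cycle collector; reference counting still frees most garbage immediately, so memory use doesn't grow without bound during the disabled window unless you're creating lots of cycles.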

Garbage collection schemes can also mean that your program ends up "using" much more RAM than it is, in fact, using: memory that is no longer reachable but has not yet been collected still counts against you. On large computers this might not be an issue, but if you're writing for more resource-constrained situations, or working with super-large datasets, then this might turn out to be a problem too. Again, working with your language runtime's tweaking APIs can help you mitigate this problem.
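Sticking with the CPython example, the gc module's collection thresholds are one such knob: lowering them trades CPU (more collection passes) for RAM (less uncollected garbage sitting around), and raising them does the opposite. Other runtimes expose analogous settings, such as the JVM's heap and collector flags. A sketch:

    import gc

    # Default thresholds: a young-generation collection runs roughly every
    # 700 net container allocations.
    print(gc.get_threshold())       # typically (700, 10, 10)

    # Collect more eagerly: less floating garbage, more CPU in the collector.
    gc.set_threshold(100, 5, 5)

    # Or let garbage pile up in exchange for fewer collection pauses.
    gc.set_threshold(50_000, 20, 20)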

You can tune a garbage collector, but there is always a trade-off between memory overhead (objects that are unreachable but not yet collected) and processor overhead (mostly for reachability checks). If you have a concurrent garbage collector and a core that you're otherwise not using, then this might be free, but otherwise you will always pay a significant performance penalty in one or both resources.
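CPython's cycle collector isn't concurrent, so a quick (and admittedly crude) way to feel the processor side of this trade-off is to time a full collection pass over a large synthetic object graph; the numbers will vary by machine:

    import gc
    import time

    # Build a large graph of container objects so the collector has real work.
    data = [{"id": i, "payload": list(range(5))} for i in range(300_000)]

    t0 = time.perf_counter()
    unreachable = gc.collect()      # full pass: traverses every tracked container
    elapsed = time.perf_counter() - t0

    print(f"full collection took {elapsed * 1000:.1f} ms, "
          f"found {unreachable} unreachable objects, "
          f"{len(gc.get_objects())} objects still tracked")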

A fairly extreme case would be something like CPython, which primarily uses reference counting and therefore has low memory overhead but spends a lot of time changing reference counts (everything is an object). The other extreme is never collecting at all, which has unbounded memory overhead but little processor overhead.
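You can watch those reference counts move with sys.getrefcount (which itself adds a temporary reference, so the absolute numbers are one higher than you might expect):

    import sys

    x = object()
    print(sys.getrefcount(x))   # 2: the name 'x' plus getrefcount's argument

    y = x                       # a plain assignment bumps the count (INCREF)
    print(sys.getrefcount(x))   # 3

    del y                       # and dropping a name decrements it (DECREF)
    print(sys.getrefcount(x))   # 2

    # Passing x to a function, storing it in a list, even evaluating it in an
    # expression: all of it touches the count, which is the per-operation cost.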

The fact that there is always significant overhead makes fully garbage-collected languages unsuitable for most system software. I'm more hopeful about languages that allow tying the lifetime of objects to the scope they are defined in, while also supporting objects that have indefinite lifetime (and are garbage-collected). The challenge has been how to support limited-lifetime objects without allowing references to those objects to leak out of the scope (which would allow use-after-free). C++.NET provides something like this, but with significant restrictions on the types of scoped objects. Rust seems more promising. However, I haven't used either of these yet.

Comment by womble2 [livejournal.com] Wed Oct 29 13:41:07 2014

I agree with womble2 on the issue of memory vs. processor time.

Pause times on server or desktop systems can be avoided (or at least significantly reduced) by using a runtime environment with a sufficiently advanced concurrent garbage collector (e.g. a recent HotSpot or some commercial JVM). There are also some companies that claim to have real-time GCs for embedded systems (e.g. JamaicaVM). Pause times are not an unsolvable issue; they just require a large amount of work on the runtime system to implement a scheme that works well enough.

What you can't avoid when using garbage collection is the cost in processor time or memory.

On the other hand, using manual memory management requires the programmer to keep track of the lifetime of objects. Although there are a lot of debugging tools for memory-related problems such as use-after-free and double-free, it's still preferable to refrain from introducing bugs in the first place.

I'm skeptical that these issues will be solved in the near future. Computer scientists have been working on memory management, both manual and automatic (i.e. GC), for decades.

Comment by www.cloudid.de/?user=a47602f3eec8afceacef3dd015129173fa461fb1 Thu Oct 30 09:47:12 2014