Fog Creek Software
Discussion Board




When was the last time you ran out of memory?

This comment from another post got me thinking:

"Crashing when malloc fails is sophomore stuff. If you are creating commercial software that crashes the first time memory runs out, you don't know what you are doing. Time to move out of high school and start writing software that works rather than hacking together half-baked software that crashes, taking the user's work with it for dumb reasons."

In 20 years of C, Java, and C++ programming for Windows, I can't remember malloc() or new failing because I was out of memory.  This has led me to the attitude that if I'm out of memory, all hell must have broken loose on the machine, so whatever courageous recovery I attempt is (a) probably worthless, and (b) certainly doomed to failure.

FWIW, I'm the most anal programmer I've ever met when it comes to error checking - I check everything (except printf's return, of course).  But running out of memory has just never bitten me, and with each passing year, it seems less likely to bite me.

Anyone else think this way?  Or am I just a lazy, ignorant, motherless, half-baked, malodorous, pimply, degenerate programmer?  (Thought I'd beat the quoted poster to the punch.)

Grumpy Old-Timer
Friday, October 17, 2003

We specifically test our application against low-memory conditions.  Embedded software is always trying to reduce its footprint.  In Linux, you can specify a max memory size in the lilo.conf file - the usefulness is that you can say, "well, the box has 32MB of RAM, but let's run it with 16MB".  This is usually after an attempt at size reduction in the code base.  No hard drive, so no swap.

So, while on the desktop there is always more room available, it's not so in an appliance (like a cell phone).
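For reference, the lilo.conf entry looks something like this (the image/label/root lines are just placeholders for whatever the box actually boots; the mem= parameter is the part that caps the RAM, and you can also type it by hand at the LILO boot prompt):

    image=/boot/vmlinuz
        label=linux
        root=/dev/hda1
        # Pretend the 32MB box only has 16MB installed:
        append="mem=16M"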

nat ersoz
Friday, October 17, 2003

Look in the TCL codebase. This is probably the granddaddy of the modern embeddable scripting language, and has been around for at least a decade and is well respected.

There's no out-of-memory checking in there anywhere. And TCL allocates memory all over the place.

Nobody I know of has ever complained about it.

Chris Tavares
Friday, October 17, 2003

In the embedded space, memory allocation failing is a concern.  Not so much on the desktop, where one could allocate a near-infinite amount of memory.

And if you run out of THAT much memory, perhaps it would be wiser to revisit your architecture ...

I would only worry about it in the case that the memory use of your application is unbounded (e.g., there are no limits to the size of an image, video clip, or document that could be opened at one time, and you don't have a fixed-size cache).

Alyosha`
Friday, October 17, 2003

Memory fragmentation can cause an out-of-memory condition. It has happened to me more than once.

Long-running applications usually need to handle failed allocations so they can clean up resources and shut down correctly.

Pavel
Friday, October 17, 2003

Ever since I started running on my Dual G5 with 4GB of RAM, I haven't run out of memory once :D

So far, one week up and no problems...

Andrew Hurst
Friday, October 17, 2003

I ran out of memory a few hours ago, when I forgot the Parmesan cheese at the store.  Bummer.

Another good reason to check malloc's return code is bug checking.  Even if you aren't allocating a gig for your contact entry, how do you know the guy in the next cube over didn't? 

Then again, I'm an embedded guy.  Memory is limited, and you have to work around running out.

Snotnose
Friday, October 17, 2003

I once used some C++ compiler on Windows and wasn't sure if it returned NULL or generated an exception when operator new failed, so I wrote a test.

I don't remember the result of the test, but I do remember that I had to restart Windows afterwards. And system shutdown took a very long time.

So the only time you should consider testing for failed malloc/new is if you are trying to allocate a single, *very* large block of memory. If that fails you could try a different strategy or release some resources. If a normal allocation fails, well, you are screwed whatever you do.
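A rough sketch of what I mean for the single huge allocation case (the halving loop and the 1MB floor are just an illustration, not a recommendation):

    #include <cstddef>   // std::size_t
    #include <new>       // std::nothrow

    // Ask for NULL on failure instead of an exception, then degrade gracefully.
    unsigned char* AllocateBigBuffer(std::size_t& bytes)
    {
        unsigned char* p = new (std::nothrow) unsigned char[bytes];
        while (p == 0 && bytes > 1024 * 1024)
        {
            bytes /= 2;                           // settle for a smaller buffer
            p = new (std::nothrow) unsigned char[bytes];
        }
        return p;                                 // may still be NULL; caller decides
    }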

Dan Shappir
Friday, October 17, 2003

This is a pet peeve of mine...

All beginning C++ books say, be prepared to handle a failed new, because it might happen at any time, but they never say what the heck you are supposed to do in that situation.

Unrestricted memory allocation is a huge reason why Windows nearly always blows apart in low memory situations.

I work on a rather large Windows app.  The user can have an unlimited number of pages in a workbook, but at some point the system is going to run out of memory, and there isn't much we can do to save the user from the impending doom.

Go to the "save all files" routine and hope for the best I guess.

On a server you have to make sure you know how big your workbooks will be and allocate all that memory up front, otherwise you are just asking for a denial of service attack.  The hacker will think, "hmm, what if I open a million page workbook," or a million sessions in server speak?  Say Buh bye.

One little-known fact: using the default settings in Visual C++, it is nearly as likely that calling a function will fail as calling new.  Don't believe me?  Read the discussion of the stack implementation in Richter's Advanced Windows.

On the exception topic: I've put a lot of thought into it over the past couple of days, and strangely I'm starting to agree with Joel on this one.

christopher baus (tahoe, nv)
Friday, October 17, 2003

Christopher: that's why I say that if you have the possibility of opening a million-page workbook, you should reconsider your architecture ...

Only open a few pages at a time, for example.

Alyosha`
Friday, October 17, 2003

I would agree, but as one wise manager of mine once said..

"It is what it is"

The truth of the matter is that even a "few page" workbook could be too big, depending on how many other things the user has going on, or how much RAM is installed.

We'd have to allocate all the memory for those few pages up front, and refuse to create or open the workbook when that operation failed.  That would be the wise thing to do, but...

...that means finding all the allocations and summing the amount of memory each one uses, allocating that total up front, then using placement new to allocate from that buffer, or telling something like SmartHeap to reserve that amount of memory.

Anyone doing that?  It certainly isn't CS 101.
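In case it isn't clear, here's roughly what I mean by allocating up front and using placement new.  The names and the budget figure are invented, and a real version would need proper per-type alignment, explicit destructor calls, and real accounting of what a page costs:

    #include <cstddef>   // std::size_t
    #include <cstdlib>   // std::malloc
    #include <new>       // placement new

    // Grab the whole workbook budget in one shot at startup.  If this one
    // allocation fails, refuse to open the workbook; nothing has been built yet.
    const std::size_t kWorkbookBudget = 8 * 1024 * 1024;   // invented figure
    static char*       g_pool = static_cast<char*>(std::malloc(kWorkbookBudget));
    static std::size_t g_used = 0;

    // Dead-simple bump allocator over the preallocated block.
    void* PoolAlloc(std::size_t bytes)
    {
        bytes = (bytes + 7) & ~std::size_t(7);              // crude alignment
        if (g_pool == 0 || g_used + bytes > kWorkbookBudget)
            return 0;                                       // budget exceeded
        void* p = g_pool + g_used;
        g_used += bytes;
        return p;
    }

    struct Page { /* ... cells, formatting, etc. ... */ };

    Page* MakePage()
    {
        void* mem = PoolAlloc(sizeof(Page));
        // Placement new constructs the Page in memory we already own,
        // so this line never touches the general-purpose heap.
        return mem ? new (mem) Page : 0;
    }

The point is that once the one big allocation at startup succeeds, opening pages never goes back to the heap, so there is nothing left to fail at an awkward moment.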

christopher baus (tahoe, nv)
Friday, October 17, 2003

"Go to the "save all files" routine and hope for the best I guess."

Holy crap that's a bad idea! I hope you mean "save backups of files that users can attempt to manually restore later" routine and not the standard "save all files" routine.

I'd much rather have my files as they were the last time I saved them than have whatever results from attempting to save them when the system has no memory left to give.  What if the file-writing routine in the OS needs a bit more memory after saving 10% of the file over the last version?  Egads!

Also, if you're that worried about the user losing work, I think it makes a lot more sense to do background autosaving on a timed schedule and just give the user the option of restoring the last successful autosave if the program is closed in any unexpected way.  So in that case you really don't want to do anything if malloc fails other than maybe display a somewhat useful error message and quit.  This way you're covered no matter what the problem is: low memory condition, unexpected power loss, whatever.

Anyway, on the desktop my general advice to anyone is to use asserts and/or exceptions to sanity-check memory allocations for the purpose of bug finding (and test in low memory conditions to ensure your recommended/required specs are realistic), but don't spend a lot of time trying to handle the situation gracefully when it does occur, as that is a fool's errand.

Mister Fancypants
Friday, October 17, 2003

No one runs out of memory these days, but for the wrong reasons:

If you allocate too much, your system slows down so much that you kill the renegade process, or restart the machine, before the pagefile can be exhausted. One-minute jobs can easily extend to one hour when constantly swapping.

Run your machine with no paging, and you'll start to run out of memory more often. Not too often, if you have 1G of memory, but you _will_ encounter this condition.

Ori Berger
Friday, October 17, 2003

I guess my point is, the only way to avoid out of memory situations is to allocate all your memory at startup.  I think that is preferable on servers, since you can determine the number of processes that will be running and the amount of memory that will be available.  The same goes for embedded systems.

Out of memory situations do happen.  I've had other rogue processes grab huge amounts of memory, which would eventually cause my program to fail even though it was acting normally.

I was thinking about the "save all files" thing.  When I wrote the comment I realized it wasn't a very reasonable thing to do.  Maybe I should bring that point up.  I think the asynchronous automatic backup would be extremely difficult to implement in our application.

christopher baus (tahoe, nv)
Friday, October 17, 2003

Allocating all memory upfront is not _really_ an option in any modern programming language.

Any C++ "string" you use allocates memor, and may fail at some point. It is more than possible that if a memory allocation failed, exception handlers and other stuff (e.g., save all) will also fail, possibly throwing their own exceptions.

Advice about preallocating a buffer at startup and releasing it when memory is exhausted was good when machines ran a single process at a time. But when you have more than one process, it's possible that another process will snatch that memory once you release it, and your own program will still be out of memory.

Really, with Java, Python, Lisp, C++/STL or any other platform that does implicit allocations, an out of memory error is fatal, and the exception handlers might not help you if the code they run needs memory itself (something you have little to no control of).

Ori Berger
Saturday, October 18, 2003

I second Mr Berger. Getting away from heap allocation is damnably hard these days.

However, I think the pre-allocated buffer approach has merit, if this buffer is not freed but instead used for allocations requested after the global heap runs out. Most programs are so dependent on heap-allocated memory that proper recovery is tricky, but this approach might at least enable you to use the existing save routines (which probably allocate memory) to save backups of the data before the inevitable crash.
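Something along these lines is what I have in mind.  A very rough sketch: it ignores thread safety and the new_handler protocol, uses the old C++98-style exception specifications, and the 512K figure is just the size mentioned in this thread:

    #include <cstddef>   // std::size_t
    #include <cstdlib>   // std::malloc, std::free
    #include <new>       // std::bad_alloc

    static char        g_reserve[512 * 1024];   // the 512K emergency buffer
    static std::size_t g_reserveUsed = 0;

    void* operator new(std::size_t bytes) throw(std::bad_alloc)
    {
        if (bytes == 0)
            bytes = 1;                            // new must hand back a unique pointer
        if (void* p = std::malloc(bytes))
            return p;                             // normal case: the system heap

        // Heap exhausted: hand out pieces of the reserve so the "save a backup"
        // path can still allocate.  The reserve itself is never given back.
        bytes = (bytes + 7) & ~std::size_t(7);    // crude alignment
        if (g_reserveUsed + bytes <= sizeof(g_reserve))
        {
            void* p = g_reserve + g_reserveUsed;
            g_reserveUsed += bytes;
            return p;
        }
        throw std::bad_alloc();
    }

    void operator delete(void* p) throw()
    {
        // Blocks handed out from the reserve are never individually freed.
        char* cp = static_cast<char*>(p);
        if (p && (cp < g_reserve || cp >= g_reserve + sizeof(g_reserve)))
            std::free(p);
    }

Once malloc starts failing, the ordinary allocations made by the backup-saving code quietly come out of the reserve instead, which is usually enough to get the backup written before bailing out.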

Tom
Saturday, October 18, 2003

> I second Mr Berger. Getting away from heap allocation is damnably hard these days.

Not that I've done it, but preallocating memory is equivalent to having a private heap: preallocate memory for your private heap, and override operators new and delete (globally, or for each class).
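A per-class version might look roughly like this (the slot and pool sizes are invented, it isn't thread-safe, and a real one would assert that sizeof(Record) actually fits in a slot):

    #include <cstddef>
    #include <new>

    class Record
    {
    public:
        // Every Record comes out of a fixed pool reserved up front;
        // the general-purpose heap is never touched.
        static void* operator new(std::size_t /*size*/) throw(std::bad_alloc)
        {
            if (s_freeList)                       // reuse a previously freed slot
            {
                Slot* s = s_freeList;
                s_freeList = s->next;
                return s;
            }
            if (s_nextUnused < kPoolSize)         // carve a fresh slot
                return &s_pool[s_nextUnused++];
            throw std::bad_alloc();               // private heap exhausted
        }

        static void operator delete(void* p) throw()
        {
            Slot* s = static_cast<Slot*>(p);
            s->next = s_freeList;                 // give the slot back to the pool
            s_freeList = s;
        }

        // ... the actual Record members go here, total size <= sizeof(Slot) ...

    private:
        union Slot { Slot* next; char raw[64]; }; // 64 is an invented slot size
        enum { kPoolSize = 1024 };                // invented pool size
        static Slot        s_pool[kPoolSize];
        static Slot*       s_freeList;
        static std::size_t s_nextUnused;
    };

    Record::Slot        Record::s_pool[Record::kPoolSize];
    Record::Slot*       Record::s_freeList   = 0;
    std::size_t         Record::s_nextUnused = 0;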

> When was the last time you ran out of memory?

Actually it was on an under-memoried QA machine, failing to allocate memory for a 1760x2000 pixel bitmap with 24-bit colour.
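(That's 1760 x 2000 x 3 = 10,560,000 bytes, roughly 10MB for the one bitmap, if my arithmetic is right.)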

Christopher Wells
Saturday, October 18, 2003

The last time I ran out of memory I forgot the PIN number for my ATM card. How embarasking.

OutOfMemoryException
Saturday, October 18, 2003

What I was getting at was that this preallocated memory is small (512K, say), and used only at a pinch. The rest of the time, you use the system heap. You don't ever free it, because another process might immediately take it away.

In general, using the system heap is a good idea. On Windows NT at least, the system heap does something (it's called something like "virtual thingy" :) so that unused pages are removed from your process' address space and the used ones are rearranged to minimize fragmentation.

(I assume that happens on Windows 9x, Unix, and MacOS X too. Pre-X MacOS memory handling is so crap I can't see it being supported.)

Tom
Sunday, October 19, 2003

Garbage collection is also a wonderful way to eliminate heap fragmentation. The heap is re-arranged during every GC.

Brad Wilson (dotnetguy.techieswithcats.com)
Sunday, October 19, 2003

That depends on what algorithm your GC uses ...

Anyway, it's nearly impossible to handle out of memory errors in a large system, as others have noted. The problem is that any reasonable recovery action is almost certain to require you to allocate memory at some point, which you can't safely do.

I think it's one of those cases where crashing out is perhaps the best and most graceful solution. Certainly better than attempting to save state and getting halfway through (although chances are you won't even get that far, since opening a file may well require a memory allocation, for example).

Sum Dum Gai
Monday, October 20, 2003
