Handling allocation failure

I’m looking at Array::add and trying to figure out what happens if, for some reason, memory allocation fails. From my reading of the code (assuming neither JUCE_DEBUG nor JUCE_DLL is defined, though I don’t think this assumption matters much), the following is possible:

• something calls Array::add
  • Array::add calls ArrayAllocationBase::ensureAllocatedSize
    • ArrayAllocationBase::ensureAllocatedSize calls ArrayAllocationBase::setAllocatedSize
    • ArrayAllocationBase::setAllocatedSize calls HeapBlock::realloc
      • HeapBlock::realloc calls ::juce_malloc (or ::juce_realloc – either one has the same problem, as far as I can see)
      • juce_Memory.h defines juce_malloc as malloc
      • malloc fails
      • the data member variable in HeapBlock is NULL
  • This line of code in Array::add then uses memory that it shouldn’t:

[code]new (data.elements + numUsed++) ElementType (newElement);[/code]

data.elements is a HeapBlock. I’m not sure which overloaded operator on HeapBlock is relevant here, but as far as I can tell either NULL is being dereferenced, or (NULL + some small integer) is.
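To make the failure mode concrete, here’s a minimal standalone sketch (not JUCE code; the names are made up) of what that placement new does once the allocation has failed:

[code]#include <new>      // placement new
#include <cstddef>  // NULL

struct ElementType { int value; ElementType (int v) : value (v) {} };

int main()
{
    // Simulate HeapBlock::realloc having failed: the element pointer is NULL.
    ElementType* elements = NULL;
    int numUsed = 0;

    // The same shape as the line in Array::add. With elements == NULL this
    // constructs an object at address (NULL + numUsed), i.e. a write through
    // a near-null pointer: undefined behaviour, typically a crash.
    new (elements + numUsed++) ElementType (42);
    return 0;
}[/code]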

Am I reading the code right? It seems like Array::add should either return a bool (or something else that indicates failure) or throw an exception. To do that, ArrayAllocationBase would need to change, and possibly HeapBlock as well.

Are you interested in a patch to communicate this failure to callers of Array::add?

Fair point.

Of course, by the time this happens, your mouse and display will have stopped responding long ago, as your system thrashes itself to a grinding halt… and the Array class is only one of many hundreds of places in your app which could be the first place to run up against a lack of memory, so fixing Array won’t be enough to save an app from an ignominious low-memory death!

In fact, I reckon that in practical terms it’s probably impossible to write a non-trivial app that will reliably handle a low-memory situation… It’s probably better just to dereference a null pointer and die instantly than to attempt to struggle on in a situation where it can’t actually perform any useful operations, and will be using a program state that has almost certainly become inconsistent or broken.

The only fix I’d suggest would be to make HeapBlock throw a bad_alloc if the pointer is null (I really should have done that already, not sure why I didn’t…) But honestly, I can’t imagine any real-world situation where this would ever produce a better outcome than a good old fashioned crash.
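The check itself would only be a couple of lines; roughly this shape (a sketch of the idea, not the actual HeapBlock code):

[code]#include <cstdio>
#include <cstdlib>
#include <new>

// Sketch (not the actual HeapBlock code): throw bad_alloc instead of
// silently leaving a null pointer behind.
static void* mallocOrThrow (std::size_t numBytes)
{
    void* result = std::malloc (numBytes);

    if (result == NULL)
        throw std::bad_alloc();

    return result;
}

int main()
{
    try
    {
        void* p = mallocOrThrow ((std::size_t) -1);  // absurd size: forces failure
        std::free (p);
    }
    catch (const std::bad_alloc&)
    {
        std::puts ("allocation failed");  // instead of a near-null write later
    }
    return 0;
}[/code]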

Having HeapBlock throw bad_alloc sounds great to me. I agree that in most cases all hell is going to break loose anyway, but this gives the program a chance.

Would you like me to write the patch?

I personally would like to keep JUCE exception-free, at least as an option. Handling all out-of-memory situations is not really possible, at least not by just checking the value returned from malloc: the OS tends to over-commit memory, so it is likely that malloc will always succeed, but when you try to access the memory page, the OS won’t be able to map it and will send a SIGSEGV. That is at least what happens on Linux (see the notes in the malloc man page).
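For what it’s worth, the over-commit behaviour is easy to demonstrate on a default-configured 64-bit Linux box (it depends on /proc/sys/vm/overcommit_memory, so this isn’t guaranteed everywhere):

[code]#include <cstdlib>
#include <cstring>
#include <cstdio>

int main()
{
    // Ask for far more memory than the machine is likely to have free.
    const std::size_t huge = (std::size_t) 64 * 1024 * 1024 * 1024;  // 64 GB
    char* p = (char*) std::malloc (huge);

    // With over-commit, malloc often returns a non-null pointer anyway...
    std::printf ("malloc %s\n", p != NULL ? "succeeded" : "failed");

    // ...and only when the pages are touched does the OS discover it can't
    // back them, at which point the process gets killed (OOM killer / SIGSEGV).
    if (p != NULL)
        std::memset (p, 1, huge);

    std::free (p);
    return 0;
}[/code]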

I’m not a fan of exceptions either, but using return values for this is a bigger change. I’m happy to make those changes, but it sounds like you don’t really want those either.
If the OS doesn’t tell the truth with the return value from malloc then crashing seems OK, but we’d at least handle the NULL return value.

Well, even if I don’t explicitly throw anything in Juce, you still need to deal with the fact that the C++ new operator can throw a bad_alloc, so I’m afraid exceptions are a fact of life! Trying to make your code exception-safe is a good discipline anyway, as it forces you to use safe object allocation/destruction strategies.

No, quicker for me just to do it than to mess around applying and checking patches. It’s only a few lines.

When possible, I compile with exceptions disabled, so even new won’t throw anything when it fails. I just abort() when it returns zero, which is basically what you would do anyway when you catch the bad_alloc exception in your catch-all handler. It reduces the compiled code size by more than 20%, and the application is noticeably faster with exceptions disabled.
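The pattern is just this (a sketch with illustrative names, compiled with something like -fno-exceptions):

[code]#include <cstdlib>

// Sketch of the approach described above: with exceptions disabled,
// allocation failure aborts on the spot, which is what a catch-all
// bad_alloc handler would end up doing anyway.
static void* checkedMalloc (std::size_t numBytes)
{
    void* p = std::malloc (numBytes);

    if (p == NULL)
        std::abort();  // die immediately rather than scribble on near-null memory

    return p;
}

int main()
{
    void* p = checkedMalloc (1024);
    std::free (p);
    return 0;
}[/code]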

From my reading of juce_HeapBlock.h in current git (2c24f1c30231f91e514db2c0d8de9fd81ed012e2), it doesn’t look like anything has changed here. Or at least HeapBlock doesn’t throw bad_alloc. I’m currently looking at a different chain of calls where I think there’s some nasty behavior. Yes, memory allocation failure means that things are already bad to the point of likely being unrecoverable, but accessing someone else’s memory doesn’t seem like a good idea either. Consider this call stack:

  • XmlElement::createDocument
  • MemoryOutputStream::MemoryOutputStream
  • MemoryBlock::MemoryBlock (sets size to 2048)
  • HeapBlock::allocate
  • ::calloc or ::malloc; whichever one is called fails, so data is NULL

Note that here, MemoryOutputStream::data::size is still 2048 and MemoryOutputStream::size is 0. This could be OK, but the next call is:

XmlElement::writeToStream

which eventually calls MemoryOutputStream::write

Whether it’s MemoryBlock::setSize that could write to NULL (or some other small address), or the memcpy in MemoryOutputStream::write itself, something is going to write to memory that it shouldn’t.
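Here’s a standalone sketch of that inconsistent state and the write that follows (made-up stand-in types, not the actual JUCE classes):

[code]#include <cstring>
#include <cstddef>

// Stand-in for the real class, just to show the failure mode.
struct MemoryBlockSketch
{
    char*       data;
    std::size_t size;

    MemoryBlockSketch() : data (NULL), size (2048)  // allocation failed,
    {}                                              // but size was already set
};

int main()
{
    MemoryBlockSketch block;
    const char bytes[] = "<?xml version=\"1.0\"?>";

    // Roughly what MemoryOutputStream::write does next: a memcpy into a
    // null destination. Undefined behaviour; typically a crash, possibly
    // a write to a small address that something else can observe.
    std::memcpy (block.data, bytes, sizeof (bytes));
    return 0;
}[/code]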

It all seems easy to prevent by changing HeapBlock::allocate to return bool; then we could check that return value wherever we wanted, incrementally.
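Hypothetically, something of this shape (a sketch of the signature change, not the real HeapBlock template):

[code]#include <cstdlib>

// Hypothetical sketch: allocate() reports failure to the caller instead of
// leaving a null pointer to be dereferenced later.
template <typename ElementType>
struct HeapBlockSketch
{
    HeapBlockSketch() : data (NULL) {}

    bool allocate (std::size_t newNumElements)
    {
        std::free (data);
        data = static_cast<ElementType*> (std::calloc (newNumElements, sizeof (ElementType)));
        return data != NULL;  // callers can now check, incrementally
    }

    ElementType* data;
};

int main()
{
    HeapBlockSketch<char> block;
    return block.allocate (2048) ? 0 : 1;
}[/code]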

In this particular scenario, it wouldn’t take much for MemoryOutputStream::write to detect that something is wrong and return false. It already returns a bool, but at the moment it’s always true. It’s true that the operator<< method in OutputStream.cpp would need to change to communicate the failure higher up (likely by throwing an exception), but at least OutputStream::writeByte could notice the failure, return bool, etc. It seems pretty straightforward to add this information in some places without making a big global change.
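For instance, write() could bail out when the buffer never materialised; a hypothetical sketch (the names and layout are invented, only the shape of the check matters):

[code]#include <cstring>
#include <cstddef>

// Hypothetical sketch of a failure-aware write().
struct OutputStreamSketch
{
    char*       buffer;
    std::size_t position;

    bool write (const void* sourceData, std::size_t numBytes)
    {
        if (buffer == NULL)
            return false;  // the backing allocation failed upstream

        std::memcpy (buffer + position, sourceData, numBytes);
        position += numBytes;
        return true;
    }
};

int main()
{
    OutputStreamSketch s = { NULL, 0 };
    return s.write ("abc", 3) ? 0 : 1;  // returns 1: failure reported, no crash
}[/code]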

I suppose in some cases it’s already clear whether XmlElement::createDocument has failed… if you pass true for includeXmlHeader or a non-empty DTD and the return value is the empty string, it failed. But when includeXmlHeader is false and the DTD is empty, it seems hard to tell the difference between a failure and an empty XML element.

I’ve changed it in the modules branch. The only feasible way of handling an out-of-memory error is with an exception: if you tried to add explicit out-of-memory error handling to every function call that may directly or indirectly allocate memory, you’d end up with most of your codebase dedicated to checking for memory errors.
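With that in place, a caller that cares can trap the failure at whatever level makes sense; a minimal sketch:

[code]#include <new>
#include <cstdio>

int main()
{
    try
    {
        // Stands in for any call that may allocate under the hood,
        // e.g. building an XML document string.
        char* p = new char[1024];
        delete[] p;
    }
    catch (const std::bad_alloc&)
    {
        std::fputs ("out of memory\n", stderr);  // one check, arbitrarily high up
        return 1;
    }
    return 0;
}[/code]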