If make_shared/make_unique can throw bad_alloc, why is it not a common practice to have a try catch block for it?

I see two main reasons.

  1. Failure of dynamic memory allocation is often considered a scenario that doesn't allow for graceful recovery. The program is terminated, and that's it. This implies that we often don't check for every possible std::bad_alloc. Or do you wrap every std::vector::push_back in a try-catch block because the underlying allocator could throw?

  2. Not every possible exception must be caught right at the immediate call site. There are recommendations that the ratio of throw statements to catch blocks should be much greater than one. This implies that you catch exceptions at a higher level, "collecting" multiple error paths into one handler. The case where the T constructor throws can also be treated this way. After all, exceptions are exceptional. If constructing objects on the heap is so likely to fail that you have to check every such invocation, you should consider a different error-handling scheme (std::optional, std::expected, etc.).

In any case, checking for nullptr is definitely not the right way of making sure std::make_unique succeeds. It never returns nullptr - either it succeeds, or it throws.


Throwing bad_alloc has two effects:

  • It allows the error to be caught and handled somewhere in the caller hierarchy.
  • It produces well-defined behaviour, regardless of whether or not such handling occurs.

The default for that well-defined behaviour is for the process to terminate in an expedited but orderly manner by calling std::terminate(). Note that it is implementation-defined (but, for a given implementation, well-defined nonetheless) whether the stack is unwound before the call to terminate().

This is rather different from an unhandled failed malloc(), for example, which (a) results in undefined behaviour when the returned null pointer is dereferenced, and (b) lets execution carry on blithely until (and beyond) that moment, usually accumulating further allocation failures along the way.

The next question, then, is where and how, if at all, calling code should catch and handle the exception.

The answer in most cases is that it shouldn't.

What's the handler going to do? Really there are two options:

  • Terminate the application in a more orderly fashion than the default unhandled exception handling.
  • Free up some memory somewhere else and retry the allocation.

Both approaches add complexity to the system (the latter especially), which needs to be justified in the specific circumstances - and, importantly, in the context of other possible failure modes and mitigations. (For example, a critical system that already contains non-software failsafes might be better off terminating quickly to let those mechanisms kick in, rather than futzing around in software.)

In both cases, it likely makes more sense for any actual handling to be done higher up in the caller hierarchy than at the point where the allocation failed.

And if neither of these approaches adds any benefit, then the best approach is simply to let the default std::terminate() handling kick in.