Levels of Exception Safety

Exceptions are part of C++. They are thrown by standard library classes, and sometimes even when we do not use the standard library directly. So, unless we are in a very restrictive environment like embedded programming and have exceptions disabled in the compiler, we need to be prepared for the fact that exceptions simply can happen.

The four levels

Any piece of code we write has one of four levels of exception safety: no guarantee, the basic guarantee, the strong guarantee and the nothrow guarantee. Let’s consider them one by one.

What does it mean if code has no guarantee regarding exceptions? It simply means that if an exception is thrown during the execution of that piece of code, anything can happen. With “anything” I mean anything bad, from leaked resources to dangling pointers to violated class invariants. Here’s a very simple example:

#include <memory>

struct DoubleOwnership {
  std::unique_ptr<int> pi;
  std::unique_ptr<double> pd;

  DoubleOwnership(int* pi_, double* pd_) : pi{pi_}, pd{pd_} {}
};

int foo() {
  DoubleOwnership object { new int(42), new double(3.14) };
  return *object.pi;
}

At first glance this may look good, since the object passes both pointers straight to the two `unique_ptr`s that take care of the memory release. But this code may leak memory: if the second of the two `new`s fails, it throws a `std::bad_alloc`. The exception propagates out of the function while the memory allocated by the first `new` has not yet been given to a `unique_ptr` and therefore will never be freed.

Arguably, when the allocation of memory for something tiny like an `int` or `double` fails, we are in big trouble anyways, but the point is that this code may leak resources and is therefore not exception safe.
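One way to close the leak, sketched below, is to change the constructor signature (an interface change, so an assumption on our part, not the original design) to accept `unique_ptr`s directly and to use `std::make_unique`, so each allocation is owned the moment it happens:

```cpp
#include <memory>
#include <utility>

// Reworked variant of the example: the constructor takes ownership
// via unique_ptr parameters instead of raw pointers.
struct DoubleOwnership {
    std::unique_ptr<int> pi;
    std::unique_ptr<double> pd;

    DoubleOwnership(std::unique_ptr<int> pi_, std::unique_ptr<double> pd_)
        : pi{std::move(pi_)}, pd{std::move(pd_)} {}
};

int foo() {
    // Each allocation is wrapped in a unique_ptr as part of its own
    // make_unique call, so if the second allocation throws, the first
    // is already owned and gets released during unwinding.
    DoubleOwnership object{std::make_unique<int>(42),
                           std::make_unique<double>(3.14)};
    return *object.pi;  // → 42
}
```

With this signature there is simply no window in which allocated memory is unowned.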

Generally, any code that has not been proven to be exception safe should be assumed to have no guarantee and should be considered unsafe. Code without any exception guarantee is hard to work with – we can’t know for sure the state of the objects after an exception is thrown, which may mean we can’t even properly clean up and destroy them.

Don’t write code that has no exception guarantee.

Easier said than done? Not really, because the basic guarantee really is pretty basic. It says that if an exception is thrown during the execution of our code, no resources are leaked and our objects’ class invariants are not violated. Nothing more, nothing less.

In particular, this means that we don’t necessarily know the content, state or values of our objects, but we know we can use and destroy them, because the invariants are intact. That we can destroy them is probably the most important part of the basic guarantee, since a thrown exception will incur some stack unwinding and affected objects may get destroyed.

Design your classes to have proper class invariants that are always met, even in the presence of exceptions.
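As an illustrative sketch (the class and member names are made up, not from the original article), an operation with only the basic guarantee may leave an object changed, but always valid:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

class Log {
    std::vector<std::string> lines;
public:
    // Basic guarantee: if the second push_back throws, the first line
    // has already been added. The content is then not what the caller
    // asked for, but `lines` is still a valid vector: the Log keeps its
    // invariants and can be used further or destroyed without leaking.
    void addTwoLines(std::string a, std::string b) {
        lines.push_back(std::move(a));
        lines.push_back(std::move(b));
    }
    std::size_t size() const { return lines.size(); }
};
```

The caller knows nothing about the resulting content after an exception, only that the object is safe to touch.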

The strong guarantee adds to the basic guarantee that if an operation fails with an exception, it leaves the objects in the same state they had before. In general, for the strong guarantee we have to do all actions that could possibly throw without affecting any existing object, and then commit them with actions that are guaranteed not to throw an exception.

An example for the strong guarantee is the copy and swap idiom for assignment operators:

Strong& operator=(Strong const& other) {
  Strong temp(other);
  temp.swap(*this);
  return *this;
}

The steps are simple: first create a copy of the other object. This may throw an exception, but if it does, the function is terminated early and nothing has happened to `*this` or the other object yet. Then swap `*this` with the copy. For this to work, the swap operation must not throw any exceptions; typically it only exchanges a few pointers and other built-in values. The swap is the commit action; after that the assignment is complete. When the function is left with the return statement, the temporary object is destroyed, cleaning up the state previously owned by `*this`.
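A minimal sketch of what such a class could look like (the members and accessors are assumptions for illustration); the member `swap` only exchanges built-in values, so it can safely be declared `noexcept`:

```cpp
#include <algorithm>
#include <cstddef>

class Strong {
    std::size_t size = 0;
    int* data = nullptr;
public:
    Strong() = default;
    explicit Strong(std::size_t n) : size{n}, data{new int[n]{}} {}
    Strong(Strong const& other)
        : size{other.size}, data{other.size ? new int[other.size] : nullptr} {
        std::copy(other.data, other.data + size, data);
    }
    ~Strong() { delete[] data; }

    // The commit step: only exchanges built-in values, cannot throw.
    void swap(Strong& other) noexcept {
        std::swap(size, other.size);
        std::swap(data, other.data);
    }

    Strong& operator=(Strong const& other) {
        Strong temp(other); // may throw; *this is untouched so far
        temp.swap(*this);   // nothrow commit
        return *this;
    }                       // temp's destructor frees the old state

    std::size_t count() const noexcept { return size; }
};
```

If the copy construction of `temp` throws, both `*this` and `other` still hold their previous values, which is exactly the strong guarantee.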

Providing the strong guarantee can be costly. For example, imagine if the `Strong` object in the example allocates large amounts of memory. Instead of reusing the already allocated memory, the temporary has to allocate new memory just to release the old one after the swap.

Provide the strong guarantee only if needed. Document operations that have the strong guarantee, use the basic guarantee as default.

The last missing level is the nothrow guarantee. It simply means that an operation cannot throw an exception. As you have seen, nothrow operations are needed to provide the strong and basic guarantees. There are some operations that should never throw, no matter what:

  • destructors have to be nothrow, because they are called during stack unwinding. If an exception is active and a second exception is thrown during stack unwinding, the program will be terminated.
  • Any cleanup operations like closing files, releasing memory and anything else that might be called from a destructor should not throw.
  • swap operations. They are commonly expected not to throw. If you have an operation that exchanges the values of two objects but can’t provide the nothrow guarantee, don’t call it `swap` but something different like `exchange`.

Consider using the keyword `noexcept` to document operations that provide the nothrow guarantee.
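A short sketch of that guideline (the class is invented for illustration): the destructor and the swap of a resource-owning class provide the nothrow guarantee, and `noexcept` both documents and enforces it:

```cpp
#include <cstddef>
#include <utility>

class Buffer {
    char* data = nullptr;
public:
    Buffer() = default;
    explicit Buffer(std::size_t n) : data{new char[n]} {}
    Buffer(Buffer const&) = delete;
    Buffer& operator=(Buffer const&) = delete;

    // Destructors are implicitly noexcept; releasing memory cannot throw.
    ~Buffer() { delete[] data; }

    // swap only exchanges a pointer, so the nothrow guarantee is easy
    // to provide; noexcept makes the promise visible to callers.
    void swap(Buffer& other) noexcept { std::swap(data, other.data); }

    bool empty() const noexcept { return data == nullptr; }
};
```

If a `noexcept` function ever does throw, the program is terminated, so the keyword is a contract, not just documentation.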


Reasoning about exception safety can be hard, but thinking in the four levels (no guarantee, basic guarantee, strong guarantee, nothrow guarantee) makes it much easier. Take a brief look at each function you write and make sure it has at least the basic guarantee. If you use code you have not written, assume it has the basic guarantee as well, unless it is documented otherwise.



    1. No, it won’t work just fine. Order of evaluation is not the issue here. The order of execution will basically be:


      int* tempPtr1 = new int(42);
      double* tempPtr2 = new double(3.14);
      // call constructor DoubleOwnership(tempPtr1, tempPtr2)
      // ... etc
      The unique_ptrs will be constructed inside the DoubleOwnership constructor, i.e. after both new expressions have been evaluated. Now, if the new double(3.14) throws, e.g. because there is no more memory, there is no owner of the memory allocated by the first new. Therefore it creates a leak. See also Jens’ comment.


  1. You’ve written a nice summary here.

    BTW, your assignment operator can be tightened up a bit. Make the temporary copy by passing “other” by value:

    Strong& operator=(Strong other) {
        swap(other);
        return *this;
    }

    (I suppose you might still want to say “other.swap(*this);”, to emphasize that you’re not calling a global swap function.)

    The above is not only a bit more readable, it might also be a bit more efficient, as it is easier for a compiler to optimize.


  2. I don’t understand why in the example memory will be leaked. Isn’t the first unique_ptr destroyed?

    Maybe I can understand it if you suggest a way to make it exception safe (with the same interface and the same main function).


    1. Memory will be leaked because both `new` expressions are evaluated before the DoubleOwnership constructor is called. In that case, if the second new fails, the unique_ptrs are never created and we end up with allocated memory and no handle to it.


      1. Does the use of make_unique help? How would you make it safe? It looks like there is no way with the current interface. (I would make the arguments unique_ptr in the first place.)


        1. Sadly, make_unique won’t help directly with this function signature, since it forces you to separate memory allocation (outside the function) and taking ownership (inside the function). The best way to make this safe is indeed to change the signature to expect two unique_ptr. A workaround would be to first allocate the memory safely and then pass it to the function in a second, nonthrowing step:

          int foo() {
            auto pi = std::make_unique<int>(42);
            auto pd = std::make_unique<double>(3.14);
            DoubleOwnership object { pi.release(), pd.release() };
            // ...
          }


          1. Could you explain why

            int foo() {
              DoubleOwnership object { std::make_unique<int>(42), std::make_unique<double>(3.14) };
            }
            wouldn’t solve the problem? I think this first creates a temporary std::unique_ptr, and if the second constructor throws, the unique_ptr’s destructor will be called to release the memory (see also Herb Sutter’s GotW 102: http://herbsutter.com/gotw/_102/). The C++ Core Guidelines authors also think this solves the problem:

            fun(shared_ptr<Widget>(new Widget(a, b)), shared_ptr<Widget>(new Widget(c, d))); // BAD: potential leak

            This is exception-unsafe because the compiler may reorder the two expressions building the function’s two arguments. In particular, the compiler can interleave execution of the two expressions: Memory allocation (by calling operator new) could be done first for both objects, followed by attempts to call the two Widget constructors. If one of the constructor calls throws an exception, then the other object’s memory will never be released!

            This subtle problem has a simple solution: Never perform more than one explicit resource allocation in a single expression statement. For example:

            shared_ptr<Widget> sp1(new Widget(a, b)); // Better, but messy
            fun(sp1, new Widget(c, d));

            The best solution is to avoid explicit allocation entirely: use factory functions that return owning objects:

            fun(make_shared<Widget>(a, b), make_shared<Widget>(c, d)); // Best


            The only problem I see is when DoubleOwnership’s constructor releases the pointer from one of the unique_ptrs, and then throws.

          2. Your first snippet would indeed be the best. However, it does not use the constructor taking two raw pointers. That is exactly the point I wanted to make: that constructor is difficult to use safely.
            I don’t see the problem you see with throwing after releasing: unique_ptr‘s constructors and release operation are noexcept, so once the memory allocations have succeeded, the DoubleOwnership construction will succeed as well.

  3. Any cleanup operations like closing files, … that might be called from a destructor should not throw.

    That way, the system’s error return code on closing is generally lost. No, that’s the worst advice to give. If you need a nonthrowing close or destroy, write one, but also provide a throwing version as the default.


    1. Usually there are no errors in closing something, especially since you normally neither expect nor are interested in whether shutting something down went smoothly or not. So in most cases it is only natural to have a nonthrowing cleanup. In addition, if you can’t clean something up due to an error, what are you going to do? You will throw it away anyways.
      In addition, having two operations that do (mostly) the same thing, where one can throw and the other doesn’t, is a maintenance burden and a disaster waiting to happen. Someone will of course use the wrong version, causing an exception during stack unwinding. Debugging those kinds of errors can be a real nightmare.


      1. Usually there are no errors in closing something

        Not? Most Windows functions for closing and destroying can return an error, and on other systems too, I guess. Think of files on servers: opening may be checked, but writing is not checked because of caching. So the only possible point to report an error is at closing the file. Real example: Windows 95 did not check writing on a floppy disc, it wrote all data at closing (unlike Windows NT 4, where ejecting and reinserting while writing was recoverable).
        std::uncaught_exceptions() may be useful for opponents of two equal functions. Or catch(…) in the destructor. Or making the non-throwing version protected or private. Or whatever.
        The point is: most cleanup operations must be able to throw. Normally you have to clean up manually, not via the destructor; your second of the three points is incorrectly formulated.


        1. My point is exactly that you should normally not have to clean up manually. Having to clean up is a responsibility that should be encapsulated in a RAII class. I am writing about very general guidelines here, so there will naturally be cases where those guidelines don’t fit. However, I don’t see how, in the context of exceptions in modern C++, error codes in a 20-year-old C library make a good counterexample.


      2. Errors that occur during cleanup are rare, but they often indicate something catastrophic going on. The POSIX close() function is a great example showing two kinds of cleanup errors.

        (1) Errors such as EIO, which may happen if, for example, a network error occurs when closing a file on an NFS mount. This error indicates your data isn’t saved and is a problem that should not be ignored—no different than a write() call that fails due to EIO.

        (2) Errors such as EINVAL, which happen as a result of closing an invalid file descriptor. This indicates a dangling file descriptor in your program (i.e., a logic bug) or a corrupted file descriptor (i.e., who knows?). Again, neither error should be ignored. EINVAL indicates your program is in “undefined behavior” territory, and continuing as though nothing bad happened is a poor choice.

        It’s exactly because cleanup errors are, as you say, “unusual,” that ignoring them is a bad idea. The common case is that no cleanup error occurs and your program runs okay. But when a cleanup error does occur, something catastrophic may be going on and it’s better to be alerted sooner rather than later. I prefer terminating. After all, a robust program recovers from a previous crash—even an unusual one. It’s the non-robust programs that can’t deal with crashing.


        1. As far as I know, those POSIX functions are C functions which do not report errors by throwing exceptions. Cleanups that return error codes are acceptable – you can choose to ignore them or deal with the problem.
          However, cleanups that throw exceptions are a problem. You can hardly write RAII classes that use them, since if an exception is thrown from a destructor, be it during stack unwinding or during normal operation, terminate will be called since destructors are implicitly declared noexcept.
          And just crashing a program is a bad decision, because it means just ending the program without further notice and possibly losing more than just the data of that problematic file. Instead, raising all red flags you have and then gracefully shutting down the program is the better option.
          Note that I am not generally advocating against raising exceptions when a cleanup fails. I am advocating against the cleanup itself raising the exception. The difference is that during a normal function, you can check for cleanup success and in case of a problem throw (which may or may not lead to program shutdown), while in the destructor of the RAII class that manages the cleanup you don’t throw.

