Final Classes

A few days ago, a colleague asked me if it was wise to make every class a final class. Here is a more sophisticated answer than I could give at that time.

The question arose because said colleague had noticed that some of his team members had used final on several classes without any apparent need.
When asked, they justified the practice with the fact that those classes had not been designed to be derived from.

If someone wanted to derive from such a class, they would need to check whether it was safe to derive from that class and then remove the final – if needed, after some other modifications.
The result, however, was that developers would be discouraged from deriving from such a class. The reason is that final does not mean “not designed to be derived from”, but instead means designed to _not_ be derived from – which is a crucial difference.
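
To make that difference tangible, here is a minimal sketch – Connection is just an invented name for illustration:

```cpp
// A class whose author decided that derivation shall not happen:
class Connection final {
public:
    void send() { /* ... */ }
};

// Any attempt to derive is a hard compile error:
// class PooledConnection : public Connection {};
// error: cannot derive from 'final' base 'Connection'
```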

The process of deriving from an existing class has two participating roles: The developer writing the would-be base class (or at least the last version thereof) and the developer deriving from that class.
The former communicates to the latter through the way they design the base class, but there is also a set of implicit rules in play, arising from how the language works and is commonly used.

Not designed to be derived from?

Contrary to some postulates of the late 90s, object oriented programming is not primarily about inheritance but about encapsulation and maintaining data and logic in the same place.
Deep inheritance hierarchies are not a design pattern to strive for; in fact, they are a code smell that may hint at design issues.

Therefore, in your garden-variety C++ program you should not find many classes that derive from others or that are derived from.
Frameworks are a different kind of beast where we encounter inheritance as a tool to allow reuse, but most of the code we write consists of one-off applications, so I’ll focus on those here.

So, most classes we design will never be derived from, nor will anyone attempt to derive from them. Making a class a final class by default is a waste of time and keywords.
Here we come to one of the implicit rules developers have to keep in mind:

Assume that a class is not designed to be derived from unless the opposite is obvious.

Deriving from classes

To be clear, I am talking about the classical object-oriented inheritance here.
There are other uses of inheritance in C++ like policy classes, meta programming structures and more, which are treated differently – and which definitely should not be crippled with final.

Classes that are actually designed to be derived from are easy to spot: Designed to be derived from usually means designed to be used in a polymorphic way.
This, in turn, means that such classes have virtual methods and in most cases also a virtual destructor.

So, in order to determine whether a class actually has been designed for inheritance, look out for methods that are declared virtual or, if the class derives from another, for methods marked with override.
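
A rough sketch of both signals – Shape, Circle and Point are invented names for illustration:

```cpp
#include <string>

// Designed to be derived from: virtual methods and a virtual destructor.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
    virtual std::string name() const { return "shape"; }
};

// A derived class signals its role with override.
class Circle : public Shape {
public:
    explicit Circle(double radius) : radius_(radius) {}
    double area() const override { return 3.14159265358979 * radius_ * radius_; }
    std::string name() const override { return "circle"; }
private:
    double radius_;
};

// Not designed to be derived from: no virtual methods, no virtual destructor.
struct Point {
    double x = 0.0;
    double y = 0.0;
};
```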

Is inheritance necessary?

Whether the class that you want to reuse is designed to be derived from or not, there is still the question whether inheritance is actually the right tool to reuse that functionality.
Inheritance is one of the tightest couplings we have in C++ and other object oriented languages, and we should prefer composition over inheritance in most cases.

One of the best heuristics to determine whether we should use inheritance is to ask whether we would override one of the virtual methods of the class we want to reuse.
If that’s not the case, ask yourself twice whether composition is not the better choice.

And no, “I would have to forward hundreds of method calls” is not an argument against composition.
It may be tedious boilerplate and most of the C++ refactoring tools are not yet able to auto-generate those forwarding methods, but it’s usually still better than having to maintain the tighter coupling we get by deriving.
(And if you really have hundreds of methods to forward, you have a different problem altogether).
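
For illustration, here is a minimal composition sketch – History is a made-up class that reuses std::vector as a member and forwards only the few operations it actually needs:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Composition: reuse std::vector without inheriting from it.
class History {
public:
    void add(std::string entry) { entries_.push_back(std::move(entry)); }

    // Forwarding methods - tedious boilerplate, but they keep the
    // coupling loose and the public interface as small as needed.
    std::size_t size() const { return entries_.size(); }
    std::string const& last() const { return entries_.back(); } // precondition: not empty

private:
    std::vector<std::string> entries_; // the reused functionality
};
```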

What does all this say about final classes, then?

If someone wants to derive from a class, they have to check whether it’s the right action and the right base class, whether it’s a final class or not.
If developers don’t do that, it’s a problem in your team that final definitely cannot fix.

On the other hand, a needlessly final class can discourage developers from deriving from it, even if it is the right thing to do in their situation.

In conclusion, “final class by default” is the wrong course of action. Use final for what it’s meant to be: a big red sign saying “you shall not derive further” for leaf classes in a class hierarchy.
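
In code, that intended use might look like this sketch – Stream and FileStream are invented names:

```cpp
#include <cstddef>

// A hierarchy that is designed for derivation...
class Stream {
public:
    virtual ~Stream() = default;
    virtual void write(char const* data, std::size_t len) = 0;
};

// ...with final as the big red sign on a deliberate leaf class:
class FileStream final : public Stream {
public:
    void write(char const* /*data*/, std::size_t /*len*/) override { /* ... */ }
};

// class BufferedFileStream : public FileStream {};
// error: cannot derive from 'final' base 'FileStream'
```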


9 Comments


  1. “Contrary to some postulates of the late 90s, object oriented programming is not primarily about inheritance but about encapsulation and maintaining data and logic in the same place.”

    Looking back – yes, I admit I was guilty of overusing inheritance too :-).


  2. Thanks a lot!
    This post came right in time. Actually, there should be a core guideline telling us how to show that a class is designed to be inherited from.


  3. The whole point of this article is that “final” is the wrong tool to convey that first approximation.


  4. I think that from a software design point of view, ‘final’ is, more or less, never right.

    However, it can be just the optimisation tool you need. There are situations where the ‘final’ keyword makes the difference between virtual dispatch and a trivial inline, and sometimes those situations matter.


    1. On the contrary, from a software design point of view, “final” is almost always right. To a first approximation, all classes are non-inheritable unless explicitly abstract.


  5. Never understood the point of using the final keyword. Do you want to derive from my class? Just try it. If you succeed, I’m glad. Why should I forbid it in one way or another?

    By the way, about composition and forwarding methods: a long time ago, I read an article on how the generation of such methods can be automated. I do not use this technique myself, but I consider it very nice. If someone is really tired of copying and pasting, then this is definitely a better solution than inheritance.


    1. Because, for example, you want to devirtualize? Because you are not providing a virtual destructor and you want to avoid someone complaining?


      1. Someone can complain only about the lack of brains in their own head. If inheritance is not forbidden, that does not mean it can be done. There are many cases in the real world where something cannot be done, although it is not formally forbidden. When someone needs a virtual destructor, they must make sure themselves that they get one. If someone does inheritance wrong, it’s not my problem. I’m not responsible for uses of my class in situations that I could not reasonably foresee. Separation of responsibilities (concerns) is one of the basic principles of OOP.


        1. Sometimes the implementer of the final class does not want to worry about breaking other people’s code when changing their class. Inheriting from a class whose interface has no stability guarantee makes for a brittle code base.

