A few days ago, a colleague asked me if it was wise to make every class a
final class. Here is a more sophisticated answer than I could give at that time.
The question arose because said colleague had noticed that some of his team members had used
final on several classes without any apparent need.
When asked, they justified the practice with the fact that those classes had not been designed to be derived from.
If someone wanted to derive from such a class, they would first need to check whether it was safe to do so and then remove the
final – if necessary after some other modifications.
The result, however, was that developers would be discouraged from deriving from such a class. The reason is that
final does not mean “not designed to be derived from”, but instead means
designed to _not_ be derived from – which is a crucial difference.
The process of deriving from an existing class has two participating roles: The developer writing the would-be base class (or at least the last version thereof) and the developer deriving from that class.
The former communicates to the latter through the way they design the base class, but there is also a set of implicit rules in play, stemming from how the language works and how it is commonly used.
Not designed to be derived from?
Contrary to some postulates of the late 90s, object-oriented programming is not primarily about inheritance but about encapsulation and maintaining data and logic in the same place.
Deep inheritance hierarchies are not a design pattern to strive for; in fact, they are a code smell that may hint at design issues.
Therefore, in your garden variety C++ program you should not find many classes that derive from others or that are derived from.
Frameworks are a different kind of beast where we encounter inheritance as a tool to allow reuse, but most of the code we write are one-off applications, so I’ll focus on those here.
So, most classes we design will never be derived from, nor will anyone attempt to derive from them. Making a class a
final class by default is a waste of time and keywords.
Here we come to one of the implicit rules developers have to keep in mind:
Assume that a class is not designed to be derived from unless the opposite is obvious.
Deriving from classes
To be clear, I am talking about the classical object-oriented inheritance here.
There are other uses of inheritance in C++ like policy classes, meta programming structures and more, which are treated differently – and which definitely should not be crippled with
final.
Classes that are actually designed to be derived from are easy to spot: Designed to be derived from usually means designed to be used in a polymorphic way.
This, in turn, means that such classes have virtual methods and in most cases also a virtual destructor.
So, in order to determine whether a class actually has been designed for inheritance, look out for methods that are declared
virtual or, if the class derives from another, for methods marked with
override.
Is inheritance necessary?
Whether the class that you want to reuse is designed to be derived from or not, there still is the question to answer whether inheritance is actually the right tool to reuse that functionality.
Inheritance is one of the tightest couplings we have in C++ and other object oriented languages, and we should prefer composition over inheritance in most cases.
One of the best heuristics to determine whether we should use inheritance is asking whether we would
override one of the virtual methods of the class we want to reuse.
If that’s not the case, ask yourself twice whether composition is not the better choice.
And no, “I would have to forward hundreds of method calls” is not an argument against composition.
It may be tedious boilerplate and most of the C++ refactoring tools are not yet able to auto-generate those forwarding methods, but it’s usually still better than having to maintain the tighter coupling we get by deriving.
(And if you really have hundreds of methods to forward, you have a different problem altogether).
What does all this say about final classes, then?
If someone wants to derive from a class, they have to check if it’s the right action and the right base class, whether it’s a
final class or not.
If developers don’t do that, it’s a problem in your team that
final definitely cannot fix.
On the other hand, a needlessly
final class can discourage developers from deriving from it, even if it is the right thing to do in their situation.
In conclusion, “
final class by default” is the wrong course of action. Use it as what it’s meant to be: a big red sign saying “you shall not derive further” for leaf classes in a class hierarchy.