I’ve recently changed jobs, and my new employer has a company blog with some pretty good content. One of the more recent posts is titled ‘What’s wrong with: “I don’t write any tests since I am not a tester”?’ – which made me think about the relationship developers have with testing, and about the self-image of any developer who would say that sentence in earnest.
To test or not to test
So, what is testing about? It’s about checking if something works. Whether you write a new piece of code, or whether you change or even just delete it, you have to check if it does what it’s meant to do.
Have you ever changed a source file and just checked it in, closed the ticket and maybe even gone home without any further verification that what you just did was the right thing to do? I have. I guess everyone has. And what happened next was almost surely one of the following:
- The CI or build server sends you a mail that there is a build failure or that some automatic test failed.
- An angry coworker knocks at your door, because he pulled the newest code, got a build error and spent some fun time researching where the error originated and who checked that piece of code in.
- Some annoyed tester sends back the ticket you just closed because the problem is not fixed at all.
There are lots of other possible outcomes. However, for anything that involves more than fixing a stupid little typo, the least likely outcome is that everything really goes as you intended right away.
What to test
So we do have to test the code changes we implement. The question is: how much should we test, how thorough should our tests be?
There are three to four things we should test after changing code:
- Is the program still executable? This certainly includes a compilation of the translation units affected by the code change, but it does not stop there. You may need as much as a full build involving the linker, a deploy to the target platform(s) and even a smoke test.
- Does the change have the desired effects? Is the bug really gone, or does the feature you implemented behave as specified? Are corner cases covered?
- Is the change free of undesired side effects? Make sure that what you just did only does what you intended it to do. There should be no regressions, it should not change or break other features, and it should not degrade performance or other usability metrics.
- Is the code quality still OK? This is a little less definite than the other points, but it still matters. Some may see this as a secondary issue, because internal code quality is not visible to the user, so it has no immediate customer value. However, unless you throw away the code in the near future, bad code quality can severely compromise your ability to maintain your code, which will in turn have an impact on the customer value.
But it’s not my job/role, we have QA for that!
Oh, but it is. Our job is to write working code. So, unless we know for a fact that our code works, we are not done. Period. Our job description is not “type random characters into source code files”, and it never will be. Such jobs are usually paid very poorly, a banana or two per day at most.
That does not mean that QA is not needed. A tester is the safety net for the developers. They do not just make sure that the code works the way the developer thought it should; they check that the way the software actually works corresponds to the way it is supposed to work.
Their job is not to play the other side of an endless game of task ping-pong. Whenever a tester sends a task back to development, something went wrong. Ideally, in such a case not only should the problem be fixed, but the cause should also be investigated, so it won’t happen again.
How to test
I’ll go back to the four different things we should test, because we might want to test them differently.
Build and deploy will always be the first thing to do, so it will naturally be the thing we do most often. As programmers and – according to the cliché – tech addicts, we should use our talent and the tools at hand to do what we do best: automate the build.
If anything we have to do as frequently as building and deploying takes more than a single click or command, then there’s still room for improvement. Since it’s run so often, make sure that it does not take too long. If the smoke test or the deployment to every single platform takes more than about a minute, leave them out of the fast build, but prepare another single command that can perform the full deployment test.
Tests against unwanted changes will, depending on their scope, have to be run relatively often, too, so you should automate most of them as well. There are several levels of granularity, usually with corresponding run times.
Unit tests check the behavior of single classes, functions, and sometimes smaller submodules. Those checks operate at the level of the class design, so nobody but the developer, who knows that design, can write those tests.
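To make that concrete, here is a minimal sketch of what such a test can look like, using nothing but plain asserts; the function clampPercent and its expected behavior are made up for illustration:

```cpp
#include <cassert>

// Hypothetical function under test: restricts a value to the range [0, 100].
int clampPercent(int value) {
    if (value < 0)   return 0;
    if (value > 100) return 100;
    return value;
}

int main() {
    // Normal case: values inside the range pass through unchanged.
    assert(clampPercent(42) == 42);

    // Corner cases: the boundaries themselves and values just outside them.
    assert(clampPercent(0) == 0);
    assert(clampPercent(100) == 100);
    assert(clampPercent(-1) == 0);
    assert(clampPercent(101) == 100);

    return 0; // all checks passed
}
```

In a real project, a unit test framework such as Google Test, Catch2 or Boost.Test would replace the bare asserts and report every failure instead of stopping at the first one, but the principle stays the same.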
Acceptance and integration tests check larger parts of the application, but usually not the whole thing. Therefore a developer is often needed to make them work, e.g. by mocking the missing parts of the program.
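As a rough sketch of what such mocking can look like (all names here are hypothetical): the missing part of the program is hidden behind an interface, and the test substitutes an implementation that returns canned data, so the larger workflow can still be exercised:

```cpp
#include <cassert>
#include <string>

// Hypothetical interface to a part of the system that is not available in the
// test environment, e.g. a remote price service.
struct PriceService {
    virtual ~PriceService() = default;
    virtual double priceFor(const std::string& article) = 0;
};

// Mock written by the developer: returns canned data instead of talking to
// the real service.
struct MockPriceService : PriceService {
    double priceFor(const std::string&) override { return 9.99; }
};

// Larger part of the application under test.
class Checkout {
public:
    explicit Checkout(PriceService& prices) : prices_(prices) {}
    double total(const std::string& article, int quantity) {
        return prices_.priceFor(article) * quantity;
    }
private:
    PriceService& prices_;
};

int main() {
    MockPriceService mock;
    Checkout checkout{mock};
    // The workflow can be exercised even though the real service is missing.
    assert(checkout.total("apple", 3) == 9.99 * 3);
    return 0;
}
```

Dedicated mocking libraries can generate such test doubles, but hand-written ones like this are often enough.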
There is much more to be said and written about all the different types of regression tests, but I won’t go into details at this point. The important thing is: If you change your code often, you’ll want to run most of the tests often. So you’d better automate them and make them fast.
Checking whether a code change was successful usually requires checks at those same levels, and usually you need to run them more than once, especially if the requirements you are implementing are nontrivial.
So guess what? You’d better automate them. Doing so has the added benefit that you can add them to the regression test suites once your feature is done. Done as in compiled, tested and approved by QA.
Code quality can be checked automatically as well, although tools like static analyzers should not be the final authority on whether your code quality is good or bad. Code quality can be subjective, and not everything can be done by a tool, so you’d better have someone review your code, either by doing pair programming or in a code review as part of your process.
Q: Should I really automate everything?
Well, almost. You can’t automate everything, because for example exploratory testing is a highly intuitive process that can’t be done by machines. Other tests may be very time consuming to automate, e.g. because the test objects do not match the test tools available.
However, automated tests are not only reproducible and thus more reliable than manual tests. If done right, they can also be triggered semi-automatically or even fully automatically, e.g. by a code check-in. That way, all that is needed to run them is a computer, and precious manpower is saved. The net result is that tests are run even more often, which in turn leads to bugs being found earlier and to more reliable software.
Bottom Line: Automate any test that technically can be automated, as long as setting up the automation costs less than the accumulated effort of manual testing. Strive to design your code and architecture in a way that facilitates test automation.
When everything is automated, why have human testers at all?
Firstly, as written above, a human tester has a different view of the functional requirements than a developer, so acceptance tests in particular benefit from cooperation between testers and developers.
Secondly, there always will be things that have not been considered before, so there will be corners not covered by automatic tests. Someone has to poke around in the software, trying to find those blind spots, which is also known as exploratory testing.
Conclusion for developers
Every developer should have some professional pride. That pride demands that we strive for perfection, which in turn means that we want to deliver perfect, flawless software. We will never get there (hence the need for QA as a last line of defense), but our aim as professionals should be to come as close as we can.
And we definitely come closer if we test our code as thoroughly as possible, so testing, and especially writing automated tests, is not some inferior task that should be avoided. It’s a central part of our job.
Comments
Thought-provoking article for software developers who are not willing to test their software because they really think that it’s an inferior task…
I definitely agree with this. One thing that I used to struggle with was testing behaviors that aren’t easy to test manually, like failure conditions. I would write the code to handle the failure, but never test that it handled the failure properly, since that seemed too difficult. Unit testing and dependency injection go a long way toward solving that issue.
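A minimal sketch of the approach described here, with made-up names: the dependency is injected through an interface, and the test passes in an implementation that is forced to fail, so the error-handling path can be exercised deterministically:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical dependency: some storage backend that may fail.
struct Storage {
    virtual ~Storage() = default;
    virtual void save(const std::string& data) = 0;
};

// Test double that always fails, simulating a condition that is hard to
// reproduce manually (full disk, lost connection, ...).
struct FailingStorage : Storage {
    void save(const std::string&) override {
        throw std::runtime_error("simulated storage failure");
    }
};

// Code under test: should swallow the error and report failure via its return value.
class ReportWriter {
public:
    explicit ReportWriter(Storage& storage) : storage_(storage) {}
    bool write(const std::string& report) {
        try {
            storage_.save(report);
            return true;
        } catch (const std::exception&) {
            return false; // the error-handling path we want to verify
        }
    }
private:
    Storage& storage_;
};

int main() {
    FailingStorage failing;
    ReportWriter writer{failing};
    // The failure path is exercised without needing a real failing disk.
    assert(writer.write("quarterly report") == false);
    return 0;
}
```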
This whole post is absolutely correct. I have never understood the mentality behind those who can just write a piece of code, commit it, and go home. I have fixed so many bugs from other devs who do that, and it annoys the crap out of me!!
Coding is your craft. Own it. Be proud of it. You cannot be proud of crappy code, and the biggest sign that your code is crappy is that it doesn’t even work. So if you don’t know it works, then you don’t know it isn’t crappy, and you cannot be proud of your code.
I’ve seen UI engineers complete entire stories and push them to QA, and the first thing that happens is a crash, or a button they implemented just doesn’t do anything because they forgot to implement the button handler!! You know what would have caught that? Thirty seconds of your time making sure that the code you wrote works. Instead you wasted QA’s time, and the time of other engineers who grab that bug to help the team out and then realize what a stupid mistake it was and who made it. It makes you look bad to the rest of your team, cuts down morale, tons of stuff.
Test. Your. Code. Don’t be that guy. At the very least, the very least make sure it builds, and ensure that the code at least does logically what you expect it to do. That means if you implemented a button press handler that opens a dialog, even if it is as simple as “System.OpenDialog()”, make sure that it actually opens the dialog before saying you’re done!!
I have seen that situation. The worst thing is that it’s contagious. If many of the devs in a team don’t write tests, the others get discouraged and give up after a while. This leads to a culture where automated tests are a secondary concern, code quality is poor and the bug frequency is high. If people don’t care about tests, unit test suites degrade and can take hours to run (I have blogged about that earlier), which in turn discourages everyone from running them. The project gets into a spiral of ever increasing technical debt and only the dedication of the whole team can improve the situation, because individuals can not work against a team culture of test negligence.
So, if you are in a situation like this, after a while you give up because it’s too taxing to swim against the current, which means you either stop caring about tests as well or you quit.