I had a conversation recently with a friend regarding testing. He was skeptical about the value of tests. I couldn’t disagree more. I don’t think I did a good job expressing why they are important, though. So, I decided to write down my thoughts.
The most direct reason tests matter is that they prevent regressions. As we edit our software, we’re bound to break things. It just happens. We can’t be perfect. So, we write tests to catch ourselves when we do break things.
But there’s an important side effect of this: confidence. Having tests allows us to make changes aggressively, without fear of causing damage. And that’s important if you want to be able to quickly adapt your software as your business learns and grows.
Tests also have a number of other second-order effects.
- They document the system.
This is especially important if you want others to be able to jump in and work with your code. By simply reading the relevant tests, a new developer can learn how to work within your system.
- They speed up the development of complex or difficult changes.
Sometimes you know exactly how to build a new feature or make a change. More often, how you’ll make it work is a bit hazy. Tests can guide us toward a solution, keeping us on track without wasting extra time. It’s almost zen-like when you do it right.
- They create clarity in your system design.
In addition to helping you make complex changes, writing tests guides you to more effectively separate areas of your application into discrete and independent parts. Having a well-separated system further enables you to make changes more quickly in the future.
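As a small sketch of the documentation effect (the `apply_discount` function and its pricing rules are invented purely for illustration), a newcomer can learn the system’s behavior just by reading the test names:

```python
# Hypothetical example: tests whose names and bodies double as documentation.
# A new developer reading these learns the business rules without ever
# opening the implementation.

def apply_discount(price, is_member):
    """Members get 10% off; prices never drop below zero."""
    discounted = price * 0.9 if is_member else price
    return max(discounted, 0.0)

def test_members_get_ten_percent_off():
    assert apply_discount(100.0, is_member=True) == 90.0

def test_non_members_pay_full_price():
    assert apply_discount(100.0, is_member=False) == 100.0
```

The descriptive names do the documenting: run under a test runner like pytest, the test list alone reads as a summary of the system’s rules.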
How can we improve a barely tested code base?
I know that not all projects play by the book. Every project lies somewhere on the spectrum from haphazard mess to beautiful rainbow of code. I’d like to address projects that are trying to become more “rainbow-like”.
Since one of the primary goals of writing tests is to protect the value of the system we already have, it’s almost understandable that in a low-value system tests are few and far between. This can be the case in a very new project, or in one that was intended as a one-off effort.
But as the value of our system increases, it becomes more important to protect the value we’ve created. Tests are like an insurance policy. You buy a car. You don’t want to worry every time you drive it.
The downside is that we often feel as though we make faster progress without writing tests. Of course, this is true until it isn’t. Adding test coverage eventually lets a project make steadier progress, at the cost of a small overhead per change.
So how can we introduce tests with a minimum of interruption to our perceived velocity? I have a personal rule for this. In the following two cases, we should always write a test.
- If you’re writing a completely new feature/class/module/method, etc. – write just a basic test. In other words, give it the most basic of inputs and expect the correct output. We’re just making sure that it’s not completely and utterly broken.
- If you’re fixing a bug – write a test that elicits that bug. Then fix the bug, making sure the test passes.
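A sketch of both rules in pytest style (the `slugify` helper and the bug scenario are hypothetical, invented for illustration):

```python
# Hypothetical example: a small slugify() helper and the two kinds of
# tests described above.

def slugify(title):
    """Convert a title to a lowercase, hyphen-separated slug."""
    # split() with no argument collapses runs of whitespace,
    # which is what fixed the bug captured by the second test.
    words = title.lower().split()
    return "-".join(words)

# Rule 1: a brand-new function gets one basic "is it not utterly
# broken?" test -- the most basic input, the correct output.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

# Rule 2: when a bug is reported (say, "extra spaces produce empty
# segments"), first write a test that elicits it, then fix the code.
# The test stays behind as a regression guard.
def test_slugify_collapses_whitespace():
    assert slugify("Hello   World") == "hello-world"
```

The point isn’t the helper itself but the shape of the suite: one cheap smoke test per new unit, plus one test per bug ever found.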
What I’m trying to capture with these rules is the Pareto principle. In other words, I want to identify the 20% of effort that gives us 80% of the results.
If the change you’re trying to make doesn’t fall under one of these two situations, use your discretion. Sometimes I know that writing a test is going to help me figure out a certain issue. Or maybe I just feel like it.
The insight here is that most of the time when we break something, we completely break it. That’s the 80%. That’s why I write a basic test for everything. Because that simple test will catch most mistakes. Then when a bug occurs, I write a test for that as well. Slowly but surely we’ll approach a much stronger level of test coverage, with hardly any extra real effort.
But tests are such a hassle!
If you think that writing tests is too much work, my simple rebuttal is: “you’re doing it wrong”. Modern test suites tend to be so nicely designed that writing a test is hardly any work at all beyond what you would have already done. Sorry. It’s true.
I think the issue that causes this perception is simple unfamiliarity with the test suites themselves. Because it’s a new thing to learn, and you don’t feel you need it, you think it’s too much work. I can promise you that, just like learning a new language or framework, the time spent familiarizing yourself with a good testing library will pay you back immensely.
Nowadays, if I’m considering consulting on a project and I notice that they don’t have any tests, I don’t take the contract. Honestly, it makes me look bad when I can’t move quickly because I constantly have to stop and consider whether I’m breaking other parts of the system. In one case, I did take the job, but the very first thing I did was add a proper test suite.
That sums up most of my personal views on testing. Leave a comment if you feel like it. Thanks so much for reading!