Another point is that code coverage has diminishing returns. In practice there is broad agreement on this: most projects set the lower bound for coverage at around 80%. There is also supporting research, such as ‘Exploding Software-Engineering Myths’. What follows are general arguments.
Even with 100% code coverage, you still have to trust your dependencies, which can, in principle, have 0% code coverage themselves.
For many products, it is acceptable to have the common cases work but not the exotic ones (‘Unit Test Fetish’). If you miss a corner case bug due to low code coverage that affects 0.1% of your users you might survive. If your time to market increases due to high code coverage demands you might not survive. And “just because you ran a function or ran a line does not mean it will work for the range of inputs you are allowing” (source).
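The point that executing a line does not mean it works for all allowed inputs can be made concrete with a small sketch. The function and test below are hypothetical, not from any cited source:

```python
# Hypothetical example: a test suite that achieves 100% line
# coverage yet misses a real bug.

def average(values):
    # Bug: crashes on an empty list (ZeroDivisionError),
    # even though every line is exercised by the test below.
    return sum(values) / len(values)

def test_average():
    # This single test runs every line of average(), so a coverage
    # tool reports 100% -- but average([]) still fails at run time.
    assert average([2, 4, 6]) == 4
```

The coverage report is technically correct: every line ran. It simply says nothing about the empty-list input that a real caller might pass.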
Code Quality and Unit Tests
There is the claim that making your code unit-testable will improve its quality. Many arguments and some empirical evidence exist in favor of that claim, so I will shed light on the other side.
The article ‘Unit Test Fetish’ states that unit tests are an anti-architecture device. Architecture is what makes software able to change. Unit tests ossify the internal structure of the code. Here is an example:
Imagine you have three components, A, B, and C. You have written an extensive unit test suite to test them. Later on you decide to refactor the architecture so that the functionality of B is split between A and C. You now have two new components with different interfaces. All the unit tests are suddenly rendered useless. Some test code may be reused, but all in all, the entire test suite has to be rewritten.
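The scenario above can be sketched in a few lines. The components and methods here are hypothetical stand-ins, chosen only to show how a test written against B's interface stops compiling, let alone passing, after the refactoring:

```python
# Before the refactoring: component B owns both parsing and formatting.
class B:
    def parse(self, raw):
        return raw.split(",")

    def format(self, items):
        return ",".join(items)

# A unit test written against B's interface:
def test_b_roundtrip():
    b = B()
    assert b.format(b.parse("x,y")) == "x,y"

# After the refactoring: parse() moves into A and format() into C.
# B no longer exists, so test_b_roundtrip() cannot even resolve its
# imports against the new code -- it must be rewritten, not updated.
class A:
    def parse(self, raw):
        return raw.split(",")

class C:
    def format(self, items):
        return ",".join(items)
```

The behavior under test never changed; only the internal structure did. Yet the test suite, being coupled to that structure, has to be thrown away.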
This means that unit tests increase maintenance liabilities because they are less resilient against code changes. Coupling between modules and their tests is introduced! Tests are system modules as well. See ‘Why Most Unit Testing is Waste’ for these points.
There are also some psychological arguments. For example, if you value unit-testability, you would prefer a program design that is easier to test than a design that is harder to test but is otherwise better, because you know that you’ll spend a lot more time writing tests. Some further points can be found in ‘Giving Up on Test-First Development’.
The article ‘Test-induced Design Damage’ by David Heinemeier Hansson claims that to accommodate unit testing objectives, code is worsened through otherwise needless indirection. The question is whether extra indirection and decoupled code are always better. Do they not have a cost? What if you decouple two components that are always used together? Was it worth decoupling them? You can claim that indirection is always worth it, but you cannot dismiss the harder navigation it causes, both inside the code base and at run time.
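One minimal sketch of such test-induced indirection, with hypothetical class names invented for illustration: a signup flow that could call its mailer directly grows an injected abstraction whose only purpose is to let a fake be passed in tests.

```python
# Direct version: straightforward, but "hard to unit test"
# because it constructs its collaborator itself.
class SmtpMailer:
    def send(self, to, body):
        print(f"sending mail to {to}")  # stand-in for real SMTP logic

class SignupDirect:
    def register(self, email):
        SmtpMailer().send(email, "Welcome!")

# Testable version: the mailer is injected purely so a fake can be
# substituted. If SignupService and SmtpMailer are always used
# together, the extra layer mostly adds navigation cost.
class SignupService:
    def __init__(self, mailer):
        self.mailer = mailer

    def register(self, email):
        self.mailer.send(email, "Welcome!")

class FakeMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

def test_register_sends_welcome_mail():
    mailer = FakeMailer()
    SignupService(mailer).register("a@example.com")
    assert mailer.sent == [("a@example.com", "Welcome!")]
```

Whether the injected version is "better design" or needless ceremony is exactly the judgment call the article questions.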
Lean Testing takes an economic point of view and reconsiders the return on investment of unit tests. Consider the confidence a test provides: integration tests offer the best balance between cost, speed, and confidence. Be careful about code coverage, as overly ambitious coverage targets are likely counter-productive. Be skeptical about the code-quality-improving powers of making code unit-testable.
To be clear, I do not advocate never writing unit tests. I hope I have provided a fresh perspective on testing. In a future article, I plan to show how to concretely implement a good integration test for both a frontend and a backend project.
If you desire clear, albeit unnuanced, instructions, here is what you should do: Use a typed language. Focus on integration and end-to-end tests. Use unit tests only where they make sense (e.g. pure algorithmic code with complex corner cases). Be economic. Be lean.
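As an example of where a unit test does make sense under these instructions, consider pure algorithmic code with corner cases. The function below is a hypothetical illustration: no I/O, no collaborators, a stable interface, and boundaries that are easy to get wrong.

```python
def clamp(value, low, high):
    # Pure algorithmic code: restrict value to the range [low, high].
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

def test_clamp_corner_cases():
    assert clamp(5, 0, 10) == 5    # in range
    assert clamp(-1, 0, 10) == 0   # below the range
    assert clamp(11, 0, 10) == 10  # above the range
    assert clamp(0, 0, 0) == 0     # degenerate single-point range
```

A test like this is cheap, fast, and survives refactorings, because the interface it pins down is the one callers actually depend on.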