When unit testing, you can usually trust your technologies and frameworks. It is reasonable to assume that using the standard unit testing tools for a given technology stack will be a net benefit. However, there are times when you might be considering unit testing in scenarios or with technologies that are not as well proven. When exploring this unknown territory it is important to verify that your unit tests are in fact a net benefit. Here are two metrics you can look at:
- Percentage of test failures that are only a problem with the unit test itself, not with the code running in a real environment. This often happens when you are mocking complex interfaces and services.
- Percentage of times that a bug fix or other non-breaking change requires an update to a unit test.
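The two metrics above could be tracked with a small sketch like the following. All names and counts here are hypothetical; in practice the tallies come from manually triaging each test failure and each test update.

```python
def false_positive_rate(test_only_failures: int, real_failures: int) -> float:
    """Share of test failures that were a problem with the test itself,
    not with the code running in a real environment."""
    total = test_only_failures + real_failures
    return test_only_failures / total if total else 0.0


def churn_rate(non_feature_updates: int, feature_updates: int) -> float:
    """Share of test updates forced by bug fixes or other non-breaking
    changes rather than by feature work."""
    total = non_feature_updates + feature_updates
    return non_feature_updates / total if total else 0.0


# Hypothetical release cycle: 3 of 12 failures were mock-related
# artifacts, and 5 of 20 test updates came from non-feature changes.
print(false_positive_rate(3, 9))  # 0.25
print(churn_rate(5, 15))          # 0.25
```

The point is not the arithmetic but the discipline of recording, for every failure and every test edit, which side of the line it fell on.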
The goal is for both of these percentages to be 0%. That would mean every test failure represents a real issue in your code, and only feature changes require updates to your unit tests. As these percentages approach 100%, your unit tests become less and less helpful. But what percentage should act as the cutoff between your unit tests being a net positive and a net negative? We can make two assumptions:
- The effort saved by each test failure that reflects a real code issue exactly offsets the effort wasted on each false positive.
- The effort wasted on each unit test update forced by a non-feature change exactly offsets the effort saved by each unit test update that comes from a feature change.
With these two assumptions in place, 50% for each of the previously mentioned metrics is the cutoff at which your unit tests become a net negative. Both are broad assumptions that might not always hold, and they would need to be estimated on a project-by-project basis according to the technology in place. But they do provide a framework for thinking about the utility of unit testing.
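The break-even reasoning can be made concrete with a small sketch. Under the two assumptions above, each real failure saves one unit of effort and each false positive wastes one unit, so the suite's net value crosses zero exactly at a 50% false-positive rate. The function and unit of effort below are illustrative, not a real accounting model.

```python
def net_benefit(failures: int, false_positive_rate: float,
                effort_per_failure: float = 1.0) -> float:
    """Net effort saved by the suite, assuming a real failure saves
    exactly as much effort as a false positive wastes."""
    real = failures * (1 - false_positive_rate)
    false_pos = failures * false_positive_rate
    return effort_per_failure * (real - false_pos)


print(net_benefit(100, 0.25))  # 50.0  -> net positive
print(net_benefit(100, 0.50))  # 0.0   -> break-even
print(net_benefit(100, 0.75))  # -50.0 -> net negative
```

The same arithmetic applies to the second metric: substitute test updates for failures and the non-feature-change rate for the false-positive rate.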