Automated Regression Code Coverage
We are always looking for easy ways to measure ourselves and what we produce. The dream is to have some automatically calculated metric we can point at and say “this proves we (do not) have quality!”
Enter code coverage. Many people promote this metric as exactly that proof, and it is frequently one of the first things implemented when a company decides to care about quality, so let’s examine the claim.
Unit test code coverage can be good. It encourages people to write tests and to exercise every line of the code under test. Branch coverage can be better, since it also checks which logical paths are taken rather than just which lines run. However, just because tests are being written does not mean they are worthwhile tests (see the sketch below). Using code coverage to encourage test writing needs to go hand in hand with practices such as reviewing or pairing on tests to spread good habits, and a culture that strives for useful tests.
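To make that concrete, here is a minimal Python sketch (the function and test names are invented for illustration). The single test executes every line, so line coverage reports 100%, yet one branch is never taken and the assertion proves very little:

```python
def clamp(value, limit):
    # If value exceeds the limit, cap it; otherwise return it unchanged.
    if value > limit:
        value = limit
    return value


def test_clamp():
    # Every line runs (the if, the assignment, the return), so line
    # coverage reports 100%. Branch coverage would still flag the
    # untaken path where value <= limit, and the assertion is weak
    # enough that a broken clamp could pass.
    assert clamp(10, 5) is not None
```

A green test suite with full line coverage can therefore still hide both an unexercised path and a test that asserts nothing useful.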
Additionally, code coverage is a strange metric. 100% is not a meaningful number - ironically, it tells you very little. On the other hand, 50% is a very useful number, because it points at areas that badly need tests.
Code coverage is at its strongest when it is used as a tool to target areas which need tests. It can measure the absence of quality, but not its presence.
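As a sketch of that targeting workflow, assuming a Python project with coverage.py installed (mymodule and the call below are placeholders for your own code and test run):

```python
import coverage

# branch=True collects branch data as well as line data.
cov = coverage.Coverage(branch=True)
cov.start()

from mymodule import clamp
clamp(10, 5)  # stand-in for running your real test suite

cov.stop()
cov.save()

# Per-file report including missing lines; the low percentages are the
# areas to target with new tests, not a score to celebrate.
cov.report(show_missing=True)
```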
In conclusion, code coverage is not a metric which proves you have quality. It can be useful and has a place within a quality strategy, but take care to treat it as a proof of absence, not a proof of presence.
Note: UAT, regression, and acceptance tests should not be measured this way - think of them as QA tools (see other blog post) that reduce the manual effort required, or free resources from regression work so they can be put towards things like exploratory testing. Their focus is different: not what code is there, but how the code enables users.