Last September, I spoke about enhancing code coverage at STARWEST. My talk was based on ideas that I introduced in Metrics that Resist Gaming and some related posts.
The key points are that metrics should be:
- able to drive decisions.
- combined to make them multi-dimensional.
- based on leading indicators that align with lagging indicators.
I then applied those ideas to code coverage. I combined coverage with code complexity, the location of recent code changes, and usage analytics, and then I stress-tested the covering tests using mutation testing. The idea is that you should care more about coverage when the code is hard to understand, was just changed, or users depend on it heavily. And since coverage is only half of what you need to do to test (i.e. you also need to assert), mutation testing will find where your coverage is meaningless.
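To make the combination concrete, here is a minimal sketch of that prioritization in Python. The file names, the metric inputs, and the multiplicative scoring are my illustrative assumptions, not a prescribed formula; the point is just that low coverage only matters in proportion to complexity, churn, and usage.

```python
from dataclasses import dataclass

@dataclass
class FileMetrics:
    path: str
    coverage: float      # 0.0..1.0, fraction of lines covered
    complexity: int      # e.g. max cyclomatic complexity in the file
    recent_changes: int  # commits touching the file in the last 30 days
    usage: float         # 0.0..1.0, relative traffic from analytics

def risk_score(m: FileMetrics) -> float:
    """Weight uncovered code by how complex, how recently changed,
    and how heavily used it is (hypothetical weighting)."""
    uncovered = 1.0 - m.coverage
    return uncovered * m.complexity * (1 + m.recent_changes) * m.usage

files = [
    FileMetrics("billing/invoice.py", coverage=0.55, complexity=18,
                recent_changes=7, usage=0.9),
    FileMetrics("util/strings.py", coverage=0.40, complexity=3,
                recent_changes=0, usage=0.1),
]

# The simple, stable, rarely-used helper scores far lower than the
# complex, recently-changed, heavily-used billing code, even though
# its raw coverage number is worse.
for m in sorted(files, key=risk_score, reverse=True):
    print(f"{m.path}: {risk_score(m):.1f}")
```

Whether you multiply, weight, or bucket these signals is a judgment call; what matters is that no single dimension can be gamed in isolation.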
As a bonus fifth enhancement, I talked about making sure you are actually getting the business results of better testing. For that, I spoke about DORA, and specifically the metrics that track the change failure rate and the mean time to recovery from a failed deployment.
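As a rough illustration of those two DORA metrics, here is a small sketch over a hypothetical list of deployment records; a real pipeline would pull these from your CI/CD or incident tooling instead.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (deployed_at, failed, recovered_at)
deployments = [
    (datetime(2024, 9, 2, 10, 0), False, None),
    (datetime(2024, 9, 3, 14, 0), True, datetime(2024, 9, 3, 16, 30)),
    (datetime(2024, 9, 5, 9, 0), False, None),
    (datetime(2024, 9, 6, 11, 0), True, datetime(2024, 9, 6, 11, 45)),
]

failures = [(d, r) for d, failed, r in deployments if failed]

# Change failure rate: fraction of deployments that caused a failure.
change_failure_rate = len(failures) / len(deployments)

# Mean time to recovery: average time from failed deploy to recovery.
mttr = sum((r - d for d, r in failures), timedelta()) / len(failures)

print(f"Change failure rate: {change_failure_rate:.0%}")  # 50%
print(f"MTTR: {mttr}")                                    # 1:37:30
```

If the coverage improvements are real, you would expect both of these numbers to trend down after the testing work lands.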