Setting automated test coverage as a goal is, at best, misguided. Automated test coverage is useful as a strategy or as a diagnostic metric; using it as a goal, however, is idiotic and will lead to waste and the wrong behaviour.
For any IT system, there are three options for testing:
- Automated tests
- Manual tests
- No tests
Let's pop the why stack on automated tests. Automated tests are faster and more reliable than manual tests. Automated and manual tests are normally safer than no testing at all. So the reasons for automated tests are:
- Reduced lead time.
- Reduced variability in lead time.
- Lower probability of a production incident.
Our goal should be to improve one of these metrics, normally to reduce lead time. Lead time and automated test coverage are correlated. If you attempt to reduce lead time, one of the strategies you are likely to apply is to increase automated test coverage. As such, automated test coverage is an excellent diagnostic metric to help the team identify ways to reduce their lead time.
There is not a causal relationship between automated test coverage and lead time, though. Increasing automated test coverage does not automatically reduce lead time. Many years ago I worked on a system with no automated test coverage. Management imposed a 100% test coverage goal for all systems. Everyone on the project stopped working on anything else and spent a few days writing tests. As the business analyst, I was given a list of classes and told how to write standard tests for each method to ensure the test coverage tool would register us as meeting our 100% target. We achieved 100% automated test coverage but no improvement in lead time or anything else. The activity generated no benefit to the organisation; it was pure waste.
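To illustrate the kind of coverage-gaming this produces, here is a minimal hypothetical sketch (the class and test names are invented, not from the original project). The first test simply executes every line of the method, so a coverage tool reports it as fully covered, yet it asserts nothing; a bug in the method would still pass. The second test is what a genuinely useful test looks like.

```python
class InvoiceCalculator:
    """Toy class standing in for the untested production code."""

    def total(self, amounts, tax_rate):
        subtotal = sum(amounts)
        return subtotal * (1 + tax_rate)


def test_total_for_coverage_only():
    # Executes every line of total(), so coverage tools count it as
    # 100% covered -- but nothing is checked, so any bug slips through.
    InvoiceCalculator().total([10, 20], 0.5)


def test_total_with_real_assertion():
    # A useful test: verifies actual behaviour, not just execution.
    assert InvoiceCalculator().total([10, 20], 0.5) == 45.0
```

Both tests move the coverage number by the same amount; only one of them reduces the probability of a production incident.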
If you set reducing lead time as a goal, you will likely see an increase in automated test coverage. If you set increased automated test coverage as a goal, it is possible you will see no benefit at all.
January 9th, 2019 at 1:21 pm
Ooh! How I like your topic, and I feel you're on a good track! Still, I would move to a different PoV and emphasise the fact that testing by itself aims just to offer information for supporting decisions. This would in turn support the lead time you mention. Lead time alone is important, but without the right information that lead time might be leading in the wrong direction or producing the wrong outcomes in the absence of good and reliable information. Automation alone can speed up information extraction, but it can lower the level of understanding of the information's relevance, or even provide false information, instilling a false sense of safety.
Also, your three options (automated / manual / no testing) are not mutually exclusive and can be blended in various proportions for various areas.
In the end, I really like your closing line, showing that automation coverage can be at best a proxy measure, not a goal in itself.