Many development teams and projects use code coverage - that is, the proportion of lines of code exercised by automated tests - as an objective, decreeing that it must be 100% or some other percentage.
But is this an effective metric?
Just as a failing pipeline can be "fixed" by deleting the failing tests, a code coverage figure can be faked: a test that executes code without asserting anything about it still counts every line it touches as covered.
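As a sketch of how that gaming works (the discount function and its test are hypothetical examples, not from any real project), the "test" below exercises every line of the code under test, so a coverage tool would report 100% - yet it verifies nothing and can never fail:

```python
def apply_discount(price, percent):
    # Hypothetical production code "under test".
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

def test_apply_discount():
    # Executes both branches, so every line is "covered" -
    # but there are no assertions, so nothing is checked.
    apply_discount(100, 10)
    try:
        apply_discount(100, 150)
    except ValueError:
        pass

test_apply_discount()
```

A coverage report is happy with this; a reviewer should not be. The number tells you code was run, not that its behaviour was checked.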
With this in mind, what if, instead of setting an objective such as 100% code coverage, you used it as a guideline?
If you’re working on a legacy project, what if you set the current coverage level as a minimum, so that any new code has to come with tests to avoid dropping below it?
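As one way that guideline might look in practice - assuming a Python project using coverage.py, and with 75% as a purely illustrative baseline - a floor can be set in configuration so the build fails only when coverage drops below the agreed level, rather than demanding 100%:

```ini
# .coveragerc - illustrative fragment, assuming coverage.py.
# "coverage report" exits non-zero if total coverage falls
# below this baseline, so untested new code breaks the build.
[report]
fail_under = 75
```

The baseline can then be ratcheted up over time as tests are added, rather than imposed all at once.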
Would that be better than saying every line of code needs to be covered?
Code coverage is something I’m thinking of using more, so I want to know what you think.