

What constitutes test coverage?

As practitioners, we struggle when customers ask us about test coverage and the number of test cases written to test an application. The client's intent behind this question is to gauge the confidence we have in the quality of testing achieved through the test cases and their coverage.

Most of the time, we start by looking at requirement coverage and associate our level of confidence with the traceability of requirements to test cases.

This is a decent measure of the minimum testing required for an application, as we can find out whether one or more requirements have not been mapped to any test case, and hence have not been tested at all. While it ensures that every specified requirement is tested for basic functionality, it does not guarantee depth of coverage within a particular functionality. The approach leaves multiple questions unanswered: how would you determine that the application has been tested for all possible configuration permutations and combinations? How would you find out whether the application has been tested for end-to-end scenarios, covering all possible options? How would you gauge the amount of negative testing done?
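To make the gap concrete, here is a minimal sketch of such a requirements-to-test-case traceability check, assuming both are available as simple ID mappings (all IDs below are illustrative placeholders):

```python
# Minimal traceability check: find requirements not mapped to any test case.
# The requirement and test case IDs are illustrative placeholders.

requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

# Each test case lists the requirement IDs it claims to cover.
test_cases = {
    "TC-01": ["REQ-001"],
    "TC-02": ["REQ-001", "REQ-002"],
    "TC-03": ["REQ-004"],
}

covered = {req for reqs in test_cases.values() for req in reqs}
unmapped = requirements - covered

print(f"Requirement coverage: {len(covered)}/{len(requirements)}")
print("Untested requirements:", sorted(unmapped))  # ['REQ-003']
```

A check like this flags requirements that were never tested at all, but it says nothing about how deeply each mapped requirement is exercised, which is exactly the gap described above.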

While discussing this topic with some colleagues of mine, one of them asked if there is a benchmark available for the number of test cases to be written for an application, based on the number of lines of code or requirements. This got me thinking. I was not aware of any such benchmark; however, I started considering the possibility of relating it to estimation modeling, and using that to estimate the number of test cases (and scenarios) one should expect, based on the requirements, the type of application and the phases of testing. This could be the closest way of determining the extent of testing and test case writing required. Of course, this works only if you have great confidence in your estimation modeling technique and the internal benchmarks it uses.
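To illustrate the idea (and only to illustrate it, since I know of no published benchmark), here is a sketch of how an internal estimation model might derive an expected test case count from function points. Every factor below is a made-up placeholder, to be replaced by your own internal benchmarks per application type and test phase:

```python
# Illustrative estimation sketch: deriving an expected test case count
# from function points. All factors are placeholder assumptions, not
# published benchmarks.

def estimate_test_cases(function_points, cases_per_fp=1.2,
                        phase_multipliers=None):
    """Spread a base test case estimate across test phases."""
    if phase_multipliers is None:
        phase_multipliers = {"system": 1.0, "integration": 0.6, "regression": 0.4}
    base = function_points * cases_per_fp
    return {phase: round(base * mult) for phase, mult in phase_multipliers.items()}

print(estimate_test_cases(250))
# {'system': 300, 'integration': 180, 'regression': 120}
```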

However, confidence in the quality of testing does not depend only on test coverage and the number of test cases written. It is also determined by the quality and quantity of defects the testing team is able to catch. Any number of test cases can be written; the only way to determine the quality of testing is to relate it to the defects raised. There are various defect prediction models available that can help you estimate or predict the number of defects for a particular phase of testing. The most popular techniques relate the defect count to the size of the code, function points or overall development effort. Of course, one also needs to assess the maturity of the processes adopted in the development life cycle to gauge the quality and quantity of defects flowing into the testing phase.
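As a rough illustration, the simplest of these models applies an assumed defect density to code size and discounts the defects already removed upstream. The density and containment rate below are placeholder assumptions, not industry figures:

```python
# Simple size-based defect prediction sketch. The defect density and the
# upstream containment rate are illustrative assumptions.

def predict_defects(kloc, defects_per_kloc=15.0, found_before_testing=0.5):
    """Estimate the number of defects expected to flow into the testing phase."""
    injected = kloc * defects_per_kloc              # total defects introduced
    return injected * (1.0 - found_before_testing)  # those left for testing

# A 40 KLOC application under these assumptions:
print(predict_defects(40))  # 300.0 defects expected in testing
```

Comparing the defects actually raised against such a prediction is one way to judge whether testing is digging deep enough.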

Based on this, we can conclude that to ensure complete test coverage we need an effective estimation model coupled with a solid defect prediction technique. Using the estimation model, we can determine the number of test cases required and hence the amount of test preparation needed. Using the defect prediction model, we can determine the extent of testing to be done. Together, these two bring confidence to our testing and also act as lead indicators of the application's quality.


Keep on posting, Rajneesh. It's a good thing to exchange thoughts!

Two comments:

Is it the number of bugs raised that says anything about the quality of testing? Or is it the importance of the bugs that are being raised?

And second, the number of test cases doesn't relate to the quality of testing (as you correctly stated). Then why start by modelling the number of test cases required? Why not start the actual testing?

It's my personal experience that customers do not actually care how many test cases we have written and run, nor do they care how many primary or secondary defects we've uncovered. Customers who ask for these things only do so because they have been trained to ask for them. These statistics give them a false assurance of product health and assure them that everyone is busy and giving them a good return on their investment. It's not necessarily a bad thing to communicate, but I find that if you put in a little work you can usually find a more cost-effective way to communicate the information they need.

I agree, it is interesting when clients ask for 6,000 test cases for an application. We need to determine what they are really asking for. Confidence? Test quality? Code quality? Or requirements delivered?

An aspect of coverage that could be indicative of quality is code coverage: execute the test set and use a coverage tool to determine how many lines of code, decisions and so on were exercised.

This would give an objective measure of test coverage; of course, it doesn't mean that the requirements were delivered, even if 100% coverage was achieved.
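For instance, in a Python codebase this measurement could be driven programmatically with coverage.py; `my_module` and its `run_all_tests` function are hypothetical stand-ins for the system under test and its test suite:

```python
# Sketch of measuring statement and branch coverage with coverage.py
# (pip install coverage). my_module and run_all_tests are hypothetical.
import coverage

cov = coverage.Coverage(branch=True)  # track decisions/branches, not just lines
cov.start()

import my_module                      # imported after start() so it is measured
my_module.run_all_tests()

cov.stop()
cov.save()
cov.report(show_missing=True)         # per-file percentages plus unexecuted lines
```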

If it is the quality of testing that is in question, and not the quality of the code... we could also look at fault seeding tools, to see whether the tests pick up injected code defects.
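The classic fault-seeding estimate works roughly like the sketch below: the recapture rate on the seeded defects is used to extrapolate how many genuine defects the code holds (all counts are invented for illustration):

```python
# Fault-seeding sketch: seed known defects, run the test suite, and use
# the recapture rate to estimate the genuine defects in the code.

def estimate_total_defects(seeded, seeded_found, real_found):
    """Mills-style estimate: real_total ~= real_found * seeded / seeded_found."""
    if seeded_found == 0:
        raise ValueError("No seeded defects found; the tests tell us little.")
    return real_found * seeded / seeded_found

# Seed 20 defects; the tests find 15 of them plus 30 genuine defects:
print(estimate_total_defects(seeded=20, seeded_found=15, real_found=30))
# 40.0 genuine defects estimated in total, so roughly 10 still undiscovered
```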

It all depends on the context and objective of testing.

What would be truly helpful is getting test coverage tools to work in a production environment. This would allow us to identify the most heavily used program logic paths, which would then allow testers to focus on the key areas of an application. Does anyone know if such tools exist?
