Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of this profession.


June 8, 2010

What constitutes test coverage?

As practitioners, we struggle when customers ask us about test coverage and the number of test cases written to test an application. The intent behind this question is to check the confidence we have in the quality of testing achieved through the test cases and their coverage.

Most times, we look at requirements coverage and associate our level of confidence with the traceability of requirements to test cases.

This is a decent measure of the minimum testing required for an application: we can find out if one or more requirements have not been mapped to any test case, and hence not been tested at all. But while it ensures that every specified requirement is tested for basic functionality, it does not guarantee depth of coverage within a particular piece of functionality. The approach leaves multiple questions unanswered: how would you determine that the application has been tested for all possible configuration permutations and combinations? How would you find out whether it has been tested for end-to-end scenarios, covering all possible options? How would you gauge the amount of negative testing done?
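To make the traceability check itself concrete, here is a minimal sketch in Python. The requirement IDs, test case IDs, and the mapping are invented for illustration; in practice a team would pull these from its requirements and test management tools:

```python
# A minimal sketch of a requirements-to-test-case traceability check.
# IDs and the mapping are illustrative, not from any real project.

requirements = ["REQ-001", "REQ-002", "REQ-003", "REQ-004"]

# Traceability matrix: requirement -> test cases that exercise it
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-004": [],  # mapped entry exists, but no test cases written yet
}

uncovered = [req for req in requirements if not traceability.get(req)]
coverage = (len(requirements) - len(uncovered)) / len(requirements)

print(f"Requirements coverage: {coverage:.0%}")   # -> 50%
print(f"Untested requirements: {uncovered}")      # -> ['REQ-003', 'REQ-004']
```

This catches only the "not tested at all" case; as noted above, it says nothing about how deeply each covered requirement has been exercised.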

While discussing this topic with colleagues of mine, one of them asked if there is a benchmark available for the number of test cases to be written for an application, based on the number of lines of code or requirements. This got me thinking. I was not aware of any such benchmark; however, I started thinking about the possibility of relating it to estimation modeling, and using that to estimate the number of test cases (and scenarios) one should expect based on the requirements, the type of application, and the phases of testing. This could be the closest way of determining the extent of testing and test case writing required. Of course, this works only if you have great confidence in your estimation modeling technique and the internal benchmarks it uses.
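As a sketch of what such an estimation model might look like, consider the toy calculation below. Every factor in it (cases per requirement, complexity multipliers, phase weights) is a hypothetical placeholder for an organization's own calibrated benchmarks, not an industry number:

```python
# Illustrative sketch: estimating test case counts from requirements.
# All factors below are assumed values standing in for internally
# calibrated benchmarks; they are not industry standards.

BASE_CASES_PER_REQUIREMENT = 4        # assumed average from project history

COMPLEXITY_FACTOR = {"simple": 0.75, "medium": 1.0, "complex": 1.5}

PHASE_FACTOR = {                      # assumed share of cases per test phase
    "system": 1.0,
    "integration": 0.6,
    "regression": 0.4,
}

def estimate_test_cases(num_requirements: int, complexity: str, phase: str) -> int:
    """Rough estimate of test cases needed for one phase of testing."""
    return round(num_requirements
                 * BASE_CASES_PER_REQUIREMENT
                 * COMPLEXITY_FACTOR[complexity]
                 * PHASE_FACTOR[phase])

# e.g. 120 medium-complexity requirements, system test phase
print(estimate_test_cases(120, "medium", "system"))   # -> 480
```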

However, confidence in the quality of testing does not come from test coverage and the number of test cases alone. It is also determined by the quality and quantity of defects the testing team is able to catch. Any number of test cases can be written; the real gauge of testing quality is the number of defects raised against them. Various defect prediction models can help you estimate the number of defects expected in a particular testing phase. The most popular techniques relate defect counts to the size of the code, function points, or overall development effort. Of course, one also needs to account for the maturity of the processes adopted in the development life cycle to gauge the quality and quantity of defects flowing into the testing phase.
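As a toy example of the size-based approach, a prediction might combine code size with a process-maturity adjustment. The density and maturity figures below are invented for illustration; real models are calibrated against an organization's own defect history:

```python
# Illustrative sketch of a size-based defect prediction model.
# The baseline density and maturity adjustments are made-up numbers;
# a real model is calibrated from historical project data.

ASSUMED_DEFECTS_PER_KLOC = 5.0   # hypothetical baseline injection rate

MATURITY_ADJUSTMENT = {          # more mature processes leak fewer defects
    "ad_hoc": 1.4,
    "defined": 1.0,
    "optimized": 0.6,
}

def predict_defects(kloc: float, process_maturity: str) -> int:
    """Predict defects expected to flow into the testing phase."""
    return round(kloc * ASSUMED_DEFECTS_PER_KLOC
                 * MATURITY_ADJUSTMENT[process_maturity])

print(predict_defects(80, "defined"))    # -> 400
print(predict_defects(80, "optimized"))  # -> 240
```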

Based on this, we can conclude that to gain real confidence in test coverage we need an effective estimation model coupled with a solid defect prediction technique. Using the estimation model, we can determine the number of test cases required and hence the amount of test preparation needed. Using the defect prediction model, we can determine the extent of testing still to be done. Together, these two bring confidence to our testing and also work as lead indicators of the application's quality.

June 2, 2010

Agile Test Data Administration - let's crack the code

In my last post (http://www.infosysblogs.com/testing-services/2010/05/emerging_areas_of_testing.html) I summarized a few of the emerging trends within the testing arena. Now I want to take a specific thread from those trends and elaborate a little more on "Agile Test Data Administration".

If there is one thing that resonates again and again within the testing space, it is QA teams and administrators wanting to get their arms around test data: to gain confidence in the preparation and usage of test data within the testing lifecycle, and to maximize efficiency gains by moving towards an agile way of administering and using it.

Let's consider a typical workflow that test data goes through within the testing lifecycle. For ease of understanding, I have split this into two chunks of activities:

1. Test data preparation

2. Test data usage

Test data preparation includes manufacturing the data (either "copy" or "create"), then de-sensitizing and masking it, and then, more often than not, provisioning it into multiple testing environments, which brings in data sub-setting and multi-environment provisioning. The core skills for this chunk of activities align with a DBA role. Consider also that once the test data is manufactured it has to be maintained, which again points at a DBA role, given the need to manage integrity, data quality, and relationships.
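As a small illustration of this preparation chunk, the sketch below masks sensitive columns in a copied data set and then subsets it for a target environment. Field names and the masking scheme are hypothetical, not any specific tool's behavior:

```python
import hashlib

# Minimal sketch of the "prepare" chunk: de-sensitize copied production
# rows, then subset them for a target test environment. Field names and
# the masking scheme are illustrative only.

def mask_row(row: dict) -> dict:
    """Replace sensitive fields with deterministic, non-reversible values."""
    masked = dict(row)
    masked["ssn"] = hashlib.sha256(row["ssn"].encode()).hexdigest()[:9]
    masked["name"] = f"Customer-{row['client_id']}"
    return masked

def subset(rows: list[dict], state: str, limit: int) -> list[dict]:
    """Carve out a small, environment-sized slice of the masked data."""
    return [r for r in rows if r["state"] == state][:limit]

prod_copy = [
    {"client_id": 1, "name": "John Doe", "ssn": "123-45-6789", "state": "CA"},
    {"client_id": 2, "name": "Jane Roe", "ssn": "987-65-4321", "state": "NY"},
]

test_env_data = subset([mask_row(r) for r in prod_copy], state="CA", limit=100)
print(test_env_data)
```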

With test data usage, the focus suddenly shifts to a tester, who may or may not be database savvy; the tester nonetheless has to use the data for testing to proceed and complete. So the tester converts "test conditions" into "data conditions", maps those data conditions to accurate, physically available data in a test data set or environment, and on most occasions needs "data reservation" and "traceability" to ensure safe passage of the combo unit (test case + data).
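As a sketch of that first translation, a tester's plain-English test condition might become a machine-checkable data condition like the one below. The predicate and sample rows are invented for illustration:

```python
# Sketch: translating a "test condition" into a "data condition" that can
# be matched against physically available rows in a test data set.
# The predicate, thresholds and sample rows are illustrative.

test_condition = "Client near retirement age, in CA, with a recent claim"

def data_condition(row: dict) -> bool:
    """Machine-checkable equivalent of the test condition above."""
    return (row["state"] == "CA"
            and row["age"] >= 64
            and row["days_since_claim"] <= 14)

test_data_set = [
    {"client_id": 7, "state": "CA", "age": 64, "days_since_claim": 3},
    {"client_id": 9, "state": "CA", "age": 40, "days_since_claim": 5},
]

matches = [r for r in test_data_set if data_condition(r)]
print(matches)  # -> only the client_id 7 row qualifies
```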

The key missing link that almost all QA teams struggle to bridge is the efficiency lost in translation between the DBA and the tester. Both the DBA and the tester, in their individual capacities, can function and reach a logical, tangible end point; converting a DBA's output into a tester's input, and vice versa, is what has been begging for resolution.

Consider an example: Tester A needs all John Does living in the state of CA who are about to reach retirement age and have filed a claim in the last two weeks. The tester's output is in plain English and can be documented. Now it is up to the DBA to search client Table A, relate client age from Table B using Table A's index, and use the common index to search Table C for claims in the last two weeks.
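In database terms, the DBA's side of that translation might look like the sketch below, run here against SQLite, with invented table and column names standing in for Table A, B, and C:

```python
import sqlite3

# Sketch of the DBA's translation of the tester's plain-English request
# into a query. Table/column names (client, client_detail, claim) are
# hypothetical stand-ins for the post's Table A, B and C.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE client        (client_id INTEGER PRIMARY KEY, name TEXT, state TEXT);
    CREATE TABLE client_detail (client_id INTEGER, age INTEGER);
    CREATE TABLE claim         (client_id INTEGER, filed_date TEXT);

    INSERT INTO client        VALUES (1, 'John Doe', 'CA'), (2, 'John Doe', 'NY');
    INSERT INTO client_detail VALUES (1, 64), (2, 64);
    INSERT INTO claim         VALUES (1, date('now', '-3 days'));
""")

candidates = conn.execute("""
    SELECT DISTINCT c.client_id, c.name
    FROM client c
    JOIN client_detail d ON d.client_id = c.client_id   -- Table B via A's index
    JOIN claim cl        ON cl.client_id = c.client_id  -- Table C via same index
    WHERE c.name = 'John Doe'
      AND c.state = 'CA'
      AND d.age BETWEEN 63 AND 65                       -- "nearing retirement"
      AND cl.filed_date >= date('now', '-14 days')      -- claims in last 2 weeks
""").fetchall()

print(candidates)  # -> [(1, 'John Doe')]
```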

Assuming the DBA search returns 6 unique candidates, the DBA will furnish these in the tester's environment and allow the tester to pick and choose for usage. Let us consider a few things:

Question: Is this the only request for such data from the entire testing team? Is there re-use? If so, how?

Question: Will Tester A consume all 6 candidates? Again, if testers are re-using data, how do we ensure that there is no conflict?

Question: Let us assume Tester A does consume all 6 candidates in test iteration 1. What will the tester do for the second iteration, when the same data is needed? Copy from PROD again?

Question: How does Tester A associate the 6 chosen data candidates with each of his/her test cases? How do we achieve traceability? And how do we ensure reverse traceability to the data, if indeed we are able to engineer efficient auto-creation of data?
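One way to picture the reservation and traceability questions together is a small registry that ties each data candidate to exactly one test case and refuses double bookings. This is a sketch of the idea only, not a description of any particular tool:

```python
# Sketch of a test data reservation registry: each data candidate can be
# reserved by at most one test case, and the mapping doubles as the
# traceability record (test case <-> data). Illustrative only.

class DataReservation:
    def __init__(self):
        self._reservations: dict[int, str] = {}  # candidate_id -> test_case_id

    def reserve(self, candidate_id: int, test_case_id: str) -> bool:
        """Reserve a candidate; fail if another test case already holds it."""
        if candidate_id in self._reservations:
            return False
        self._reservations[candidate_id] = test_case_id
        return True

    def trace(self, test_case_id: str) -> list[int]:
        """Reverse traceability: which data candidates back this test case?"""
        return [cid for cid, tc in self._reservations.items()
                if tc == test_case_id]

registry = DataReservation()
print(registry.reserve(1, "TC-201"))  # True  - Tester A books candidate 1
print(registry.reserve(1, "TC-305"))  # False - conflicting re-use is blocked
print(registry.trace("TC-201"))       # [1]
```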

The answers to most or all of these questions contain the synergies that can introduce agility into the "Test Data Administration" process. While we collectively ponder some of these questions, I would like to set the pretext for my next post, where I will dwell on some of the possible answers the testing industry can consider:

1. Federated Test Data relationship management

2. Collaborative Test data provisioning

3. Auto creation of Test data by the Tester

4. Test case - Data, traceability and reverse engineering.

Signing off for now; I will post again soon and hope to interact with you in this space.