

Transforming 'Testing Program Metrics' to 'Business Program Metrics'

Software program metrics are a necessary evil today. Be it quality, productivity, efficiency, coverage, scope, or defects, organizations need to collect these data points to measure and analyze the success of any given program. While this sounds straightforward, it is not. Most of the time, the data collection exercise remains just that, an exercise, with very limited time spent on analysis to identify clear areas of improvement that ensure program or project success. So how do you ensure that testing metrics are defined correctly and, more importantly, are linked to business outcomes, so that all parties involved take them seriously?


For the purpose of this discussion, let's take the example of defining metrics for a QA/testing program. While defining the program metrics for a QA project, the following factors need to be taken into consideration:


· Number of Metrics: The number of metrics needs to be apt for the size of the project.

· The Larger Purpose: Each metric needs to serve a larger purpose of improvement, rather than existing for its own sake.

· Informative: Do the defined metrics give meaningful information to the stakeholders?

· Stakeholder: Is each metric mapped to one or more stakeholders?

· Systems & Tools: Does the project team have the tools and systems in place to capture the data needed to compute the metrics?

· Benchmark: Does each metric have a benchmark?

· Definition & Measurement: Do the definition and measurement methodology of each metric remain constant throughout the project?

· SLA: Do the metrics cover the critical SLAs of the project?

· Project Success: Are the success criteria of the project mapped to the metrics?

· Stakeholder Satisfaction: Is every stakeholder involved in the project satisfied with at least one metric?
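
To make the checklist above concrete, here is a minimal sketch of how a metric definition could be captured in a structured form, assuming a simple Python record; the field names are illustrative and are not taken from any specific metrics tool or standard.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MetricDefinition:
    # Illustrative record of a single testing program metric; field names
    # mirror the checklist above and are assumptions, not a fixed schema.
    name: str                          # e.g. "Test Case Execution Productivity"
    purpose: str                       # the larger improvement the metric serves
    stakeholders: List[str]            # who consumes and acts on the metric
    data_source: str                   # system or tool that captures the raw data
    formula: str                       # how the value is computed
    benchmark: Optional[float] = None  # agreed target or reference value, if any
    sla_linked: bool = False           # does it cover a critical project SLA?
    success_criterion: str = ""        # how it maps to project success

    def is_actionable(self) -> bool:
        # A metric with no stakeholder or no benchmark is hard to act on.
        return bool(self.stakeholders) and self.benchmark is not None

Keeping every metric in a record like this makes it easy to spot metrics that have no owner, no benchmark, or no link to project success before the program starts.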


After addressing these questions, the next recommended step is to convert the testing program metrics into business program metrics. This can be a challenging task, but it is very important and relevant to the stakeholders involved, especially the client manager and the IT manager, for ensuring the success of the QA program in question. The objective of collecting any metric is to track and analyze the data and use that analysis to drive further improvements. For this to be achieved, each testing metric needs to be linked to a tangible business benefit of the project.


To illustrate how testing program metrics can be linked to business benefit outcomes, let us consider the example of the "Test Case Execution Productivity" program metric.


Definition: To measure the # of test cases executed in a cycle by a QA team during a defined time period

Usage: Measures the # of test cases that can be completed by the team

Formula: # of test cases executed / total time spent in execution

Trend/Analysis: A high value for this metric signifies a good understanding of the application and domain, and the possibility of shorter execution time

Business Value: Indicates faster time to market
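
As a quick illustration of the formula above, the sketch below computes the metric from two assumed inputs (test cases executed and hours spent in execution); the function name and the sample numbers are hypothetical.

def execution_productivity(cases_executed: int, hours_spent: float) -> float:
    # Formula from above: # of test cases executed / total time spent in
    # execution (expressed here as test cases per hour).
    if hours_spent <= 0:
        raise ValueError("execution time must be positive")
    return cases_executed / hours_spent

# Example (hypothetical numbers): 240 test cases executed over a 40-hour cycle
print(execution_productivity(240, 40.0))  # -> 6.0 test cases per hour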


In order to link the test case execution productivity metric to a business benefit, the following points need to be considered and addressed:


1. What is the stakeholder's definition of test case execution productivity?

2. Is the client or stakeholder really interested in the # of test cases executed?

3. How many test cases should be written for a requirement or scenario?

4. How many of those test cases are mapped to the complexity of the scenario?

5. What if the number of test cases executed is high, but the defect count is low?

6. What is the test case execution productivity when the defect count is higher than expected?

7. What if productivity is high but the coverage of the functionality is low?

8. Is there a tool to increase productivity (and is the team using it, or focusing on manual execution)?

9. Can risk-based testing be introduced during execution instead of focusing only on the end result of high productivity?

10. Is the scenario coverage during execution mapped to the test coverage/test execution productivity? Is there a close correlation between them? (A rough sketch of reporting these measures together follows this list.)
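
To reinforce the point behind questions 5, 7 and 10, here is a rough sketch that reports execution productivity alongside requirement coverage and defect yield, so the metric is never read in isolation; the record fields and sample numbers are illustrative assumptions, not part of the original metric definition.

from dataclasses import dataclass

@dataclass
class CycleResult:
    cases_executed: int
    hours_spent: float
    requirements_covered: int   # requirements touched by the executed test cases
    requirements_total: int     # requirements in scope for the cycle
    defects_found: int

def cycle_summary(result: CycleResult) -> dict:
    # Report productivity together with coverage and defect yield so that a
    # high execution rate cannot hide low functional coverage (illustrative).
    productivity = result.cases_executed / result.hours_spent
    coverage = result.requirements_covered / result.requirements_total
    defect_yield = result.defects_found / result.cases_executed
    return {
        "productivity_per_hour": round(productivity, 2),
        "requirement_coverage": round(coverage, 2),
        "defect_yield": round(defect_yield, 3),
    }

# Example (hypothetical numbers): high productivity, but only 60% requirement coverage
print(cycle_summary(CycleResult(240, 40.0, 30, 50, 12)))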


Depending on the type of project and the nature of the technology involved, even a simple testing program metric like test case execution productivity can reveal a lot from a measurement standpoint within a project.


To summarize, the detailed analysis of any testing program metric should lead to a business benefit and value ("time to market" in this case) to ensure the success of the program. Without this linkage, adding metrics becomes a theoretical exercise that delivers no real value to the overall project or to the stakeholders involved.
