

Why Do Testing Metrics Programs Fail?

Test Management Guide Series - Topic 3


1.0  Introduction


We all know that statistical data is collected for metrics analysis to measure an organization's success and to identify continuous improvement opportunities, yet not many organizations succeed in meeting this objective.

Senior management loves to see metrics but fails to understand why teams cannot produce meaningful metric trends with clear, actionable steps for continuous improvement. It was surprising to see how many client senior executives asked me for help in establishing meaningful metrics programs for their organizations. Several questions popped into my mind: what makes a testing metrics program such a challenge? We collect and present tons of data, so what exactly is missing? Even though everyone right up to the CIO wants a quick view of various metrics to understand the progress made, why do so many testing organizations fail to produce them in a meaningful way? Metric definitions and techniques are available from multiple sources, so why do organizations still struggle to collect, analyze, and report them?

After reflecting on these questions, I asked myself a few fundamental ones and started reviewing several metrics reports. I was not surprised to find the following:

· Most testing metrics reports were internally focused; they tried to show how well the testing organization was operating, with no actionable next steps

· Metrics reports contained several months of data with minimal analysis findings and no actionable items for each stakeholder

· 99% of the action items identified related only to actions taken by the testing organization

2.0  Revisiting the Golden Rules of Testing Metrics Management


While conducting a detailed study of these reports, I felt the need to re-establish a few golden rules of metrics management that we all know but often fail to implement.

Rule #1: The metrics program should be driven and reviewed by executive leadership at least once a quarter. In reality, very few organizations have a CIO dashboard for understanding the reasons behind quality and cost issues, despite repeated escalations from business partners about production defects, performance issues, and delayed releases.

a) Metrics collection is an intense activity that needs the right data sources and acceptance from the entire organization (not just testing). Unless it is driven by senior leadership, metrics collection will yield limited actionable items

b) It requires alignment across all IT and business organizations to collectively make improvements

Rule #2: Ensure all participating IT and business organizations are aligned with the metrics program. Common disconnects include:

· The testing organization shows that it met 100% of its schedule adherence milestones, yet the project was still delayed by six months

· Testing effectiveness is reported at 98%, while production defects drive a couple of million dollars' worth of production support activity, caused by issues outside the testing organization's control

· Business operations and UAT teams are focused on current projects, while metrics provide trending information; all teams should be aware of the improvement targets set for the current year or current release


Rule #3: Ensure the necessary data sources are available and that data collection processes are in place to guarantee uniform data collection.

Uniform standards are not followed across IT groups, which makes data collection a challenge:

· Effort data is almost always erroneous

· Schedules change multiple times over the life cycle of a project

· Defect resolution is a constant source of conflict

· Defect management systems are not uniformly configured (a normalization sketch follows this list)

· There is no infrastructure or process to view and analyze production defects

· More than 40% of defects are due to non-coding issues, but the defect management workflow does not capture this

· Requirement defects are identified in the testing phase, making them up to ten times more expensive to fix, yet there is no means to identify and quantify these trends
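As one illustration of Rule #3, here is a minimal Python sketch of normalizing defect severities exported from two differently configured defect trackers onto a single scale before any metrics are computed. The tracker names, field names, and severity vocabularies are all assumptions for the example, not any specific tool's configuration.

```python
# Hypothetical mapping tables: each defect tracker uses its own severity
# vocabulary, so records are translated to one uniform scale up front.
SEVERITY_MAP = {
    "tracker_a": {"1-Critical": "Critical", "2-High": "High",
                  "3-Medium": "Medium", "4-Low": "Low"},
    "tracker_b": {"Blocker": "Critical", "Major": "High",
                  "Minor": "Medium", "Trivial": "Low"},
}

def normalize(defects, source):
    """Return defect records with a uniform 'severity' field added."""
    mapping = SEVERITY_MAP[source]
    return [{**d, "severity": mapping[d["raw_severity"]]} for d in defects]

team_a = normalize([{"id": 101, "raw_severity": "1-Critical"}], "tracker_a")
team_b = normalize([{"id": 202, "raw_severity": "Blocker"}], "tracker_b")
print(team_a + team_b)  # both rows now share one severity vocabulary
```

Once every group's raw data lands on the same scale, trending and cross-team comparison become mechanical rather than a negotiation.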

Rule #4: Ensure metrics are trended and analyzed to identify areas of improvement. Metrics data is garbage unless you trend it and analyze it to identify action items. Trend analysis should help you:

· Identify improvement opportunities for all participating IT groups

· Set improvement targets

· Plot graphs that give meaningful views

· Compare performance against industry standards (a minimal trending sketch follows this list)
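A minimal trending sketch in Python, using assumed data: instead of reporting raw numbers, it flags a metric whose three-release moving average falls below its improvement target, so the report carries an action item rather than just history. The release values, target, and window size are illustrative.

```python
# Flag releases where the 3-release moving average of a metric drifts
# below its improvement target (all numbers are made up for the example).
releases = ["R1", "R2", "R3", "R4", "R5", "R6"]
testing_effectiveness = [92.0, 93.5, 91.0, 89.5, 88.0, 87.5]  # percent
TARGET = 95.0
WINDOW = 3

for i in range(WINDOW - 1, len(releases)):
    window = testing_effectiveness[i - WINDOW + 1 : i + 1]
    moving_avg = sum(window) / WINDOW
    if moving_avg < TARGET:
        print(f"{releases[i]}: 3-release avg {moving_avg:.1f}% is below "
              f"target {TARGET}% -> raise an action item")
```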

While it is not always easy to follow all four golden rules of testing metrics, an attempt to comply with them will significantly improve the odds that a testing metrics program succeeds.

3.0  What should I look for in my testing metrics?


While I continued looking for the reasons metrics programs fail, I realized that most testing organizations select metrics that are internally focused, and that analysis is carried out mainly to identify issues within the testing organization or to defend its actions. Below I summarize, metric by metric, the common perception, the root causes behind it, and suggested action items for making better use of the data.

Metric: Testing Effectiveness

Perception: The testing organization reports effectiveness of 98%, yet many production defects are reported.

Root causes:

· Absence of a process to collect production defect data

· System testing and UAT executed in parallel

· Lack of effective requirement traceability

· Lack of an end-to-end test environment to replicate production issues

· Extensive ambiguity in business requirements, resulting in production defects caused by unclear requirements

Suggested action items:

· Publish both the testing organization's effectiveness and the overall project testing effectiveness to clearly highlight project-level issues (a sketch of this split follows below)

· Establish a process that brings the testing team and the production support team together to analyze every production defect and take corrective action

· Associate a dollar value with every production defect to draw senior management's attention toward infrastructure and requirements management improvements
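To make the first action item concrete, here is a hedged Python sketch assuming a defect-removal-efficiency style formula: defects caught before production divided by total defects. The split between testing-controllable leakage and other leakage, and all the counts, are illustrative.

```python
# Effectiveness = defects caught in testing / (caught + leaked to production).
def effectiveness(caught, leaked):
    return 100.0 * caught / (caught + leaked)

defects_found_in_testing = 490
prod_defects_testing_controllable = 10   # e.g. missed by executed test cases
prod_defects_other_causes = 40           # e.g. requirement gaps, environment mismatch

# The testing organization's own view: only leakage it could have controlled.
org_view = effectiveness(defects_found_in_testing,
                         prod_defects_testing_controllable)
# The project view: every defect the business saw in production counts.
project_view = effectiveness(defects_found_in_testing,
                             prod_defects_testing_controllable
                             + prod_defects_other_causes)

print(f"Testing org effectiveness:  {org_view:.1f}%")      # 98.0% -- looks healthy
print(f"Project test effectiveness: {project_view:.1f}%")  # 90.7% -- the real story
```

Publishing both numbers side by side is what surfaces the project-level issues that the 98% figure hides.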



Metric: Test Case Effectiveness

Perception: There are no industry-standard target recommendations by testing type, so although the data is reported regularly, few action items are identified.

Root causes:

· Targets are difficult to set

· Lack of actionable decision points tied to an increase or decrease in test case effectiveness

· No requirement traceability matrix (RTM) is prepared, so the quality and coverage of the test cases is a concern

Suggested action items:

· Set targets for test case effectiveness based on test case preparation and execution productivity; this ensures the right effort is spent on test planning and execution to achieve the target testing effectiveness

· If test case effectiveness is below the threshold for three consecutive releases, optimize the test bed and eliminate test cases that are not yielding any defects

· If test case effectiveness is below the threshold, validate the applicability of risk-based testing

· If test case effectiveness is above the threshold, unit testing might be an issue; recommend corrective action (a sketch of these threshold rules follows below)
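A hedged sketch of the threshold rules above, taking test case effectiveness as defects found per 100 executed test cases; the thresholds and the release history are illustrative, not industry standards.

```python
# Assumed lower/upper thresholds, in defects per 100 executed test cases.
LOW, HIGH = 2.0, 15.0

def test_case_effectiveness(defects_found, cases_executed):
    return 100.0 * defects_found / cases_executed

# Last three releases (defects found, test cases executed) -- made-up data.
history = [test_case_effectiveness(d, c)
           for d, c in [(18, 1200), (15, 1100), (20, 1300)]]

if all(v < LOW for v in history[-3:]):
    print("Below threshold for 3 consecutive releases: optimize the test bed, "
          "drop zero-yield test cases, and validate risk-based testing.")
elif history[-1] > HIGH:
    print("Above threshold: too many defects reaching test; review unit testing.")
else:
    print("Within the expected band; no action triggered.")
```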

Metric: Schedule Adherence

Perception: The project is delayed by six months, but the testing team has met its schedule.

Root causes:

· Project managers re-baseline the schedule after delays in every life cycle stage, and testing reports schedule delays only when they are caused by testing issues

· Testing wait time grows with schedule delays in each stage, increasing testing cost, yet testing does not quantify wait times

· The testing team does not use historic wait times in estimates, resulting in testing budget overruns and a rising testing-to-development spend ratio

Suggested action items:

· Track schedule milestones across life cycle stages and collect metrics on delays for every milestone

· Create action triggers when delayed milestones exceed 10% of the overall schedule milestones (sketched below)

· Calculate the additional QA spend caused by missed schedule milestones and report it with the metrics

· Establish operational level agreements between the various teams to identify schedule adherence issues
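A small Python sketch of the milestone-tracking action items, with made-up milestone data: the trigger fires when more than 10% of tracked milestones are delayed, and QA wait time is converted into additional spend using an assumed daily rate.

```python
# Illustrative milestone data across life cycle stages.
milestones = [
    {"name": "Requirements signoff", "delay_days": 10},
    {"name": "Design complete",      "delay_days": 0},
    {"name": "Code drop 1",          "delay_days": 7},
    {"name": "Code drop 2",          "delay_days": 0},
    {"name": "UAT start",            "delay_days": 0},
]
QA_DAILY_RATE = 1500.0  # assumed blended daily cost of the waiting QA team

delayed = [m for m in milestones if m["delay_days"] > 0]
if len(delayed) / len(milestones) > 0.10:
    extra_spend = sum(m["delay_days"] for m in delayed) * QA_DAILY_RATE
    print(f"{len(delayed)}/{len(milestones)} milestones delayed -> trigger action")
    print(f"Additional QA spend from wait time: ${extra_spend:,.0f}")
```

Reporting the dollar figure alongside the delayed-milestone count is what moves the conversation from "testing met its dates" to "the project's delays cost QA this much".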


Metric: Requirement Stability

Perception: Senior management believes that achieving requirement stability is a myth and that requirements will keep changing for various reasons, since some 90% of the industry has faced this issue for decades.

Root causes:

· Requirement stability is a broad measure with several contributing parameters (ambiguity, mistakes in articulation, incompleteness, additions of new requirements, edits, deletions, etc.)

· These parameters are not captured and reported as separate categories by life cycle phase; a requirement can change due to a design issue, a testability issue, the cost of development and implementation, operational challenges, a change in mandates, organizational policies, and more

· The effect on each testing phase is not quantified

· Lack of forums to discuss requirement stability index issues and improvements

· Lack of a requirements management tool to automate traceability and versioning

Suggested action items:

· Report the number of times requirement review sign-off from the test team was missed

· Report test case rework effort due to requirement deletions, edits, and additions

· Report additional test case execution effort due to requirement deletions, edits, and additions

· Report the number of defects due to requirement issues (missing, ambiguous, incomplete) and quantify the effort development and testing need to correct them

· Quantify the SME bandwidth required to support testing activities; if requirements are well elaborated, SME bandwidth needs should reduce over time

· Report the number of CRs added and their effort, to measure scope creep

· Report the number of times a requirement change was not communicated to the testing team (a reporting sketch follows below)
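An illustrative Python sketch of the reporting suggested above: requirement churn is broken into separate categories, the test rework each category caused is summed, and missed change communications are counted. Field names and effort figures are assumptions for the example.

```python
from collections import defaultdict

# Assumed change log: one record per requirement change in a release.
requirement_changes = [
    {"category": "addition",  "test_rework_hours": 24, "communicated_to_test": True},
    {"category": "edit",      "test_rework_hours": 10, "communicated_to_test": False},
    {"category": "deletion",  "test_rework_hours": 6,  "communicated_to_test": True},
    {"category": "ambiguity", "test_rework_hours": 16, "communicated_to_test": True},
]

rework_by_category = defaultdict(int)
for change in requirement_changes:
    rework_by_category[change["category"]] += change["test_rework_hours"]

missed_comms = sum(1 for c in requirement_changes
                   if not c["communicated_to_test"])

for category, hours in sorted(rework_by_category.items()):
    print(f"{category:<10} test rework: {hours} h")
print(f"Changes not communicated to the testing team: {missed_comms}")
```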






Metric: Defect Rejection Ratio and Other Defect Metrics

Perception: Senior management has no idea what to do with this data at the end of the release.

Root causes:

· Reporting at the end of the release leaves no opportunity for course correction in the ongoing project

· Metrics like defect severity, defect ageing, and defect rejection need immediate corrective action

· Threshold points for these defect metrics have not been established, and automated trigger points calling for action are not defined

Suggested action items:

· Report these metrics weekly rather than at the end of the release

· Create automated trigger points; for example, if the number of critical defects goes beyond 5, an action point is triggered (sketched below)
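A direct sketch of the trigger example above: if the count of open critical defects exceeds 5 during the weekly review, an action point is raised. The defect list and threshold value are illustrative.

```python
CRITICAL_THRESHOLD = 5

# Made-up open defect list for the weekly snapshot.
open_defects = [
    {"id": i, "severity": "Critical" if i % 2 else "High", "status": "Open"}
    for i in range(1, 14)
]

critical_open = [d for d in open_defects
                 if d["severity"] == "Critical" and d["status"] == "Open"]
if len(critical_open) > CRITICAL_THRESHOLD:
    print(f"{len(critical_open)} open critical defects "
          f"(> {CRITICAL_THRESHOLD}): trigger an action point this week")
```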



Metric: Test Case Preparation and Execution Productivity

Perception: Senior management has no idea what to do with this data at the end of the release.

Root causes:

· Reporting at the end of the release leaves no opportunity for course correction in the ongoing project

· Targets are difficult to set because it is hard to establish a uniform testing unit of measure

Suggested action items:

· Report these metrics weekly rather than at the end of the release

· Create automated trigger points

· Report reductions in test execution productivity (quantified in the sketch below) due to:

o environment downtime

o wait time due to delays in builds

o wait time due to delays in issue resolution

o rework effort during test case preparation

o rework due to lack of understanding and application knowledge
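A sketch quantifying the productivity losses listed above, with assumed hours: the point is to attribute lost execution time to a named cause instead of letting it silently depress the productivity number.

```python
# Assumed lost hours per cause for one release.
lost_hours = {
    "environment downtime":                   14,
    "wait time: delayed builds":              22,
    "wait time: delayed issue resolution":     9,
    "rework during test case preparation":    12,
    "rework from application-knowledge gaps":  7,
}
planned_execution_hours = 400

total_lost = sum(lost_hours.values())
for cause, hours in lost_hours.items():
    share = 100 * hours / planned_execution_hours
    print(f"{cause:<40} {hours:>3} h ({share:.1f}% of plan)")
print(f"Total productivity loss: {total_lost} h "
      f"({100 * total_lost / planned_execution_hours:.1f}% of planned execution)")
```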


4.0  Conclusion


The analysis above provides insight into the complexity involved in collecting, analyzing, and reporting metrics. There are clearly two types of metrics:

· Metrics that impact the overall project and are therefore very important to the organization's senior management

· Metrics that are internally focused on the testing organization

Testing effectiveness, schedule adherence, and requirement stability help identify issues that impact entire projects and lead to project delays, production defects, and budget overruns, all of which matter greatly to the CIO. The CIO metrics dashboard should include these metrics and the reasons behind failures. As the quality goalkeeper for the entire organization, the testing organization should clearly identify actionable improvements for all stakeholders if the metrics program is to succeed.

To keep tabs on the effectiveness and efficiency of the testing organization itself, internally focused metrics such as defect severity, defect ageing, defect counts, testing productivity, cost of quality, percentage automated, and many more are important. Clear steps should be defined to identify actionable items; these items are internal to the testing organization and should be used to set its internal improvement targets and initiatives.




Comments

Very nice article, Vasu.

Regarding "Requirement Stability" > suggested action items > 4th bullet, which says report the # of defects due to requirement issues: unfortunately, most BAs show a very high reluctance to accept these as defects. I believe the IT PM and senior management should pay attention here, as having the right requirements forms the key foundation for the QA team.

Thanks for this interesting read, Vasu.
Regarding Rule #2: the testing org's metrics might be right, and testing effectiveness might be 98%, but there can still be production issues. So who does that analysis? Despite proper testing, how come there are defects in production? Was it the change control? Was it wrong requirements? The testing org alone may not be able to do this analysis, find the root causes, and justify its metrics. This macro-level analysis is absent most often.

Good article. A subject that is dear to me too.

Based on my experience, in most places the root issue is the sources of information and whether the "right" data is in them. If that is plugged, the raw data can be used in whichever way is needed. Also, test management does not seem to have a say in jointly arriving at the metrics to be captured at the organization level.

I appreciate your point about the CIO dashboard and reviewing it with all the stakeholders. It should be a good starting point for identifying the apt metrics, with QA tying its goal cards to improving the items in it.

Honestly, the article is still inward-looking, except for the CIO dashboard. The points described here look more appropriate for a lower-maturity organisation, not for CMMI level 5 companies.

Good article, Vasu. In fact, I was looking for industry-level SLAs or standards for some of these testing KPIs, and this article is very good at articulating the current issues and challenges faced by IT organizations, while the suggested action items are realistic approaches. The proposed actions you have listed under each metric seem quantitatively achievable and measurable, rather than just reporting a % value against target values. Thanks for the neat presentation and for compiling this information for us.

Beautifully written, Vasu. The reason this caught my interest is that we are currently going through a metrics design and implementation program at a client site, and "why does a testing metrics program fail" is something I had started writing about when I happened to see this article. It is so close to what we are going through: things like "metrics data is garbage unless you trend the data" and post-production defects that were not under the control of the testing organization. One more interesting thing happening here is how much time you spend collating the data when you first start (as part of a TCoE); trying to collate the data to perfect the metrics kills the intent and wastes time. This article is of great help.

Good article, Vasu. After reading this one, I am interested in your first two topics in the Test Management Guide Series. Could you please share them?

Hi Vasu, it is really a good article, and a useful one for all testing professionals.

Well-articulated article, Vasu.
