Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


September 26, 2016

Hadoop based gold copy approach: An emerging trend in Test Data Management

Author: Vikas Dewangan, Senior Technology Architect


The rapid growth in volumes of both structured and unstructured data in today's enterprises is creating new challenges. Production data is often required for testing and development of software applications in order to simulate production-like scenarios.

Continue reading "Hadoop based gold copy approach: An emerging trend in Test Data Management" »

September 13, 2016

Lift the curb on Performance Engineering with Innovation

Author: Sanjeeb Kumar Jena, Test Engineer

In my previous two blogs, I discussed bringing pragmatism and a cultural mind-set change to performance engineering teams. In this blog, we will evaluate the outcome of these shifts during the transformation journey from performance testing (limited to the quality assessment phase of software applications) to performance engineering (covering the entire life-span of software applications to ensure a higher return on investment).

Continue reading "Lift the curb on Performance Engineering with Innovation" »

August 25, 2016

Agile and testing: What banks need to know

Author: Gaurav Singla, Technical Test Lead

Agile -- a software development methodology introduced in 2001 with the Agile Manifesto -- is now being used across various banks to cater to their IT project needs. The basic principles of the agile methodology are as follows:

Continue reading "Agile and testing: What banks need to know" »

August 16, 2016

Culture is everything in Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

We are living in a burgeoning knowledge economy, where anyone can access information from anywhere with a device that fits into their pocket, in a world hyper-connected via the World Wide Web. Today, business is not the 19th-century one-way 'producer-consumer' relationship; it's a two-way conversation. An effective business model is not about 'finding customers for your products' but about 'making products for your customers'.

Continue reading "Culture is everything in Performance Engineering" »

August 3, 2016

Crowd Testing: A win-win for organizations and customers

Author: Manjunatha Gurulingaiah Kukkuru, Principal Research Analyst

Software is evolving with every passing day to accommodate newer uses in day-to-day life.

Continue reading "Crowd Testing: A win-win for organizations and customers" »

August 2, 2016

Pragmatic Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

When you hear the term 'performance engineering,' what's the first thing you think of? The famed battle of 'performance testing versus performance engineering'? Or, because we are engineers, do you immediately think of making performance testing a better quality assurance process?

Continue reading "Pragmatic Performance Engineering" »

July 27, 2016

The three ingredients for a perfect test strategy

Authors: Gayathri V, Group Project Manager; Gaurav Gupta, Senior Project Manager

Last week, we won a testing proposal for a mandatory program that cuts across multiple applications. Although we, the 'pursuit team,' were celebrating the win, a voice in our heads kept insisting, "Winning is just one part of the game; delivering such huge programs on time is always a tall order!"

Continue reading "The three ingredients for a perfect test strategy" »

July 26, 2016

Performance engineering in Agile landscape and DevOps

Author: Aftab Alam, Senior Project Manager

Over the last couple of years, one of the key shifts in the software development process has been the move away from the traditional waterfall approach to embrace newer models like DevOps. One of the main goals of development and operations (DevOps) is to monetize investments as soon as possible. In the traditional waterfall model, UI mockups are all that business owners (investors) have access to before agreeing to invest.

Continue reading "Performance engineering in Agile landscape and DevOps" »

July 18, 2016

Four approaches to big data testing for banks

Author: Surya Prakash G, Delivery Manager

Today's banks are a stark contrast to what they were a few years ago, and tomorrow's banks will operate with newer paradigms still, thanks to technological innovation. With each passing day, these financial institutions face new customer expectations and increased interaction through social media and mobility. As a result, banks are changing their IT landscape as a priority, which entails implementing big data technologies to process customer data and open up new revenue opportunities. A few examples of such trending technology solutions include fraud and sanctions management, enhanced customer reporting, new payment gateways, customized stock portfolios based on searches, and so on.

Continue reading "Four approaches to big data testing for banks" »

June 27, 2016

Three Generations streaming in a Network

Author: Hemalatha Murugesan, Senior Delivery Manager

"Are you using an iPhone 6s?" asked my 80-plus-year-old neighbor as we rode the apartment lift together. "Nope, a Samsung," I responded, and enquired what help he needed. He wanted assistance with using the various apps, as he had just received the iPhone 6s as a gift. "Sure, why not? I'll come over to your place," I winked and concluded.

Continue reading "Three Generations streaming in a Network" »

June 8, 2016

Golden rules for large migration

Author: Yogita Sachdeva, Group Project Manager

In my experience of working with large banks, I have worked on small programs related to acquisitions and mergers, as well as on voluminous upgrades and migrations. I often wondered what the real tie-breaker for a large program was, and racked my brain to figure out what it takes to make a large program run. I realized that smaller programs generally get delivered successfully on the apt technical skills of the team alone. Large programs, however, are normally meant to deliver a business strategy as big as the creation of a new bank. A large program encompasses a group of related projects, managed in a coordinated manner, to obtain benefits and to optimize cost control.

Continue reading "Golden rules for large migration" »

June 7, 2016

Role of Validation in Data Virtualization

Author: Kuriakose KK, Senior Project Manager

How can I see the big picture and make an insightful decision with attention to detail, right now?

Jack, the CEO of a retail organization with stores across the world, is meeting his leadership team to discuss the disturbing results of the Black Friday sale. He asks why they were unable to meet their targets, and his leaders promptly offer reasons: missed sales, delayed shipping, shipping errors, overproduction, sales teams not selling where market demand exists, higher inventory, and so on. Jack is disturbed by these answers, and on further probing understands that most of these are judgment errors.

Continue reading "Role of Validation in Data Virtualization" »

June 1, 2016

Predictive Analytics Changing QA

Author: Pradeep Yadlapati, AVP

Today's mobile economy is changing the way enterprises do business. A recent survey indicates that the mobile ecosystem generates 4.2% of the global GDP, which amounts to more than US $3.1 trillion of added economic value. It is no surprise that organizations are fast embarking on digital transformations.

The pervasiveness of devices is altering interaction as well as business models. Customers expect a seamless experience across different channels. Everyone wants one-touch information and they expect applications to display preferences and facilitate quicker and smarter decisions. 

Continue reading "Predictive Analytics Changing QA" »

May 31, 2016

Performance Testing in the Cloud

Author: Navin Shankar Patel, Group Project Manager

If a layperson, frozen in time for 10 years, suddenly woke up and eavesdropped on a conversation between CIOs, he or she might assume that a group of weather forecasters was conversing. That is because the entire discussion is centered on the 'cloud' and is interspersed with a lot of mostly unintelligible words.

Continue reading "Performance Testing in the Cloud" »

April 28, 2016

Manage tests, the automation way

Author: Swathi Bendrala, Test Analyst 

Most software applications today are web-based. To keep pace with competition and highly demanding processes within the enterprise, these web-based applications undergo frequent updates, either to add new features or to incorporate new innovations. While these updates are necessary, the amount spent to roll them out matters too.

Continue reading "Manage tests, the automation way" »

April 26, 2016

Validate to bring out the real value of visual analytics

Author: Saju Joseph, Senior Project Manager

Every day, enterprises gather tons of data streaming in from all directions. The challenge lies in taking this huge volume of data, sometimes unstructured in nature, and synthesizing it, quantifying it, and increasing its business value. One way to achieve this is by moving from traditional reporting to analytics.

Continue reading "Validate to bring out the real value of visual analytics" »

April 14, 2016

Is Performance Testing really Non-Functional Testing?

Author: Navin Shankar Patel, Group Project Manager

The world of testing has evolved over the years and, like most evolutions in the technology world, it has spawned a plethora of methodologies and philosophies. We now have super-specializations in the testing world - UI testing, data services testing, service virtualization, etc. However, some beliefs remain unchanged, especially the belief that the testing universe is dichotomous - characterized by functional and non-functional testing.

Continue reading "Is Performance Testing really Non-Functional Testing?" »

January 14, 2016

Crowdsourced Testing: Leveraging the power of a crowd to do the testing

Author: Harsh Bajaj, Project Manager

What is crowdsourcing?

The term 'crowdsourcing' was coined in 2006, when Jeff Howe wrote an article for Wired Magazine, "The Rise of Crowdsourcing." Crowdsourcing is a combination of two words - crowd and outsourcing. The objective is to get work done by an open-ended crowd through an open call.

Continue reading "Crowdsourced Testing: Leveraging the power of a crowd to do the testing" »

December 28, 2015

Elevating QA to CXO

Author: Harleen Bedi, Principal Consultant

With rapidly evolving and emerging technology trends such as cloud, mobile, big data, and social, QA is now viewed as a key component in the modernization and optimization agenda of any CXO. This is supported by the World Quality Report, which reveals that application quality assurance and testing now accounts for almost a quarter of IT spending.

Continue reading "Elevating QA to CXO" »

December 14, 2015

Ensure the Quality of Data Ingested for True Insights

Author: Naju D. Mohan, Delivery Manager

I sometimes wonder whether it is man's craze for collecting things that is driving organizations to pile up huge volumes of diverse data at unimaginable speeds. Amidst this rush to accumulate data, the inability to derive value from the heap is causing a fair amount of pain and a lot of stress on business and IT.

Continue reading "Ensure the Quality of Data Ingested for True Insights" »

September 18, 2015

Assuring quality in Self Service BI

Author: Joye R, Group Project Manager

Why self-service BI?

Organizations need access to accurate, integrated and real-time data to make faster and smarter decisions. But in many organizations, decisions are still not based on BI, simply because IT systems struggle to keep up with business demands for information and analytics.

Self-service BI provides an environment where business users can create and access a set of customized BI reports and analytics without any IT team involvement.

Continue reading "Assuring quality in Self Service BI" »

September 7, 2015

Role of Open Source Testing Tools

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

Quality organizations are maturing from Quality Control (testing the code) to Quality Assurance (building quality into the product) to Quality Management. In addition to bringing quality upfront and building quality into the product, Quality Management also includes introducing DevOps principles into testing and optimizing the testing infrastructure (test environments and tools).

Continue reading "Role of Open Source Testing Tools" »

August 21, 2015

Are We Prepared to Manage Tomorrow's Test Data Challenges?

Author: Sunil Dattatray Shidore, Senior Project Manager

As tomorrow's enterprises embrace the latest technology trends, including SMAC (Social, Mobile, Analytics, Cloud), and adopt continuous integration and agility, it is imperative to think of more advanced, scalable and innovative ways to manage test data in non-production environments - whether for development, testing, training, POC, or pre-production purposes. The question really is: have we envisioned the upcoming challenges and complexity in managing test data, and are we prepared and empowered with the right strategies, methodologies, tools, processes and skilled people in this area?


Continue reading "Are We Prepared to Manage Tomorrow's Test Data Challenges?" »

August 17, 2015

Extreme Automation - The Need for Today and Tomorrow

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

We have all read about the success of the 'New Horizons' spacecraft and its incredible journey to Pluto. That is extreme engineering, pushing human limits to the edge. Similarly, when we hear that in the automobile industry one additional car is assembled every 6 minutes, we are quite amazed at the level of automation that has been achieved.

Continue reading "Extreme Automation - The Need for Today and Tomorrow" »

August 3, 2015

Three Stages of Functional Testing 3Vs of big data

Author: Surya Prakash G, Group Project Manager

By now, everyone has heard of big data. These two words are heard widely in every IT organization and across different industry verticals. What is needed, however, is a clear understanding of what big data means and how it can be applied in day-to-day business. The concept of big data refers to huge amounts of data - petabytes of it. With ongoing technology changes, data forms an important input for making meaningful decisions.

Continue reading "Three Stages of Functional Testing 3Vs of big data" »

Balancing the Risk and Cost of Testing

Author: Gaurav Singla, Technical Test Lead

A lot of things about banking software hinge on how and when it might fail and what impact that failure will create.

This drives all banks to invest heavily in testing projects. Traditionally, banks have tested software modules end-to-end and in their totality, which calls for large resources. Even then, testing programs are not foolproof, often detecting minor issues while overlooking critical ones that might even dent the bank's image among its customers.

Continue reading "Balancing the Risk and Cost of Testing" »

July 21, 2015

Is your testing organization ready for the big data challenge?

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

Big data is gaining popularity across industry segments. From being limited to lab research in niche technology companies, big data has moved to wide commercial use. Many mainstream organizations, including global banks and insurance companies, have already started using (open source) big data technologies to store historical data. While this is the first step towards value realization, we will soon see the platform being used for processing unstructured data as well.

Continue reading "Is your testing organization ready for the big data challenge?" »

July 14, 2015

Automation - A new measurement of client experience

Author: Rajneesh Malviya, AVP - Delivery Head - Independent Validation Solutions

A few months ago, I was talking to one of my clients who visited us on the Pune campus. She shared how happy she was with the improved client visit process. She was given a smart card like any of our employees and was marked as a visitor, and with that she could move from one building to another without much hassle. She no longer had to go through the manual entry process at each building. Like an employee, she could use her smart card at the turnstile to enter and exit our buildings, while her entry was recorded as per compliance needs. As she had been on our campus before, she could clearly experience the great difference brought about by automation.


Continue reading "Automation - A new measurement of client experience" »

July 6, 2015

Automated Performance Engineering Framework for CI/CD

Author: Aftab Alam, Senior Project Manager, Independent Validation and Testing Services

with contributions from Shweta Dubey

Continuous integration is an important part of the agile development process. It is getting huge attention in every phase of the software development life cycle (SDLC) as a way to deliver business features faster and with confidence.

Most of the time it is easy to catch functional bugs using a test framework, but performance testing additionally requires scripting knowledge along with load testing and analysis tools.

Continue reading "Automated Performance Engineering Framework for CI/CD" »

June 30, 2015

Transforming Paradigms - From Quality Assurance to Business Assurance

Author: Srinivas Kamadi, AVP - Group Practice Engagement Manager.

Today's digital landscape is riddled with disruptive forces that are transforming business models and industries alike. The proliferation of social channels and the continuous creation of big data are fuelling this transformation and heating up global competition. Forces such as Social, Mobile, Analytics and Cloud (SMAC) and the Internet of Things (IoT) are now critical to delivering omni-channel experiences. These digital imperatives guide how businesses engage with their customers, employees and stakeholders. Customers demand 24/7 connectivity and free-flowing access to information. This has made it incumbent upon companies to deliver superior customer experience in an agile fashion.

Continue reading "Transforming Paradigms - From Quality Assurance to Business Assurance" »

June 29, 2015

New Age of Testing - The WHAT, WHY and HOW?

Author: Mahesh Venkataraman, Associate Vice President, Independent Validation Services.

While testing has always been important to IT, the last decade has seen it emerge as a discipline in its own right. Hundreds of tools have been developed and deployed, commercially as well as 'openly'. New methodologies have been formulated to test the latest business and technology transformations. IT organizations today recognize testing as a critical function that assures the readiness of a system to go live (or a product to be released to the market).

Continue reading "New Age of Testing - The WHAT, WHY and HOW?" »

October 27, 2014

Mobile Native App - Real User Experience Measurement



As we all know, testing the server and making sure the server side is up and running does not guarantee a good end-user experience. There are several SaaS solutions to measure the client-side performance of websites and mobile apps, such as SOASTA mPulse, TouchTest, WebPagetest, Dynatrace UEM, etc. How can we leverage the same techniques to measure the salesperson's experience with a mobile POS application before releasing new features, and how can we monitor salesperson app usage and behavior just as we do real-user experience analysis for a website?

Continue reading "Mobile Native App - Real User Experience Measurement" »

September 24, 2014

Accessibility Compliance: What different User groups look for?

Accessibility compliance is gaining strength across organizations due to legal mandates.

The Web Content Accessibility Guidelines (WCAG 2.0) are referred to as a broad guideline across geographies; they take into consideration major physical impairments and how to meet the needs of users who have them. For accessibility compliance to be achieved successfully in a program, it is vital that the teams engaged work together and that all concerned groups are aware of their program-specific accessibility guidelines.

Continue reading "Accessibility Compliance: What different User groups look for?" »

September 2, 2014

Usability Testing: What goes into user recruitment and test session readiness?

Usability testing is meant to help discover design issues and provide appropriate direction for further improving the overall user experience. But these sessions would not be possible without the right set of users to review and provide feedback.

Getting the right users
At times, it is challenging to get the right users for test sessions. For many organizations where a usability process for design is newly followed, there is still little clarity on whether:

  • usability testing is required (business impact)
  • if required, what type of users would participate in the test sessions
  • such users can be identified and made available for the sessions
  • if available, which specific locations need to be considered.

 

Continue reading "Usability Testing: What goes into user recruitment and test session readiness?" »

April 14, 2014

Changing perspectives in Testing - Adapting to evolving expectations

In the past decade or so, the basics of the testing process and tools have not really changed. Different tools have been developed to automate various phases of testing, each seeking to realize specific concepts in automation. At its core, testing remains an activity directed at seeking and seizing defects, fixing them before the system goes live in production or is launched in the market. So what is new and changing in testing?
 

Testing professionals simply have to adapt to some changing scenarios while still holding on to time-tested principles and techniques. There are some subtle and some obvious contextual changes that testing teams need to be aware of, and adapt to, in order to stay relevant and deliver progressive value. Some of them have to do with 'mindsets', while others concern the enhancement and evolution of existing techniques:

 

1.    Collaborative Testing. With the increasing adoption of Agile and the progress towards realizing the DevOps vision, the testing team needs to shift focus from bug detection to early, continuous feedback and contributions to improving quality. This implies testing early and often rather than 'testing after developers are done'. It also implies a high degree of comfort with skeletal documentation and the ability to extrapolate and visualize requirements.

 

2.    Continuous Test Automation. Testing early and continuously also necessitates using a wide spectrum of tools (commercial, in-house developed and open source) and scripting languages. This is what I call Continuous Test Automation throughout the project lifecycle; some also call it extreme automation. It requires a programming mindset combined with the tester's keen eye for finding defects and providing early feedback! I believe all test professionals need to develop these skills, not just the automation engineers.

 

3.    Visual Modeling. With the need for tighter and more frequent collaboration with the program team comes the need to use visual modeling tools. One example is activity models in model-driven testing. I have also seen many teams use mind-maps to capture test design. The testing community has experimented much with model-driven testing; while it helps, many teams admit it is often time-consuming and effort-intensive. An area that remains little explored is defect prediction and modeling. Recently, an Australia-based banking customer asked us to propose ideas for defect modeling and visual defect heat-mapping techniques. Since testing is expected to 'seek and seize' defects, it is a good idea to focus on modeling defects and failure modes rather than modeling the entire requirements. This area needs further study and experimentation.

 

4.    Mission Risk Mitigation. This is about addressing the question 'what is the risk that this system will fail to achieve the stated IT mission goals?' and reporting those risks based on sound analysis of metrics. It calls for a thorough understanding of the business goals and of how the current system under test is expected to contribute to them. This is what I call 'shift-up': the ability to appreciate the higher-order business goals and to continuously evaluate risks, supported by business-driven metrics analytics.

 

5.    Business driven metrics analysis. This is related to the point above. Metrics need to be collected and reported at multiple levels of the hierarchy. Such reports must be accompanied by insights and recommendations that help management make critical business decisions. An important part of metrics analysis is alerts. An alert is meant to be a call for management action - a warning of an impending issue. Often the testing team assumes that merely sending status reports is enough for management to take necessary action. Far from it. Metrics must be analyzed for trends in time and for correlation with other related metrics, to draw meaningful conclusions, support decisions, make appropriate recommendations and initiate management actions. Such a roll-up of analysis must address the decision-support requirements of all levels of the organizational hierarchy. Some examples of alerts (a minimal sketch of how such checks might be automated follows the list):

 

a.     Threshold alerts - a specific metric is below (or above) a threshold value and needs management attention.

b.    Correlation alerts - a specific metric is not consistent with another and needs further analysis. For example, the defect fix rate is lagging behind the defect find rate for the observation period. Another example: if an area of code has a very high degree of churn but a relatively low defect find rate, that can mean hidden lurking defects which may demand focused testing techniques.

c.     Out-of-control alerts - a specific metric is out of control from a statistical perspective and needs attention.
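
To make these alert types concrete, here is a minimal sketch in Python of how such checks might be automated. The metric names and threshold values are purely illustrative assumptions, not prescriptions from any specific tool:

    # Illustrative metric alerts (hypothetical metric names and thresholds).
    def threshold_alert(metric, value, lower=None, upper=None):
        """Flag a metric that falls outside its agreed band."""
        if lower is not None and value < lower:
            return f"ALERT ({metric}): {value} is below threshold {lower}"
        if upper is not None and value > upper:
            return f"ALERT ({metric}): {value} is above threshold {upper}"
        return None

    def correlation_alert(found_per_week, fixed_per_week):
        """Flag when the defect fix rate lags the defect find rate."""
        if fixed_per_week < found_per_week:
            return (f"ALERT: fix rate ({fixed_per_week}/week) lagging "
                    f"find rate ({found_per_week}/week)")
        return None

    # Example usage with made-up weekly numbers:
    print(threshold_alert("test coverage %", 68, lower=80))
    print(correlation_alert(found_per_week=42, fixed_per_week=25))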

 

In conclusion, the fundamentals of the testing process and technology have not really changed (and probably never will), but the context in which testers do their jobs is changing fast, and the testing community at all levels has to adapt to these changes to continue to stay visible and relevant to senior levels of management.

October 10, 2013

Preparing ourselves for Application Security Testing

 

Haven't we all, as functional testers, done 'authentication' and 'authorization' testing? Almost every application demands some flavor of these tests. So, are we not already doing 'application security testing'?

Let's explore the extra mile we need to traverse in each phase of the SDLC to say confidently that the applications we are testing are secure.

 

Continue reading "Preparing ourselves for Application Security Testing" »

October 3, 2013

Crowd Testing

The concept of crowdsourcing is not new. The practice of harnessing ideas, services or content from a large pool of unknown contributors has existed for many centuries. For example, the Oxford English Dictionary was created through an open call to the community to identify all the words in the English language along with their usage; this call yielded 6 million submissions over 70 years! The Indian Government effectively used crowdsourcing to obtain entries for the symbol of the Indian Rupee, which finally led to the selection of the current symbol. On a lighter note, in India we see crowdsourcing all around us: a crowd of helpful volunteers trying to fix or push-start a broken-down automobile is a common sight here!

Continue reading "Crowd Testing" »

August 9, 2013

Testing Center of Excellence - Value articulation to business is the key

The recent TechValidate survey of Infosys TCoE clients, across a spectrum of industries, corroborates that 'efficiency and cost' (63%) are the primary reasons for setting up a TCoE. This comes as no surprise, as most IT organizations are moving towards articulating the business value of most of the services they offer. However, almost everyone faces challenges when articulating the business value of technical offerings. Some of the challenges I have faced in the testing world are:

·         Testing is an activity that does not have a tangible outcome the way development does

·         Testing is still considered a small subset of the gamut of activities in software development

·         There is a lack of industry data from which to derive business benefits

 

Continue reading "Testing Center of Excellence - Value articulation to business is the key" »

July 29, 2013

Increasing agility via Test Data Management

 

Do test data requirements need to be captured as part of the functional requirements or the non-functional requirements? Is test data management a cost overhead or business critical? Since the testing team already provisions test data through one mechanism or another, do we need a dedicated test data management team?

 

Continue reading "Increasing agility via Test Data Management" »

April 4, 2013

4DX - The Golden Experience

The sun's scattered rays are too weak to start a fire, but once you focus them with a magnifying glass they will bring paper to flame in seconds. The same is true of human beings--once their collective energy is focused on a challenge, there is little they can't accomplish.

A common experience of most test management teams is that they are pretty good at setting goals and targets, but more often than not they don't have a consistent approach to achieving them, or they try to focus on too many things at once.

We might have shared the same experience had we not been introduced to Franklin Covey's 4 Disciplines of Execution (4DX) framework through a two-day internal workshop conducted by Infosys. 4DX is a powerful methodology that helps translate business strategy into laser-focused action. This framework helped us accomplish a Wildly Important Goal (WIG) of increasing revenue share from specialized services by 7% year-on-year. We achieved this goal three months ahead of the target date.

Continue reading "4DX - The Golden Experience" »

March 20, 2013

Outsourcing the Test Function - Being Prepared

Organizations today tend to jump into outsourcing the testing function with expectations of significant cost savings, reduced cycle times, and higher quality, without knowing exactly how to achieve these goals. Having worked on outsourced testing engagements both as the client and as the vendor, I have observed common stumbling blocks and challenges, and can share lessons learned that will help you avoid, or at least be better prepared for, them.

I will begin with things to consider and do before you begin the RFP process. Once your organization has committed to proceeding, I will identify key items to address during the RFP process. And what happens after you have signed the contract? The final section will take you from transition to steady state and into continuous improvement.

To learn about these items and more, come join me and other quality professionals in Chicago for the QAI Quest Conference & Expo, April 15-19. For more information on my session, please proceed to http://www.qaiquest.org/2013/conference/testing-outsourcing-challenges-views-from-both-sides/.

Hope to see you in Chicago!

 

March 7, 2013

Test Tool Selection for Functional Automation - The missing dimension

"How do you select the appropriate test tool for your functional testing?"

"As per the listed features, the automation tool we invested in seems to be right. But, the % of automation we could really achieve is very low. What did we overlook at the time of tool selection?"

"Both my applications are of the same technology. We achieved great success in automation with one of them and failed miserably with the other. What could be the reason?"

Continue reading "Test Tool Selection for Functional Automation - The missing dimension" »

February 5, 2013

Usability Testing: Why is it beneficial to test early?

Often, when I am conducting a usability testing session, participants ask me what the ideal window would be for performing usability testing in a project. Incidentally, I have received the same query from some clients in the past, during the usability testing strategy phase.

Usability testing has long been an integral part of the design process because of its upfront benefits. Primarily, it provides direct user feedback, which leads to deeper user insights related to the design.
It is one of those rare occasions in the entire project where one has the opportunity to interact directly with users and get their feedback.

Usability testing can be carried out at any stage of the design process: initial wireframes, visual design, HTML prototype, or live website/application. There could also be more than one round of usability testing, depending on the project scenario.

The best time/window to have usability testing in a project is during the initial design stage, due to the following benefits:

Quick to Iterate
Testing early ensures that design changes are quick to incorporate. Early-stage testing captures feedback on global and main navigation, terminology, and overall page structure.
The wireframes are not yet designed in detail (they may be just a mock-up of static images linked together) and hence can quickly be changed based on user feedback and validation.

Less Rework
A change in design at the wireframe level (initial design stage) always takes less effort than changing a fully developed HTML page at a later stage of development, by which time even the visual design has been incorporated.

Getting design right
I have mentioned in my previous blog on the business value of usability testing that early testing helps get the design right during the initial stage itself, simply because users provide feedback on the overall design concept and information architecture at the wireframe level.
These wireframes undergo modifications based on user feedback, get aligned to most user expectations, and form a strong foundation for the design. The majority of design issues are thus taken care of at the initial stage itself.

User Acceptance
Early testing helps in understanding the acceptance level of representative users during the design process and also provides further design direction.
Once designers are confident in their design, because it has been through an initial round of user feedback, delivering a final design best suited to end users as per usability standards becomes a fairly achievable target.

Cost Effective
During the initial stages, it is very cost effective to make design iterations at the wireframe level rather than during later stages of development.
The cost of involving the visual design team and developers in page iterations at later stages of development is thus saved.

Usability testing is a powerful technique for discovering usability issues. Including it early helps fix major usability issues in the early stages, thus helping the design mature progressively.

January 25, 2013

Communicating the value delivered by testing - An organizational necessity

More often than not, testing is perceived as a cost rather than a value-add. This can perhaps best be attributed to the inability of the testing organization to articulate the true value or impact testing brings to the business. This inability also stems from the fact that today's testing teams don't have a solid framework that helps them clearly map testing metrics to the key business objectives outlined by the organization.

Most of the existing frameworks provide a mechanism to map QA metrics to engineering impacts like quality, cost, and productivity improvements. However, the issue arises when testing organizations try to use the same frameworks to extend the mapping of the engineering impact to business levers like an increase in revenue or a reduction in cost.

Continue reading "Communicating the value delivered by testing - An organizational necessity" »

January 3, 2013

Future demands testers to broaden their areas of expertise

While scanning the list of best practices for traditional projects recommended by one of the top technology research firms, I realized that quite a few of them - early involvement in the SDLC, early automation, and so on - have already been implemented in projects executed by Infosys. To me, this broadly meant we have already adopted what others foresee as future trends. It also inspired me to think about the possible forms software testing might embrace in the near future.

 

In the current world, where there is an expectation that every penny spent be realized, the reality in the near future, to me, will be as below:

 

1.       Economic uncertainty across the world (with the possible exception of Brunei or Madagascar) will force governments and corporations to cut costs and squeeze more output from what they spend. From a software testing perspective, this may result in the testing team no longer limiting itself to regular testing, but also foraying into other SDLC aspects - test data management, test environment maintenance, reporting performance issues, and ensuring optimal leverage and usage of testing tools - as part of the testing scope.

2.       Development and testing teams will be held accountable for the number of bugs/defects identified - and slipped.

3.       With more and more software testing tools available as freeware, tool usage may be priced based on benefits realized rather than on usage.

4.       All facets of business, from banking to gaming, will embrace mobility applications, resulting in quicker development and deployment and leading to shorter, more effective testing cycles.

 

To sum up, the preceding points indicate a common direction for software testing: the testing team will be made accountable for more activities in the SDLC than ever before, increasing the need for testers to equip themselves with enhanced domain and technical skills.

December 12, 2012

What is a Test Factory Model and how can we gain the maximum value from it for SAP applications?

Let me start with why we need a test factory. With the ever-growing need for resources and the rising costs of running enterprise applications (such as SAP and Oracle applications), testing of enterprise applications has become fairly complex and is a major contributor to burgeoning IT costs. The Test Factory is a unique concept and model that allows us to address this problem.

 

If an organization plans to implement the Test Factory model, it can turn to vendors who have the expertise to test enterprise applications in a better-governed and more cost-effective manner. At the same time, this model allows the day-to-day operations of the organization to run effectively, peacefully and uninterrupted.

 

A recent POV - co-authored by practitioners from Diageo and Infosys - helps you to understand the various challenges faced by SAP-enabled organizations, and provides an in-depth look at the process of setting up a Test Factory as well as the benefits thereof. The POV can be accessed at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/test-factory-setup.pdf

October 1, 2012

'Fix it' or 'Build it right'? How do we improve quality through collaboration and shared accountability?

The past few years have witnessed a considerable shift in the IT management focus of financial organizations, from cost optimization to improving the quality of the product or service. Reducing costs does not help in retaining market position if the products or services don't sell. The challenged economy makes for an extremely competitive environment, so it is no longer good enough to just identify and fix issues. Every defect found before a release contributes to a delay in time to market, and defects that slip through reduce customer confidence in the quality of the product or service. As companies have moved from a defect detection to a defect prevention mindset, they have reaped the resulting benefits: shorter time to market, better mitigation of business risks, and compliance of IT systems with regulations. Improving upstream quality is therefore something IT leaders have at the top of their minds. Collaboration between different IT and business functions plays a key role in upstream quality, and involves breaking away from some of the existing silos of software development.

September 28, 2012

Program managing multi-vendor QA teams - It is an Art!

In today's QA organizations, the presence of multiple vendors/partners is a given. While this may be beneficial to the organization from a risk mitigation and cost efficiency perspective, it poses significant challenges from a vendor/partner management perspective. The challenge is further aggravated by the increasingly complex nature of IT programs and the short turnaround times demanded by business.

In such time-constrained, high-stress environments, managing multi-vendor QA teams to achieve the common goal of zero defect leakage to User Acceptance Testing (UAT) and production becomes harder and harder. In most cases, client QA organizations end up spending significant and valuable management time resolving interpersonal conflicts, rectifying communication gaps between vendors, fielding innuendos of favoritism, or explaining to business stakeholders why QA productivity is low.

So, given these challenges, should QA organizations look for a single-vendor strategy? Not at all. Multi-vendor/partner situations are here to stay. What organizations need to find is the best collaborative platform to ensure that they get the maximum from the partner setup they have created.

From my experience of leading multi-vendor QA teams at several organizations, I have noted down a few best practices that I believe can help you in this journey of maximizing value from your partner ecosystem. To my mind these are simple, common-sense guidelines, and they have worked like a charm wherever I have implemented or adopted them.

Continue reading "Program managing multi-vendor QA teams - It is an Art!" »

August 29, 2012

"Show me the Money" - Realizing Business Value with Mature QA Practices

The role of QA in transforming a business has evolved many-fold. In the past, QA managers would typically be satisfied with delivering a defect-free product, perhaps with a certain amount of innovation. In today's volatile economic environment, the expectation that the QA team deliver more value at less cost has become the order of the day. The new-age customer not only wants a high-quality product but also expects enhanced business value in line with the organization's strategic goals. While producing a defect-free deliverable is bread and butter for the QA team, realizing business value requires strategic thinking that keeps in mind the vision and goals of the customer organization.

 

Highly mature QA practices revolve around four key levers: business process, end-user, business impact and business metrics. Yogita Sachdeva and I have tried implementing this strategy in some of our client QA programs, with the objective of transforming them into highly mature QA organizations. Awareness of the business processes involved and an understanding of the high-maturity levers are some of the prerequisites for realizing business value in QA. To know more, read our latest POV titled "Creating Business Value with Mature QA practices" at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/creating-business-value.pdf.

January 27, 2012

STC conference by QAI

I recently attended the STC conference by QAI, held in Bangalore on December 1st and 2nd. It is one of the largest international testing conferences in India, and this year's theme was 'Testing Enterprise 2.0 - Getting Ready for the Quantum Leap'. I was particularly looking forward to the insightful sessions by leading testing leaders and practitioners, and some sessions stood out with their new perspectives.

The conference began with a thought-provoking session on "The New Age QA delivery models - A Quantum leap in the way testing is Delivered and Measured" by Manish Tandon, Global Head - Independent Validation and Testing Solutions, Infosys.

Continue reading "STC conference by QAI" »

January 12, 2012

Best Practices in Data Warehouse Validation - Test Planning Phase

The complexity and criticality of data warehouse testing projects is growing rapidly. Data warehouses need to be validated for functionality, quality, integrity, availability, scalability and security, based on the business requirements defined by the organization. Based on my experience as a testing practitioner, I believe the following best practices in the test planning phase can significantly contribute to successfully validating a data warehouse.

 

1.    Comprehensively Understand the Data Model

The data architecture and model is the blueprint of any data warehouse, and understanding it helps you comprehend the bigger picture. It is also important to understand the methods used for the key relationships between the major and critical data sources. The relationship hierarchies and the depth of the data throw light on the complexity of the transformation rules. The quality of the data and the size of the data warehouse determine the functional and non-functional testing requirements for validating the data warehouse.

 

2.    Understand the Business Requirements Clearly

It's important to understand the complete business context of the need for, and implications of, data warehouse testing. Mapping the business drivers to the source systems helps increase testing quality, effectiveness and coverage. Getting the test data early, during the test planning stage itself, decreases risk and increases the predictability of testing. The data transformation mapping document is the heart of any data warehouse testing project. Hence, understanding the business rules or the transformation mapping document, and the run books, early in the testing life cycle helps the testing team reduce rework and delays in the test preparation and execution phases.
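
As a simple illustration of the kind of automated check such a mapping document drives, here is a minimal sketch in Python of a source-to-target row-count reconciliation. The table names and in-memory SQLite connections are stand-ins of my own choosing, not any particular project's setup:

    # Minimal source-to-target reconciliation sketch (hypothetical tables).
    import sqlite3  # stand-in for any DB-API 2.0 connection

    def row_count(conn, table):
        """Return the number of rows in a table."""
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

    def reconcile(src_conn, tgt_conn, src_table, tgt_table):
        """Compare row counts between a source table and its warehouse target."""
        src = row_count(src_conn, src_table)
        tgt = row_count(tgt_conn, tgt_table)
        status = "PASS" if src == tgt else "FAIL"
        print(f"{status}: {src_table} has {src} rows, {tgt_table} has {tgt} rows")

    # Example with in-memory stand-ins for the source system and the warehouse:
    src = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE orders (id INTEGER)")
    tgt = sqlite3.connect(":memory:")
    tgt.execute("CREATE TABLE dw_orders (id INTEGER)")
    reconcile(src, tgt, "orders", "dw_orders")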

 

3.    Plan Early for Data Warehouse Testing Environment

Planning for test environments based on the quality, size, criticality and complexity of the data warehouse helps reduce the testing cycle. The option of shared environments for the testing teams can also be explored to help reduce costs. Planning for test environments from the following perspectives helps decrease the possibility of potential defects:

 

·        Reverse planning from the release date and preparing a high-level test execution plan

·        Understanding and documenting requirements, and mitigating the risks/constraints in preparing and running the tests from specific geographical locations - such as time zones, availability of environments or systems, and access constraints

·        Planning for the different types of functional and non-functional test requirements and their test data. Test data plays a vital role in data warehouse testing; planning and preparing for it early in the lifecycle helps avoid cascading delays during execution.

These best practices can contribute greatly to a successful data warehouse validation. I shall be blogging more on best practices for the data warehouse test preparation and execution phases in the coming weeks, and I look forward to your thoughts, inputs and best-practice ideas.

November 3, 2011

Collaborative Testing Effort for Improved Quality

Collaboration amongst the business, development and testing teams can reduce risk across the entire software development and testing lifecycle and considerably improve the overall quality of the end application. As a testing practitioner, I believe the testing teams need to begin collaborating at an earlier stage, as described below, rather than conventionally collaborating only from the test strategy phase:

·         During the requirement analysis phase, the business/product teams need to collaborate with the development teams to validate the requirements.

·         The test data needs to be available earlier, and the testing teams need to collaborate with the business/product teams to validate the test data for accuracy and completeness, and to check that it is in sync with the business requirements spelled out.

·         Collaborate with the development team and share the test data which can be used in the unit/integration testing phases.

·         Collaborate again with the business teams to formulate a combined acceptance test strategy which would help reduce time to market.

·         Collaborate with the development team to review the results of unit testing/integration testing and validate them.

·         Collaborate with business/product teams to validate the test results of the combined acceptance testing.

Testing at each lifecycle stage has its own set of challenges and risks. If potential defects are not detected early, they escalate further down the SDLC. However, an experienced and competent test practitioner can identify these defects early, at the stage where they originate, and address them in that same stage. Below are some examples which reinforce this point.

 

·         Good static testing of the functional, structural, compliance and non-functional aspects of an application during the requirements phase can prevent 60% of defects from cascading down to production.

·         Similarly, getting all the required test data (as specified by the business requirements) as early as the end of the requirements analysis phase injects a sense of testing early in the lifecycle, which improves test predictability.

·         Planning ahead for performance, stability and scalability testing during the system design phase can help reduce the cost of potential defects incurred later on. Proactive non-functional testing (as required by the business) also contributes significantly to faster time to market.

·         Test modeling during the test preparation phase helps avoid the tight coupling of the system being tested with the test environment. This eventually helps in achieving continuous, progressive test automation.

·         Collaboration with the development teams ensures that they have used and benefited from the test data shared by the testing teams. This collaboration helps the testing teams validate the architecture, design and code through simple, practical in-process validations: static testing of the functional, structural and non-functional aspects of the application.

·         Mechanisms that help predict when to stop testing are a key requirement during execution. One such mechanism is a stop-test framework, built on an understanding of the application and carved around the defect detection rate (a minimal sketch follows below).
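
To illustrate, here is a minimal sketch in Python of such a stop-test check; the rates and the observation window are hypothetical assumptions, not a prescribed standard:

    # Illustrative stop-test check (hypothetical thresholds).
    def should_stop_testing(defects_per_cycle, min_rate=2, quiet_cycles=3):
        """Suggest stopping when the defect find rate stays below min_rate
        for the last quiet_cycles execution cycles."""
        recent = defects_per_cycle[-quiet_cycles:]
        return len(recent) == quiet_cycles and all(d < min_rate for d in recent)

    # Example: defect find rate tapering off over eight execution cycles
    print(should_stop_testing([14, 11, 9, 6, 4, 1, 0, 1]))  # True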

All the approaches described above let testers save time and focus more on collecting the right metrics and maintaining dashboards during test execution. They also ensure that testing is not limited to just one phase but is a fine thread running across the entire SDLC, improving quality and reducing costs and time to market for all business applications.

The benefits of this collaborative approach are many. I have listed a few below, based on my experiences with collaborative teams:

·         De-risks the entire software development process by embedding testing as an inherent part of each stage of the SDLC.

·         Defects are found early in the life cycle, which reduces the total cost of fixing them at a later stage. The cost ratio between finding a defect at the requirements stage and finding the same defect in production is 1:1000.

·         Shortens the time to market, as the approach has a built-in self-corrective mechanism at each stage.

September 27, 2011

Infosys and Co-operative Banking Group @Iqnite UK 2011

 

Hi there! A little over a week remains before the UK leg of the Iqnite conference (http://www.iqnite-conferences.com/uk/index.aspx) begins. I can't begin to explain how excited I am to be presenting again at this premier conference for software testing professionals. To me, the conference provides a great opportunity to learn the latest and most relevant QA practices from other QA professionals. My association with the event goes back to 2009, when I presented a session on Progressive Test Automation. This year I am teaming up with Paul Shatwell of the Co-operative Banking Group to present a session on "The Co-operative Banking Group's Approach to a One-Test Service Transformation".

 

Wondering what "One-Test Service" is?

Continue reading "Infosys and Co-operative Banking Group @Iqnite UK 2011" »

Transforming 'Testing Program Metrics' to 'Business Program Metrics'

Software program metrics are a necessary evil today. Be it around quality, productivity, efficiency, coverage, scope, or defects, organizations need to collect these data points to measure and analyze the success of any given program. While this sounds very straightforward, it is not. Most of the time, the data collection exercise remains just that - an exercise - with very limited time spent on analysis to identify clear areas of improvement that would ensure program/project success. So how do you ensure that testing metrics are defined correctly and, more importantly, are linked to business outcomes, so that all parties involved take them seriously?

 

For the purpose of this discussion, let's take the example of defining metrics for a QA/testing program. While defining the program metrics for a QA project, the following factors need to be taken into consideration:

 

·         Number of Metrics: They need to be apt for the size of the project.

 

·         The Larger Purpose: The metrics need to serve a larger purpose of improvement, rather than being just another metric for the sake of it.

 

·         Informative: Are the defined metrics giving meaningful information to the stakeholders?

 

·         Stakeholder: Is each metric mapped to one or more stakeholders?

 

·         Systems & Tools: Does the project team have the tools and systems in place to capture the data for computing the metrics?

 

·         Benchmark: Do the metrics have a benchmark?

 

·         Definition & Measurement: Do the definition and measurement methodology of the metrics remain constant throughout the project?

 

·         SLA: Are the metrics covering the critical SLAs of the project?

 

·         Project Success: Are the success criteria of the project mapped to the metrics?

 

·         Stakeholder Satisfaction: Are all the stakeholders involved in the project satisfied with at least one metric?

 

After addressing these questions, the next recommended step is to convert the testing program metrics into business program metrics. This can be a challenging task, but it is very important and relevant to the stakeholders involved - especially the client manager and the IT manager - for ensuring the success of the QA program in question. The objective of all metrics collected is to track and analyze data and use the analysis for further improvements. For this to be achieved, the testing metrics need to be linked to a possible business benefit of the project.

 

To elaborate on how testing program metrics can be linked to business benefit outcomes, let us consider the example of the "test case execution productivity" program metric.

 

Definition: Measures the # of test cases executed in a cycle by a QA team during a defined time period

Usage: Measures the # of test cases that can be completed by the team

Formula: # of test cases executed / total time spent in execution

Trend/Analysis: A high number for this metric signifies a good understanding of the application and domain, and the possibility of shorter execution times

Business Value: Indicates faster time to market
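As a minimal sketch of the formula above, the following Python snippet computes the metric per test cycle and shows how it might be trended; the cycle figures are purely illustrative:

def execution_productivity(cases_executed: int, hours_spent: float) -> float:
    """Formula above: # of test cases executed / total time spent in execution."""
    if hours_spent <= 0:
        raise ValueError("execution time must be positive")
    return cases_executed / hours_spent

# Trending across cycles: a rising number suggests growing application and
# domain understanding, and potentially shorter execution cycles ahead.
cycles = {"cycle 1": (120, 40.0), "cycle 2": (150, 40.0), "cycle 3": (180, 38.0)}
for name, (executed, hours) in cycles.items():
    print(f"{name}: {execution_productivity(executed, hours):.1f} test cases/hour")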

 

To link test case execution productivity to a business benefit, the following points need to be considered and addressed (a short sketch illustrating points 5 to 10 follows the list):

1. What is the stakeholder's definition of test case execution productivity?

2. Is the client or stakeholder really interested in the # of test cases executed?

3. How many test cases should be written for a requirement or scenario?

4. How many of those test cases are mapped to the complexity of the scenario?

5. What if the # of test cases executed is high, but the defect count is low?

6. What is the test case execution productivity when the defect count is higher than expected?

7. What if productivity is high but the coverage of the functionality is low?

8. Is there a tool to increase productivity, and is the team using it, or is it focusing on manual execution?

9. Can risk-based testing be introduced during execution instead of just focusing on the end result of high productivity?

10. Is the scenario coverage during execution mapped to the test coverage/test execution productivity, and is there a close correlation between them?
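As one way to act on points 5 to 10, here is a small Python sketch that reports raw productivity next to the defect yield and scenario coverage that give it meaning. The numbers and field names are hypothetical:

def productivity_report(executed: int, hours: float, defects_found: int,
                        scenarios_covered: int, scenarios_total: int) -> dict:
    # Raw productivity alongside the context that points 5 to 10 ask for.
    return {
        "cases_per_hour": round(executed / hours, 1),
        "defects_per_100_cases": round(100 * defects_found / executed, 1),
        "scenario_coverage_pct": round(100 * scenarios_covered / scenarios_total, 1),
    }

# High raw productivity combined with low coverage or a low defect yield
# should trigger the questions above rather than a celebration.
print(productivity_report(executed=200, hours=40.0, defects_found=4,
                          scenarios_covered=30, scenarios_total=60))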

 

Depending on the type of project and the nature of the technology, this simple testing program metric, test case execution productivity, can reveal a great deal about how a project is actually performing.

To summarize, the detailed analysis of any testing program metric should lead to business benefits and value ("time to market" in this case) to ensure the success of the program. Without this linkage, metrics collection becomes a theoretical exercise that delivers no real value to the overall project or to the stakeholders involved.

March 1, 2011

What is in the recently released HP ALM 11 version?

That HP is the market leader in the Quality Assurance and Testing tools space, with a 50-60% market share, is no secret. If version 11 of the recently released suite of products is any indication, HP intends to keep it that way for a very long time to come. Below are my notes from one of their latest road shows that I attended.

In this post, I comment on what Quality Center 11 (QC 11), and specifically the HP Sprinter feature within QC 11, has to offer.

Continue reading "What is in the recently release HP ALM 11 version?" »

December 24, 2010

Why Do Testing Metrics Programs Fail?

Test Management Guide Series - Topic 3

1.0  Introduction

We all know that statistical data is collected for metric analysis to measure the success of an organization and to identify continuous improvement opportunities, but not many organizations are successful in meeting this objective.

Senior management loves to see metrics but fails to understand why teams are unable to produce meaningful metric trends that provide clear, actionable steps for continuous improvement. It was surprising to see how many client senior executives started asking me for help in establishing meaningful metrics programs for their organizations. Several questions popped into my mind: What makes a testing metrics program such a challenge? We collect and present tons of data, so what exactly is missing? Everyone, right up to the CIO, wants a quick view of various metrics to understand the progress made, so why do so many testing organizations fail to produce one in a meaningful way? All metric definitions and techniques are available from multiple sources, so why do organizations still struggle to collect, analyze, and report them?

After thinking about these questions, I started asking myself a few fundamental questions and looking at several metrics reports. I was not surprised to find the following:

· Most of the testing metric reports were internally focused. They tried to show how well the testing organization was operating, with no actionable next steps

· Metric reports contained several months of data, with minimal analysis findings and no actionable items for each stakeholder

· 99% of the action items identified related only to actions taken by the testing organization

2.0  Revisiting the Golden Rules of Testing Metric Management

While conducting a detailed study of these reports, I felt it was worth re-establishing a few golden rules of metric management that we all know but often fail to implement.

Rule #1: The metrics program should be driven and reviewed by executive leadership at least once a quarter. In reality, very few organizations have a CIO dashboard that explains the reasons behind quality and cost issues, despite repeated escalations from business partners about production defects, performance issues, and delayed releases.

a) Metrics collection is an intense activity that needs the right data sources and acceptance by the entire organization (not just testing). Unless it is driven by senior leadership, metrics collection will yield limited actionable items

b) It requires alignment across all IT and business organizations to collectively make improvements

Rule #2: Ensure that all participating IT and business organizations are aligned with the metrics program.

Examples:

· The testing organization shows that it met 100% of its schedule adherence milestones, yet the overall project was delayed by six months

· Testing effectiveness is reported as 98%, while production defects caused a couple of million dollars' worth of production support activity, driven by issues outside the testing organization's control

· Business operations and UAT teams are focused on current projects, while metrics provide trending information; all teams should be aware of the improvement targets set for the current year or current release

Rule #3: Ensure the necessary data sources are available and data collection processes are in place to guarantee uniform data collection.

Uniform standards are not followed among IT groups, which makes data collection a challenge:

· Effort data is always erroneous

· Schedules change multiple times in the life cycle of a project

· Defect resolution is always a point of conflict

· Defect management systems are not uniformly configured

Examples:

· There is no infrastructure or process to view and analyze production defects

· More than 40% of defects are due to non-coding issues, but the defect management workflow does not capture this

· Requirement defects are identified in the testing phase, making them 10 times more expensive to fix, yet there is no means to identify and quantify these trends (a sketch of a uniform defect record follows)
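To show what uniform capture could look like, here is a minimal Python sketch of a defect record that makes the phase-injected, phase-detected, and root-cause fields explicit. The field names and categories are illustrative assumptions, not taken from any particular defect management tool:

from dataclasses import dataclass
from typing import List

@dataclass
class DefectRecord:
    # A uniformly configured defect record; all field names are illustrative.
    defect_id: str
    severity: str        # e.g. "critical", "major", "minor"
    phase_injected: str  # e.g. "requirements", "design", "coding"
    phase_detected: str  # e.g. "system test", "UAT", "production"
    root_cause: str      # captures non-coding causes explicitly

def requirement_defects_found_late(defects: List[DefectRecord]) -> List[DefectRecord]:
    # Defects injected in requirements but caught in testing or later:
    # the expensive trend that non-uniform workflows cannot surface.
    return [d for d in defects
            if d.phase_injected == "requirements"
            and d.phase_detected != "requirements"]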

Rule #4: Ensure metrics are trended and analyzed to identify areas of improvement. Metrics data is garbage unless you trend it and analyze it to identify action items (a small trending sketch follows this list). Trend analysis should help to:

· Identify improvement opportunities for all participating IT groups

· Set improvement targets

· Plot graphs that give meaningful views

· Compare performance against industry standards
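As a minimal sketch of such trending, assuming illustrative numbers and an illustrative target, the following Python snippet smooths a monthly series and flags the points that breach an improvement target:

def moving_average(series, window=3):
    # Simple moving average over the last `window` data points.
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# Monthly defect rejection ratio (%); the numbers are illustrative only.
rejection_ratio = [8, 9, 12, 15, 14, 18]
target = 10  # improvement target agreed with the participating IT groups

for month, value in enumerate(moving_average(rejection_ratio), start=3):
    flag = "  <-- above target, raise an action item" if value > target else ""
    print(f"month {month}: {value:.1f}%{flag}")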

While it is not always easy to follow all four golden rules of testing metric management, an attempt to comply with them would significantly improve the chances of a successful testing metrics program.

3.0  What should I look for in my testing metrics?

As I continued looking for the reasons why metrics programs fail, I realized that most testing organizations select metrics that are internally focused, and that their analysis is carried out either to identify issues within the testing organization or to defend its actions. Below is an attempt to summarize suggestions on how you can make better use of metrics data.

Metric: Testing Effectiveness

Perception: The testing organization reports effectiveness as 98%, yet many production defects are reported.

Issues:
· Absence of a process to collect production data
· System testing and UAT are executed in parallel
· Lack of effective requirement traceability
· Lack of an end-to-end test environment to replicate production issues
· Extensive ambiguity in business requirements, resulting in production defects caused by unclear requirements

Suggested Action Items:
· Publish both the testing organization's effectiveness and the project's testing effectiveness to clearly highlight the overall project issues
· Establish a process for the testing team and the production support team to analyze every production defect together and take the necessary corrective action
· Associate a dollar value lost with every production defect to draw senior management's attention to infrastructure and requirements management improvements

Metric: Test Case Effectiveness

Perception: There are no industry-standard target recommendations by testing type, so few action items are identified even though the data is reported regularly.

Issues:
· Difficult to set targets
· Lack of actionable decision points based on an increase or decrease in test case effectiveness
· The RTM is not prepared, so the quality of the test cases and their coverage is a concern

Suggested Action Items:
· Based on test case preparation and execution productivity, set targets for test case effectiveness; this ensures the right effort is spent on test planning and execution to achieve the target testing effectiveness
· If test case effectiveness is below the threshold for three consecutive releases, optimize your test bed and eliminate test cases that are not yielding any defects
· If test case effectiveness is below the threshold, validate the applicability of risk-based testing
· If test case effectiveness is higher than the threshold, unit testing might be an issue; recommend corrective action

Metric: Schedule Adherence

Perception: The project is delayed by six months, but the testing team has met its schedule.

Issues:
· Project managers re-baseline the schedule after delays in every life-cycle stage, and testing reports schedule delays only when they are caused by testing issues
· Testing wait time grows with schedule delays in each stage and testing cost increases, yet testing does not quantify wait times
· Testing teams do not use historic wait times in estimates, resulting in testing budget overruns and an increasing testing-to-development spend ratio

Suggested Action Items:
· Track schedule milestones across life-cycle stages and collect metrics for delays against any milestone
· Create action triggers if the total number of delayed milestones crosses 10% of the overall schedule milestones
· Calculate the additional QA spend due to missed schedule milestones and report it in the metrics
· Establish operational-level agreements between the various teams to identify schedule adherence issues

Metric: Requirement Stability

Perception: Senior management believes that achieving requirement stability is a myth and that requirements will continue to change for various reasons, because 90% of the industry has faced this issue for decades.

Issues:
· Requirement stability is a broad measure with several contributing parameters (ambiguity, mistakes in articulation, incompleteness, addition of new requirements, edits, deletions, etc.)
· These parameters are not captured and reported as separate categories by life-cycle phase; a requirement can change due to design issues, testability issues, cost of development and implementation, operational challenges, changes in mandates, organizational policies, etc.
· The measure lacks quantification of its effect on each testing phase
· Lack of forums to discuss requirement stability index issues and improvements
· Lack of a requirements management tool to automate traceability and versioning

Suggested Action Items:
· Report the # of times the requirement review sign-off from the test team was missed
· Report test case rework effort due to requirement deletions, edits, and additions
· Report additional test case execution effort due to requirement deletions, edits, and additions
· Report the # of defects due to requirement issues (missing, ambiguous, incomplete) and quantify the effort needed by development and testing to correct them
· Quantify the SME bandwidth required to support testing activities; if requirements are elaborate, SME bandwidth needs should reduce with time
· Report the # of CRs added and their effort to measure scope creep
· Report the # of times a requirement change was not communicated to the testing team

Metric: Defect Rejection Ratio and Other Defect Metrics

Perception: Senior management has no idea what to do with this data at the end of the release.

Issues:
· Reporting at the end of the release gives no opportunity to make course corrections in the ongoing project
· Metrics like defect severity, defect ageing, and defect rejection need immediate corrective action
· Threshold points for each of these defect metrics have not been established, and automated trigger points calling for action have not been defined

Suggested Action Items:
· Report these metrics on a weekly basis rather than at the end of the release
· Create automated trigger points; for example, if the # of critical defects goes beyond 5, an action point is triggered

Metric: Test Case Preparation & Execution Productivity

Perception: Senior management has no idea what to do with this data at the end of the release.

Issues:
· Reporting at the end of the release gives no opportunity to make course corrections in the ongoing project
· Difficult to set targets, owing to the difficulty of establishing a testing unit

Suggested Action Items:
· Report these metrics on a weekly basis rather than at the end of the release
· Create automated trigger points (a minimal trigger sketch follows this table)
· Report reductions in test execution productivity due to:
  o environment downtime
  o wait time due to delays in builds
  o wait time due to delays in issue resolution
  o rework effort during test case preparation
  o rework due to lack of understanding and application knowledge
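To make the automated trigger points mentioned above concrete, here is a minimal Python sketch of a weekly check; the metric names and threshold values are assumptions chosen for the example, not recommended targets:

# Hypothetical weekly snapshot and thresholds; real values would come from
# the program's agreed targets, not from this sketch.
THRESHOLDS = {
    "critical_defects_open": 5,
    "defect_rejection_pct": 10,
    "defect_ageing_days": 14,
}

def weekly_triggers(snapshot: dict) -> list:
    # Return an action point for every metric that crossed its threshold.
    return [f"{metric} = {snapshot[metric]} (threshold {limit}): raise an action point"
            for metric, limit in THRESHOLDS.items()
            if snapshot.get(metric, 0) > limit]

week = {"critical_defects_open": 7, "defect_rejection_pct": 8, "defect_ageing_days": 21}
for action in weekly_triggers(week):
    print(action)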

 

4.0  Conclusion

The table above provides insight into the complexity involved in collecting, analyzing, and reporting metrics under a metrics program. There are clearly two types of metrics:

· Metrics that impact the overall project and are therefore very important to the organization's senior management

· Metrics that are internally focused on the testing organization

Testing effectiveness, schedule adherence, and requirement stability help identify issues that impact the entire project and that result in project delays, production defects, and budget overruns, all of which matter greatly to the CIO. The CIO's metrics dashboard should include these metrics and the reasons for failure. As the QUALITY GOAL KEEPER for the entire organization, the testing organization should clearly identify actionable improvements for all stakeholders if the metrics program is to be successful.

To keep tabs on the effectiveness and efficiency of the testing organization itself, internally focused metrics such as defect severity, defect ageing, # of defects, testing productivity, cost of quality, % automated, and many more are important. Clear steps should be defined to identify actionable items. These action items are all internal to the testing organization and should be used to set internal improvement targets and initiatives.

August 7, 2010

What is the future of the testing function?

This is a question asked by most executives looking to establish independent testing as an ongoing function or department, and by those purists who believe in creation rather than correction. But it is also a question that all testing professionals need to ask themselves.

This question can be tackled in many ways. Here's my perspective -

First - Software testing is here to stay for the next several years.
Second - I do believe that several large software development centers will mature in this process, and that packaged software will address the majority of requirements in the coming years. Software testing will increasingly fall in line with development/configuration, along with increased levels of instrumentation (automation), just as on automobile assembly lines.

Organizations are already taking several steps in this direction. All testing functions need to be set up with this long-term goal in mind.

As testing professionals, we need to contribute towards accelerating this change, simply because those who work on developing solutions that remove the need for independent testing will do well for themselves in the long run. It's about accepting the future of testing and keeping pace with it rather than wishing it away.
Thoughts???

May 24, 2010

How lean is too lean? - Making testing lean!

One of the topics often discussed during my interactions with customers is how current testing can be made leaner, smarter, and more cost-effective. While most of these customers agree that testing is a necessity, they are worried about the cost. Some of them have gone ahead and cut their testing staff and budgets, and this has adversely impacted the quality and timelines of their products and services. Can organizations go too far with cost and people cuts?

Continue reading "How lean is too lean? - Making testing lean!" »