

Changing perspectives in Testing - Adapting to evolving expectations

In the past decade or so, the basics of the testing process and its tools have not really changed. Different tools have been developed to automate various phases of testing, each seeking to realize specific automation concepts. At its core, testing remains an activity directed at seeking and seizing defects so they can be fixed before the system goes live in production or is launched in the market. So what is new and changing in testing?
 

It is just that testing professionals have to adapt to some changing scenarios while still holding on to time-tested principles and techniques. There are some subtle and some obvious contextual changes that testing teams need to be aware of and adapt to in order to stay relevant and deliver progressive value. Some of them have to do with 'mindsets', while others have to do with the enhancement and evolution of existing techniques:

 

1.    Collaborative Testing. With the increasing adoption of Agile and the progression towards realizing the DevOps vision, the testing team needs to shift focus from bug detection to early and continuous feedback and contributions to improving quality. This implies testing early and often rather than 'testing after the developers are done'. It also implies a high degree of comfort with skeletal documentation and the ability to extrapolate and visualize requirements.

 

2.    Continuous Test Automation. Testing early and continuously also necessitates using a wide spectrum of tools (commercial, in-house developed and open source) and scripting languages. This is what I call Continuous Test Automation throughout the project lifecycle; some also call it extreme automation. It requires developing a programming mindset combined with the tester's keen eye for finding defects and providing early feedback! I believe all test professionals need to develop these skills, not just the automation engineers.
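As a small, hedged illustration of what 'testing early and often' can look like in code, here is a minimal sketch in Python using pytest. The calculate_discount function and its business rule are hypothetical; the point is the pattern of small, fast checks that a pipeline can run on every commit.

    # Minimal continuous-automation sketch (hypothetical example).
    # Small, fast checks like these can run on every commit in a CI
    # pipeline (e.g. via pytest), giving feedback within minutes
    # instead of after a separate, late test phase.

    import pytest

    def calculate_discount(order_total: float) -> float:
        """Hypothetical business rule: 10% off orders of 100 or more."""
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * 0.9 if order_total >= 100 else order_total

    def test_discount_applied_at_threshold():
        assert calculate_discount(100.0) == pytest.approx(90.0)

    def test_no_discount_below_threshold():
        assert calculate_discount(99.99) == pytest.approx(99.99)

    def test_negative_total_rejected():
        with pytest.raises(ValueError):
            calculate_discount(-1.0)

Running pytest against checks like these on every commit turns testing into a continuous activity rather than a phase.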

 

3.    Visual Modeling. With the need for tighter and more frequent collaboration with the program team also comes the need to use visual modeling tools. One example is activity models in model-driven testing. I have also seen many teams using mind maps to capture test design. The testing community has experimented much with model-driven testing; while it helps, many teams have admitted that it is often time-consuming and effort-intensive. An area that is very little explored is defect prediction and modeling. Recently an Australia-based banking customer wanted us to propose ideas for defect modeling and visual defect heat-mapping techniques. Since testing is expected to 'seek and seize' defects, it is a good idea to focus on modeling defects and failure modes rather than modeling the entire requirements. This area needs further study and experimentation.
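To make the defect heat-mapping idea a little more concrete, here is a rough sketch in Python. The defect log, module names and severity weights are entirely illustrative assumptions; real inputs would be mined from a defect tracker and version-control history.

    # Rough sketch of defect heat mapping: rank modules by a weighted
    # count of recent defects so testing effort can focus on likely
    # failure hot spots. All data below is purely illustrative.

    from collections import Counter

    # Hypothetical defect log: (module, severity) pairs from a tracker.
    defects = [
        ("payments", "high"), ("payments", "high"), ("payments", "medium"),
        ("login", "low"), ("reports", "medium"), ("payments", "low"),
    ]

    SEVERITY_WEIGHT = {"high": 3, "medium": 2, "low": 1}

    heat = Counter()
    for module, severity in defects:
        heat[module] += SEVERITY_WEIGHT[severity]

    # Hottest modules first: candidates for focused testing.
    for module, score in heat.most_common():
        print(f"{module:10s} heat={score}")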

 

4.    Mission Risk Mitigation. This is about addressing the question 'what is the risk that this system will fail to achieve the stated IT mission goals?' and reporting those risks based on sound analysis of metrics. It calls for a thorough understanding of business goals and of how the current system under test is expected to contribute to them. This is what I call 'shift-up': the ability to appreciate the higher-order business goals and to continuously evaluate risks, supported by business-driven metrics analytics.

 

5.    Business-driven Metrics Analysis. This is related to the point above. Metrics need to be collected and reported at multiple levels of the organizational hierarchy. Such reports must be accompanied by insights and recommendations that help management make critical business decisions. An important part of metrics analysis is alerts. An alert is meant to be a call for management action: a warning of an impending issue. The testing team often assumes that merely sending status reports is enough for management to take the necessary action. Far from it. Metrics must be analyzed for trends over time and for correlation with other related metrics in order to draw meaningful conclusions, support decisions, make appropriate recommendations and initiate management actions. Such a roll-up of analysis must address the decision-support needs of every level of the organizational hierarchy. Some examples of alerts follow, with a small code sketch after the list:

 

a.     Threshold alerts - a specific metric is below (or above) a threshold value and needs management attention.

b.    Correlation alerts - a specific metric is not consistent with another and needs further analysis. For example, the defect fix rate is lagging behind the defect find rate for the observation period. Another example: an area of code has a very high degree of churn but a relatively low defect find rate, which can mean hidden, lurking defects that may demand focused testing techniques.

c.     Out-of-control alerts - a specific metric is out of control from a statistical perspective and needs attention.
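To make these three alert types concrete, here is a minimal Python sketch over hypothetical weekly metrics. The specific rules (a 90% pass-rate floor, a 25% backlog rule, 3-sigma control limits) are illustrative assumptions, not prescriptions.

    # Minimal sketch of threshold, correlation and out-of-control
    # alerts over hypothetical weekly test metrics.

    from statistics import mean, stdev

    defects_found = [12, 15, 14, 18, 40]           # per week (illustrative)
    defects_fixed = [11, 14, 13, 12, 13]
    pass_rate     = [0.95, 0.93, 0.94, 0.92, 0.88]

    alerts = []

    # 1. Threshold alert: a metric breaches an agreed limit.
    if pass_rate[-1] < 0.90:
        alerts.append(f"Threshold: pass rate {pass_rate[-1]:.0%} is below the 90% floor")

    # 2. Correlation alert: two related metrics diverge.
    backlog = sum(defects_found) - sum(defects_fixed)
    if backlog > 0.25 * sum(defects_found):
        alerts.append(f"Correlation: fix rate lags find rate; open backlog = {backlog}")

    # 3. Out-of-control alert: latest point beyond 3 sigma of history.
    history, latest = defects_found[:-1], defects_found[-1]
    upper_limit = mean(history) + 3 * stdev(history)
    if latest > upper_limit:
        alerts.append(f"Out of control: {latest} defects found vs limit {upper_limit:.1f}")

    for alert in alerts:
        print("ALERT -", alert)

On this sample data all three alerts fire, each one a candidate call for management action rather than a line buried in a status report.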

 

In conclusion, the fundamentals of testing process and technology have not really changed (and probably never will), but the context in which testers do their jobs is changing fast, and the testing community at all levels has to adapt to these changes to stay visible and relevant to senior levels of management.
