Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


May 17, 2019

QA Paradigms in Open Banking

Open Banking started as a regulation in British banking circles, and now countries around the world are racing to adopt it. Australia is making its first move towards Open Banking later this year in July, and the European Union is adopting PSD2 along similar lines. Countries implementing Open Banking are being watched intently by those planning to adopt these standards, such as Israel, Canada, Hong Kong, Japan, and Singapore; everyone is waiting to see the outcome of the Open Banking imperatives. What is Open Banking? Why should the IT world take notice? And what are the implications of Open Banking for the software testing world? I am going to take a stab at these in the next few paragraphs.

To start with, Open Banking is a directive by the UK's Competition and Markets Authority which mandates that all banks expose their customers' data via open APIs to third-party providers, such as competitor banks and FinTechs, with the express consent of the customer. Although it started as a regulation, Open Banking now broadly refers to the unbundling of banking services and the opening of customer data to partners outside the incumbent banking system, again with express consent. It has created avenues for FinTechs and challenger banks to leverage customer data to help secure loans, provide a level playing field to pick and choose products, assist with payments, and more. Open Banking has truly enabled FinTech firms to compete with large banks by helping them design more customer-friendly products, and it has introduced much-needed competition between banks to provide more value to the customer.

Until recently, if another bank or FinTech wanted access to a customer's financial data, either the customer had to fill in the data fields manually, or the bank or FinTech would obtain the customer's login credentials and scrape the incumbent bank's pages for the required data. This is poor practice in terms of cyber security and a rather crude way to gather data. Open Banking has now made it very convenient for customers to expose their data via open APIs. Additionally, it has empowered customers to switch banks easily, and it has constructed a level playing field where FinTech firms can leverage data and technology to come up with creative solutions against the larger banks. Overall, Open Banking has increased competition and innovation while adding value to the end customer.

The European Union implemented its own Open Banking regulation, known as PSD2, short for the Second Payment Services Directive. PSD2 is a regulatory directive applicable to European Union markets, with technical standards developed by the European Banking Authority (EBA). It requires banks to grant customers the right to choose their payment partners, and it was conceived with the intent of making payments easier in terms of innovation and use. There were a few salient differences between the UK's Open Banking regulations and PSD2, but in November 2017 the Competition and Markets Authority mandated that Open Banking be compliant with all PSD2 directives. Open Banking now covers all the payment products that fall under PSD2, such as credit cards, debit cards, and e-wallets. The two regulations have evolved to complement each other by increasing the scope of financial products under Open Banking.

Payments will also get simplified via Open Banking. For example, on an ecommerce site today, a typical payment passes through various intermediaries: the merchant, the payment gateway, card associations like Visa or MasterCard, the issuing bank, and the acquiring bank. With Open Banking, online retailers can conduct payment transactions directly with your bank, without any intermediaries. This again benefits the end customer, as the surcharges demanded by these intermediaries are eliminated.

Which brings us to the ultimate question of this blog: how will Open Banking affect the IT industry, especially software testing? Interoperability through common standards is one of the keystone objectives of Open Banking. To achieve it, banks will have to build open APIs that comply with regulatory standards and security protocols, transfer data safely, and conform to all the directives. Open Banking thus creates a plethora of opportunities: regulatory testing; penetration and security testing to make sure all the security protocols are in place and thwart cyber criminals; performance testing for when many customers try to access or transfer data at the same time; API testing; accessibility testing; consent testing; and Strong Customer Authentication testing.
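To make the API and consent testing above concrete, here is a minimal sketch of a contract check against an accounts response. The field names (`ConsentId`, `AccountId`, and so on) are illustrative assumptions, not taken from any official Open Banking specification:

```python
# Hypothetical contract check for an Open Banking /accounts response.
# Field names are illustrative, not from the official standard.

REQUIRED_FIELDS = {"AccountId", "Currency", "AccountType"}

def validate_accounts_response(payload: dict) -> list:
    """Return a list of contract violations found in an accounts payload."""
    errors = []
    # Consent check: the response must evidence a customer consent reference.
    if not payload.get("ConsentId"):
        errors.append("missing ConsentId: customer consent not evidenced")
    for i, account in enumerate(payload.get("Data", {}).get("Account", [])):
        missing = REQUIRED_FIELDS - account.keys()
        if missing:
            errors.append(f"account {i}: missing fields {sorted(missing)}")
    return errors

sample = {
    "ConsentId": "consent-123",
    "Data": {"Account": [
        {"AccountId": "001", "Currency": "GBP", "AccountType": "Personal"},
        {"AccountId": "002", "Currency": "GBP"},  # AccountType omitted
    ]},
}
print(validate_accounts_response(sample))  # → ["account 1: missing fields ['AccountType']"]
```

In practice such checks would be generated from the published API schema rather than hand-coded, but the shape of the test is the same.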

Open Banking has the potential to grow into a niche QA area where domain experts with testing skills ensure that APIs and platforms perform optimally. QA experts who are well versed in the Open Banking landscape will be very much in demand: since all financial institutions operating in Europe and the UK must conform to Open Banking standards, all of them will require support in this area. There are 9,000+ financial institutions in Europe, and every one of them will have to comply with Open Banking/PSD2, which translates to immense QA opportunities.

This write-up serves as a generic introduction to Open Banking and the opportunities in store. My next post on Open Banking will look at more granular details of how it will affect QA in various industries, such as retail!



March 28, 2019

Service Virtualization using Mock Server


Service Virtualization is a technique for integrating a mock server into a test suite to remove dependencies on real back-end systems or third-party systems from the test environment. It is an ideal solution for Test-Driven Development (TDD) and Behavior-Driven Development (BDD) teams who want to test the application and its API services quickly to find the major problems.

Service Virtualization is best suited to microservices-based, service-oriented, and cloud-based architectures, and it is an important component of the DevOps toolchain.

Problem Statement

This is a compact view of a microservices-based architecture in which the application communicates with real back-end systems through a number of API calls to receive responses. For instance, a banking application makes important REST API calls for accounts, payments, transactions, and so on.

Here is the list of problems with this kind of application infrastructure:

- There is no dedicated environment for automation, UAT, or performance testing; the environment is shared between all the teams, which causes delays.

- The environment is frequently down due to deployment releases and server configuration issues.

- Test data setup is a big challenge for teams, as the data differs between automation testing and performance testing.

- Tests are brittle rather than robust, which means no reusability; environmental issues make 100% test coverage unattainable.

 Implementation of Service Virtualization

In my recent assignment, our QA team was struggling with test coverage gaps in automation testing, unexpected environmental issues, performance-related issues, and more, because the real back-end systems belonged to third-party vendors and were not accessible to our teams.

These problems blocked our teams from performing any testing operations, causing unexpected delays in production releases that impacted the project schedule and delivery.

To overcome these challenges, we created and implemented a mock server virtualization solution.

Proposed Solution
a.   Introduction of Solution

- This is a WireMock server (virtual service) based environment model, one way of resolving the dependencies and issues above. Virtual services, or mocks, allow you to decouple testing from the real back-end systems and provide an independent environment to each testing team, resolving the problems described earlier.
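WireMock itself is a Java tool, but the idea behind it can be sketched in a few lines of stdlib Python: serve canned JSON for each back-end endpoint so tests never touch the real systems. The endpoint paths and payloads below are invented for illustration:

```python
# A stdlib-only sketch of a WireMock-style virtual service: canned JSON
# responses stand in for real back-end endpoints. Paths and payloads are
# illustrative, not from any real banking system.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STUBS = {
    "/accounts": {"accounts": [{"id": "001", "balance": 2500}]},
    "/payments": {"status": "ACCEPTED"},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = STUBS.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "no stub"}).encode())

    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application (or test) under development now calls the stub instead
# of the real back end.
with urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_port}/accounts") as resp:
    data = json.load(resp)
server.shutdown()
print(data)  # → {'accounts': [{'id': '001', 'balance': 2500}]}
```

A real WireMock deployment adds request matching, fault injection, and recorded mappings on top of this basic pattern.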

b. Application of Solution

Here are the advantages of using Service Virtualization over the traditional approach:

- Test coverage improved to nearly 100%, and unexpected environment issues were avoided, improving overall test quality.

- All QA teams use an equivalent environment, independent of the other teams.

- Test data is easy to create and manage in an optimized way, as the business requires.

- Test development is robust, with fewer issues.

- There are no environment deployment or configuration issues.

- The Service Virtualization model is more agile than traditional models.

- There is little or no cost to develop and implement mock server virtualization.

- It is flexible enough to fit any kind of application architecture.

- It lets testers shape output responses to their needs, much as developers would.

- It is a quick solution to the issues that stem from real back-end environments.

- In our case it reduced working hours and effort by around 90%, making it a very effective solution for the business.

 Future Direction / Long-Term Focus

- Service Virtualization is one promising path; for large software projects especially, this practice can dramatically reduce cost.

- Enhancing the practical reusability of virtual services further reduces future development effort.

- Similar practices can be applied to other business needs, such as cloud-based and service-oriented architectures.

Results / Conclusion

We believe this approach will help teams accomplish various upcoming engagements and produce remarkable results.


March 25, 2019

Role of Artificial Intelligence in Performance testing and Engineering

A typical performance testing engagement starts with analyzing the application UI and creating the test scripts. Virtual users then hit the application server, and the load testing tools generate dashboards showing response time, throughput, CPU utilization, memory utilization, and so on.

In the era of AI (Artificial Intelligence) powered software, performance engineers should be able to answer questions during the early stages of application design, such as: What should we expect once the application is in production? Where are the potential bottlenecks? How do we tune application parameters to maximize performance?

Critical applications need a mature approach to performance testing and monitoring. AI is the intelligent part of the performance testing process; it acts as the brain. Routine tasks like test design, scripting, and implementation can be handled using AI, so that test engineers can focus on the creative side of software testing.

One reasonable use case for AI in PT (Performance Testing) is codeless automation scripting. Writing performance scripts using Natural Language Processing (NLP) can make the scripting task far easier: the computer learns from the data given to it without being explicitly programmed. Below are aspects of a solution empowered by AI-ML (Artificial Intelligence and Machine Learning) in performance testing:

  • The testing environment, developed using ML, will have advanced capabilities in terms of self-healing and intuitive dashboarding. Using deep learning algorithms, corrections can be handled automatically.
  • The test flows are recorded and can be tested using data, with no coding required in most scenarios.
  • Reusable functions and objects can be generated and grouped using semi-supervised learning. Scenarios are flow-based, so the implementation is transparent to the user.

Yet another use case is performance test modelling. AI's pattern recognition strength can extract relevant patterns during load testing, which is very useful for modelling the performance process. The PT model consists of the algorithms in use, and AI learns them from the given data. AI's ability to anticipate future load problems helps in creating the performance test model efficiently: it handles large amounts of data and can predict system failures. Once the system data is analyzed, the performance test model can be created based on the observed system behavior.

Another area is SLA design. SLAs should be measurable, attainable, simple, realistic, and time-bound, but most SLAs are not designed like this. This is a basic limitation of human-powered systems. Once AI takes on the role, the situation changes: it can track all the affected areas and feed granular data back into the monitoring system. It can analyze the complexity of the system and suggest an appropriate SLA; for example, for a module of 1,000 lines of code, an SLA of 500 milliseconds might be suggested. AI can detect working trends in a system directly, and as system performance changes, the SLA can be fine-tuned in real time.
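A toy illustration of the idea: fit a least-squares line to historical (lines of code, observed response time) pairs and propose an SLA with some headroom. The data points and the 1.2x headroom factor below are made up; a real system would learn from far richer signals than lines of code:

```python
# Toy "AI-suggested SLA": least-squares fit over invented historical data.
# Real systems would use many more features than lines of code.
history = [(200, 110), (500, 260), (1000, 480), (1500, 770)]  # (LOC, ms)

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in history) / \
        sum((x - mean_x) ** 2 for x, _ in history)
intercept = mean_y - slope * mean_x

def suggest_sla(lines_of_code: int, headroom: float = 1.2) -> int:
    """Predicted response time plus headroom, rounded to whole milliseconds."""
    return round((slope * lines_of_code + intercept) * headroom)

print(suggest_sla(1000))  # → 606
```

The point is not the arithmetic but the loop: as fresh measurements arrive, the fit is redone and the suggested SLA tracks the system's actual behavior.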

Monitoring tools like Dynatrace and AppDynamics have introduced AI into their systems, which helps identify bottlenecks across multiple application tiers in the early stages of software development. They can analyze the application and predict performance defects at the code level. Open source tools like WebPageTest, GTmetrix, and YSlow pinpoint specific problems, such as server request issues, and help engineers solve them quickly. Automation tools like Test.ai are useful for gathering your application's performance metrics as well.

The role of AI in every phase of performance testing and engineering is proving very beneficial and is the future of the discipline. Using AI in performance testing will make tasks like scripting and monitoring highly impactful and deliver real-time results very quickly. I believe the role of AI in performance testing will be a game changer!

December 7, 2018

Embrace the Future

The unfolding of cloud computing, the introduction of enterprise-level integration patterns, and the rise of microservices have not only disrupted the existing nomenclature but also make us rethink the way we do integration. While pioneering companies are embracing microservices to tame their complex enterprise architectures, one aspect is still open for exploration: data validation and data warehousing.

Yes, it is true that many organizations are consciously embracing the concept of data services around their data lakes, either for master data management or for analytical purposes (simple data reads), but very little thought has been given to using the full flavor of microservice-driven architecture in areas like data integration, data quality and validation, and metadata management.

If we travel back a few years in time, the idea of SoC (separation of concerns) was ignored due to the need for heavy lifting of data and the availability of integration tools, which were usually tightly coupled with each other. These tools were an instant hit because they wrapped up the complexities of managing job failures, producing reports, and so on, but they could not fully tackle the learning curve, the complexity involved, and most importantly, the need to adapt to frequent changes.

The basic principle of microservices is to break a complex application down into multiple self-contained services which connect to each other to achieve a complex piece of functionality. Given that the use cases above are always complex in nature, microservices could be a great way to automate data validation and design our future data warehouses.

September 26, 2018

The next frontier of RPA: Intelligent Process Automation

We must not be afraid to push boundaries; instead, we should leverage our science and our technology, together with our creativity and our curiosity, to solve the world's problems.
~ Jason Silva

Robotic Process Automation (RPA) is now mainstream. But is RPA enough? RPA is automation for today; what will automation for tomorrow look like? With AI slowly becoming all-pervasive, it should be made an integral part of any enterprise-level RPA. AI-powered RPA can help realize the ultimate goal of Intelligent Process Automation.

Let us consider the case of Test Management. This consists of different tasks as listed below.
  1. Review access requests from various users for various tools and platforms across the entire QA organization
  2. Create multiple user roles and revoke/modify them as and when required
  3. Upload test cases in the right location
  4. Update execution status
  5. Generate different reports/statistics and send to the relevant stakeholders
For the sake of ease, let us pick one of the tasks: reviewing access requests for all users across the entire QA organization. If the access request is keyed into a form available online, the RPA bot can read the digitized inputs, check against a database whether the person requesting the access is entitled to that particular role, and grant or deny the access.
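The decision logic the bot applies is purely rule-based, which a few lines of Python can sketch. The role table and request fields here are invented for illustration:

```python
# Minimal sketch of the rule-based access review an RPA bot performs:
# look the requester up in a role database and grant or deny. The role
# table and names are invented for illustration.
ROLE_DB = {
    "alice": {"qa-lead", "tester"},  # roles each user is entitled to
    "bob": {"tester"},
}

def review_request(user: str, requested_role: str) -> str:
    """Pure rule-based decision: no judgement, no unstructured input."""
    entitled = ROLE_DB.get(user, set())
    return "GRANT" if requested_role in entitled else "DENY"

print(review_request("alice", "qa-lead"))  # → GRANT
print(review_request("bob", "qa-lead"))   # → DENY
```

Note that every input must arrive in a structured form the rules anticipate; that limitation is exactly what the next scenario exposes.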

Imagine executing this process manually for thousands of users across the entire organization. A human FTE doing this task would take many person-hours to accomplish it, and moreover there is a risk of a slip-up. The task is repetitive, mundane, high volume, and follows a fixed sequence of steps. Using RPA, this chunk of access granting can be done in a fraction of the time a human FTE would need.
Now, let us consider the same scenario from a different perspective. Imagine someone who needs to be assigned to a certain project but cannot access the request form, and so drops a mail to the admin. Can the RPA bot read through the mail and extract the requirements for granting the user access?

No! This scenario would require understanding the mail and then initiating the process. Basically, a judgement call along with NLP capabilities. RPA tools are rule-based. What we need here is an intelligent algorithm that can learn how to take a decision.

Or, if a certain technicality in the access review process changes, the whole RPA execution would come to a standstill until we reconfigure the RPA bot with the changes.

Add AI to RPA in the access review process, and what do we get?
  1. Understand unstructured data: Based on an email from the user, AI powered RPA can pick up relevant inputs using NLP and grant/deny access
  2. Self-learn capabilities: For any process change, AI powered RPA would have self-learning capabilities to adjust to the new process without any human reprogramming
  3. Analytics: AI powered RPA can work with the large amount of access requests to prioritize the critical requests or come up with trends or insights with respect to the access grant process
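The first capability above, extracting intent from an unstructured email, can be crudely approximated even without a trained model. The regular expressions below are a deliberately simple stand-in; a production system would use proper NLP, and the email format is an assumption:

```python
# Deliberately simple stand-in for the NLP step: pull user and role out of
# a free-text email with regular expressions. A production system would use
# a trained language model; the patterns and mail format are illustrative.
import re

def parse_access_email(body: str):
    """Return (user, role) if both can be extracted, else None."""
    user = re.search(r"\buser(?:name)?\s*[:=]?\s*(\w+)", body, re.I)
    role = re.search(r"\brole\s*[:=]?\s*([\w-]+)", body, re.I)
    if user and role:
        return user.group(1), role.group(1)
    return None

mail = "Hi admin, I cannot reach the form. Username: cdavid, role: tester."
print(parse_access_email(mail))  # → ('cdavid', 'tester')
```

Once the (user, role) pair is extracted, the same rule-based grant/deny step described earlier can take over; the AI layer's job is only to turn unstructured text into that structured input.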
To elucidate with another example: we cannot use RPA to play chess, because that would require laying out rules for an astronomical number of possible games. But AI can look at the moves of the player, learn from the thousands of chess games played before, and come up with a move without the rules being explicitly laid out for it.

Executing a task in a particular sequence is RPA done properly. Working with unstructured data, self-learning from the thousands of completed tasks, analyzing the sequences and adjusting the tasks to achieve greater efficiency is Artificial Intelligence.
The next wave of RPA is RPA powered by AI: cognitive RPA. Implementing AI with RPA to enable supervised, unsupervised, and cognitive capabilities that self-learn and optimize processes, while also producing insights, is the need of the hour. Traditional enterprise RPA solutions should start inculcating Artificial Intelligence and Machine Learning capabilities in their offerings. This is what will help us achieve Intelligent Process Automation.

March 14, 2018

Future is Intelligent systems: Are you ready to test them?

Author: Harleen Bedi, Industry Principal, Infosys Validation Solutions

Artificial Intelligence (AI) is frequently making headlines with the possibilities it offers to make our lives easier and the innovation it is driving in all spheres of our lives. There are opportunities for AI applications in almost all domains and fields: home automation, personal virtual assistants, automated service agents, fraud detection, preventive maintenance, personalized experiences, financial advisory, healthcare recommendations, and many more. Multiple reports and studies predict huge market potential for AI and associated technologies.

Continue reading "Future is Intelligent systems: Are you ready to test them?" »

January 2, 2018

Big data validation for a memorable digital customer experience

Naju D. Mohan, Delivery Manager, Data Services, Infosys Validation Solutions

I have often heard from my colleagues, sales team members, and sometimes even clients: what exactly do you validate in big data and data insights, and how? This is not surprising to me, since big data, and occasionally even the insights derived from it, is a black box for most of them. For a whole lot of people, it is lots and lots of data which traverses across systems, gets churned by algorithms most of us have forgotten since our school years, and is finally displayed using fancy visualizations, making it a mysterious world. Let me make a modest attempt to take you through the journey of big data for a common use case we have all experienced in our daily lives. I shall pause at each step of the data flow and explain what has to be tested and how we confirm data quality.

Continue reading "Big data validation for a memorable digital customer experience " »

October 25, 2017

Testing Traditional Databases vs NoSQL Databases

Author: Surya Prakash G., Delivery Manager, Infosys Validation Solutions

When was the last time you heard the words "Big data", "Traditional Database", and "NoSQL database"? These words are heard every second now and are widely used in every IT organization and across different industry verticals. They signify the technology changes driven by the huge volumes of data generated through various means (IoT, social media, sensors, etc.).

Continue reading "Testing Traditional Databases vs NoSQL Databases" »

July 12, 2017

Allure of Cloud may fade away without proper data validation

Naju D. Mohan, Delivery Manager, Data Services, Infosys Validation Solutions

The need to validate the integrity, correctness, and completeness of data residing in the cloud is increasing every second with the penetration of mobile devices and the interconnection of computing devices through the internet. The cloud seems to be establishing itself as the best available alternative to meet these data storage and processing demands. The data management capabilities of traditional data stores are being revamped to meet the demand for the huge volume and variety of data in cloud storage. This calls for new testing techniques and tools to ensure 100% data validation.

Continue reading "Allure of Cloud may fade away without proper data validation" »

July 6, 2017

Thinking the Unthinkable: An AI approach to scenario prediction

Every now and then, QAs are confronted with the uncomfortable situation where a defect is overlooked and makes its way into higher environments. This happens despite the QA team having a deep understanding of the system under test. Due to the tremendous complexity of real-world applications and the sheer lack of resources (especially time), many flows and behaviors inherent in the application may not be tested, and the correctness of such behaviors remains unverified. Also, curiously, novice users are able to find defects that expert users cannot. Expert users of the application suffer from a syndrome best described as hindsight bias: they tend to be blind to possibilities in the application that they are not used to.

One way to deal with this is to hire more and more QAs, so that more pairs of eyes look at the application and uncover more of the hidden behaviors that could be defects. Due to the costs involved, this approach is not very practical. Another approach is to simulate users with different thinking patterns, which puts us in the realm of Artificial Intelligence. We have developed a Genetic Algorithm (GA) for black-box software testing that can do just that.

Genetic Algorithms are a heuristic optimization method that simulates the survival mechanics of genes over the course of evolution. Candidate solutions are encoded as linear, randomized string structures that exchange information to form a search algorithm. An initial group of random individuals (the population) evolves according to a fitness function that determines which individuals survive. The algorithm searches for individuals with better fitness values through selection, mutation, and crossover genetic operations.

Brief overview of System Under Test:

Our system under test was an incident logging system. There were a few prerequisites that need to be fulfilled before a user can successfully create an incident. A few of them are:

1) User Rights and Associations: Users with the correct rights and associations should be able to create incidents only in their associated domains and should not infringe outside them.

2) Routing mechanism: Unauthorized users should not be able to reach the create-incident screen, whether through menu actions or directly through the URL.

3) Presence/Absence of Data: Mandatory fields must contain data before a user can create an incident; there should be no such check for optional fields.

4) Dependent fields: Dependent fields should not accept data before their parent fields are populated.

5) Authenticity of Data: Users should not be able to create an incident without correct data in the fields that have look-up value checks.

Synopsis of the Solution:

Our algorithm placed the factors outlined above as bits on a string, much like DNA. Each factor was recognized by the place it holds on the string structure.

The objective of the GA was to highlight false positives, i.e., flows in the application that led to the successful creation of an incident when it should not have been created. For this purpose, we gave higher weights to the factors that made an incident less likely. For example, a user (A) with CREATE rights is more likely to create an incident successfully than a user (B) who has only READ rights; so in this case user B gets a significantly higher weight than user A, as he or she is less likely to create an incident. The same kind of weight distribution was done for the other factors as well.

Finally, an equation was created to calculate the fitness value of each individual string. The equation was designed to give higher fitness to individuals that resulted in the successful creation of an incident even though the odds of that happening were low. The fitness values of the first generation of 10 strings were calculated, and then mutation and crossover events were performed to create an offspring generation of 10 strings. If a parent had a higher fitness value it was retained; otherwise the child was selected. Once this selection process was over, the same steps were repeated to generate the next generation, and so on until the nth generation. A human analyst can then look at the resulting strings and verify whether these flows really are defects.
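The loop described above can be sketched compactly. The weights, the bit layout, and especially the incident-creation oracle below are invented stand-ins: in the real setup the oracle is the application itself executing the flow, not a function:

```python
# Compact sketch of the GA described above. Factors sit at fixed bit
# positions (1 = unfavourable for incident creation); the weights and the
# incident-creation oracle are invented stand-ins for the real system.
import random

random.seed(7)  # deterministic run for illustration

WEIGHTS = [5, 4, 3, 2, 1]  # rights, routing, mandatory data, deps, lookups
N_BITS = len(WEIGHTS)

def fitness(bits, incident_created):
    # High fitness = incident created despite unfavourable factors.
    return sum(w for w, b in zip(WEIGHTS, bits) if b) if incident_created else 0

def oracle(bits):
    # Stand-in for executing the flow against the real application; here a
    # fake defect lets creation succeed whenever routing checks are bypassed.
    return bits[1] == 1

def mutate(bits, rate=0.1):
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(10)]
for generation in range(20):
    offspring = [mutate(crossover(random.choice(population),
                                  random.choice(population)))
                 for _ in population]
    # Keep whichever of parent/child scores higher (elitist selection).
    population = [max(p, c, key=lambda s: fitness(s, oracle(s)))
                  for p, c in zip(population, offspring)]

best = max(population, key=lambda s: fitness(s, oracle(s)))
print(best, fitness(best, oracle(best)))  # surviving flow for analyst review
```

With the fake defect planted in the oracle, the population drifts toward strings that trigger it; a human analyst then inspects the survivors, exactly as in the process above.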

Advantages of GA based software testing over traditional testing:

  1. Able to explore various scenarios, improving coverage of the system; many of these scenarios might never cross the mind of a human QA.
  2. Increases Defect Detection Efficiency, leading to improved quality of the system.
  3. Easy to change the direction of the search to find new defects by changing the weights associated with the factors.
  4. Supplements human testers, with the advantage of working during non-business hours, including weekends.
  5. High-risk areas of the application can be covered better by increasing the population size and the number of generations.

Associated Challenges:

  1. Technical expertise required to build the framework.
  2. Clear understanding of factors affecting the application is required. It is still not Subject Matter Expert (SME) agnostic. The bias of the SMEs may affect the outcome of the runs.
  3. Hardware intensive. Procuring the advanced computation resources may not be easy in the client environment.

This is a humble attempt to increase software quality and spare end users the grief of missed defects, without escalating project costs. In the last few years there has been significant involvement of Artificial Intelligence in most aspects of software, yet software testing seems to have eluded the riches it provides. I sincerely hope software testing has a lot to gain from advances in AI, making software solutions and products more reliable and efficient over time.


March 8, 2017

DevOps at Enterprise scale

Author: Varun Rathore, Delivery Manager

The current business environment makes many demands on organizations. With the proliferation of start-ups becoming a game changer for existing brick-and-mortar industries, the need to innovate, iterate, and stay relevant is enormous. In view of this, UBS has embarked on a journey to transform itself through DevOps practices.

Continue reading "DevOps at Enterprise scale" »

November 26, 2016

Predictive analytics - An emerging trend in QA

Author: Indumathi Devi G., Project Manager, Infosys Validation Solutions

As digital transformation is rapidly changing business operations, quality assurance (QA) also has to change from traditional quality control to intelligent quality assurance. Nowadays, clients not only want to test software adequately, but also as early and thoroughly as possible. To accomplish these goals, it is important to opt for shift left testing and predict the failures even before the applications are handed over for testing. Today's business dynamics require QA professionals to make critical decisions quickly. It is imperative to make use of the avenues, such as customer feedback, defect data, and test results available at disposal to make prompt decisions.

Continue reading "Predictive analytics - An emerging trend in QA" »

November 22, 2016

A large brewer decodes social media with Infosys

Author: Surya Prakash G., Delivery Manager, Infosys Validation Solutions

Digitization has become the buzzword in every industry vertical, as end consumers have been swept away by the digital world. The advent of the Internet of Things (IoT) means that smart products, services, factories, and operations are replacing traditional ones. Data from social media is enhancing decision making and increasing revenues by tapping unstructured data for analysis. This is leading to an exponential increase in data analysis, interpretation, and the way data from social media is used in a meaningful form. With this, the focus on data testing has to move beyond volume and variety to include the velocity and veracity of data.

Continue reading "A large brewer decodes social media with Infosys" »

A.E.I.O.U of New Age Quality Engineering

Author: Srinivas Yeluripaty, Sr. Industry Principal & Head, IVS Consulting Services

In today's digital world, 'change' is the only constant, and organizations are grappling with ways to meet the ever-changing expectations of key stakeholders, especially ubiquitous consumers. With GDP transformed by the mobile economy, globalization leading to "Global One" customers, the payment industry moving from "cashless" to "card-less" to "contactless" transactions, and an ever-growing emphasis on security and compliance, expectations of IT are being reshaped significantly. To achieve that pace and flexibility, organizations are increasingly adopting agile methods and DevOps principles.

Continue reading "A.E.I.O.U of New Age Quality Engineering" »

November 21, 2016

SAP Test Automation approach using HP UFT based solution

Author: Kapil Saxena, Delivery Manager

Problem statement
Most businesses that run on SAP, or plan to implement it, must consider multiple factors, as their entire business runs on this backbone. Their major worries are testing effectiveness, preparedness, cost, and time to market. The Infosys SAP Testing Unit has well-implemented and proven answers to all four, but I am reserving this blog for the last two.

Continue reading "SAP Test Automation approach using HP UFT based solution" »

November 18, 2016

Darwin and world of Digital transformations

Author: Shishank Gupta - Vice President and Delivery Head, Infosys Validation Solutions

When Charles Darwin proposed the theory of the 'survival of the fittest', I wonder if he imagined its applicability beyond life forms. Since the advent of the internet, the bargaining power of consumers has been steadily increasing, and product and service providers often find themselves playing catch-up to provide the best product features bundled with the best consumer experience. What would Darwin's advice to product companies in today's digital world be?

Continue reading "Darwin and world of Digital transformations" »

October 6, 2016

Testing the Internet of Things Solutions

Author: Tadimeti Srinivasan, Delivery Manager

The Internet of Things (IoT) is a network of physical objects (devices, vehicles, buildings, and other items) that are embedded with electronics, software, sensors, and network connectivity to collect and exchange data.
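The "collect and exchange data" part of that definition can be pictured with a toy sketch: an embedded sensor packages a reading and exchanges it over the network as a structured payload. The device name, fields, and values below are invented for illustration:

```python
import json

# Toy illustration of the IoT definition above: a sensor collects a
# reading and exchanges it as a JSON payload, the kind of message a
# test harness would validate. Field names and values are invented.
def make_reading(device_id, sensor, value, unit):
    """Serialize one sensor reading for transmission."""
    return json.dumps({
        "device_id": device_id,
        "sensor": sensor,
        "value": value,
        "unit": unit,
    })

payload = make_reading("thermostat-42", "temperature", 21.5, "C")
decoded = json.loads(payload)   # what the receiving side would parse
print(decoded["value"])         # 21.5
```

Testing an IoT solution largely means validating such exchanges end to end: that each device emits well-formed payloads, and that the receiving platform parses, stores, and reacts to them correctly.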

Continue reading "Testing the Internet of Things Solutions" »

Infosys view on relevance of Continuous Validation in DevOps Journey

Author: Prashant Burse, AVP-Senior Delivery Manager

Rapid digitization is forcing organizations to develop more and more consumer-driven enterprise strategies. This, in turn, forces business and IT organizations to respond with increased agility to ever-changing consumer needs. IT organizations are hence shifting from traditional waterfall-based delivery models to Agile and, eventually, DevOps methodologies.

Continue reading "Infosys view on relevance of Continuous Validation in DevOps Journey" »

October 3, 2016

Evolution of mobile devices and Impact on Performance

Author: Yakub Reddy Gurijala, Senior Technology Architect

In the last decade, mobile devices evolved from point-to-point communication, like phone calls and SMS, to smart devices with advanced operating systems capable of executing native applications. These changes created a lot of opportunities and challenges for online businesses and application developers.

Continue reading "Evolution of mobile devices and Impact on Performance" »

September 30, 2016

Starwest 2016- Infosys is a Platinum sponsor

Author: Pradeep Yadlapati - Global Head of Consulting, Marketing, Alliances, and Strategy 

Starwest 2016 - As the curtains rise, we are looking forward to another year of exciting conversations with customers, prospects, and partners. The event provides a fantastic platform for us to stay connected with practitioners, evangelists, and product vendors.

Continue reading "Starwest 2016- Infosys is a Platinum sponsor " »

September 27, 2016

Trends impacting the test data management strategy

Author: Vipin Sagi, Principal Consultant

Test data management (TDM) as a practice is not new; it has been around for years. But only in the last decade has it evolved at a rapid pace, with mature IT organizations ensuring that it is integrated into the application development and testing lifecycles. The key driver has been technology disruption, compelling IT organizations to deliver software faster, with high quality, and at low cost.

Continue reading "Trends impacting the test data management strategy" »

September 26, 2016

Hadoop based gold copy approach: An emerging trend in Test Data Management

Author: Vikas Dewangan, Senior Technology Architect

The rapid growth of both structured and unstructured data volumes in today's enterprises is leading to new challenges. Production data is often required for the testing and development of software applications in order to simulate production-like scenarios.

Continue reading "Hadoop based gold copy approach: An emerging trend in Test Data Management " »

September 13, 2016

Lift the curb on Performance Engineering with Innovation

Author: Sanjeeb Kumar Jena, Test Engineer

In my previous two blogs, I discussed bringing pragmatism and a cultural mindset change to performance engineering teams. In this blog, we will evaluate the outcome of these shifts during the transformation journey from performance testing (limited to the quality assessment phase of software applications) to performance engineering (covering the entire lifespan of software applications to ensure a higher return on investment).

Continue reading "Lift the curb on Performance Engineering with Innovation" »

August 25, 2016

Agile and testing: What banks need to know

Author: Gaurav Singla, Technical Test Lead

Agile -- a software development methodology formalized in 2001 -- is now being used across various banks to cater to their IT project needs. The basic principles of the agile methodology are as follows:

Continue reading "Agile and testing: What banks need to know" »

August 16, 2016

Culture is everything in Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

We are living in a burgeoning knowledge economy, where anyone can access information from anywhere with a device that fits into their pocket, in a world hyper-connected via the World Wide Web. Today, business is not the 19th-century one-way 'producer-consumer' relationship; it is a two-way conversation. An effective business model is not about 'finding customers for your products' but about 'making products for your customers'.

Continue reading "Culture is everything in Performance Engineering" »

August 3, 2016

Crowd Testing: A win-win for organizations and customers

Author: Manjunatha Gurulingaiah Kukkuru, Principal Research Analyst
            Swati Sucharita, Senior Project Manager

Software is evolving with every passing day to accommodate newer prospects of its usage in day-to-day life.

Continue reading "Crowd Testing: A win-win for organizations and customers" »

August 2, 2016

Pragmatic Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

When you hear the term 'performance engineering,' what's the first thing you think of? The famed battle of 'performance testing versus performance engineering' or because we are engineers, do you immediately think of making performance testing a better quality assurance process?

Continue reading "Pragmatic Performance Engineering" »

July 27, 2016

The three ingredients for a perfect test strategy

Authors: Gayathri V - Group Project Manager; Gaurav Gupta - Senior Project Manager

Last week, we won a testing proposal for a mandatory program that cuts across multiple applications. Although we, the 'pursuit team,' were celebrating the win, a voice in our head incessantly kept saying, "Winning is just one part of the game; but delivering such huge programs on time is always a tall order!"

Continue reading "The three ingredients for a perfect test strategy" »

July 26, 2016

Performance engineering in Agile landscape and DevOps

Author: Aftab Alam, Senior Project Manager

Over the last couple of years, one of the key shifts in the software development process has been to move away from the traditional waterfall approach and instead, embrace newer models like DevOps. One of the main goals of development and operations (DevOps) is to monetize investments as soon as possible. In the traditional waterfall model, UI mockups are all business owners (investors) have access to, before agreeing to invest.

Continue reading "Performance engineering in Agile landscape and DevOps" »

July 18, 2016

Four approaches to big data testing for banks

Author: Surya Prakash G, Delivery Manager

Today's banks are a stark contrast to what they were a few years ago, and tomorrow's banks will operate with newer paradigms as well, due to technological innovations. With each passing day, these financial institutions experience new customer expectations and an increase in interaction through social media and mobility. As a result, banks are changing their IT landscape on priority, which entails implementing big data technologies to process customer data and provide new revenue opportunities. A few examples of such trending technology solutions include fraud and sanctions management, enhanced customer reporting, new payment gateways, customized stocks portfolio-based on searches, and so on.

Continue reading "Four approaches to big data testing for banks" »

June 27, 2016

Three Generations streaming in a Network

Author: Hemalatha Murugesan, Senior Delivery Manager

"Are you using an iPhone 6s?" asked my 80-plus-year-old neighbor as we rode the apartment lift. I responded, "Nope, Samsung," and enquired what help he needed. He wanted assistance with using the various apps, as he had just received the iPhone 6s as a gift. "Sure, why not? I'll come over to your place," I winked and concluded.

Continue reading "Three Generations streaming in a Network" »

June 8, 2016

Golden rules for large migration

Author: Yogita Sachdeva, Group Project Manager

In my experience of working with large banks, I have worked on small programs related to acquisitions and mergers, as well as on voluminous upgrades and migrations. I often wondered what was different about large programs, what the real tie-breaker was, and racked my brain to figure out what it takes to make a large program run. I realized that smaller programs generally get delivered successfully on the apt technical skills of the team. However, large programs are normally meant to deliver a business strategy as big as the creation of a new bank. A large program encompasses a group of related projects, managed in a coordinated manner, to obtain benefits and optimize cost control.

Continue reading "Golden rules for large migration" »

June 7, 2016

Role of Validation in Data Virtualization

Author: Kuriakose KK, Senior Project Manager

How can I see the big picture and take an insightful decision with attention to details now?

Jack, the CEO of a retail organization with stores across the world, is meeting his leadership team to discuss the disturbing results of the Black Friday sale. He enquires why they were unable to meet their targets, and his leaders promptly point to missed sales, delayed shipping, shipping errors, overproduction, sales teams not selling where market demand exists, high inventory, and so on. Jack is disturbed by these answers and, on further probing, understands that most of these are judgment errors.

Continue reading "Role of Validation in Data Virtualization" »

June 1, 2016

Predictive Analytics Changing QA

Author: Pradeep Yadlapati, AVP

Today's mobile economy is changing the way enterprises do business. A recent survey indicates that the mobile ecosystem generates 4.2% of the global GDP, which amounts to more than US $3.1 trillion of added economic value. It is no surprise that organizations are fast embarking on digital transformations.

The pervasiveness of devices is altering interaction as well as business models. Customers expect a seamless experience across different channels. Everyone wants one-touch information and they expect applications to display preferences and facilitate quicker and smarter decisions. 

Continue reading "Predictive Analytics Changing QA" »

May 31, 2016

Performance Testing in the Cloud

Author: Navin Shankar Patel, Group Project Manager

If a layperson, frozen in time for 10 years, suddenly wakes up and eavesdrops on a conversation between CIOs, he/she might assume that a group of weather forecasters are conversing. That is because the entire discussion is centered on 'cloud' and is interspersed with a lot of mostly unintelligible words.

Continue reading "Performance Testing in the Cloud" »

April 28, 2016

Manage tests, the automation way

Author: Swathi Bendrala, Test Analyst 

Most software applications today are web-based. To keep pace with competition and highly demanding processes within the enterprise, these web-based applications undergo frequent updates, either to add new features or to incorporate innovations. While these updates are necessary, the amount spent to roll them out matters too.

Continue reading "Manage tests, the automation way" »

April 26, 2016

Validate to bring out the real value of visual analytics

Author: Saju Joseph, Senior Project Manager

Every day, enterprises gather tons of data streaming in from all directions. The challenge lies in taking this huge volume of data, sometimes unstructured in nature, and synthesizing it, quantifying it, and increasing its business value. One way to achieve this is by moving from traditional reporting to analytics.

Continue reading "Validate to bring out the real value of visual analytics" »

April 14, 2016

Is Performance Testing really Non-Functional Testing?

Author: Navin Shankar Patel, Group Project Manager

The world of testing has evolved over the years and like most evolutions in the technology world, it has spawned a plethora of methodologies and philosophies. We now have super specializations in the testing world - UI testing, data services testing, service virtualization, etc. However, some beliefs remain unchanged. Especially the belief that the testing universe is dichotomous - characterized by functional and non-functional testing.

Continue reading "Is Performance Testing really Non-Functional Testing?" »

January 14, 2016

Crowdsourced Testing: Leveraging the power of a crowd to do the testing

Author: Harsh Bajaj, Project Manager

What is crowdsourcing?

The term 'crowdsourcing' was coined in 2006, when Jeff Howe wrote an article for Wired magazine, "The rise of crowdsourcing." Crowdsourcing is a combination of two words, crowd and outsourcing. The objective is to get work done by an open-ended crowd through an open call.

Continue reading "Crowdsourced Testing: Leveraging the power of a crowd to do the testing" »

December 28, 2015

Elevating QA to CXO

Author: Harleen Bedi, Principal Consultant

With rapidly evolving and emerging technology trends such as cloud, mobile, big data, and social, QA is now being looked upon as a key component in the modernization and optimization agenda of any CXO. This is supported by the World Quality Report, which reveals that application quality assurance and testing now accounts for almost a quarter of IT spending.

Continue reading "Elevating QA to CXO" »

December 14, 2015

Ensure the Quality of Data Ingested for True Insights

Author: Naju D. Mohan, Delivery Manager

I sometimes wonder whether it is man's craze for collecting things that is driving organizations to pile up huge volumes of diverse data at unimaginable speeds. Amidst this rush to accumulate data, the inability to derive value from the heap is causing a fair amount of pain and a lot of stress for business and IT.

Continue reading "Ensure the Quality of Data Ingested for True Insights" »

September 18, 2015

Assuring quality in Self Service BI

Author: Joye R, Group Project Manager

Why self-service BI?

Organizations need access to accurate, integrated and real-time data to make faster and smarter decisions. But, in many organizations, decisions are still not based on BI simply due to the challenges in IT systems to keep up with the demands of businesses for information and analytics.

Self-service BI provides an environment where business users can create and access a set of customized BI reports and analytics without any IT team involvement.

Continue reading "Assuring quality in Self Service BI" »

September 7, 2015

Role of Open Source Testing Tools

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

Quality organizations are maturing from quality control (testing the code) to quality assurance (building quality into the product) to quality management. In addition to bringing quality upfront and building quality into the product, quality management also includes the introduction of DevOps principles in testing and the optimization of testing infrastructure (test environments and tools).

Continue reading "Role of Open Source Testing Tools" »

August 21, 2015

Are We Prepared to Manage Tomorrow's Test Data Challenges?

Author: Sunil Dattatray Shidore, Senior Project Manager

As tomorrow's enterprises embrace the latest technology trends, including SMAC (Social, Mobile, Analytics, Cloud), and adopt continuous integration and agility, it is imperative to think of more advanced, scalable, and innovative ways to manage test data in non-production environments, be it for development, testing, training, POC, or pre-production purposes. The question really is: have we envisioned the upcoming challenges and complexity in managing test data, and are we prepared and empowered with the right strategies, methodologies, tools, processes, and skilled people in this area?

Continue reading "Are We Prepared to Manage Tomorrow's Test Data Challenges?" »

August 17, 2015

Extreme Automation - The Need for Today and Tomorrow

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

We have all read about the success of the 'New Horizons' spacecraft and its incredible journey to Pluto. This is extreme engineering, pushing human limits to the edge. Similarly, when we hear that the automobile industry assembles one additional car every 6 minutes, we are quite amazed at the level of automation that has been achieved.

Continue reading "Extreme Automation - The Need for Today and Tomorrow" »

August 3, 2015

Three Stages of Functional Testing 3Vs of big data

Author: Surya Prakash G, Group Project Manager

By now, everyone has heard of big data. These two words are heard widely in every IT organization and across industry verticals. What is needed, however, is a clear understanding of what big data means and how it can be applied in day-to-day business. The concept of big data refers to huge amounts of data, petabytes of it. With ongoing technology changes, data forms an important input for making meaningful decisions.

Continue reading "Three Stages of Functional Testing 3Vs of big data" »

Balancing the Risk and Cost of Testing

Author: Gaurav Singla, Technical Test Lead

A lot of things about banking software hinge on how and when it might fail and what impact that will create.

This drives all banks to invest heavily in testing projects. Traditionally, banks have been involved in testing software modules from end-to-end and in totality, which calls for large resources. Even then, testing programs are not foolproof, often detecting minor issues while overlooking critical ones that might even dent the bank's image among its customers.

Continue reading "Balancing the Risk and Cost of Testing" »

July 21, 2015

Is your testing organization ready for the big data challenge?

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

Big data is gaining popularity across industry segments. Once limited to lab research in niche technology companies, it is now widely used for commercial purposes. Many mainstream organizations, including global banks and insurance companies, have already started using (open source) big data technologies to store historical data. While this is the first step toward value realization, we will soon see the platform being used for processing unstructured data as well.

Continue reading "Is your testing organization ready for the big data challenge?" »

July 14, 2015

Automation - A new measurement of client experience

Author: Rajneesh Malviya, AVP - Delivery Head - Independent Validation Solutions

A few months ago, I was talking to one of my clients who visited us at the Pune campus. She shared how happy she was with the improved client visit process. She was given a smart card like any of our employees, mapped and marked as a visitor, and with it she could move from one building to another without much hassle. She no longer had to go through the manual entry process at each building. Like an employee, she could use her smart card at the turnstile to enter and exit our buildings, and at the same time her entry was recorded per compliance needs. As she had been on our campus before, she could clearly experience the great difference brought about by automation.

Continue reading "Automation - A new measurement of client experience" »

July 6, 2015

Automated Performance Engineering Framework for CI/CD

Author: Aftab Alam, Senior Project Manager, Independent Validation and Testing Services

with contribution from Shweta Dubey

Continuous Integration is an important part of the agile development process. It is getting huge attention in every phase of the Software Development Life Cycle (SDLC) as a way to deliver business features faster and with confidence.

Most of the time, it is easy to catch functional bugs using a test framework, but performance testing requires scripting knowledge along with load testing and analysis tools.
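One common pattern for wiring performance into CI (a minimal sketch under assumed thresholds, not the framework this post describes) is a gate step: compare measured response-time percentiles against agreed limits and fail the build on any breach. The sample timings and limits below are invented:

```python
import math

# Minimal CI performance gate: check response-time percentiles
# against thresholds; a non-empty breach list would fail the build.
# Sample timings (ms) and thresholds are illustrative only.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def gate(samples, thresholds):
    """Return (percentile, measured, limit) triples for every breach."""
    return [(pct, percentile(samples, pct), limit)
            for pct, limit in thresholds.items()
            if percentile(samples, pct) > limit]

timings_ms = [120, 130, 145, 150, 160, 180, 200, 240, 300, 950]
breaches = gate(timings_ms, {50: 200, 95: 500})
# In a CI job: sys.exit(1) if breaches else sys.exit(0)
```

In a real pipeline the timings would come from the load-test tool's results file, and the exit code would mark the stage red, giving performance the same fast feedback loop that functional tests already enjoy.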

Continue reading "Automated Performance Engineering Framework for CI/CD" »

June 30, 2015

Transforming Paradigms - From Quality Assurance to Business Assurance

Author: Srinivas Kamadi, AVP - Group Practice Engagement Manager.

Today's digital landscape is riddled with disruptive forces that are transforming business models and industries alike. The proliferation of social channels and continuous creation of big data is fuelling this transformation and heating up global competition. Forces such as Social, Mobile, Analytics and Cloud (SMAC) and the Internet of Things (IoT) are now critical to delivering omni-channel experiences. These digital imperatives guide how businesses engage with their customers, employees and stakeholders. Customers demand 24/7 connectivity and free flowing information accessibility. This has made it contingent upon companies to deliver superior customer experience in an agile fashion.

Continue reading "Transforming Paradigms - From Quality Assurance to Business Assurance" »

June 29, 2015

New Age of Testing - The WHAT, WHY and HOW?

Author: Mahesh Venkataraman, Associate Vice President, Independent Validation Services.

While testing has always been important to IT, the last decade has seen it emerge as a discipline in its own right. Hundreds of tools have been developed and deployed, commercially as well as 'openly'. New methodologies have been formulated to test the latest business and technology transformations. IT organizations today recognize testing as a critical function that assures the readiness of a system to go live (or a product to be released to the market).

Continue reading "New Age of Testing - The WHAT, WHY and HOW?" »

October 27, 2014

Mobile Native App- Real User Experience Measurement


As we all know, testing the server and making sure the server side is up and running does not guarantee a good end-user experience. There are several SaaS solutions to measure the client-side performance of websites and mobile apps, like SOASTA mPulse, TouchTest, WebPageTest, Dynatrace UEM, etc. How can we leverage the same techniques to measure a salesperson's experience with mobile POS applications before releasing new features, or monitor salesperson app usage and behavior, just as we do real-user experience analysis for websites?
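At its simplest, the real-user-measurement idea comes down to instrumenting the app to time each user-facing action and aggregating those timings, the way a RUM beacon would before posting them to an analytics backend. A self-contained sketch (the action name and durations are invented for illustration):

```python
from collections import defaultdict

# Tiny real-user-measurement sketch: record how long user-facing
# actions take, then aggregate per action. In a real app the
# durations would come from timestamps around each interaction.
class ActionTimer:
    def __init__(self):
        self.samples = defaultdict(list)

    def record(self, action, duration_ms):
        """Store one measured duration for a named user action."""
        self.samples[action].append(duration_ms)

    def average(self, action):
        timings = self.samples[action]
        return sum(timings) / len(timings)

rum = ActionTimer()
for duration in (110, 140, 170):      # e.g. three "scan item" taps
    rum.record("scan_item", duration)
print(rum.average("scan_item"))       # 140.0
```

The commercial tools named above do exactly this at scale, adding device, network, and geography dimensions; the sketch only shows the core measurement loop.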

Continue reading "Mobile Native App- Real User Experience Measurement" »

September 24, 2014

Accessibility Compliance : What different User groups look for ?

Accessibility compliance is gaining more strength across organizations due to legal mandates.

The Web Content Accessibility Guidelines (WCAG 2.0) are referred to as the broad guideline across geographies, taking into consideration major physical impairments and how to meet the needs of users who have them. It is vital for achieving accessibility compliance in a program that the teams engaged work together and that the concerned groups are well aware of their program-specific accessibility guidelines.

Continue reading "Accessibility Compliance : What different User groups look for ?" »

September 2, 2014

Usability Testing: What goes into user recruitment and test session readiness?

Usability testing is meant to help discover design issues and provide direction for further improving the overall user experience. But these sessions would not be possible without the right set of users to review and provide feedback.

Getting the right users
At times, it is challenging to get the right users for test sessions. For many organizations where a usability process for design is newly followed, there is still little clarity on:

  • whether usability testing is required (business impact)
  • if required, what type of users would participate in test sessions
  • whether such users can be identified and made available for the sessions
  • if available, which specific locations need to be considered.


Continue reading "Usability Testing: What goes into user recruitment and test session readiness?" »

January 20, 2014

Social Media, Cloud, Analytics and Mobility (SCAM)

Social Media, Cloud, Analytics and Mobility: These are 4 common buzzwords that we hear today. They are indeed very much inter-related as well!  Social media allows instantaneous interactions, sharing of news, photos, videos etc. From a technical perspective, this requires elastic omnipresent storage capability.  Cloud provides this for the Social Media. The moment something is on cloud, it can be big - big data. Small data can be hosted locally. If data is big, cloud is a good medium and the data can be leveraged for analytics. This facilitates informed decision making. For an end user, this should be omnipresent, thus available at fingertips. Mobility facilitates that.

Continue reading "Social Media, Cloud, Analytics and Mobility (SCAM)" »

October 10, 2013

Preparing ourselves for Application Security Testing


Haven't we all, as functional testers, done 'Authentication' and 'Authorization' testing? Almost every application demands some flavor of these tests. So, are we not already doing 'Application Security Testing'?

Let's explore the extra mile we need to traverse in each phase of the SDLC to say confidently that the applications we are testing are secure.


Continue reading "Preparing ourselves for Application Security Testing " »

October 3, 2013

Crowd Testing

The concept of crowdsourcing is not new. The practice of harnessing ideas, services, or content from a large pool of unknown people has existed for many centuries. For example, the Oxford English Dictionary was created through an open call to the community to identify all the words in the English language along with their usage; this call yielded 6 million submissions over 70 years! The Indian Government effectively used crowdsourcing to obtain entries for the symbol of the Indian Rupee, which finally led to the selection of the current symbol. On a lighter note, in India we see crowdsourcing all around us. A crowd of helpful volunteers trying to fix or push-start a broken automobile is a common sight here!

Continue reading "Crowd Testing" »

September 30, 2013

Testing-as-a-Service (TaaS): Take a Peek


In my last post, I wrote about how the market is adapting to TaaS. Here, I discuss "What is TaaS?" and "Why do we need to evolve?"

What is TaaS?

Considering all prevailing aspects of New Age QA, major QA service suppliers are now formalizing their own TaaS models for test outsourcing. Fundamentally, TaaS does not necessarily represent a true cloud service, but it incorporates aspects of cloud. This paradigm shift in thinking has made every QA service provider build and continuously mature a TaaS model. TaaS requires us to bundle together testing infrastructure, test tools, accelerators, the human effort of delivering testing services, delivery methodology, frameworks, and best practices, and to apply them uniformly.


Continue reading "Testing-as-a-Service (TaaS): Take a Peek" »

September 23, 2013

Testing-as-a-Service (TaaS) - Changing Times

In a recent third-party survey of Infosys clients, one message came through clearly: customers are moving away from the traditional staff augmentation model toward a single custodian for their entire testing needs. There is an increased focus on making testing services predictable, accessible, and available without compromising on quality and cost. Customers are looking for outcome-based sourcing and pricing when outsourcing testing services. We can see how New Age QA is evolving from a testing phase into Business Assurance. Customer loyalty has become very important, and one cannot compromise on the quality of an application; hence, in application delivery, software testing remains an essential element of business operational efficiency and risk management.

Continue reading "Testing-as-a-Service (TaaS) - Changing Times" »

September 18, 2013

Retail industry challenges that demand a more mature Testing organization


We live in a digital world and a mobile society. It has changed our lives completely, more so for retailers. One bad experience, online or on a mobile device, and retail customers are gone forever. Unlike most industry domains, such as banking and financial services or healthcare, the retail industry is uniquely challenged with high operational overheads and low profit margins. Overheads include maintaining a large number of retail stores, whereas the profit from selling a piece of merchandise can be as low as a few pennies. Yet retail IT systems are still expected to provide the same level of quality that the banking or healthcare industries require.


In this blog, I present challenges that specifically demand an efficient SDLC process, backed by faster software development as well as an efficient testing process.


Continue reading "Retail industry challenges that demand a more mature Testing organization " »

September 15, 2013

Different BPM Products: What difference does it make from a testing perspective?

This blog is the third in a three-part series on validating BPM implementations. Here, we focus on the common products available in the market and their use in enterprise implementations from a testing perspective.


According to a recent Forrester report on BPM (The Forrester Wave: BPM Suites, Q1 2013), BPM suites are set to take center stage in digital disruption in 2013. The disruptive forces of change include technical, business, and regulatory changes. It is a well-established fact that the key to any change management or strategy implementation is to start small, think big, and move fast! When it comes to the validation of BPM products, the story is not much different, except that we typically see testing organizations fail at scaling and moving fast. Business process validation might succeed in silos or at an application/project level; however, when it comes to enterprise processes or integrations involving several systems across LOBs, we just cannot move at the same pace. One key reason is the lack of understanding of the overall BPM picture at hand.


Continue reading "Different BPM Products: What difference does it make from a testing perspective?" »

September 3, 2013

Is your Performance Testing in an Agile SDLC really 'agile'?

Agile software development is a value-driven method in which cross-functional teams are expected to deliver quickly and respond to changes across iterations throughout the development cycle. I have come across performance testing teams working on agile projects who complain of burnout due to excessive testing and effort overruns. While the agile method promotes incremental development, it creates quite a few challenges for the performance test team. The team expects a functionally stable environment in which to run tests and investigate performance issues, but with continuous changes to the code this seems a distant dream. Once a bottleneck is identified, performance tuning involves isolating the code segment causing the issue and rectifying it. Continuous changes to the code and functionality, however, make it difficult to pinpoint the code causing performance degradation. Moreover, the short sprint timeline for performance testing leaves no room for deep-dive analysis.

Nevertheless, there are benefits when performance testing goes hand in hand with the agile development cycle. It is well known that the cost of fixing a defect is lower when it is detected early in the life cycle; the objective of testing within the sprint is to test performance early and often. Performance-related changes become more costly to incorporate the longer testing takes. The performance testing team becomes an important part of the collaborative team, providing continuous feedback on application performance to developers, architects and the project team, which ultimately results in the delivery of a superior-quality product.

To quantify agility in a project, a metric called the Agility Measurement Index (AMI) is used, derived from five dimensions: duration, risk, novelty, effort and interaction. A low AMI value reflects a combination of short duration, low risk, low novelty, limited effort and minimal customer interaction, and such a project is more likely suited to the waterfall model. To read more about AMI, refer to the research paper titled "Metrics and Techniques to Guide Software Development" published by Florida State University. I believe a similar agility assessment is required separately for performance testing as well, to decide whether the performance testing piece is really the right candidate for the agile cycle.
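As a rough illustration of how an AMI-style score might be combined from the five dimensions, here is a small sketch. The 0-to-1 rating scale and the simple averaging are my own assumptions for illustration; the actual formula is defined in the referenced paper.

```python
# Illustrative sketch of an Agility Measurement Index (AMI) style score.
# Assumption: each dimension is rated 0.0 (low) to 1.0 (high) and the
# index is the plain mean of the five ratings.

def agility_index(duration, risk, novelty, effort, interaction):
    """Return the mean of the five dimension ratings."""
    ratings = [duration, risk, novelty, effort, interaction]
    for r in ratings:
        if not 0.0 <= r <= 1.0:
            raise ValueError("ratings must be between 0.0 and 1.0")
    return sum(ratings) / len(ratings)

# A short, low-risk, routine project scores low, suggesting waterfall.
print(round(agility_index(0.2, 0.1, 0.1, 0.2, 0.1), 2))  # 0.14
```

The same function could be run twice, once for the development effort and once for the performance testing piece, to support the separate assessment argued for above.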

August 27, 2013

Testing of Business Process Implementations


There is no well-defined framework or solution when it comes to a testing strategy for BPM implementations. The testing methodology depends, to a great extent, on the specific implementation (available points of interception, data protocols, transport protocols, traceability of business and design requirements, etc.). In other words, no two BPM implementations are exactly alike. However, there are two major focus points when testing a BPM implementation:

(a) Service-level validation

(b) Business process / integration validation

Service-level validation focuses on validating the service as a standalone entity. Tools exist that automate validation of the functionality behind an endpoint URL; some of the common ones are IBM RIT, CA LISA, HP Service Test, Parasoft SOAtest and the Infosys IP Middleware Testing Solution. We have witnessed across several engagements and projects that the focus should not be on the functional aspects of the service alone, but also on its performance, security and governance aspects. This helps shift the entire validation process left.
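To make the combined functional-plus-performance check concrete, here is a minimal sketch of a service-level validator. The 200 status expectation, the 500 ms latency budget and the payload fields are illustrative assumptions, not the behavior of any of the tools named above.

```python
# Sketch: a service-level check combining a functional assertion
# (required payload fields) with a performance assertion (latency budget).
# Budget and field names are assumptions for illustration.

LATENCY_BUDGET_MS = 500

def validate_service_response(status_code, latency_ms, payload, required_fields):
    """Return the list of defects found; an empty list means the check passed."""
    defects = []
    if status_code != 200:
        defects.append("unexpected HTTP status %d" % status_code)
    if latency_ms > LATENCY_BUDGET_MS:
        defects.append("latency %d ms exceeds %d ms budget"
                       % (latency_ms, LATENCY_BUDGET_MS))
    for field in required_fields:
        if field not in payload:
            defects.append("missing field '%s' in response payload" % field)
    return defects

# A slow response missing one field yields two defects.
print(validate_service_response(200, 720, {"orderId": "A123"},
                                ["orderId", "status"]))
```

In practice the status, latency and payload would come from an actual call to the endpoint URL; decoupling the checks this way keeps them reusable across tools.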

Business process validation focuses on orchestration testing, or choreographic validation of composite services. It doesn't stop at validating the routing logic alone; there is much more. Business process validation is often not done through the presentation tier, which means the test requirements, tools and strategy will not be the same as those used in functional presentation-tier testing. The key advantages of process-layer testing are:

(a) Early defect detection

(b) Reduced analysis

The graph below shows the defect detection pattern for two similar projects: one in which white-box testing of the business process layer was done (Project A) and one in which it was not (Project B). We can see that in Project B (black-box testing), testing is halted in the initial days by the one or two critical middle-tier defects.



  Figure 1: Defect Detection Pattern

The defects found in BPM testing are 'pinpoint' defects: they are isolated, and the defective flow or attribute is pinpointed. This is different from black-box testing, where the majority of middle-tier defects manifest to the end user as 'page not found' or 'service not available'.

To conclude, a Business Process Management implementation requires a different testing strategy. The test strategy depends, to a great extent, on the underlying technology architecture. Plugging in third-party BPM products from vendors like IBM, TIBCO and Pega helps eliminate redundant development effort to a great extent (more on the different BPM products in my next blog), and industry-specific products are also available from most of these vendors. Nevertheless, validation of the integrated middle tier is of utmost importance, as the architectural DNA of every enterprise is different.


August 20, 2013

Business Process Management(BPM) Implementation - An Introduction


Given the nature of BPM implementations, no silver bullet exists for testing them. In this post, we will first look at how business process implementations occur at enterprises, and then define a testing methodology for them.

The business world has come a long way from the days when IT or computing power was used only for accounting and administrative purposes. From being a mere facilitator, IT has come to run the business in many sectors, such as banking, telecom and retail.

Let us take an example to understand how IT has changed over the years. Today, when a customer wishes to buy a television set, he browses the available brands, specifications and prices online, places an order for the chosen model, and pays by credit card. This process invokes several other flows internally:

a) As the order is placed, the inventory is checked, a unit is blocked, and the inventory count is reduced accordingly.

b) During this process, if the inventory stock falls below a threshold, an order might be placed with the distributor for more units.

c) The credit card information entered by the user is validated by a payment gateway. A web service might return the card's authenticity, available balance, etc.

d) When the shipping mode is selected by the user, an order is placed with the external shipping agency.

e) Depending on the shipping mode selected, the workflow might place a consolidated order once every two days, or place each order immediately.

f) The shipping company, in turn, initiates its own set of services based on this new request, and so on.

This integrated workflow is an unattended, automated form of B2B integration. It is much different from a customer visiting the shop and asking for a TV, and the shopkeeper looking for one in his store and handing it over.

There are several methods of implementing the integration workflow, and real-world workflows can be much more complex than the example above. The automated workflow makes the entire transaction much faster and more convenient for the end user and the merchant. However, the fact that it is unattended and automated requires that it function correctly. The more complex the workflow ecosystem, the more integration points surface, and thus the more chances of failure. A defect in the integration tier can result in a loss of goodwill as well as revenue.
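The order workflow described above can be sketched as a simple sequence of steps. The step names, the reorder threshold and the in-memory inventory are illustrative assumptions; a real BPM engine would orchestrate separate services for each step.

```python
# Sketch of the TV-order workflow above: one online order triggers
# several internal flows. Threshold and flow names are assumptions.

REORDER_THRESHOLD = 5

def place_order(inventory, model, card_valid):
    """Return the list of internal flows triggered by one online order."""
    if not card_valid:
        return ["payment-declined"]          # gateway rejected the card
    flows = ["payment-authorized"]
    inventory[model] -= 1                    # block a unit, reduce the count
    flows.append("unit-blocked")
    if inventory[model] < REORDER_THRESHOLD:
        flows.append("distributor-reorder")  # stock fell below threshold
    flows.append("shipping-order-placed")    # hand off to shipping agency
    return flows

stock = {"TV-42": 5}
print(place_order(stock, "TV-42", card_valid=True))
```

Even this toy version shows why testing the orchestration matters: a defect in any one branch (say, the reorder check) is invisible to the customer placing the order but costly downstream.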

BPM products today embed such industry-specific integration solutions and are available as Commercial Off-The-Shelf (COTS) products. Many enterprises plug such reusable integration products into their architectural landscape. Testing this orchestration is of utmost importance. We will see how to test a BPM implementation in the next blog.

July 31, 2013

IT Operations that beleaguered on-premise testing

In my previous blog, I mentioned how delays and overhead costs prove to be a burden when conducting on-premise testing. But that is not all: legacy datacenter operations add to the pain. Let's take a look at the issues I have encountered in some of my engagements because of clients' legacy infrastructure:

  • Lack of Agility
  • Lack of Automation
  • Limited availability of testing environments

Continue reading "IT Operations that beleaguered on-premise testing" »

July 29, 2013

Increasing agility via Test Data Management


Do test data requirements need to be captured as part of the functional requirements or the non-functional requirements? Is test data management a cost exercise or business critical? Since the testing team already provisions test data through some mechanism, do we really need a test data management team?


Continue reading "Increasing agility via Test Data Management" »

July 12, 2013

Accelerating Business Intelligence through Performance Testing of EDW Systems

Enterprise Data Warehouse (EDW) systems help businesses make better decisions by collating massive amounts of operational data from disparate systems and converting it into a format that is current, actionable and easy to comprehend. Gartner has identified performance as one of the key differentiators for data warehouses. With data warehouses growing in size, meeting performance and scalability requirements has been an incessant challenge for enterprises. EDW performance testing uncovers bottlenecks and scalability issues before go-live, thereby reducing performance-related risks and scalability concerns. EDW performance testing covers the following key aspects:

- Performance of the jobs that Extract, Transform and Load (ETL) data into the EDW

- Performance during report generation and analytics

- Scalability of the data warehouse

- Stability and resource utilization
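A simple way to picture the first two aspects is a timing harness: run a job, record its elapsed time, and compare it against an agreed SLA. The sample job and the 2-second SLA below are illustrative assumptions; a real test would invoke the actual ETL job or report query.

```python
# Sketch: time a job and check it against an SLA. The stand-in job and
# the SLA value are assumptions for illustration.
import time

def measure(job, *args):
    """Run a callable and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = job(*args)
    return result, time.perf_counter() - start

def within_sla(elapsed_seconds, sla_seconds):
    return elapsed_seconds <= sla_seconds

def sample_etl_load(rows):
    # Stand-in for an ETL load; counts the rows it "loads".
    return sum(1 for _ in rows)

result, elapsed = measure(sample_etl_load, range(100000))
print(result, within_sla(elapsed, 2.0))
```

The same harness, pointed at larger and larger input volumes, gives a first cut at the scalability aspect as well.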

Continue reading "Accelerating Business Intelligence through Performance Testing of EDW Systems" »

June 13, 2013

Testing for an Agile Project - Cool breeze in Summer


For a software testing professional, working on an agile project is no less than a breeze of cool air in the hot summer. The point I am trying to drive home is that for a tester who regularly works on projects run in the traditional model, the agile model is a very welcome change. Let me tell you why...

Continue reading "Testing for an Agile Project - Cool breeze in Summer" »

June 12, 2013

Top 4 cost hurdles while doing on premise testing

In my previous blog, I mentioned how on-premise testing introduces delays into the actual testing. In this post, I will uncover the expenses businesses incur while doing the same. Let's take a look at what eats away your hard-earned money.

Continue reading "Top 4 cost hurdles while doing on premise testing" »

May 28, 2013

Frustrated over testing delays? Cloud based testing can help you

Guest Post by

Nilanjan Ghosh, Project Manager, CORPIVS, Infosys

Testing on legacy datacenters has proven to be a time-consuming affair, given the extent of the bottlenecks it brings. It affects business delivery, as product releases get delayed. In this post, I highlight the major concerns that have forced the industry to think of workarounds and paved the way for cloud-based testing.

Continue reading "Frustrated over testing delays? Cloud based testing can help you" »

April 4, 2013

4DX - The Golden Experience

The sun's scattered rays are too weak to start a fire, but focus them with a magnifying glass and they will bring paper to flame in seconds. The same is true of human beings: once their collective energy is focused on a challenge, there is little they can't accomplish.

A common experience of most test management teams is that they are pretty good at setting goals and targets, but more often than not they lack a consistent approach to achieving them, or they try to focus on too many things at once.

We might have shared the same experience had we not been introduced to Franklin Covey's 4 Disciplines of Execution (4DX) framework through a two-day internal workshop conducted by Infosys. 4DX is a powerful methodology that helps translate business strategy into laser-focused action. This framework helped us accomplish a Wildly Important Goal (WIG) of increasing revenue share from specialized services by 7% y-o-y. We achieved this goal three months ahead of the target date.

Continue reading "4DX - The Golden Experience" »

April 1, 2013

Benefits realized through Effective Test Data Management


With a significant percentage of project effort being spent on Test Data Management (TDM) activities, its benefits are often questioned by clients. Some of the common questions include:

  • Why is a different team needed for provisioning test data for the applications being tested?

  • Will this result in a reduction in FTE count?

  • What benefits will be realized by having a TDM team?

  • What value will the TDM team bring to IT?


Continue reading "Benefits realized through Effective Test Data Management" »

March 21, 2013

The Testing Standard journey session @ QUEST 2013

Join Allyson Rippon, Executive Global IT Manager, UBS and me at this year's QUEST 2013 QAI Conference & Expo @ Chicago, when we share our Testing Standard journey. 

For more information on my session and the conference schedule, please visit the following link: http://www.qaiquest.org/2013/conference/the-standardization-journey-how-to-centralize-your-testing-standards/

What is a Testing Standard anyway?

Continue reading "The Testing Standard journey session @ QUEST 2013" »

March 7, 2013

Test Tool Selection for Functional Automation - The missing dimension

"How do you select the appropriate test tool for your functional testing?"

"Going by the listed features, the automation tool we invested in seems right. But the percentage of automation we could actually achieve is very low. What did we overlook at the time of tool selection?"

"Both my applications use the same technology. We achieved great success automating one of them and failed miserably with the other. What could be the reason?"

Continue reading "Test Tool Selection for Functional Automation - The missing dimension" »

February 11, 2013

8 Key Test Data Management Challenges

Today's uncertain business environment is witnessing budget cuts, high competition, job cuts and much more. In such a scenario, addressing Test Data Management (TDM) with the right approach will not only save huge effort and cost but also bring business agility and reduced time to market, with reliable test case coverage and lower test environment costs. According to analysts, the effort spent on TDM ranges anywhere between 12% and 14% of total effort, and for data-intensive applications it can exceed 21%. That is a phenomenal amount of effort and time spent on TDM activities that can be addressed.

Continue reading "8 Key Test Data Management Challenges " »

January 3, 2013

Future demands testers to broaden their areas of expertise

While scanning the list of best practices for traditional projects recommended by one of the top technology research firms, I realized that quite a few of them, like early involvement in the SDLC or early automation, have already been implemented in projects executed by Infosys. To me, this broadly meant we have already adopted what others foresee as a future trend. It also inspired me to think about the possible forms software testing might take in the near future.


In the current world, where every penny spent is expected to deliver returns, the near future, to me, will look like this:


1. Economic uncertainty across the world (with the possible exception of Brunei or Madagascar) will force governments and corporations to cut costs and squeeze more output from what they spend. From a software testing perspective, this may mean the testing team no longer limits itself to regular testing, but also forays into other SDLC aspects such as test data management, test environment maintenance, reporting performance issues, and ensuring optimal leverage and usage of testing tools as part of the testing scope.

2. Development and testing teams will be held accountable for the number of bugs/defects identified, and for those that slip through.

3. With more and more software testing tools available as freeware, tool usage may be priced based on benefits realized rather than on usage.

4. All facets of business, from banking to gaming, will embrace mobile applications, resulting in quicker development and deployment, and leading to shorter and more effective testing cycles.


To sum up, the preceding points indicate a common direction for software testing: the testing team will be made accountable for more SDLC activities than ever before, increasing the need for testers to equip themselves with enhanced domain and technical skills.

December 31, 2012

6 Steps to Test Games Effectively

Have you ever wondered how games are tested and delivered? An effective game testing strategy gives end users a hassle-free and responsive gaming experience. The gaming industry is growing exponentially due to increased app usage on mobiles and tablets. It is hard to define a standard strategy for game testing, and testing methodologies need to be tailored, as each game differs from the others. To start with, aspects like functionality, usability, multi-player functionality, regression, endurance, compatibility, localization/internationalization (L10N, I18N), performance, content testing, hardware and recovery scenarios have to be analyzed. The phases of game testing align closely with the standard Software Testing Life Cycle (STLC), and based on my experience, below is an approach to testing games effectively.

Continue reading "6 Steps to Test Games Effectively" »

December 12, 2012

What is a Test Factory Model and how can we gain the maximum value from it for SAP applications?

Let me start with why we need a test factory. With the ever-growing need for resources and the rising costs of running enterprise applications (such as SAP and Oracle applications), testing enterprise applications has become fairly complex and is a major contributor to burgeoning IT costs. The Test Factory is a unique concept and model that allows us to address this problem.


If an organization plans to implement the Test Factory model, it can turn to vendors who have the expertise to test enterprise applications in a better-governed, more cost-effective manner. At the same time, this model allows the organization's day-to-day operations to run effectively, peacefully and uninterrupted.


A recent POV - co-authored by practitioners from Diageo and Infosys - helps you to understand the various challenges faced by SAP-enabled organizations, and provides an in-depth look at the process of setting up a Test Factory as well as the benefits thereof.  The POV can be accessed at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/test-factory-setup.pdf  

November 30, 2012

Recommended structure for Organizational Security Assurance team


Security defects are sensitive by nature, are always raised as top-priority tickets, and are costlier than functional and performance defects. Apart from the business impact, there is the impact on the company's image, the cost of lost data, the loss of end-user confidence, and the compliance and legal issues that follow. With such high levels of risk associated with security defects, it is surprising that many organizations have no internal structure for security assurance.


Internal security assurance is needed in any organization to increase security awareness across the enterprise, to provide a structure for dealing with various security compliance aspects, and to use that structure to strengthen build and test processes. Setting clear goals, a reporting structure, defined activities and performance measurement criteria helps the security assurance team function smoothly. To learn about a team structure capable of providing an enterprise-wide security assurance service for web applications, read our POV titled "3-Pillar Security Assurance Team Structure for ensuring Enterprise Wide Web Application Security" at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/security-assurance-team.pdf.

November 5, 2012

Big Data: Is the 'Developer testing' enough?


A lot has been said about the what, the why and the how of Big Data. Considering the technical aspects of Big Data, is it enough for these implementations to go to production with only the developers testing them? As I probe deeper into the testing requirements, it is clear that 'Independent Testers' have a greater role to play in testing Big Data implementations. All the arguments in favor of 'Independent Testing' hold equally true for Big Data implementations. Beyond the 'Functional Testing' aspect, the other areas where 'Independent Testing' can add true value are:

· Early Validation of Requirements

· Early Validation of Design

· Preparation of Big Test Data

· Configuration Testing

· Incremental load Testing

In this blog, I will touch upon these additional areas and what the focus of 'Independent Testing' should be.

Continue reading "Big Data: Is the 'Developer testing' enough?" »

July 16, 2012

Testing BIG Data Implementations - How is this different from Testing DWH Implementations?

Whether it is a Data Warehouse (DWH) or a Big Data storage system, the basic component of interest to us testers is the data. At the fundamental level, data validation in both systems involves validating the data against the source systems, for the defined business rules. It is easy to think that if we know how to test a DWH, we know how to test a Big Data storage system. Unfortunately, that is not the case! In this blog, I'll shed light on some of the differences between these storage systems and suggest an approach to Big Data testing.
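The shared fundamental, validating target data against the source, can be sketched as a reconciliation of record counts and per-record checksums. The in-memory lists and MD5 choice below are illustrative assumptions; in a real Big Data system this comparison would itself run as a distributed job, which is one of the differences this post goes on to discuss.

```python
# Sketch: source-versus-target reconciliation, the common core of DWH
# and Big Data validation. Row layout and hash choice are assumptions.
import hashlib

def row_checksum(row):
    """Checksum a row by joining its fields with a delimiter."""
    return hashlib.md5("|".join(map(str, row)).encode()).hexdigest()

def reconcile(source_rows, target_rows):
    if len(source_rows) != len(target_rows):
        return "count mismatch: %d vs %d" % (len(source_rows), len(target_rows))
    src = sorted(row_checksum(r) for r in source_rows)
    tgt = sorted(row_checksum(r) for r in target_rows)
    return "match" if src == tgt else "checksum mismatch"

source = [(1, "alice"), (2, "bob")]
target = [(2, "bob"), (1, "alice")]   # same data, different order
print(reconcile(source, target))      # match
```

Sorting the checksums makes the comparison order-independent, which matters because neither ETL jobs nor distributed stores guarantee row order.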

Continue reading "Testing BIG Data Implementations - How is this different from Testing DWH Implementations?" »

June 13, 2012

Enhance an enterprise's testing maturity with the Enterprise QA Transformation Model

With reduced investments in IT and constant pressure to cut costs, IT teams have been looking for innovative ways to improve efficiency, reduce rework and optimize operations. This is where an organization's capability to ensure and control the quality of its complex IT ecosystem becomes critical. Further, this has necessitated that QA organizations evaluate their processes and capabilities for delivering quality.

Test maturity models were developed to enable assessment of current testing capabilities and processes. Over the years, several such models were developed, and many have been redefined repeatedly to catch up with the needs of the market. But the majority ended up rigid and certification-oriented, while the industry was looking for a comprehensive test maturity model that could be customized and adapted, and that facilitated improving testing capabilities selectively. Such a model is all the more relevant as more and more companies adopt global sourcing models, where the capability to deliver is not limited to one organization but rests on the collective capabilities of several.

In my latest POV, co-authored with Aromal Mohan, I discuss "The Enterprise QA Transformation Model", which has been developed to evaluate the current testing capabilities of an organization and provide a reference framework for their improvement. It is a continuous model that helps selectively strengthen the required capabilities, making the improvements more relevant to the model in which the organization operates. To know more, please click here:
http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/enterprise-QA-transformation-model.pdf. I look forward to your comments and feedback.

March 9, 2012

The Right Cloud Based QA Environment for your Business

Based on my interactions with enterprises, I can clearly see that most are keen on cloud adoption. But the first thing that perplexes them is how to evaluate and determine the appropriate cloud deployment that fits their business needs.

In an attempt to address these concerns, my latest POV discusses the various factors that need to be gauged in taking this decision: understanding the QA infrastructure requirements, the existing infrastructure availability, the application release calendar and the budget appetite. To know more, please click here: http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/cloud-based-QA-environment.pdf. As always, I look forward to your views and feedback.

March 6, 2012

Overcoming challenges associated with SaaS Testing

Today's tough economic environment has put a lot of pressure on organizations to deliver business applications faster and at lower cost. The rapid growth of the cloud, coupled with current economic constraints, has led to growing adoption of SaaS-based applications. SaaS-based applications help organizations focus on their core business rather than on non-core activities like managing hardware and building and maintaining applications. However, adopting SaaS demands comprehensive testing to reap all the benefits associated with it. In my earlier paper, I identified and described the challenges associated with SaaS testing (http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/saas-testing.pdf).

Continue reading "Overcoming challenges associated with SaaS Testing" »

January 20, 2012

Testing for cloud security - What is the data focus of QA teams (Part 2/3)

In my earlier blog on testing for cloud security (http://www.infosysblogs.com/testing-services/2011/12/testing_for_cloud_security-_wh.html), I discussed the security concerns of cloud adoption from an infrastructure standpoint. Now, let us look at the focus of cloud security testing from a data perspective. Enterprises are highly concerned about the security of their data in the cloud. They are well aware that any data security breach could lead to non-compliance, resulting in expensive lawsuits that could cause long-term damage to the organization's overall credibility.

Continue reading "Testing for cloud security - What is the data focus of QA teams (Part 2/3)" »

January 9, 2012

The Future of Testing in Financial Industry

The financial services industry in the US and Europe is undergoing rapid change due to technology advancement, digital convergence, and ever cheaper and newer channels of communication. A few decades back, a typical financial institution aspired only to be the best deposit, savings and loan organization in a particular geography. Today, however, financial institutions rely heavily on technology for growth and to extend their reach beyond geographical boundaries through new communication channels. Many financial firms are engaging in strategic mergers and acquisitions to diversify their product and service portfolios and to increase their global footprint. Increasing market share through innovative product offerings is the path most firms in this industry have adopted. The current business environment mandates that they keep pace with technological advancements (mobile platforms, browser standards and tablets) so they can meet the industry's growing business demands.


These global trends have opened up new challenges for QA practitioners: complex applications, higher end-user expectations, higher ROI demands, heterogeneous layers, compliance requirements, and mergers and acquisitions. The role of QA has matured, and organizations and projects now seek multi-skilled QA professionals with expertise in SOA automation QA, e-commerce, performance QA, data warehouse performance QA, mobile network performance QA, information security QA and so on.


How a multi-dimensional QA model can help address some of these challenges in building tomorrow's financial services enterprise is spelled out in my latest POV at http://www.infosys.com/IT-services/independent-validation-testing-services/Pages/financial-services-testing.aspx. Please share your comments, inputs and feedback; I look forward to them.

November 10, 2011

What are the challenges in SaaS Testing?

While researching which forms of cloud were seeing maximum adoption in the market, I learned that they were primarily of the Software-as-a-Service (SaaS) kind. SaaS is being adopted largely because businesses realize they can get on board quickly, with very limited upfront capital investment.


However, this marked increase in SaaS adoption has resulted in subscribing enterprises having higher expectations of their overall SaaS investments. This leads to a demand for "getting it right the first time", which brings the entire focus onto SaaS testing. SaaS testing comes with its own set of challenges, such as live upgrade testing, validating data integrity and privacy, validating data migration from existing systems or other SaaS systems, validating integration with enterprise applications, validating interface compatibility, and shorter validation cycle times.


To find out more about the challenges organizations face in SaaS testing, read my POV, co-authored with Shubha Bellave, at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/saas-testing.pdf and do share your comments and feedback.

November 8, 2011

How is Testing Cloud Based Applications Different From Testing on Premise Applications in QA Clouds

The cloud has been a unique, revolutionary force that has made enterprises, market analysts and end users go "gung ho" because of the impact it has created as Infrastructure as a Service (IaaS), Software as a Service (SaaS) and Platform as a Service (PaaS). Besides this, the cloud comes with various deployment models (private, public, federated and virtual private clouds), making it highly flexible and versatile for meeting business requirements successfully.


I have started to see organizations adopting clouds for their QA needs, primarily for two reasons: either they have a stream of cloud-based applications rolling out that they want to validate, or they want to overcome existing QA infrastructure limitations. Organizations often ask me questions like "Does our testing methodology need to evolve when we test applications on the cloud?" and "How is cloud testing different from traditional testing?"


Well, let's look at some answers to these commonly posed questions.



Testing On-Premise Applications in QA Cloud 


Organizations turn to cloud based QA environments to overcome their existing QA infrastructure limitations. In this scenario, traditional on-premise applications are tested by deploying them on QA clouds. The cloud is leveraged in an infrastructure-as-a-service (IaaS) model, typically through a private or public cloud deployment. The testing methodology does not vary in these cases when on-premise applications need to be tested for functional, performance and security requirements. The QA cloud provides the production-like infrastructure and testing tools needed for capacity planning and for an on-premise application's performance testing. When an on-premise application is tested on the cloud, it also needs a performance benchmarking exercise, where the application's performance in the on-premise production or staging environment is benchmarked against its performance in the pre-production QA cloud.
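Such a benchmarking exercise can be sketched as a simple comparison of response-time percentiles between the two environments. A minimal illustration in Python (the sample data and the 10% tolerance are illustrative assumptions, not prescribed values):

```python
def percentile(samples, pct):
    """Return the pct-th percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, max(0, round(pct / 100 * len(ordered)) - 1))
    return ordered[idx]

def benchmark(on_premise_ms, qa_cloud_ms, tolerance=0.10):
    """Flag a regression if the QA-cloud p95 response time exceeds the
    on-premise p95 by more than the tolerance (10% by default)."""
    base = percentile(on_premise_ms, 95)
    cloud = percentile(qa_cloud_ms, 95)
    return {
        "on_premise_p95": base,
        "qa_cloud_p95": cloud,
        "within_tolerance": cloud <= base * (1 + tolerance),
    }

# Illustrative response-time samples (ms) from the two environments
result = benchmark([120, 130, 125, 140, 150], [126, 138, 131, 147, 158])
print(result["within_tolerance"])  # True
```

In practice the samples would come from identical test scripts replayed against both environments under the same workload.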


Testing Cloud based Applications in QA Cloud


Organizations use different forms of clouds (IaaS, SaaS, PaaS), in various combinations, to roll out cloud based applications to gain a competitive edge. Testing cloud based applications includes three scenarios:


1)    A portion of the application is migrating to the cloud,

2)    The application has been completely migrated to the cloud, and

3)    The application is built entirely on the cloud


The testing methodology has to evolve in all these scenarios and needs to take into account the virtualized infrastructure, the network, the application business logic, data and the end-user experience. Testing cloud based applications requires business workflow testing, validation of exception-handling mechanisms, and simulation of failure and disaster recovery scenarios. When cloud and enterprise application integration scenarios are included, testing also needs to cover comprehensive integration testing, API testing and billing mechanism testing (in the case of SaaS applications). Scenarios that involve partial or complete migration of an application to the cloud need to be tested for data migration, data security and data privacy. The focus of testing cloud based applications needs to include validating cloud attributes like multi-tenancy, elasticity, security, availability, interoperability and metering in a multi-instance, loaded environment. Security validation calls for cross-site scripting and script injection testing, user access and roles testing, cookie and session isolation testing, and multi-tenancy isolation testing.


We need to remember that there is no single, ideal approach to cloud testing. This is primarily because, when an organization embarks on cloud testing, various factors like the cloud architecture design and the non-functional and compliance requirements need to be taken into account to ensure successful and complete testing.

November 3, 2011

Collaborative Testing Effort for Improved Quality

The collaboration amongst the business, development and testing teams can reduce the risk during the entire software development and testing lifecycle and considerably improve the overall quality of the end application. As a testing practitioner, I believe that the testing teams need to begin collaboration at an earlier stage as described below rather than the conventional collaboration during the test strategy phase:

·         During the requirement analysis phase the business/product teams need to collaborate with the development teams to validate the requirements.

·         The test data needs to be available earlier and the testing teams need to collaborate with business/product teams to validate the test data for accuracy, completeness and check if it's in sync with the business requirements spelled out.

·         Collaborate with the development team and share the test data which can be used in the unit/integration testing phases.

·         Collaborate again with the business teams to formulate a combined acceptance test strategy which would help reduce time to market.

·         Collaborate with the development team to review the results of unit testing/integration testing and validate them.

·         Collaborate with business/product teams to validate the test results of the combined acceptance testing.

Testing at each lifecycle stage has its own set of challenges and risks. If potential defects are not detected early, they cascade further down the SDLC. However, an experienced and competent test practitioner can identify these defects early, at the stage where they originate, and address them there. Below are some examples which reinforce this point.


·         Thorough static testing of the functional, structural, compliance and non-functional aspects of an application during the requirements phase can prevent up to 60% of defects from cascading into production.

·         Similarly, getting all the required test data (as specified by the business requirements) as early as the end of the requirements analysis phase instills a sense of testing early in the lifecycle, which improves test predictability.

·         Planning ahead for performance, stability and scalability testing during the system design phase can help reduce the costs of the potential defect incurred later on. Also, proactive non-functional testing (as required by business) contributes significantly for faster time to market.

·         Test modeling during the test preparation phase helps avoid the tight coupling of the system that is being tested with the test environment. This eventually helps in achieving continuous progressive test automation.

·         Collaboration with the development teams ensures that they have used and benefited from the test data shared by the testing teams. This collaboration helps the testing teams validate the architecture, design and code through simple, practical in-process validations: static testing of the functional, structural and non-functional aspects of the application.

·         Mechanisms which help predict when to stop testing are a key requirement during execution. One such mechanism is a stop-test framework built around an understanding of the application's defect detection rate.

All the approaches described above let testers save time and focus more on collecting the right metrics and maintaining dashboards during test execution. They also ensure that testing is not limited to just one phase but is a fine thread that runs across the entire SDLC, improving quality and reducing cost and time to market for all business applications.
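A stop-test framework of the kind mentioned above can be sketched as a simple rule over the defect detection rate per test cycle. The threshold and window below are illustrative assumptions; a real framework would calibrate them per application:

```python
def should_stop_testing(defects_per_cycle, threshold=2, stable_cycles=3):
    """Stop testing once the defect detection rate has stayed below the
    threshold for a given number of consecutive test cycles."""
    if len(defects_per_cycle) < stable_cycles:
        return False
    return all(d < threshold for d in defects_per_cycle[-stable_cycles:])

print(should_stop_testing([14, 9, 5, 1, 0, 1]))  # True: last 3 cycles below 2
print(should_stop_testing([14, 9, 5, 3, 1, 0]))  # False: a recent cycle hit 3
```

The point is not the specific numbers but that the stop decision becomes an explicit, reviewable rule rather than a gut call.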

The benefits of this collaborative approach are many. I have listed a few benefits based on my collaborative team experiences:

·         De-risks the entire software development process by embedding testing as an inherent part of each stage of the SDLC process.

·         Defects are found early in the life cycle, which reduces the total cost of fixing them at a later stage. The cost ratio between finding a defect at the requirements stage and finding the same defect in production can be as high as 1:1000.

·         Shortens the time to market by using this approach, which has a built-in self-corrective mechanism at each stage.

September 27, 2011

Infosys and Co-operative Banking Group @Iqnite UK 2011


Hi there! A little over a week remains for the UK leg of the Iqnite conference (http://www.iqnite-conferences.com/uk/index.aspx) to begin. I can't begin to explain how excited I am to be presenting again at this premier conference for software testing professionals. To me the conference provides a great opportunity to learn more of the latest and most relevant QA practices from other QA professionals. My association with the event goes back to 2009 when I presented a session on Progressive Test Automation. This year I am teaming up with Paul Shatwell of the Co-operative Banking Group to present a session on "The Co-operative Banking Group's Approach to a One-Test Service Transformation".  


Wondering what "One-Test Service" is?

Continue reading "Infosys and Co-operative Banking Group @Iqnite UK 2011 " »

September 26, 2011

Performance Testing for Online applications - The Cloud Advantage

Organizations have finally realized that building brand loyalty online contributes significantly to the overall brand value of the organization. In order to achieve this brand loyalty in the online space, organizations need to focus on two key elements - user experience and application availability.


Organizations can improve their online end user experience by conducting usability testing and by taking feedback from users to uncover potential usability issues. Usability testing helps identify deviations from usability standards and provides improved design directions as part of its iterative design process.


Uninterrupted application availability can be achieved by focusing on the performance aspects of the business application. To do so, the prime focus needs to be on performance throughout the application life cycle stages, right from requirements gathering, understanding the business forecasts, accounting for seasonal and peak workloads, capacity planning for production and ensuring right disaster recovery strategies like multiple back-ups across geographies, etc. All these need to be further coupled with the right performance validation approach.


Performance testing should not only focus on simulating the user load. It should also focus on simulating the critical business transaction and resource intensive operations, all under realistic patterns of usage. While certifying applications for performance, testing teams need to ensure that the user load factor takes into consideration the growth projections for the next five years at least, along with the peak seasonal user hits. This can help the organization ensure scalability of the application to handle not only peak traffic for the current year, but also online customer traffic for the next 5 years.
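The sizing arithmetic above can be sketched in a few lines: compound the current user base over the projection horizon and apply a seasonal peak multiplier. The growth rate and seasonal factor below are illustrative assumptions:

```python
def projected_peak_load(current_users, annual_growth, years, seasonal_factor):
    """Peak concurrent users to certify for: compound growth plus seasonal spike."""
    return current_users * (1 + annual_growth) ** years * seasonal_factor

# e.g. 10,000 users today, 15% annual growth over 5 years, 3x holiday spike
target = projected_peak_load(10_000, 0.15, 5, 3.0)
print(round(target))  # 60341
```

The output of such a projection becomes the user-load target for the performance test scripts, rather than today's traffic.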


While all this sounds good, the common client concern with such preparation is the need to set up production-like performance environments to enable thorough performance testing of online business applications. Setting up such an environment requires a huge CAPEX investment and, worst of all, the environment remains underutilized once the performance testing exercise is complete. Leveraging the cloud can help organizations quickly and effectively set up production-like performance environments and convert this CAPEX requirement to OPEX. This pay-as-you-go model of testing, in the form of cloud based environments and tools, is the modern way for an organization to be cost effective in the current economic scenario and achieve thorough, end to end performance testing of online business applications.


However, organizations need to realize that moving an application to the cloud does not mean access to infinite resources. Most organizations make this assumption while moving to the cloud, and it can prove very costly. Whether an application is on the cloud or on premise, it still needs to be designed to diligently handle application and availability failures. Even in the cloud, the organization needs to sign up for specific computing power and a certain amount of storage capacity for the anticipated peak user load. Any wrong forecast of these factors, or of the traffic growth pattern, can and will result in application unavailability for users. Further, whether on the cloud or not, a disaster recovery back-up plan is a must, and a multi-geo one at that. This would help avert business disruption in the event of an outage in a particular geography.

September 7, 2011

Enabling Effective Performance Testing for Mobile Applications

The mobile performance testing approach closely resembles other performance testing approaches. We just need to break the approach down to ensure all facets relevant to performance testing are noted and taken care of.

Understanding technical details on how mobile applications work

This is the primary step. Most mobile applications use a protocol (WAP, HTTP, SOAP, REST, IMPS or custom) to communicate with the server over wireless devices. These calls are transmitted via various network devices (e.g., routers and the gateways of the wireless service provider or ISP) to reach the mobile application server.

Performance test tool selection

Once we know the nitty-gritty of how the mobile application works, we need to select or develop performance tools which mimic the mobile application client traffic and record it from the mobile client or a simulator. Several tools are available in the marketplace to enable this - HP's LoadRunner, CloudTest, iMobiLoad, etc. Besides this, the mobile application provider will not have control over network delays; however, it is still very important to understand how network devices and bandwidths impact performance and the end user response time for the application in question. The Shunra plugin for HP LoadRunner, or Antie SAT4(A), have features that mimic various network devices and bandwidths.
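The effect of network conditions on end-user response time can be approximated with a back-of-the-envelope model before any tooling is in place. The sketch below is a simplification (real network emulators work at the packet level); all values are illustrative:

```python
def end_user_response_ms(server_time_ms, latency_ms, payload_kb, bandwidth_kbps):
    """Rough end-user response time: server processing + network round trip
    + time to push the payload through the available bandwidth."""
    transfer_ms = payload_kb * 8 / bandwidth_kbps * 1000
    return server_time_ms + 2 * latency_ms + transfer_ms

# Same app and payload, 3G-like vs Wi-Fi-like network conditions:
# latency and bandwidth, not server time, dominate the user experience.
print(round(end_user_response_ms(200, 120, 150, 1_000)))   # 1640
print(round(end_user_response_ms(200, 20, 150, 20_000)))   # 300
```

Even this crude model makes the case for testing under emulated network profiles rather than only on the lab LAN.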

Selecting the right monitoring tool


Once we have zeroed in on the load generating tool, we now need monitoring tools to measure client and server performance.

We can use DynaTrace, SiteScope or other APM (Application Performance Monitoring) tools to measure server side performance. These tools capture and display, in real time, performance metrics such as response times, bandwidth usage and error rates. If monitoring is also in place on the infrastructure side, we can capture and display metrics such as CPU utilization, memory consumption, heap size and process counts on the same timeline as the performance metrics. These metrics help us identify performance bottlenecks quickly, eliminating their possible negative impact on the end user experience.
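The headline metrics such tools surface are simple aggregations over raw request samples. A minimal sketch of the aggregation (the sample data is illustrative; a real APM tool streams these continuously):

```python
import statistics

def summarize(samples):
    """samples: list of (response_ms, ok) tuples captured during the load run."""
    times = [t for t, ok in samples]
    errors = sum(1 for _, ok in samples if not ok)
    return {
        "avg_ms": statistics.mean(times),
        "p95_ms": sorted(times)[max(0, int(len(times) * 0.95) - 1)],
        "error_rate": errors / len(samples),
    }

samples = [(110, True), (95, True), (130, True), (480, False), (105, True)]
report = summarize(samples)
print(report["error_rate"])  # 0.2
```

Note how a single slow, failed request drags the average up while the error rate exposes it directly; this is why dashboards show both.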

The performance of the mobile app client is also critical, due to resource limitations in CPU capacity, memory, device power/battery capacity, etc. If the mobile application consumes a lot of CPU and memory, it will take more time to load on devices, which in turn significantly impacts speed and the user's ability to multitask on the same device. If the application drains a lot of power/battery, that too reduces user acceptance. For this, app plugins can be developed to measure and log mobile client performance as well. We can install plugins on mobile devices and encourage users to use them while load is being simulated. Possible tools include WindTunnel, TestQuest, Device Anywhere, etc. The plugins can capture performance data and send it to a central server for analysis.


In a nutshell, with the right performance test strategy and tools in place, we can ensure effective mobile application performance testing. This ensures that the organization is able to deliver high performance, scalable apps to businesses, positively impacting top line growth.

August 18, 2011

The Importance of Service Level Agreements (SLAs) in Software Testing

It's pretty well understood in the software industry that Testing is a specialized area which helps organizations reduce risk and derive greater business value across the entire software development lifecycle. However many organizations continue to struggle with figuring out the best way to define service-level agreements (SLAs) and the outcomes that can govern testing relationships. Through my experience over the years I believe it's extremely important for customers to define SLAs upfront in order to ensure 100% alignment of goals between service provider and customer and to accelerate trust in relationships especially with first time partners.  

Before we go on to define the SLAs, it's important to define the Key Result Areas (KRAs). These are the broad areas in which the SLAs will be measured, such as governance, process, resources/staff and transition. Once these are defined, we can define the SLAs within each KRA. It's important to choose SLAs which are relevant to the engagement (managed service, co-sourced, staff augmentation) or the type of testing (functional/automation/performance/SOA etc.). A common mistake made while defining SLAs is not defining the criticality of each SLA. This matters because not all SLAs need the same level of criticality; some are more relevant than others, and hence a classification like critical, high, medium or low can be used. Once the level of criticality is assigned, we need to decide how the SLAs will be measured. I have invariably seen that customers are unsure about the measurement of SLAs, yet deciding the tools and the calculation methodology for measuring them is imperative. Finally, decide on the frequency of SLA capture (release-wise, monthly or quarterly) and how it will be reported (spreadsheet, SharePoint, document).

In any new engagement where SLAs are defined for the first time, there will invariably be questions about the targets like how do we determine these targets? In such situations, it's always important to define a "Nursery Period". The purpose of this "Nursery Period" is to benchmark the targets for those SLAs where a period of demonstration is required before it can be set. At the end of this exercise, all SLAs should be specific, quantifiable and measurable.

The commercial framework for a risk and reward model is the key component of SLA definition process. Before delving into commercials, it's important to decide how the SLA scores would be computed. Each individual SLA should be measured and a weighted score determined based on the SLAs criticality weighting. The individual weighted scores should then be averaged. The final average weighted score is then used to calculate the commercials "Debits" or "Credits". To make this less complicated it may make sense to include only the critical and high SLAs in determining the score. Working out modalities (frequency, process of payments) of "Debits" or "Credits" is the last leg in definition of R&R model. My recommendation is to implement separate governance for risk and reward model to facilitate a collaborative and transparent relationship for this key aspect. The governance framework should have clearly defined procedures for issue resolution and escalation to allow both parties to efficiently work through issues inherent in a risk and reward agreement.
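The scoring mechanics described above can be sketched as follows. The SLA names, criticality weights and debit/credit bands are hypothetical; per the suggestion above, only critical and high SLAs feed the score:

```python
# Hypothetical criticality weights; medium/low SLAs are tracked but not scored.
WEIGHTS = {"critical": 3, "high": 2}

def weighted_sla_score(slas):
    """slas: list of (criticality, achievement) where achievement is 0.0-1.0."""
    scored = [(WEIGHTS[c], a) for c, a in slas if c in WEIGHTS]
    total_weight = sum(w for w, _ in scored)
    return sum(w * a for w, a in scored) / total_weight

def debit_or_credit(score, credit_above=0.95, debit_below=0.85):
    """Map the final weighted score to the commercial outcome (illustrative bands)."""
    if score >= credit_above:
        return "credit"
    if score < debit_below:
        return "debit"
    return "neutral"

slas = [("critical", 0.99), ("critical", 0.90), ("high", 0.97), ("medium", 0.60)]
score = weighted_sla_score(slas)
print(round(score, 3), debit_or_credit(score))
```

Making the formula this explicit in the contract is what keeps the debit/credit conversation transparent at review time.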

To keep the SLAs relevant, it's important that they are reviewed regularly; I think they should be reviewed every quarter. The SLAs that no longer serve their purpose need to be eliminated, and the bar should be raised for SLAs which have been consistently met for consecutive periods.

Lastly, it is extremely important to create awareness about the SLAs amongst internal and external stakeholders and project participants. This is necessary so that everybody understands the SLAs and its objectives. Communication of the SLAs is very critical and essential to the successful execution and completion of the testing assignment at hand.

July 28, 2011

Service Virtualization - Completing the Cloud Story

Organizations that have applications in production are required to have at least 4-5 different sets of pre-production environments, such as System Testing, Performance Testing, User Acceptance Testing and Automated Regression Testing environments, to ensure 100% validation of the different sets of requirements associated with an application. This in all probability increases the CAPEX budget for the organization. Organizations typically consolidate, virtualize and share these infrastructures for validating applications across different Lines of Business (LoBs). But this exercise is also bound to carry significant OPEX, due to the costs incurred in having dedicated teams to manage the environments, rental costs and infrastructure costs. Also, even in these setups, testing teams are constrained by situations like waiting for access to expensive test tool licenses, legacy systems and external/dependent systems. To overcome these issues in traditional test environments, it is only natural to look at virtualizing external/dependent systems using techniques like Service Virtualization.

Let us consider a scenario where we have a Payments Processing Engine (PPE) which is currently undergoing changes and is hosted in a traditional QA environment. This PPE system needs to talk to two major external systems, a legacy system and a data warehouse, which are not currently available and are out of scope for testing. If the organization is to test the PPE system end-to-end, it will need to acquire access to the external systems. And unavailability is not the only constraint this situation presents: access to the legacy system is expensive and is made available in a 2-hour time window only, and the data warehouse system is not available in the pre-production environment at all. When there are such constraints and dependencies on external systems, delays in time-to-market and increased CAPEX requirements are bound to bring down the overall testing efficiency. The way out for organizations facing such situations is to adopt Service Virtualization, virtualizing services for all external/dependent systems, like the legacy and data warehouse systems in this example.

Today's market dynamics forces business to be more cost effective, agile and scalable to service ever changing market demands. The advent of cloud computing has made it possible for organizations to achieve the above mentioned points, in addition to helping organizations move from a CAPEX to OPEX business model. Though this movement to the cloud brings sizable benefits and cost savings, it doesn't however answer the question of dependencies on external systems. Organizations will need to spend huge amounts on setting up cloud images for these large external systems, making the entire process unfeasible. So, how can organizations do away with the issue of external system dependences in a cloud environment? This is where Service Virtualization comes in. With Service Virtualization, organizations can create virtual models of external dependent systems and bring them to the cloud as Virtual Services (VSE), with 24/7 availability and low cost.

Let us consider a scenario to understand the applicability of Service Virtualization in a Cloud environment.  Currently, we have an Order Management System (OMS), hosted in a cloud based environment, undergoing changes. This OMS system in turn needs to talk to 3 major external systems - Mainframe, ERP and Databases - that are not on the cloud and are out of scope for testing. If the organization is to test the OMS along with 3 external systems, then they will need to spend huge amounts in setting up the external systems - Mainframe, ERP and Database, in the cloud. This will result in higher CAPEX for the organization, which could very well blunt the cloud benefits of Cost Saving and Optimized IT spending. With Service Virtualization, the organization can host the OMS application in a virtual machine in the cloud, while the external/dependent systems - Mainframe, ERP and Databases, can be modeled and used as Virtual Services (VSE) implementations in the cloud. Thus by applying Service Virtualization all the external/dependent systems are provisioned at a fraction of the overall external system setup costs. With Service Virtualization, Organizations can achieve goals of elastic capacity consumption. Organizations can also cut down significant wait times associated with effort of infrastructure acquisition, installation and setup, and with accessing of external /dependent systems from months/weeks to a few minutes.
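A virtual service can be as simple as a lightweight stub that returns canned responses in place of the real external system. A minimal sketch in Python's standard library (the endpoint path and payload are hypothetical, standing in for a Mainframe/ERP interface; commercial service virtualization tools add recording, data-driven behavior and protocol support on top of this idea):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Canned responses standing in for the real external system
# (paths and payloads here are illustrative, not a real product API).
CANNED = {
    "/orders/1001": {"orderId": 1001, "status": "SHIPPED"},
}

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_error(404)
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

def start_virtual_service():
    """Bind to an ephemeral port and serve canned responses in the background."""
    server = HTTPServer(("127.0.0.1", 0), VirtualServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_virtual_service()
    url = f"http://127.0.0.1:{server.server_port}/orders/1001"
    print(json.loads(urlopen(url).read())["status"])  # SHIPPED
    server.shutdown()
```

The application under test is simply pointed at the stub's address instead of the real mainframe, which is exactly the substitution a VSE performs at scale.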

Thus with Service Virtualization, Organizations can achieve their overall goal/ objective of moving to the cloud and being responsive and relevant to the ever changing market dynamics and demands.

July 26, 2011

The move towards a modern QA organization

Managing the agility of application development has grown in importance as a lot of my customers are competing to dominate the market. The role of Quality Assurance teams to drive these changes is becoming more and more important. The QA teams should not only do testing but also champion the process so that our customer's software products and processes adhere to the requirements and plans......

Continue reading "The move towards a modern QA organization" »

July 7, 2011

Service Virtualization - Where is it applicable?

Service Virtualization is the new buzzword today. My belief is that most organizations understand the applicability of Service Virtualization, but the real challenge they face is understanding how it works.

Bearing this understanding, I landed at HP Discover 2011, Las Vegas, Nevada. Besides the blistering heat what hit me was that clients were still grappling with the very applicability of service virtualization.

A sample conversation follows.

A QA director from a leading Fortune 500 company walked up to me as I was talking about Service Virtualization and said, in a rather loud tone, that Service Virtualization would never work for him as he was not a small-time product company. He added that Service Virtualization could at best be used for critical application testing in large organizations like his, and nothing else. My response was a simple question: "Sir, do you know where service virtualization can be used in your organization?" After a long pause, he replied that he definitely saw no use for it in day-to-day testing, at least.

I didn't know where to begin correcting his misconceptions. Over a long coffee break, I did share with him that Service Virtualization was useful to both small and large organizations. It was equally relevant to complex and day-to-day testing programs. Most importantly, every testing professional needs to understand its relevance to be able to decide if he needed it or not.

Continue reading "Service Virtualization - Where is it applicable?" »

December 24, 2010

Why Do Testing Metrics Programs Fail?

Test management Guide Series-Topic 3

Why Do Testing Metrics Programs Fail?

1.0  Introduction


We all know that statistical data is collected for metric analysis to measure the success of an organization and for identification of continuous improvement opportunities, but not many organizations are successful in addressing this objective.

Senior management love to see metrics but fail to understand why their teams are unable to produce meaningful metric trends that provide clear, actionable steps for continuous improvement. It was surprising to see many client senior executives start asking me for help in establishing meaningful metrics programs for their organizations. Several questions popped into my mind: what makes a testing metrics program such a challenge? We collect and present tons of data, so what exactly is missing? Everyone right from the CIO wants a peek at various metrics to understand the progress made, so why do so many testing organizations fail to produce them in a meaningful way? All metric definitions and techniques are available from multiple sources, so why do organizations still struggle to collect, analyze and report them?

After thinking about these questions, I asked myself a few fundamental questions and started looking at several metrics reports. I was not surprised by what I found:

·         Most of the testing metric reports were internally focused. They were trying to show how well the testing organization was operating, with no actionable next steps

·         Metric reports had several months of data with minimal analysis findings and no actionable items for each stakeholder

·         99% of the action items identified were only related to actions taken by the testing organization

2.0  Revisiting the Golden Rules of Testing Metric Management:


While conducting a detailed study, I felt it worth re-establishing a few golden rules of metric management which we all know but often fail to implement.

Rule #1: The metrics program should be driven and reviewed by executive leadership at least once a quarter. In reality, very few organizations have a CIO dashboard to understand the reasons for quality and cost issues, in spite of repeated escalations from business partners about production defects, performance issues and delayed releases.

a)         Metrics collection is an intense activity which needs the right data sources and acceptance by the entire organization (not just testing). Unless driven by senior leadership, metrics collection will yield limited actionable items

b)         Requires alignment from all IT and business organizations to collectively make improvements

Rule #2: Ensure all Participating IT and Business organizations are in alignment with the metrics program


·         The testing organization shows that it met 100% of its schedule adherence milestones, yet the project may still have been delayed by six months

·         Testing effectiveness may be reported as 98%, while production defects caused by issues outside the testing organization's control result in a couple of million dollars' worth of production support activity

·         Business operations and UAT teams are focused on current projects, while metrics provide trending information. All teams should be aware of the improvement targets set for the current year or current release


Rule #3: Ensure the necessary data sources are available and data collection processes are in place to ensure uniform data collection.

Uniform standards are not followed among IT groups, making data collection a challenge:

·         Effort data is always erroneous

·         Schedule changes multiple times in the life cycle of the project

·         Defect resolution is always a point of conflict

·         Defect management systems are not uniformly configured


·         There is no infrastructure or process to view and analyze production defects

·         More than 40% of defects are due to non-coding issues, but the defect management workflow does not capture this

·         Requirement defects are identified in the testing phase, making them 10 times more expensive, but there is no means to identify and quantify these trends

Rule #4: Ensure metrics are trended and analyzed to identify areas of improvement. Metrics data is garbage unless you trend it and analyze it to identify action items. Trend analysis should help:

·         Identify improvement opportunities for all participating IT groups

·         Set improvement targets

·         Plot graphs that give meaningful views

·         Compare performance against industry standards

While it is not always easy to follow all 4 golden rules of testing metrics, an attempt to achieve compliance with them will significantly improve the success of a testing metrics program.
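Trending, in its simplest form, means fitting a slope to a metric sampled over time and asking whether it is moving in the right direction. A minimal sketch (the monthly defect counts are illustrative):

```python
def trend_slope(values):
    """Least-squares slope of a metric sampled at equal intervals."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Monthly production defect counts: a negative slope means the metrics
# program is actually driving improvement, not just reporting numbers.
monthly_defects = [42, 38, 35, 30, 28, 25]
print(trend_slope(monthly_defects) < 0)  # True: the defect trend is improving
```

The same slope, compared against an agreed improvement target, gives each IT group an objective answer to "are we getting better?".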

3.0  What should I look for in my testing metrics?


While I continued to look for the reasons behind the failure of metrics programs, I realized that most testing organizations select metrics which are internally focused, and analysis is carried out either to identify issues within the testing organization or to defend its actions. Below is an attempt to summarize suggestions on how you can make better use of metrics data.

For each metric below, I have listed the common perception, the root causes behind it, and suggested action items.

Testing Effectiveness

Perception: The testing organization reports effectiveness as 98%, yet many production defects are reported.

Root causes:

·  Absence of a process to collect production defect data

·  System testing and UAT executed in parallel

·  Lack of effective requirement traceability

·   Lack of an end-to-end test environment to replicate production issues

·   Extensive ambiguity in business requirements, resulting in production defects due to unclear requirements

Suggested action items:

·  Publish both the testing organization's effectiveness and the project testing effectiveness to clearly highlight overall project issues

·   Establish a process for the testing team and the production support team to analyze every production defect and take the necessary corrective action

·   Associate a dollar value lost with every production defect to draw senior management's attention towards infrastructure and requirements management improvements
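One way to publish both views is the widely used defect detection effectiveness ratio: defects caught in testing as a percentage of all defects, including those that escaped to production. A minimal sketch, with purely illustrative numbers and a hypothetical per-defect cost:

```python
def defect_detection_effectiveness(test_defects, production_defects):
    """Percentage of all known defects caught before production."""
    total = test_defects + production_defects
    if total == 0:
        return 100.0
    return 100.0 * test_defects / total

# Illustrative numbers: 196 defects caught in testing, 4 escaped to production
print(round(defect_detection_effectiveness(196, 4), 1))  # → 98.0

# Attaching a dollar value to escaped defects, per the action item above
COST_PER_PRODUCTION_DEFECT = 10_000  # hypothetical average cost per escape
print(4 * COST_PER_PRODUCTION_DEFECT)  # → 40000
```

Computing the ratio once with only testing-phase defects and again including production defects makes the gap between the 98% claim and project reality visible.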



Test Case Effectiveness

Perception: There are no industry-standard target recommendations by testing type, so few action items are identified even though the data is reported regularly.

Root causes:

·   Targets are difficult to set

·   Lack of actionable decision points when test case effectiveness increases or decreases

·   No requirements traceability matrix (RTM) is prepared, so test case quality and coverage are a concern

Suggested action items:

·   Based on test case preparation and execution productivity, set targets for test case effectiveness. This ensures the right effort is spent on test planning and execution to achieve the target testing effectiveness

·   If test case effectiveness is below the threshold for three consecutive releases, optimize your test bed and eliminate test cases that are not yielding any defects

·   If test case effectiveness is below the threshold, validate the applicability of risk-based testing

·   If test case effectiveness is higher than the threshold, unit testing might be an issue; recommend corrective action

Schedule Adherence

Perception: The project is delayed by 6 months, but the testing team has met its schedule.

Root causes:

·   Project managers re-baseline the schedule after delays at every life cycle stage, and testing reports schedule delays only when they are caused by testing issues

·   Testing wait time increases with schedule delays at each stage, driving up testing cost, yet testing does not quantify wait times

·   The testing team does not use historic wait times in estimates, resulting in testing budget overruns and an increasing testing-to-development spend ratio

Suggested action items:

·   Track schedule milestones across life cycle stages and collect metrics on delays for every milestone

·   Create action triggers when the total number of delayed milestones crosses 10% of the overall schedule milestones

·   Calculate the additional QA spend due to missed schedule milestones and report it in the metrics

·   Establish operational level agreements (OLAs) between the various teams to identify schedule adherence issues
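The 10% milestone trigger above can be sketched as a simple check over tracked milestones. The data representation (planned and actual dates as day offsets) is an assumption for illustration:

```python
def delayed_fraction(milestones):
    """milestones: list of (planned_day, actual_day) offsets; fraction delayed."""
    delayed = sum(1 for planned, actual in milestones if actual > planned)
    return delayed / len(milestones)

def schedule_trigger(milestones, threshold=0.10):
    """Raise an action trigger when delayed milestones exceed 10% of the total."""
    return delayed_fraction(milestones) > threshold

# Illustrative data: 2 of 10 milestones slipped, so the trigger fires
milestones = [(d, d) for d in range(8)] + [(8, 12), (9, 15)]
print(schedule_trigger(milestones))  # → True
```

The slip days in the delayed tuples can then be costed at the QA team's daily rate to report the additional spend called for above.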


Requirement Stability

Perception: Senior management believes that achieving requirement stability is a myth and that requirements will continue to change for various reasons, since 90% of the industry has been facing this issue for decades.

Root causes:

·   Requirement stability is a broad measure with several contributing parameters (ambiguity, mistakes in articulation, incompleteness, additions of new requirements, edits, deletions, etc.)

·   These parameters are not captured and reported in separate categories by life cycle phase. Requirements can change due to design issues, testability issues, cost of development and implementation, operational challenges, changes in mandates, organizational policies, etc.

·   The effect on each testing phase is not quantified

·   Lack of forums to discuss requirement stability index issues and improvements

·   Lack of a requirements management tool to automate traceability and versioning

Suggested action items:

·   Report the number of times requirement review sign-off from the test team was missed

·   Report test case rework effort due to requirement deletions, edits, and additions

·   Report additional test case execution effort due to requirement deletions, edits, and additions

·   Report the number of defects due to requirement issues (missing, ambiguous, incomplete) and quantify the effort needed by development and testing to correct them

·   Quantify the SME bandwidth required to support testing activities; if requirements are elaborate, SME bandwidth needs should reduce over time

·   Report the number of CRs added and their effort, to measure scope creep

·   Report the number of times a requirement change was not communicated to the testing team
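The categorized reporting suggested above amounts to tallying requirement changes by type and summing the rework they caused. A minimal sketch over a hypothetical change log (categories and hours are illustrative):

```python
from collections import Counter

# Hypothetical requirement-change log: (category, test_case_rework_hours)
changes = [
    ("addition", 16), ("edit", 4), ("deletion", 2),
    ("edit", 6), ("addition", 24), ("ambiguity-fix", 8),
]

# Report change counts by category, per the action items above
counts = Counter(category for category, _ in changes)
print(dict(counts))

# Quantify total test-case rework effort driven by requirement churn
rework_hours = sum(hours for _, hours in changes)
print(rework_hours)  # → 60
```

Trending the "addition" count across releases gives a direct scope-creep measure, while the rework-hours total puts an effort figure in front of senior management.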






Defect Rejection Ratio and Other Defect Metrics

Perception: Senior management has no idea what to do with this data at the end of the release.

Root causes:

·   Reporting at the end of the release gives no opportunity for course correction in the ongoing project

·   Metrics like defect severity, defect ageing, and defect rejection call for immediate corrective action

·   Threshold points for each of these defect metrics have not been established, and automated trigger points calling for action are not defined

Suggested action items:

·   Report these metrics weekly rather than at the end of the release

·   Create automated trigger points; for example, if the number of critical defects goes beyond 5, an action point is triggered
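The automated trigger points above could be sketched as a weekly check over the open defect list. The limits (5 critical defects, 14 days of ageing) are illustrative, not prescribed values:

```python
def defect_triggers(open_defects, critical_limit=5, ageing_limit_days=14):
    """open_defects: list of (severity, age_in_days). Limits are illustrative."""
    triggers = []
    critical = [d for d in open_defects if d[0] == "critical"]
    if len(critical) > critical_limit:
        triggers.append("critical-defect count exceeded")
    if any(age > ageing_limit_days for _, age in open_defects):
        triggers.append("defect ageing limit exceeded")
    return triggers

# Six open critical defects and one aged major defect fire both triggers
defects = [("critical", 3)] * 6 + [("major", 20)]
print(defect_triggers(defects))
```

Run weekly against the defect management system, this turns end-of-release reporting into in-flight course correction.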



Test Case Preparation & Execution Productivity

Perception: Senior management has no idea what to do with this data at the end of the release.

Root causes:

·   Reporting at the end of the release gives no opportunity for course correction in the ongoing project

·   Targets are difficult to set because it is hard to establish a unit of testing work

Suggested action items:

·   Report these metrics weekly rather than at the end of the release

·  Create automated trigger points

·  Report reductions in test execution productivity due to:

o   environment downtime

o   wait time due to delays in builds

o   wait time due to delays in issue resolution

o   rework effort during test case preparation

o   rework due to lack of understanding and application knowledge
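Separating the causes listed above from raw productivity can be sketched by netting downtime and wait time out of the testing window. The numbers below are illustrative:

```python
def effective_productivity(cases_executed, total_hours, downtime_hours, wait_hours):
    """Test cases executed per productive hour, net of downtime and wait time."""
    productive_hours = total_hours - downtime_hours - wait_hours
    return cases_executed / productive_hours

def lost_productivity_pct(total_hours, downtime_hours, wait_hours):
    """Share of the testing window lost to the causes listed above."""
    return 100.0 * (downtime_hours + wait_hours) / total_hours

# Illustrative week: 320 cases executed in a 100-hour window,
# with 10 hours of environment downtime and 10 hours of build wait time
print(effective_productivity(320, 100, 10, 10))  # → 4.0
print(lost_productivity_pct(100, 10, 10))        # → 20.0
```

Reporting the lost-time percentage alongside raw productivity shows senior management how much of any dip is outside the testing team's control.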


4.0  Conclusion


The table above illustrates the complexity involved in a metrics program: collecting, analyzing, and reporting metrics. There are clearly two types of metrics:

·         Metrics which impact the overall project and hence are very important to the senior management of the organization

·         Metrics which are internally focused on the testing organization

Testing effectiveness, schedule adherence, and requirement stability help identify issues that impact entire projects and result in project delays, production defects, and budget overruns, all of which are very important to the CIO. The CIO metrics dashboard should include these metrics and the reasons for failure. As the quality goalkeeper for the entire organization, the testing organization should clearly identify actionable improvements for all stakeholders for the metrics program to be successful.

To keep tabs on the effectiveness and efficiency of the testing organization itself, internally focused metrics such as defect severity, defect ageing, number of defects, testing productivity, cost of quality, percentage automated, and many more are important. Clear steps should be defined to identify actionable items. These action items are all internal to the testing organization and should be used to identify internal improvement targets and initiatives.



July 22, 2010

Modern testing techniques - How to realize the full business benefits of the Cloud?

Cloud-based applications and services provide benefits such as increased business agility and reduced total cost of ownership to your customers. But at the same time, they increase your business risk and can give you sleepless nights. Here are some of the concerns I often hear from my customers:

- Will I have enough visibility into what is happening inside the cloud?
- How do I ensure, on an ongoing basis, that the promised outcomes meet customer SLAs?
- If changes happen inside the cloud, will they affect my business and the interdependent systems?

Choosing the right testing model will help you mitigate these risks. In my view, the following are some of the key shifts required from your traditional testing model to be successful:

1. More focus on non-functional aspects like performance, availability, and security.
2. Choosing a testing methodology that addresses an agile, component-based development life cycle.
3. An upgraded, skilled testing team with a good understanding of cloud-based validation requirements.
4. Using a set of tools to address the different validation requirements.
5. Providing automated test coverage to build agility into testing.
6. Having a good test environment strategy to predict the behavior of your applications on a hosted cloud.

June 2, 2010

Agile Test data administration - let's crack the code

 In my last post (http://www.infosysblogs.com/testing-services/2010/05/emerging_areas_of_testing.html) I summarized a few of the emerging trends within the testing arena. Now, I wanted to take a specific thread from those trends and elaborate a little more on "Agile Test Data Administration".

If there is one thing resonating again and again within the testing space, it is QA teams and administrators wanting to get their arms around test data: to gain confidence in the preparation and usage of test data within the testing life cycle, and to maximize efficiency gains by moving towards an agile way of administering and using it.

Continue reading "Agile Test data administration - let's crack the code" »

May 24, 2010

How lean is too lean? - Making testing lean!

One of the topics often discussed during my interactions with customers is how current testing can be made leaner, smarter, and more cost effective. While most of these customers agree that testing is a necessity, they are worried about the cost. Some have gone ahead and cut their testing staff and budgets, and this has adversely impacted the quality and timelines of their products and services. Can organizations go too far with cost and people cuts?

Continue reading "How lean is too lean? - Making testing lean!" »