Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of this profession.


December 17, 2020

Content Validation and Accessibility Testing

Content validation is a type of static testing that ensures the quality and accuracy of your content. It includes validating the data present in a document, web page or email against the data present in an application, file, or database.

Accessibility testing is the practice of making web and mobile apps usable by as many people as possible. This includes making content accessible to those with disabilities like vision impairment, hearing disabilities, and other physical or cognitive conditions.

There are 285 million visually impaired people across the globe, of whom 39 million are blind. The impairments include color blindness, total blindness, cataracts, blurred vision, etc. There are 466 million people in the world with disabling hearing loss. (*Source: World Health Organization). Another factor to keep in mind is the mental health and intellectual spectrum, as content should be accessible to all people with varying processing capabilities.

Since a lot of content is conveyed in the form of documents, we need to ensure that the content curated in a document is validated and accessible. We cannot eliminate the errors, but we can reduce them by implementing some content validation techniques.

In this blog, we have covered requirements, challenges, and key focus areas for the validation and accessibility of content across PDF, Word, email, web, and mobile channels.

Content Validation

PDF Content Validation

In the current scenario, there is a huge dependency on PDF as an end-user deliverable. PDF is a file format used to present and exchange information in an offline mode and is designed for human readers, not machine processing. Hence, automating the content testing of PDFs is a challenge. To validate, business entities need to be extracted from a PDF file and compared against the target.


Challenges faced:

  • Predefining the structure of a PDF according to continuously varying business demand
  • Lack of automation in the content validation resulting in manual content validation for large number of documents
  • Difficulty in extracting the data from image-based documents
  • Identifying business entities from image-based documents
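As a sketch of this extract-and-compare step, the snippet below assumes the PDF's text has already been pulled out with a library such as pdfminer.six; the entity patterns and field names (`invoice_no`, `amount`, `date`) are illustrative assumptions, not a fixed schema:

```python
import re

# Hypothetical entity patterns - adjust to the business entities in your PDFs.
ENTITY_PATTERNS = {
    "invoice_no": re.compile(r"Invoice\s*#?\s*:\s*(\w+)"),
    "amount": re.compile(r"Total\s*:\s*\$?([\d,]+\.\d{2})"),
    "date": re.compile(r"Date\s*:\s*(\d{2}/\d{2}/\d{4})"),
}

def extract_entities(pdf_text):
    """Pull business entities out of text already extracted from a PDF
    (e.g., with a library such as pdfminer.six)."""
    entities = {}
    for name, pattern in ENTITY_PATTERNS.items():
        match = pattern.search(pdf_text)
        if match:
            entities[name] = match.group(1)
    return entities

def validate_against_target(entities, target):
    """Compare extracted entities with the source-of-truth record and
    report mismatches field by field."""
    mismatches = {}
    for field, expected in target.items():
        actual = entities.get(field)
        if actual != expected:
            mismatches[field] = {"expected": expected, "actual": actual}
    return mismatches

sample_text = "Invoice #: INV9001\nDate: 17/12/2020\nTotal: $1,250.00"
target_record = {"invoice_no": "INV9001", "amount": "1,250.00", "date": "17/12/2020"}
print(validate_against_target(extract_entities(sample_text), target_record))  # {} -> no mismatches
```

In practice, the target record would come from the application, file, or database the PDF is being validated against, and the extraction step would need OCR for image-based documents.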

PDF Validation Workflow:

Word Content Validation

Microsoft Word is the most popular word processing application in the world. We may need to verify, store, or perform validations on the data in a Word file. A correctly spelled but invalid value can make a document unusable. We can use macros, scripting, etc. to validate the data to some extent.


Challenges faced:

  • Identifying co-related data as well as business entities for textual data validation in a Word document
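To illustrate the "correctly spelled but invalid" case, the sketch below runs a rule-based check over text assumed to be already extracted from a Word document (extraction itself could be done with a library like python-docx); the `Department` field and its master list are hypothetical:

```python
# A minimal sketch of rule-based validation for text extracted from a Word
# document. The field names and reference data below are illustrative.

VALID_DEPARTMENTS = {"Finance", "Marketing", "Operations"}

def validate_paragraphs(paragraphs, reference):
    """Flag values that are well-formed (correctly spelled) but invalid,
    e.g., a department not present in the reference master data."""
    issues = []
    for i, text in enumerate(paragraphs):
        if text.startswith("Department:"):
            value = text.split(":", 1)[1].strip()
            if value not in reference:
                issues.append((i, f"unknown department '{value}'"))
    return issues

doc = ["Title: Quarterly Report", "Department: Finanse", "Department: Finance"]
print(validate_paragraphs(doc, VALID_DEPARTMENTS))  # [(1, "unknown department 'Finanse'")]
```

A spell checker would pass "Finanse" in some locales; only a check against reference data catches it as an invalid business value.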

Email Content Validation

Just like any other content validation, email content validation is a very important part of testing, as we handle important official data through email. Verification of email content is critical to avoid any miscommunication.

Checkpoints to be considered for email validation:

  • Testing for unknown errors related to HTTP error codes or non-working/broken links present in the email
  • Missing images due to script-related issues
  • Spell checks, web standard checks, etc.
  • Correct navigation and easy-to-read paragraphs; page loading should be validated and carefully tested with different performance checks
  • Copyright and trademark symbols should be correctly placed and checked
  • Testing the subject line of the email is very important
  • Checking the template and basic design of the email, as it is the major attraction of an email


Challenges faced:

  • Manual testing for email content validation

Web Content Validation

Web content validation includes extracting data from webpages and validating it against targets like a database or a file. It can also be a webpage-to-webpage comparison, which comes up during website migrations. QA teams compare and verify the data against the old website, finding broken links and validating static content, dynamic content, images, videos, etc.

With the use of the latest content management solutions for building websites and mobile apps, enterprises are looking to provide a good digital experience to customers via the front end while managing marketing content and digital assets on the backend.


Challenges faced:

  • Managing the content across online mediums and simultaneously syncing well with the in-store mediums is a challenge
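The webpage-to-webpage comparison during a migration can be sketched with a simple text diff over the visible text of each page; the 0.95 similarity threshold below is an illustrative assumption, not a standard:

```python
import difflib

def compare_pages(old_text, new_text, threshold=0.95):
    """Compare the extracted text of the old and new versions of a page
    and report whether they are 'close enough', plus a line-level diff."""
    ratio = difflib.SequenceMatcher(None, old_text, new_text).ratio()
    diff = list(difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(), lineterm=""))
    return ratio >= threshold, ratio, diff

old = "Welcome to Acme\nContact us at 1-800-555-0100"
new = "Welcome to Acme\nContact us at 1-800-555-0199"
ok, ratio, diff = compare_pages(old, new)
print(ok, round(ratio, 2))
```

In a real migration test, `old_text` and `new_text` would come from fetching and stripping the two pages; the diff output points testers straight at the changed content.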

Content Accessibility

PDF Accessibility

PDF is the most widely used document format in the world, so it should be ensured that documents are accessible to people who are blind, have low vision, are deaf or hard of hearing, or have other cognitive impairments.

To make the document accessible for people who rely on assistive technologies, the content should be structured and tagged properly.

Web Content Accessibility Guidelines (WCAG) 2.0 and PDF/UA are common accessibility standards. PDF/A is an ISO-standardized version of the Portable Document Format (PDF) specialized for the archiving and long-term preservation of electronic documents.


Challenges faced:

  • PDF readers available in the market do provide an option to check the accessibility of a document, but users must opt for the pro version to use it
  • The other challenge is batch processing: a tool that can process multiple documents simultaneously is required

Word Accessibility

Microsoft Word, just like the entire Microsoft Office Suite, provides a built-in Accessibility Checker tool that helps identify potential accessibility concerns. The Accessibility Checker task pane will show:

  1. Accessibility Errors
  2. Warnings
  3. Tips to repair the errors

Additional information on specific issues is available at the bottom of the task pane.


The Accessibility Checker tool does not guarantee a fully accessible document or comprehensive usability of the document for individuals with disabilities. Some documents might present accessibility challenges that need to be addressed manually.

Email Accessibility

Email campaigns hold great value today. How impactful can an email campaign be if it can't be read and understood by people with disabilities?

Emails should be designed keeping best practices in mind like:

  • It should support screen-reading devices
  • It should have alternate text for images, which a screen reader or voice assistant can read out
  • Visual content should be checked for color-blindness friendliness
  • It should be accessible on mobile and IoT devices
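The alternate-text checkpoint can be automated with a small standard-library parser; a sketch:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags in an email body that lack the alternate text a
    screen reader or voice assistant relies on."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Empty alt="" is also flagged here; decorative images may
            # legitimately use it, so real checks need a review step.
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "?"))

checker = AltTextChecker()
checker.feed('<img src="hero.png" alt="Spring sale banner"><img src="logo.png">')
print(checker.missing_alt)  # ['logo.png']
```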

Web Accessibility

Web accessibility means that the content present on the web, in the form of websites, tools, and technologies, should be designed and developed so that people with disabilities can access it. People should be able to perceive, understand, navigate, interact with, and contribute to the web.

Web accessibility encompasses all disabilities that affect access to the Web, including:

  • Auditory
  • Cognitive
  • Neurological
  • Physical
  • Speech
  • Visual

The Web Content Accessibility Guidelines (WCAG) 2.0 are based on four principles: Perceivable, Operable, Understandable, and Robust. Conformance is further differentiated into three levels:

Level A - The most basic web accessibility features

Level AA - Deals with the biggest and most common barriers for disabled users

Level AAA - The highest and most complex level of web accessibility
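One concrete place where the AA/AAA distinction shows up is the WCAG 2.0 contrast-ratio requirement (4.5:1 for AA, 7:1 for AAA on normal text), sketched here in Python:

```python
def relative_luminance(rgb):
    """Relative luminance per WCAG 2.0, for an sRGB color given as 0-255 values."""
    def channel(c):
        s = c / 255
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05), with L1 the lighter color."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def wcag_level(ratio, large_text=False):
    """AA requires 4.5:1 (3:1 for large text); AAA requires 7:1 (4.5:1)."""
    aa, aaa = (3.0, 4.5) if large_text else (4.5, 7.0)
    if ratio >= aaa:
        return "AAA"
    if ratio >= aa:
        return "AA"
    return "fail"

print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 2))  # 21.0 (black on white)
print(wcag_level(contrast_ratio((119, 119, 119), (255, 255, 255))))
```

Checks like this are what automated accessibility scanners run against every text/background color pair on a page.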

Challenges faced:

  • To make the content accessible to a wider section of the audience
  • To check conformance with accessibility standards
  • To get repair recommendations for the content, and insights on the same

Content on Mobile Devices

With the heavy content consumption on mobile devices these days and the increased usage of smart IoT devices, it has become mandatory to make content accessible. There is also a need to make content uniform across devices - laptop, desktop, smartphone, tablet, smart watch, etc.

Some of the pre-requisites for improving the mobile accessibility include:

  1. Responsive design - Content should fit the layout of the mobile screen and adjust to the orientation of the device
  2. Transcript - Alternative text for pictorial content
  3. User input - Making it easier for the user to interact with the content and participate actively
  4. Aesthetics - Color contrast, visibility, font type and style, font color, etc. should be customizable per the user's needs
  5. External device support - Content should be designed and edited in such a way that any external device, like a screen reader or voice assistant, can assist users with disabilities in accessing the content


Content validation is a wide topic that spans multiple channels. Each area has its own set of challenges, and automating the testing of content varies from one area to another. Natural language processing is a good choice for validating textual content.

The requirement for web content validation has increased because managing content across online mediums while simultaneously syncing with in-store mediums has become a tough task. Instead of validating content manually, automation can be expanded in this area, as many parts of content validation testing are still performed manually by testing teams.

Technology should assist in interpreting the content, be it in tabular, image, audio, or video format; all of it should be made accessible.

Managing content well is the right thing to do. As an added advantage to any business, content validation and accessibility will enhance the user experience and bring more customers, trust, and credibility to the business.



Deepak Ruchandani- Senior Associate Consultant

Divya Vettukattu Valappil- Technology Lead




November 20, 2020

Making Enterprise Test Automation a Possibility using UiPath

Energizing the core of enterprise business with AI-driven automation is the key to success for all progressive business houses. Combining the power of Robotic Process Automation (RPA) and related cognitive capabilities is giving a new direction to enterprise test automation. UiPath has been at the forefront of this journey. The new UiPath Test Suite (the StudioPro advanced IDE, Test Manager, and Test Robots) provides a platform for all enterprise testing needs, be it software testing or even testing RPA itself. Its current success gives us confidence that the future of enterprise test automation will be driven by RPA-based platforms.


May 19, 2020

Data Fabric - The Futuristic Data Management Solution

Global research and advisory firm Gartner has identified the top data and analytics trends for 2020, which will have significant transformative potential in the next two to five years. Data Fabric is one of the most prominent trends. With the enormous growth of both structured and unstructured data from smartphones, IoT devices, and digital channels, there is a need to process large amounts of data, mine it, analyze it, and make it accessible. Data Fabric is a method of understanding large amounts of data traversing cloud systems.

In this blog, we will understand what a Data Fabric is, the Data Fabric stack, and a few use cases.

A. What is Data Fabric

Data Fabric is a unified architecture, and a set of data services running on that architecture, that helps organizations manage their data across on-premise, cloud, and hybrid cloud systems. Data Fabric is a single, unified platform for data integration that simplifies data management across platforms to accelerate digital transformation.


  • Connects to platforms using pre-packaged functions and connections
  • Integrates and manages data from on-premise and cloud environments
  • Supports batch and real-time data streams
  • Provides data quality, data enrichment, and data governance capabilities
  • Supports API development and integration


B. The Data Fabric Stack includes the following layers:

  • Data Collection & Storage: Ingest and integrate data, events, and APIs from any source, on-premise or in the cloud
  • Data Services: Manage services at this layer including data governance, data protection, data quality, and adherence to compliance standards
  • Transformation Layer: Clean and enrich batch and real-time data to enable informed decisions
  • Analytics/Sharing Layer: Realize data value by making it available internally and externally via self-service capabilities, analytics portals, and APIs
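The flow through these layers can be sketched as composable stages; all record fields, quality rules, and thresholds below are illustrative assumptions, not a specific product's API:

```python
# Toy end-to-end flow: records are ingested at the collection layer, pass
# data-quality checks at the services layer, are enriched at the
# transformation layer, and are exposed via a simple analytics view.

def ingest(*sources):
    """Collection layer: merge records from any source (batch or stream)."""
    return [record for source in sources for record in source]

def quality_gate(records):
    """Services layer: drop records failing basic data-quality rules."""
    return [r for r in records if r.get("customer_id") and r.get("amount", 0) > 0]

def enrich(records):
    """Transformation layer: add derived fields for downstream analytics."""
    return [{**r, "tier": "high" if r["amount"] >= 100 else "standard"} for r in records]

def analytics_view(records):
    """Sharing layer: a self-service style aggregate over the clean data."""
    return {"count": len(records), "total": sum(r["amount"] for r in records)}

on_prem = [{"customer_id": "C1", "amount": 120}, {"customer_id": None, "amount": 50}]
cloud = [{"customer_id": "C2", "amount": 80}]
view = analytics_view(enrich(quality_gate(ingest(on_prem, cloud))))
print(view)  # {'count': 2, 'total': 200}
```

A real data fabric replaces each stage with managed services and connectors, but the layered hand-off is the same.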

C. Successful Use Cases

  • One of the world's leading pizza companies, with both delivery and carry-out operations, utilizes a data fabric to maintain its competitive advantage. It allows ordering pizza from a plethora of devices including TV, smartwatch, smart car, etc., resulting in 25 TB of data from 100,000 structured and unstructured data sources. Using its data fabric, the company gathered and analyzed data from its POS systems, multiple supply chain centers, and digital channels including text messages, Twitter, and Amazon Echo

  • A leading pharma company applied AI to weed identification, enabling farmers to apply the exact solution needed to kill the weed species. It developed an app that used machine learning and artificial intelligence to match photos that farmers uploaded to the app. This resulted in better choice of seed variety, better application of crop protection products, and optimal harvest timing

  • A leading insurance company is utilizing a data fabric to store and analyze claim data - claim reports, incident data, police reports, claim history, claim details, counterparty details, etc. This has helped settle claims faster and also make policies more compelling and price them competitively


In a world where technology is changing everyday life, digital transformation tops the strategic agenda of most organizations and their leaders. To be successful in the digital transformation journey, data is the lifeline - enabling new customer touch points, innovative business propositions, and optimized operations. Data Fabric enables businesses to achieve these by offering connectors for hybrid systems, advanced data integration capabilities, and analytical capabilities. Demand for data fabric will get stronger as organizations look to stay on top of emerging technologies and new trends to stay competitive, stay relevant, and maintain their business edge.

September 16, 2019

UI vs UX: Revisiting the age-old debate

With technological advancement reaching the common man's hand in the 21st century, everybody seeks to experience technology without spending much time or effort. Mobile and web consumers nowadays expect quick, consistent navigation and a seamless experience. Hence the growing emphasis on professional UI/UX design in software applications.

While we realize the immense importance of a visually appealing and user-friendly application experience, UI and UX are terms that are generally used interchangeably in the software world. As a matter of fact, the terms are closely linked in the software design landscape.

UI is not UX

By definition, UI or User Interface is the graphical layout of an application which a user interacts with. This includes buttons, input controls, screen layout and every micro-interaction. UI designers create the look and feel of a user interface for an application.


UX, or User Experience, determines how easy or difficult it is to interact with the user interface elements of the application. This is the main reason people generally confuse UI and UX and use the terms interchangeably.

Granted, it is completely fine to use UI/UX together in software design, wherein UX designers are also concerned with the application UI to ensure smooth navigation and a seamless application experience. However, it should be understood that UI is just one of the salient elements of UX, as shown in Fig. Designers work on both user interface and user experience design for a customer-friendly application.

When it comes to application testing, UX/UI are mostly covered during the user acceptance testing phase of the SDLC. While teams do realize the importance of UI testing early on (along with functional testing) to avoid defects percolating into later tests, usability testing (UX and UI teamed up) is generally scheduled after/with integration tests to accommodate application agility. As a result, teams end up doing highly expensive rework due to last-minute customer feedback on supposedly less important non-functional aspects like user interface and experience.

UI testing: Test early test often

Validating seamless user experience may seem more relevant toward the end of the application testing process; however, validating the UI against interface, design, and navigation requirements needs to be taken up much earlier.

With the rise in customer-centric business requirements, it is thus prudent that UI tests be planned early and run repeatedly until all functional and non-functional requirements are met. UI automation scripts come in handy when planning repetitive tests like performance, load, and device/browser compatibility. Early performance or compatibility testing provides capacity planners and infrastructure architects with early warnings of potential problems with the scalability of the architecture. The UI layout and navigation may be volatile during early stages of application development; teams must carefully isolate application UI and functionality to enable independent tests for better results. Automation scripts should be used where functional or UI requirements are stable. QA techniques in early forms can be applied to usability or design testing even before the UI is integrated with functionality. Automated regression tests should be conducted as often as possible throughout the development course, not just as part of final QA activities or just before system integration.

A volatile UI may be a ticking bomb toward the end of the application lifecycle, which may adversely affect the user experience offered by the application. It is thus wise to stub out non-functional testing, especially UI, for early defect detection and to avoid rework after integration tests.

Happy testing!

September 3, 2019

Winning the Test Automation game

Enough has been said about writing better tests, optimizing automation scripts, or planning test cycles. Teams can choose from a plethora of test accelerators available in the market depending upon the features and automation maturity offered. However, with frequent changes to application behavior and business requirements - release planning, test maintenance, and test criteria - the selected automation tools cannot cope with the pace at which testing workflows change over time.

Hence it is prudent to consider the maintainability aspect during the engineering cycle of the automation solution[i]. Having said that, it is equally important for testing teams to lay out a viable plan with realistic automation goals and also accommodate incremental automation.

Lay out an Automation Roadmap

Project teams often thrive on the "automate whatever possible" mantra, and therefore end up addressing only the pertinent challenges with minimalistic automation, ignoring the possible troublemakers. Automation tests may work wonders for progression cycles; however, once teams get into regression, they start realizing the side effects of not having proper test maintenance in place. While they come up with corrective measures to improve regression planning, reuse automation tests, or even correct them, the overhead is tough to crack.

The major flaw lies in the haphazard adoption of automation in pockets, wherein the tools in use may address only some aspects of the process workflow. The rest is either carried out manually or using other tools. Teams generally take the help of macros, client-side stored procedures, or scripts to make different tools or manual processes work together. The lack of end-to-end workflow support leads to issues like flaky tests and unmanaged automation, and hence depleting ROI.

It is thus essential to plan test automation meticulously, with an incremental roadmap and test traceability as top priorities. Provision to accommodate incremental automation - like new tools to address complex application features, or leveraging the latest technologies for better coverage - will help testing teams achieve better results over time.

Pick the right ingredients

While the benefits of test automation are proven, with a variety of mature scripting tools available, testing teams are still struggling with test maintenance debacles. Correcting the approach may help automation teams engineer a sustainable framework, but to realize the best possible results, testing teams must use the right ingredients in the first place.

During the test planning phase, testing teams tend to focus more on leveraging application changes to ensure the best possible test coverage. Automation test cases keep piling up due to a lack of regular optimization, with zero traceability and usability. Leveraging the relationship between test assets (test data, automation scripts) available from previous cycles and application objects, functions, and their properties could augment automation planning and the self-healing of test scripts. A calculated impact analysis of application changes on test scripts thus helps testing teams reuse, optimize, and write better automation tests.
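The impact-analysis idea can be sketched as a simple mapping from automation scripts to the application objects they touch; all script and object names below are hypothetical:

```python
# Which scripts need review when application objects change? Keeping this
# mapping current is what makes reuse and optimization tractable.

SCRIPT_OBJECT_MAP = {
    "test_login.py": {"login_button", "username_field", "password_field"},
    "test_checkout.py": {"cart_icon", "pay_button"},
    "test_search.py": {"search_box"},
}

def impacted_scripts(changed_objects, script_map=SCRIPT_OBJECT_MAP):
    """Return scripts that reference at least one changed object, so teams
    can reuse the rest untouched and focus rework where it matters."""
    return sorted(
        script for script, objects in script_map.items()
        if objects & set(changed_objects)
    )

print(impacted_scripts({"pay_button", "username_field"}))
# ['test_checkout.py', 'test_login.py']
```

In a real framework, the map would be derived automatically from the scripts' object repositories rather than maintained by hand.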

Additionally, test data preparation in pockets also leads to unmanaged automation workflows. It is high time that due diligence was put into preparing test data for application testing using appropriate channels. Automation tools available for test data preparation and workflow management could be leveraged for test data traceability and consistency, especially in SIT environments.

Putting it all together

The responsibility of providing a viable test automation solution lies in the hands of the automation engineering team; however, it is the implementation team that can make the best of the available offerings and make the test automation process a success.

[i] https://www.infosysblogs.com/testing-services/2019/05/building_a_sustainable_test_au.html

July 14, 2019

Building a Sustainable Test Automation Solution

Test automation is an essential part of QA processes in the software testing industry. Once a mere optimization tool supporting manual testing, test automation has now become a primary driver in QA. However, successful test automation is much more than just writing code to de-manualize a step-wise process.

During a digital transformation journey, everything may seem pretty straightforward while using automation frameworks and scripts. In reality, this automation success is short-term. Less than a year into the implementation, many teams get pulled into the vicious circle of automation maintenance. Issues like flaky test results, changes in the expected behavior of the system, and environment/infrastructure changes diminish your ROI from a test automation framework[i]. It is thus essential to realize that the success of automation solutioning, especially in the software testing landscape, is more about avoiding mistakes than just getting it right!

The need of the hour is to look beyond the surface and come up with a futuristic, self-healing, and sustainable test automation solution bedecked with best practices and technology. There isn't a checklist for "right automation", but "quick solutions" are certainly expensive, or nearly impossible, to maintain. Consider the following tips while embarking on your test automation journey. It may not be a cakewalk, but it will definitely have long-term maintainability.


  1. Simplify
    Testing requirements are as vast as application development. Automation tests are expected to match the pace of application complexity as features mature. However, flexibility in the system should not be so syntactically complex that it bogs down the user. An ideal solution should truly serve the testing goals while being fluid enough to handle real-world testing complexities.
  2. Modularity & Reusability
    The testing approach/type may differ per project, depending upon factors like application type and life-cycle process. Automation components must be tailored so that they are non-cohesive to the landscape diversity. These modules can then be reused as common automation assets across multiple projects. Such a system eases test maintainability and enhances traceability.
  3. Handle the dynamic nature of the application
    Identifying frequent changes and dynamic elements in an application are the two major challenges in the real world of test automation. Most test automation frameworks are unable to identify these dynamics, so test planning becomes ineffective and may lead to defects. Hence, a provision for identifying application changes and updates to dynamic objects and properties is vital for effective test automation.
  4. Centralize test services wherever possible
    Testing as a Service (TaaS) is an outsourcing model: a one-stop solution for dynamic test assets and on-demand test tools and environments, requiring minimal domain knowledge and solution expertise.
    It is recommended to offer software testing as a service over the cloud, especially in projects where extensive automation and short execution cycles are involved. The components can be used on the fly, per subscription, using centralized infrastructure.
  5. Domain-flavored automation
    While testing an application, a tester must think like an end user. Especially in the Banking, Financial Services and Insurance (BFSI) and telecom domains, it is essential to know working procedures and domain keywords to write and execute tests better. Similarly, for an automation solution, a distinct edge in domain knowledge is vital to ensure maximum coverage of the functional and non-functional aspects of an application under test.
  6. Build intelligent automation
    Automation is not a one-time solution but a process. A smart automation solution should ideally be self-learning and adaptive. The amalgamation of AI/ML in QA automation helps induce traceability in progression tests and self-healing in regression tests. Alternatively, for less sophisticated continuous automation delivery, code-less test automation can be explored.
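The self-healing idea mentioned above can be sketched as a prioritized locator fallback; the `find_element` callable below stands in for a real UI driver and is an assumption, not a specific framework's API:

```python
# Try a ranked list of locator strategies and fall back when the primary
# one stops matching, recording which strategy "healed" the lookup.

def self_healing_find(find_element, locators):
    """Try locators in priority order; return the element and the
    locator that actually matched."""
    for strategy, value in locators:
        element = find_element(strategy, value)
        if element is not None:
            return element, (strategy, value)
    raise LookupError(f"no locator matched: {locators}")

# A stub 'driver': the element id changed after a release, but its CSS class survived.
page = {("css", ".submit-btn"): "<button>"}
def fake_find(strategy, value):
    return page.get((strategy, value))

element, used = self_healing_find(
    fake_find, [("id", "submitBtn"), ("css", ".submit-btn")])
print(used)  # ('css', '.submit-btn')
```

A real implementation would also log the healed locator so the script's primary locator can be updated, closing the maintenance loop.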
To sustain test automation, an appropriate framework with the right mix of infrastructure and technology is vital; it is equally essential to streamline testing processes at the practice level. Success in test automation requires immaculate planning and design work. Remember, automation in testing is not just a fancy UI to perform test steps; it should be aimed at building a solution that has long-term maintainability and traceability. Hence, a sustainable automation solution that suits real-world testing problems is a must!

June 19, 2019

Test Automation Debacles

In the era of digital transformation and spurring competition, organizations are joining the automation bandwagon without a second thought - especially in software testing, where automation has acquired an important place in addressing the needs of agile and continuous testing processes. While the benefits of test automation are well proven, with plenty of mature scripting frameworks available in the market, the death knell to the automation journey comes when testing teams start to struggle with test maintenance failures.

The Mayhem

We studied a few QA projects closely, from test planning to execution, only to realize the damage that unmanaged test automation can do to automation ROI.

Based on process maturity and the application under test, testing teams use various tools and frameworks, implementing automation at varied levels. There are tools and techniques that offer automation in pockets and enable teams to realize instant benefits. However, shortsighted test automation is unable to keep up with the pace at which application features evolve. With teams going agile, short test cycles bedecked with amateur testing practices add to the debacles, lowering the automation test maturity graph, while the application under test continues to evolve at exponential speed. Therefore, the gap between automation tests and application features keeps increasing at an alarming rate over time, as shown in Figure 1. With frequent changes in the application, poorly written tests, and unmanaged "quick" automation test cycles, teams get caught in a vicious cycle of maintenance costs and declining test coverage. This upsurge in test maintenance leads to regression defects, diminishing ROI from the automation.


As a matter of fact, project teams land in a worse situation with test maintenance during automation cycles than during manual ones! We observed teams spend more time fixing test scripts - almost by a factor of 200 compared to what they spend on manual tests. That explains the steep decline in automation ROI in Figure 1.

The mistake lies in..?

We have blamed automation practices and application dynamics enough. Test automation experts are already looking into streamlining test automation and devising ways to leverage application changes to plan tests better. Did it really help? I don't think so! It may slow down the damage, but the irreversible destruction caused by haphazard test automation is realized sooner or later. The primary cause lies in the expectations. Testing teams do realize that end-to-end automation is an incremental process, or, for that matter, that 100% automation may not be achievable, due to which end users' expectations from test automation solutions stoop to a minimal level. That's where the problem starts. It's the automation engineering that needs to be addressed!

We may standardize tools or processes for a testing team to adhere to, wherein test automation could be implemented in pockets and benefits realized momentarily. However, teams must realize that tool benefits are only as good as the features offered and the automation coverage, which may turn into a nightmare in the long term with no test maintenance available. Hence it is now necessary to look beyond test automation and address gaps in how automation tools and accelerators are engineered, to offer better and more sustainable automation.

You may find my recommendations on picking the right ingredients and building a sustainable test automation solution in coming blogs.

March 31, 2019

RPA Performance Testing

In today's rapidly changing technology landscape, new groundbreaking trends are emerging every day. Some of today's key trends driving financial services industry imperatives are:

1. Robotic Process Automation (RPA)
2. AI and digital assistants such as chatbots
3. Blockchain
4. Big Data

RPA has created a lot of buzz in the industry, and organizations are reaping immense benefits from implementing it. As per McKinsey, "110-140 million FTEs could be replaced by automation tools and software by 2020". RPA implementation has necessitated strong testing support, as failures can be very expensive in the later stages of development. One of the challenges organizations face is the identification of bottlenecks and hotspots. As per the IBM World Testing Report, 65% of organizations face challenges related to performance testing.


While organizations are reaping RPA benefits, it is equally important to ensure the performance of RPA processes is up to the mark and meets the 3S mantra (speed, scalability, and stability).

Before delving deeper into RPA performance testing challenges and solutions, let's understand the typical RPA landscape.

  • RPA landscape


As seen in the diagram above, RPA possesses immense capability for integration with a varied landscape. It can be easily integrated with legacy, web-based, API-based, mainframe, and many other applications. RPA tools also promote reuse by "exposing" their learnings to a shared library which can be used by other bots. RPA interacts with different systems via screen scraping, emails, OCR, APIs, etc., replicating user actions.

  • Performance Testing areas

Having understood the landscape, let's focus on the key areas that performance testing should cover.

1. Capacity-related issues when concurrent jobs are scheduled by robots

2. Tasks completed per bot in a given time

3. Licensing and bot utilization -

  • Licenses - Monitors the total number of acquired robot licenses
  • Robot utilization vs. capacity - Monitors the percentage of acquired robot licenses that are utilized in production 

4. Hourly/daily variability in robot usage

5. Elastic scalability - Dynamically scaling hundreds of robots up and down to ensure RPA meets user demands

6. Complete ecosystem performance - Along with RPA processes, we need to focus on each application in the ecosystem.
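Several of these metrics - licenses acquired, robots in use, per-bot busy time - can be derived directly from the stats the RPA monitoring console exposes. A minimal sketch of that calculation (the `BotStats` fields and sample numbers are hypothetical, not any vendor's API):

```python
from dataclasses import dataclass

@dataclass
class BotStats:
    """Snapshot of one robot's activity, as read from the RPA monitoring console."""
    bot_id: str
    busy_minutes: float      # time spent executing jobs in the observation window
    window_minutes: float    # length of the observation window

def utilization(stats: list, licensed_bots: int) -> dict:
    """Derive the licensing/utilization focus areas listed above (illustrative only)."""
    per_bot = {s.bot_id: s.busy_minutes / s.window_minutes for s in stats}
    active = len(stats)
    return {
        "licenses_acquired": licensed_bots,
        "licenses_in_use": active,
        "license_utilization_pct": round(100.0 * active / licensed_bots, 1),
        "avg_bot_busy_pct": round(100.0 * sum(per_bot.values()) / active, 1),
    }

metrics = utilization(
    [BotStats("bot-1", 45, 60), BotStats("bot-2", 30, 60)],
    licensed_bots=4,
)
print(metrics)  # e.g. 2 of 4 licenses in use -> 50.0% license utilization
```

Tracking these numbers hourly/daily also gives the variability data point (item 4) for free.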

  • Challenges faced

Now that we understand the focus areas, let's look at the inherent performance testing challenges faced by RPA. They are:

1. Dissimilar technologies: As seen in the RPA landscape, each application under RPA execution may belong to a different technology. We need to ensure that each component meets performance goals both in isolation and in the end-to-end (E2E) ecosystem.

2. Performance testing tool availability: The diverse landscape adds complexity; no single performance testing tool can support the varied needs of the ecosystem. For RPA systems there are no record-and-playback mechanisms available, while for RPA back-end systems we have to explore appropriate COTS/commercial tools based on protocol support, via proofs of concept, knowledge sharing, etc.

3. Test environment: The performance testing environment may not be an exact production replica due to cost or other factors. We need to plan a realistic workload that caters to the scaled-down version and any other dependencies, to achieve the desired results within the ecosystem.

4. Monitoring solutions: Similar to the tool availability challenge, only a narrow set of monitoring solutions exists to monitor the platform, detect performance issues, and analyze bottlenecks. We have to explore COTS/open-source tools to cover the varied technology landscape.

5. Continuous delivery pipeline: Current RPA solutions are mostly commercial, and RPA engineers have no open-source options available due to proprietary binary file formats. This should likely change down the line as RPA adopts open standards; Infosys AssistEdge RPA community edition is certainly a revolutionary step in this direction.

6. Unavailability of RPA back-end/interacting systems: Since the complete RPA ecosystem is a complex one, there are chances that one of the interfacing systems may perform poorly or be down temporarily.

How do we overcome these challenges? What strategy should we adopt? The solution lies in sociability testing.

  • Sociability Performance Testing

Sociability testing focuses on the core RPA process and any systems interacting with RPA. Refer to the diagram below.


  • Key aspects to look at -

1. Tools and technology - The tools used will vary and can be a combination of open-source and COTS systems. We need to assess the complete technology landscape and consider two separate areas here: RPA vs. other IT systems.
For RPA there is no specific performance testing tool, but we can collect critical stats by observing the monitoring console - for example, process run time, number of records processed, computing units used, license usage, etc. So the monitoring console is currently our best bet to fine-tune RPA processes.
For other IT systems, we can explore open-source tools such as JMeter or COTS tools such as Micro Focus Performance Center, NeoLoad, etc.
The key is end-to-end ecosystem testing, which ensures accurate stats and stable systems.

2. Utilizing a strong APM - An APM tool such as Dynatrace or AppDynamics will need to be installed in order to get detailed system metrics and transaction response times on downstream IT systems, and it can help in baselining transactions. These tools can be used to monitor the infrastructure on which RPA is hosted, as well as back-end/interfacing systems.

3. Test data - For setting up test data, you can look at RPA itself: the system under test can be leveraged to automate the creation of the required test data.

4. Service virtualization - Service virtualization using tools such as CA Service Virtualization, Parasoft Virtualize, etc. can help emulate the behavior of various interacting components. It may not be possible to leverage this solution in all situations, but it should help cut down the testing cycle wherever possible.

5. Establishing a CoE - A performance testing Center of Excellence (CoE) will play a crucial role, as multiple teams are involved in E2E testing. Establishing proper processes and governance models will ensure testing is done in minimal time and at lower cost.

To summarize, RPA is itself an automation process and is scriptless, so scripting it with another automation tool may not work. Monitoring is therefore our focus area, along with workload design for testing in pre-production. It is like a batch run where the workload is initiated by RPA itself, while other tools are used for monitoring performance.

March 28, 2019

Service Virtualization using Mock Server


Service virtualization is a technique for integrating a mock server into a test suite to remove dependencies on real back-end systems or external party systems from the test environment. It is an ideal solution for Test Driven Development (TDD) and Behavior Driven Development (BDD) teams who want to quickly test the application and API services to find the major problems.

Service virtualization is best suited to microservices-based, service-oriented, and cloud-based architectures. It is an important component of the DevOps toolchain.

Problem Statement

Consider a compact view of a microservices-based architecture in which the application communicates with real back-end systems through a number of API calls to receive responses. For instance, in banking applications, some of the important REST API calls are accounts, payments, transactions, etc.

Here is the list of problems with this kind of application infrastructure:

- There is no dedicated environment for automation testing, UAT, and performance testing. The environment is shared between all the teams, which causes delays.

- The environment is frequently down due to deployment releases and server configuration issues.

- As data differs between automation testing and performance testing, test data setup is also a big challenge for teams.

- Tests are brittle rather than robust, which means no reusability; hence full test coverage cannot be achieved due to environmental issues.

 Implementation of Service Virtualization

In my recent assignment, our QA team was struggling with test coverage issues in automation testing, unexpected environmental issues, performance-related issues, and more, because the real back-end systems were owned by third-party vendors and were not accessible to our teams.

With these problems, our teams were blocked from performing any testing operations, causing unexpected delays in production releases that impacted the project schedule and delivery.

To overcome these challenges, we created and implemented a mock-server-based service virtualization solution.

Proposed Solution
a.   Introduction of Solution

- This is a WireMock server or virtual-service-based environment model, one way of solving the dependencies and issues above. Virtual services or mocks allow you to decouple testing from the real back-end systems and provide an independent environment to each testing team, completely resolving the problems described above.
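To make the idea concrete, here is a minimal Python sketch of such a mock server, analogous to (but far simpler than) a WireMock stub setup. The endpoints and canned JSON payloads are hypothetical banking examples, not the actual stubs from the engagement:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for the real back-end (illustrative data only).
STUBS = {
    "/accounts":     {"accounts": [{"id": "AC-1001", "balance": 2500.0}]},
    "/payments":     {"payments": []},
    "/transactions": {"transactions": [{"id": "TX-9", "amount": -40.0}]},
}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = STUBS.get(self.path)
        self.send_response(200 if body is not None else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        payload = body if body is not None else {"error": "unknown endpoint"}
        self.wfile.write(json.dumps(payload).encode())

    def log_message(self, *args):
        # Keep the console quiet during test runs.
        pass

def run(port: int = 8089) -> HTTPServer:
    """Start the mock server on a daemon thread and return the server handle."""
    server = HTTPServer(("localhost", port), MockHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Because testers own the stub data, they can reshape responses for edge cases (empty accounts, error payloads) without touching the real back end - the key benefit described above.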

b. Application of Solution

Here are the advantages of using service virtualization over the traditional approach:

- Test coverage improved to nearly 100% and unexpected environment issues were avoided; hence test quality improved.

- All QA teams use a similar environment, independent of other teams.

- Test data is easy to create and handle in an optimized way, as per the business requirements.

- Test development is robust, with fewer issues.

- There are no issues with environment deployment and configuration.

- The service virtualization model is more agile than traditional models.

- Little or no cost in the development and implementation of the mock server.

- Flexible enough to fit any kind of application architecture.

- Enables testers to act as developers by manipulating the output responses according to their needs.

- It is a very quick solution to the issues related to real back-end environments.

- In our engagement, this approach reduced manual effort and time by about 90%; it is a very effective solution for the business.

 Future Direction / Long-Term Focus

- Service virtualization, especially for large software projects, can dramatically reduce cost.

- Enhancing the practical reusability of service virtualization reduces future development efforts.

- Similar testing practices can be implemented in other contexts, such as cloud-based and service-oriented architectures.

Results / Conclusion

We believe this kind of approach will help teams in various upcoming engagements and produce remarkable results.

Continue reading "Service Virtualization using Mock Server" »

March 25, 2019

Role of Artificial Intelligence in Performance testing and Engineering

A typical performance test starts with analyzing the application UI and creating test scripts. After that, users hit the application server and the load testing tools generate dashboards indicating response time, throughput, CPU utilization, memory utilization, etc.

In the era of AI (Artificial Intelligence)-powered software, performance engineers should be able to answer, during the early stages of application design, questions like: What should we expect once the application is in production? Where are the potential bottlenecks? How do we tune application parameters to maximize performance?

Critical applications need a mature approach to performance testing and monitoring. AI is the intelligent part of the performance testing process; it acts as the brain. Daily tasks like test design, scripting, and implementation can be handled using AI, so that test engineers can focus on the creative side of software testing.

One reasonable use case for AI in PT (performance testing) is codeless automation scripting. Writing performance scripts using Natural Language Processing (NLP) can make the scripting task far easier. In this type of testing, computers learn from the data given to them without being explicitly programmed. Below are aspects of a solution empowered by AI-ML (Artificial Intelligence - Machine Learning) in performance testing:

  • The testing environment developed using ML will have advanced capabilities in terms of self-healing and intuitive dashboarding. Using deep learning algorithms, corrections can be handled automatically.
  • The test flows are recorded and can be tested using data. No coding is required in most scenarios.
  • Reusable functions and objects can be generated and grouped using semi-supervised learning. Scenarios are flow-based, so the implementation is transparent to the user.

Yet another use case is the performance test modelling process. AI's pattern recognition strength can extract relevant patterns during load testing, which is very useful for modelling performance. The performance test model consists of the algorithms in use, and AI learns from the given data. The ability of AI to anticipate future load problems helps in creating the performance test model efficiently: it deals with a lot of data and can predict system failures. Once the system data is analyzed, a performance test model can be created based on the system's behavior.
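The simplest form of such modelling - far short of real machine learning, but illustrating the idea - is fitting a trend to load test observations and extrapolating to anticipate future load problems. A sketch with made-up sample numbers:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Observed during load tests: (concurrent users, p95 response time in ms).
# These numbers are invented for illustration.
observations = [(100, 210), (200, 260), (400, 360), (800, 560)]
slope, intercept = linear_fit(*zip(*observations))

def predicted_p95(users: int) -> float:
    """Extrapolate p95 response time at a load level we have not yet tested."""
    return slope * users + intercept

# Flag the load level at which a 500 ms SLA would be breached.
sla_ms = 500
breach_at = (sla_ms - intercept) / slope
print(f"predicted p95 @ 1000 users: {predicted_p95(1000):.0f} ms")
print(f"SLA of {sla_ms} ms breached near {breach_at:.0f} users")
```

A real AI-driven model would replace the linear fit with learned, possibly nonlinear patterns, but the workflow - learn from past runs, predict failure points - is the same.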

Another area is SLA design. SLAs should be measurable, attainable, simple, realistic, and time-bound, but most SLAs are not designed like this. This is a basic limitation of human-powered systems. Once AI takes on the role, however, the situation changes. It can track all the affected areas and feed back into the monitoring system with fine granularity. It can analyze the complexity of the system and suggest an appropriate SLA; for example, for 1,000 lines of code, an SLA of 500 milliseconds might be suggested. AI can detect working trends in a system directly, so as system performance changes, SLAs can be fine-tuned in real time.

Monitoring tools like Dynatrace and AppDynamics have introduced AI into their systems, helping to identify bottlenecks across multiple application tiers in the early stages of software development. They can analyze the application and predict performance defects at the code level. Many open-source tools like WebPageTest, GTmetrix, and YSlow pinpoint specific problems like server request issues and help engineers solve them quickly. Automation tools like Test.ai are useful for getting the performance metrics of your application as well.

The role of AI in every phase of performance testing and engineering has proved very beneficial and is the future of performance testing. Using AI will make tasks like scripting and monitoring highly impactful and help get real-time results very quickly. I believe the role of AI in performance testing will be a game changer!

December 7, 2018

Embrace the Future

The unfolding of cloud computing, the introduction of enterprise-level integration patterns, and the rise of microservices have not only disrupted the existing nomenclature but also made us rethink the way we do integration. While pioneering companies are embracing microservices to tackle their complex enterprise architectures, one aspect is still open for exploration: data validation and data warehousing.

Yes, it is true that many organizations are consciously embracing the concept of data services around their data lakes, for either master data management or analytical purposes (simple data reads), but very little thought has been given to using the full flavor of microservice-driven architecture in areas like data integration, data quality or validation, and metadata management.

If we travel back a few years in time, the idea of SoC (separation of concerns) was ignored due to the need for heavy lifting of data and the availability of integration tools, which were usually tightly coupled. These tools were an instant hit as they wrapped up the complexities of managing job failures, providing reports, etc., but they could not fully tackle the learning curve, the complexity involved, and most importantly, the need to adapt to frequent changes.

The basic principle of microservices is to break a complex application down into multiple self-contained services which connect to each other to achieve complex functionality. Given that the above use cases are always complex in nature, microservices could be a great way to automate data validation and design our future data warehouses.

July 6, 2017

Thinking the Unthinkable: An AI approach to scenario prediction

Every now and then QAs are confronted with the uncomfortable situation where a defect is overlooked and makes its way to higher environments. This happens despite the QA team having a deep understanding of the system under test. Due to the tremendous complexity of real-world applications and the sheer lack of resources (especially time), many flows and behaviors inherent in the application may not be tested, and the correctness of such behaviors remains dubious. Curiously, novice users are often able to find defects that expert users cannot. Expert users of the application suffer from a syndrome best described as hindsight bias, wherein they tend to be blind to possibilities in the application that they are not used to.

One way to deal with this is to hire more and more QAs so that there are more pairs of eyes looking at the application, trying out and finding more hidden behaviors that can potentially be defects. Due to the costs involved, this approach is not very practical. Another approach is to simulate users with different thinking patterns. This approach puts us in the realm of Artificial Intelligence. We have developed a Genetic Algorithm (GA) for black-box software testing that can do just that.

Genetic Algorithms are heuristic optimization methods that simulate the survival mechanics of genes over the course of evolution. They are based on survival mechanics such that linear yet randomized string structures are able to exchange information to form a search algorithm. An initial group of random individuals (the population) evolves according to a fitness function that determines the survival of individuals. The algorithm searches for individuals with better fitness values through selection, mutation, and crossover operations.

Brief overview of System Under Test:

Our system under test was an incident logging system. There were a few prerequisites that needed to be fulfilled before a user could successfully create an incident. Some of them were:

1) User rights and associations: Users with the correct rights and associations should be able to create incidents only in their associated domains, and should not infringe outside their domains.

2) Routing mechanism: Unauthorized users should not be able to access the create-incident screen, either through user action on menu items or directly through the URL.

3) Presence/absence of data: Mandatory fields must contain data before a user can create an incident; there should be no such checks for optional fields.

4) Dependent fields: Dependent fields should not accept data before their parent fields are populated.

5) Authenticity of data: Users should not be able to create an incident without correct data in fields that have look-up value checks.

Synopsis of the Solution:

Our algorithm placed the factors outlined above as bits on a string, much like DNA. Each factor was recognized by the place it holds on the string structure.

The objective of the GA was to highlight any false positives, i.e. flows in the application that lead to the successful creation of an incident when it should not have been created. For this purpose, we gave higher weights to the factors that were less likely to create an incident. For example, a user (A) with CREATE rights is more likely to create an incident successfully than a user (B) that has only READ rights. So in this case user B gets a significantly higher weight than user A, as he/she is less likely to create an incident. The same kind of weight distribution was done for the other factors as well.

Finally, an equation was created to calculate the fitness value of each individual string. The equation was designed to give higher fitness values to individuals that resulted in the successful creation of an incident even when the odds of that happening were low. The fitness values of the first generation of 10 strings were calculated, and then mutation and crossover events were performed to create an offspring generation of 10 strings. If the parent had a higher fitness value it was retained; otherwise the child was selected. Once this selection process was over, the same steps were performed to generate the next generation, and the process was repeated until the nth generation. After that, a human analyst can look at the resulting strings and verify whether these flows really are defects.

Advantages of GA based software testing over traditional testing:

  1. Able to explore various scenarios, hence improved coverage of the system; many of these scenarios may not even cross the mind of human QAs.
  2. Increases Defect Detection Efficiency, leading to improved quality of the system.
  3. Easy to change the direction of search to find new defects by changing the weights associated with the factors.
  4. Substitutes for human testers, with the advantage of working during non-business hours, including weekends.
  5. High-risk areas in the application can be better covered by increasing the population size and the number of generations.

Associated Challenges:

  1. Technical expertise is required to build the framework.
  2. A clear understanding of the factors affecting the application is required. It is still not Subject Matter Expert (SME) agnostic; the bias of the SMEs may affect the outcome of the runs.
  3. Hardware intensive. Procuring advanced computation resources may not be easy in the client environment.

This is a humble attempt to increase software quality and give end users less grief from missed defects, without escalating project costs. In the last few years there has been significant involvement of Artificial Intelligence in most aspects of software, yet software testing seems to have missed out on the riches it provides. I sincerely hope software testing has a lot to gain from advances in AI, making software solutions and products more reliable and efficient over time.

Continue reading "Thinking the Unthinkable: An AI approach to scenario prediction" »

March 8, 2017

DevOps at Enterprise scale

Author: Varun Rathore, Delivery Manager

The current business environment makes many demands on organizations. With the proliferation of start-ups becoming a game changer for existing brick-and-mortar industries, the need to innovate, iterate, and stay relevant is enormous. In view of this, UBS has embarked on a journey to transform itself through DevOps practices.

Continue reading "DevOps at Enterprise scale" »

November 22, 2016

A.E.I.O.U of New Age Quality Engineering

Author: Srinivas Yeluripaty, Sr. Industry Principal & Head, IVS Consulting Services

In today's digital world, 'change' is the only constant and organizations are grappling with ways to meet the ever-changing expectations of key stakeholders, especially ubiquitous consumers. With the GDP transformed by mobile economy, globalization leading to "Global One" customers, payment industry transforming from "Cashless" to "Card-less" to "Contactless" transactions, ever growing emphasis on security and compliance, the expectations on IT are reshaping significantly. To achieve that pace and flexibility, organizations are increasingly adopting agile methods and DevOps principles.

Continue reading "A.E.I.O.U of New Age Quality Engineering" »

November 21, 2016

SAP Test Automation approach using HP UFT based solution

Author: Kapil Saxena, Delivery Manager

Problem statement
Most businesses that run on SAP, or plan to implement it, must consider multiple factors, as their entire business runs on this backbone. Their major worries are testing effectiveness, preparedness, cost, and time to market. The Infosys SAP Testing Unit has answers to all four, well implemented and proven, but I am reserving this blog for the last two.

Continue reading "SAP Test Automation approach using HP UFT based solution" »

November 18, 2016

Darwin and world of Digital transformations

Author: Shishank Gupta - Vice President and Delivery Head, Infosys Validation Solutions

When Charles Darwin proposed the theory of 'survival of the fittest', I wonder if he imagined its applicability beyond life forms. Since the advent of the internet, the bargaining power of consumers has been steadily increasing, and product and service providers often find themselves playing catch-up to provide the best product features bundled with the best consumer experience. What would Darwin's advice to product companies in today's digital world be?

Continue reading "Darwin and world of Digital transformations" »

October 6, 2016

Testing the Internet of Things Solutions

Author: Tadimeti Srinivasan, Delivery Manager

The Internet of Things (IoT) is a network of physical objects (devices, vehicles, buildings, and other items) that are embedded with electronics, software, sensors, and network connectivity to collect and exchange data.

Continue reading "Testing the Internet of Things Solutions" »

Infosys view on relevance of Continuous Validation in DevOps Journey

Author: Prashant Burse, AVP-Senior Delivery Manager

Rapid digitization is forcing organizations to develop more and more consumer driven enterprise strategies. This is forcing businesses and IT organizations to respond with increased agility to ever changing consumer needs. IT organizations are hence shifting from traditional waterfall based delivery models to Agile and eventually DevOps methodology.

Continue reading "Infosys view on relevance of Continuous Validation in DevOps Journey" »

October 3, 2016

Evolution of mobile devices and Impact on Performance

Author: Yakub Reddy Gurijala, Senior Technology Architect

In the last decade, mobile devices evolved from point-to-point communication like phone calls and SMS to smart devices with advanced operating systems capable of executing native applications. These changes created a lot of opportunities and challenges for online businesses and application developers.

Continue reading "Evolution of mobile devices and Impact on Performance" »

September 30, 2016

Starwest 2016- Infosys is a Platinum sponsor

Author: Pradeep Yadlapati - Global Head of Consulting, Marketing, Alliances, and Strategy 

Starwest 2016 - As the curtains rise, we are looking forward to another year of exciting conversations with customers, prospects, and partners. This is a fantastic platform for us and a great way to stay connected with practitioners, evangelists, and product vendors.

Continue reading "Starwest 2016- Infosys is a Platinum sponsor " »

September 27, 2016

Trends impacting the test data management strategy

Author: Vipin Sagi, Principal Consultant

Test data management (TDM) as a practice is not new; it has been around for a few years now. But only in the last decade has it evolved at a rapid pace, with mature IT organizations ensuring that it is integrated into the application development and testing lifecycles. The key driver for this has been technology disruptions, compelling IT organizations to deliver software faster, with high quality, and at low cost.

Continue reading "Trends impacting the test data management strategy" »

September 26, 2016

Hadoop based gold copy approach: An emerging trend in Test Data Management

Author: Vikas Dewangan, Senior Technology Architect

The rapid growth in data volumes of both structured and unstructured data in today's enterprises is leading to new challenges. Production data is often required for testing and development of software applications in order to simulate production like scenarios.

Continue reading "Hadoop based gold copy approach: An emerging trend in Test Data Management " »

September 13, 2016

Lift the curb on Performance Engineering with Innovation

Author: Sanjeeb Kumar Jena, Test Engineer

In my previous two blogs, I discussed bringing pragmatism and a cultural mind-set change to performance engineering teams. In this blog, we will evaluate the outcome of these shifts during the transformation journey from performance testing (limited to the quality assessment phase of software applications) to performance engineering (covering the entire life-span of software applications to ensure a higher return on investment).

Continue reading "Lift the curb on Performance Engineering with Innovation" »

August 25, 2016

Agile and testing: What banks need to know

Author: Gaurav Singla, Technical Test Lead

Agile -- a software development methodology that took shape with the Agile Manifesto in 2001 -- is now being used across various banks to cater to their IT project needs. The basic principles of the agile methodology are as follows:

Continue reading "Agile and testing: What banks need to know" »

August 16, 2016

Culture is everything in Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

We are living in a burgeoning knowledge economy where anyone can access information from anywhere with a device that fits into their pocket, in a hyper-connected world via the World Wide Web. Today, business is not the 19th-century one-way 'producer-consumer' relationship; it's a two-way communication. An effective business model is not about 'finding customers for your products' but about 'making products for your customers'.

Continue reading "Culture is everything in Performance Engineering" »

August 3, 2016

Crowd Testing: A win-win for organizations and customers

Author: Manjunatha Gurulingaiah Kukkuru, Principal Research Analyst
            Swati Sucharita, Senior Project Manager

Software is evolving with every passing day to accommodate newer prospects of its usage in day-to-day life.

Continue reading "Crowd Testing: A win-win for organizations and customers" »

August 2, 2016

Pragmatic Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

When you hear the term 'performance engineering,' what's the first thing you think of? The famed battle of 'performance testing versus performance engineering' or because we are engineers, do you immediately think of making performance testing a better quality assurance process?

Continue reading "Pragmatic Performance Engineering" »

July 27, 2016

The three ingredients for a perfect test strategy

Authors: Gayathri V- Group Project Manager; Gaurav Gupta- Senior Project Manager

Last week, we won a testing proposal for a mandatory program that cuts across multiple applications. Although we, the 'pursuit team,' were celebrating the win, a voice in our head incessantly kept saying, "Winning is just one part of the game; but delivering such huge programs on time is always a tall order!"

Continue reading "The three ingredients for a perfect test strategy" »

July 26, 2016

Performance engineering in Agile landscape and DevOps

Author: Aftab Alam, Senior Project Manager

Over the last couple of years, one of the key shifts in the software development process has been to move away from the traditional waterfall approach and instead, embrace newer models like DevOps. One of the main goals of development and operations (DevOps) is to monetize investments as soon as possible. In the traditional waterfall model, UI mockups are all business owners (investors) have access to, before agreeing to invest.

Continue reading "Performance engineering in Agile landscape and DevOps" »

July 18, 2016

Four approaches to big data testing for banks

Author: Surya Prakash G, Delivery Manager

Today's banks are a stark contrast to what they were a few years ago, and tomorrow's banks will operate with newer paradigms as well, due to technological innovations. With each passing day, these financial institutions experience new customer expectations and an increase in interaction through social media and mobility. As a result, banks are changing their IT landscape on priority, which entails implementing big data technologies to process customer data and provide new revenue opportunities. A few examples of such trending technology solutions include fraud and sanctions management, enhanced customer reporting, new payment gateways, customized stocks portfolio-based on searches, and so on.

Continue reading "Four approaches to big data testing for banks" »

June 27, 2016

Three Generations streaming in a Network

Author: Hemalatha Murugesan, Senior Delivery Manager

"Are you using an iPhone 6s?" asked my 80-plus-year-old neighbor as we rode the apartment lift together. "Nope, a Samsung," I responded, and asked what help he needed. He wanted assistance with the various apps on the iPhone 6s he had just received as a gift. "Sure, why not? I'll come over to your place," I concluded with a wink.

Continue reading "Three Generations streaming in a Network" »

June 8, 2016

Golden rules for large migration

Author: Yogita Sachdeva, Group Project Manager

In my experience of working with large banks, I have worked on small programs related to acquisitions and mergers, as well as on voluminous upgrades and migrations. I often wondered what the real tie-breaker for a large program was, and racked my brain to figure out what it takes to make one run. I realized that smaller programs generally get delivered successfully on the strength of the team's technical skills. Large programs, however, are normally meant to deliver a business strategy as big as the creation of a new bank. A large program encompasses a group of related projects, managed in a coordinated manner, to obtain benefits and optimize cost control.

Continue reading "Golden rules for large migration" »

June 7, 2016

Role of Validation in Data Virtualization

Author: Kuriakose KK, Senior Project Manager

How can I see the big picture and take an insightful decision with attention to details now?

Jack, the CEO of a retail organization with stores across the world, is meeting his leadership team to discuss the disturbing results of the Black Friday sale. When he asks why they were unable to meet their targets, his leaders promptly cite missed sales, delayed shipping, shipping errors, overproduction, sales teams not selling where market demand exists, higher inventory, and so on. Jack is disturbed by these answers, and on further probing understands that most of these are judgment errors.

Continue reading "Role of Validation in Data Virtualization" »

June 1, 2016

Predictive Analytics Changing QA

Author: Pradeep Yadlapati, AVP

Today's mobile economy is changing the way enterprises do business. A recent survey indicates that the mobile ecosystem generates 4.2% of the global GDP, which amounts to more than US $3.1 trillion of added economic value. It is no surprise that organizations are fast embarking on digital transformations.

The pervasiveness of devices is altering interaction as well as business models. Customers expect a seamless experience across different channels. Everyone wants one-touch information and they expect applications to display preferences and facilitate quicker and smarter decisions. 

Continue reading "Predictive Analytics Changing QA" »

May 31, 2016

Performance Testing in the Cloud

Author: Navin Shankar Patel, Group Project Manager

If a layperson, frozen in time for 10 years, suddenly wakes up and eavesdrops on a conversation between CIOs, he/she might assume that a group of weather forecasters are conversing. That is because the entire discussion is centered on 'cloud' and is interspersed with a lot of mostly unintelligible words.

Continue reading "Performance Testing in the Cloud" »

April 28, 2016

Manage tests, the automation way

Author: Swathi Bendrala, Test Analyst 

Most software applications today are web-based. To keep pace with competition and highly demanding processes within the enterprise, these web-based applications undergo frequent updates, either to add new features or to incorporate new innovations. While these updates are necessary, the amount spent to roll them out matters too.

Continue reading "Manage tests, the automation way" »

April 26, 2016

Validate to bring out the real value of visual analytics

Author: Saju Joseph, Senior Project Manager

Everyday enterprises gather tons of data streaming in from all directions. The challenge lies in taking this huge volume of data, sometimes unstructured in nature, synthesizing it, quantifying it, and increasing its business value. One way to achieve this is by moving from traditional reporting to analytics.

Continue reading "Validate to bring out the real value of visual analytics" »

April 14, 2016

Is Performance Testing really Non-Functional Testing?

Author: Navin Shankar Patel, Group Project Manager

The world of testing has evolved over the years and like most evolutions in the technology world, it has spawned a plethora of methodologies and philosophies. We now have super specializations in the testing world - UI testing, data services testing, service virtualization, etc. However, some beliefs remain unchanged. Especially the belief that the testing universe is dichotomous - characterized by functional and non-functional testing.

Continue reading "Is Performance Testing really Non-Functional Testing?" »

January 14, 2016

Crowdsourced Testing: Leveraging the power of a crowd to do the testing

Author: Harsh Bajaj, Project Manager

What is crowdsourcing?

The term 'crowdsourcing' was coined in 2006 when Jeff Howe wrote an article for Wired Magazine - "The rise of crowdsourcing." Crowdsourcing is a combination of two words - crowd and outsourcing. The objective is to get the work done from an open-ended crowd in the form of an open call. 

Continue reading "Crowdsourced Testing: Leveraging the power of a crowd to do the testing" »

December 28, 2015

Elevating QA to CXO

Author: Harleen Bedi, Principal Consultant

With rapidly evolving and emerging technology trends such as cloud, mobile, big data, and social, QA is now being looked upon as a key component in the modernization and optimization agenda of any CXO. This is supported by the World Quality Report, which reveals that application quality assurance and testing now accounts for almost a quarter of IT spending.

Continue reading "Elevating QA to CXO" »

December 14, 2015

Ensure the Quality of Data Ingested for True Insights

Author: Naju D. Mohan, Delivery Manager

I sometimes wonder whether it is man's craze for collecting things that is driving organizations to pile up huge volumes of diverse data at unimaginable speeds. Amidst this rush to accumulate data, the inability to derive value from the heap is causing a fair amount of pain and a lot of stress on business and IT.

Continue reading "Ensure the Quality of Data Ingested for True Insights" »

September 18, 2015

Assuring quality in Self Service BI

Author: Joye R, Group Project Manager

Why self-service BI?

Organizations need access to accurate, integrated and real-time data to make faster and smarter decisions. But, in many organizations, decisions are still not based on BI simply due to the challenges in IT systems to keep up with the demands of businesses for information and analytics.

Self-service BI provides an environment where business users can create and access a set of customized BI reports and analytics without any IT team involvement.

Continue reading "Assuring quality in Self Service BI" »

September 7, 2015

Role of Open Source Testing Tools

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

Quality organizations are maturing from quality control (testing the code) to quality assurance (building quality into the product) to quality management. In addition to bringing quality upfront and building it into the product, quality management also includes introducing DevOps principles into testing and optimizing the testing infrastructure (test environments and tools).

Continue reading "Role of Open Source Testing Tools" »

August 21, 2015

Are We Prepared to Manage Tomorrow's Test Data Challenges?

Author: Sunil Dattatray Shidore, Senior Project Manager

As tomorrow's enterprises embrace the latest technology trends, including SMAC (Social, Mobile, Analytics, Cloud), and adopt continuous integration and agility, it is imperative to think of more advanced, scalable, and innovative ways to manage test data in non-production environments - whether for development, testing, training, POC, or pre-production purposes. The real question is: have we envisioned the upcoming challenges and complexity in managing test data, and are we prepared and empowered with the right strategies, methodologies, tools, processes, and skilled people in this area?

Continue reading "Are We Prepared to Manage Tomorrow's Test Data Challenges?" »

August 17, 2015

Extreme Automation - The Need for Today and Tomorrow

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

We have read about the success of the 'New Horizon Spacecraft', and its incredible journey to Planet Pluto. This is extreme engineering and pushing human limits to the edge. Similarly, when we hear about the automobile industry and the fact that one additional car is getting assembled every 6 minutes, we are quite amazed at the level of automation that has been achieved.

Continue reading "Extreme Automation - The Need for Today and Tomorrow" »

August 3, 2015

Three Stages of Functional Testing 3Vs of big data

Author: Surya Prakash G, Group Project Manager

By now, everyone has heard of big data. These two words are heard widely in every IT organization and across industry verticals. What is needed, however, is a clear understanding of what big data means and how it can be applied in day-to-day business. The concept refers to huge amounts of data - petabytes of it, whole mountains of it. With ongoing technology changes, this data forms an important input for making meaningful decisions.

Continue reading "Three Stages of Functional Testing 3Vs of big data" »

Balancing the Risk and Cost of Testing

Author: Gaurav Singla, Technical Test Lead

A lot of things about banking software hinge on how and when it might fail and what impact that will create.

This drives all banks to invest heavily in testing projects. Traditionally, banks have been involved in testing software modules from end-to-end and in totality, which calls for large resources. Even then, testing programs are not foolproof, often detecting minor issues while overlooking critical ones that might even dent the bank's image among its customers.

Continue reading "Balancing the Risk and Cost of Testing" »

July 21, 2015

Is your testing organization ready for the big data challenge?

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

Big data is gaining popularity across industry segments. Once limited to lab research in niche technology companies, it has achieved a far wider scope of application and is now used commercially. Many mainstream organizations, including global banks and insurance organizations, have already started using (open source) big data technologies to store historical data. While this is the first step to value realization, we will soon see the same platform being used for processing unstructured data as well.

Continue reading "Is your testing organization ready for the big data challenge?" »

July 14, 2015

Automation - A new measurement of client experience

Author: Rajneesh Malviya, AVP - Delivery Head - Independent Validation Solutions

A few months ago, I was talking to a client who was visiting us on the Pune campus. She shared how happy she was with the improved client visit process. She was given a smart card like any of our employees, marked as a visitor, and with it she could move from one building to another without much hassle. She no longer had to go through the manual entry process at each building. Like an employee, she could use her smart card at the turnstile to enter and exit our buildings, while her entry was recorded as per compliance needs. As she had been on our campus before, she could clearly experience the difference brought about by automation.

Continue reading "Automation - A new measurement of client experience" »

July 6, 2015

Automated Performance Engineering Framework for CI/CD

Author: Aftab Alam, Senior Project Manager, Independent Validation and Testing Services

with contribution from Shweta Dubey

Continuous Integration is an important part of the agile development process. It is getting huge attention in every phase of the Software Development Life Cycle (SDLC) as a way to deliver business features faster and with confidence.

Most of the time it is easy to catch functional bugs using a test framework, but performance testing also requires scripting knowledge along with load testing and analysis tools.

Continue reading "Automated Performance Engineering Framework for CI/CD" »

June 30, 2015

Transforming Paradigms - From Quality Assurance to Business Assurance

Author: Srinivas Kamadi, AVP - Group Practice Engagement Manager.

Today's digital landscape is riddled with disruptive forces that are transforming business models and industries alike. The proliferation of social channels and continuous creation of big data is fuelling this transformation and heating up global competition. Forces such as Social, Mobile, Analytics and Cloud (SMAC) and the Internet of Things (IoT) are now critical to delivering omni-channel experiences. These digital imperatives guide how businesses engage with their customers, employees and stakeholders. Customers demand 24/7 connectivity and free flowing information accessibility. This has made it contingent upon companies to deliver superior customer experience in an agile fashion.

Continue reading "Transforming Paradigms - From Quality Assurance to Business Assurance" »

June 29, 2015

New Age of Testing - The WHAT, WHY and HOW?

Author: Mahesh Venkataraman, Associate Vice President, Independent Validation Services.

While testing has always been important to IT, the last decade has seen it emerge as a discipline in its own right. Hundreds of tools have been developed and deployed, commercially as well as 'openly'. New methodologies have been formulated to test the latest business and technology transformations. IT organizations today recognize testing as a critical function that assures the readiness of a system to go live (or a product to be released to the market).

Continue reading "New Age of Testing - The WHAT, WHY and HOW?" »

March 26, 2015

Accessibility Compliance: What Developers need to know and when?

Accessibility testing defects often get the development team's focus only at the tail end of a release. Before that, functional testing takes priority, since the basic functionality of the system must work to get through Integration and User Acceptance Testing (UAT).

It is important for the development team to understand the core concepts of accessibility as well as the exact code-level changes needed when fixing defects. And this needs to happen well in advance, in the initial stage when requirements are signed off.

This understanding could happen in several ways. 

Continue reading "Accessibility Compliance: What Developers need to know and when?" »

October 27, 2014

Mobile Native App- Real User Experience Measurement


As we all know, testing the server and making sure the server side is up and running does not guarantee a good end-user experience. There are several SaaS solutions to measure the client-side performance of websites and mobile apps, such as SOASTA mPulse, TouchTest, WebPageTest, Dynatrace UEM, etc. How can we leverage the same techniques to measure the salesperson experience for mobile POS applications before releasing new features, and how can we monitor salesperson app usage and behavior just as we do real-user experience analysis for websites?

Continue reading "Mobile Native App- Real User Experience Measurement" »

September 24, 2014

Accessibility Compliance : What different User groups look for ?

Accessibility compliance is gaining momentum across organizations due to legal mandates.

The Web Content Accessibility Guidelines (WCAG 2.0) are referred to as a broad guideline across geographies, taking into consideration major physical impairments and how to meet the needs of users who have them. For accessibility compliance to succeed in a program, it is vital that the teams engaged and working together are aware of their program-specific accessibility guidelines.

Continue reading "Accessibility Compliance : What different User groups look for ?" »

September 2, 2014

Usability Testing: What goes into user recruitment and test session readiness?

Usability testing is meant to help discover design issues and provide direction on further improving the overall user experience. But these sessions would not be possible without the right set of users to review the design and provide feedback.

Getting the right users
At times, it is challenging to get the right users for test sessions. In many organizations where a usability process for design is newly adopted, there is still little clarity on whether:

  • usability testing is required (business impact)
  • if required, what type of users would participate in test sessions
  • such users can be identified and made available for the sessions
  • if available, which specific locations need to be considered.


Continue reading "Usability Testing: What goes into user recruitment and test session readiness?" »

August 8, 2014

Is the environment spoiling your party?


I frequently come across performance testing projects entangled in cost and time overruns - the culprit usually being environment issues. Since we can't wish away the environment and the issues that come with it, the next best thing is to be better prepared by figuring out the pitfalls and addressing them proactively.

Since a stable test environment is critical for test script development, load simulation and bottleneck analysis in the performance testing life cycle stages, let's take a good look at what to watch out for when we prepare for a testing cycle.

Know thy application: Any environment issue during the test execution phase is like the proverbial spanner in the works. It should come as no surprise if the performance testing team has to spend significant effort debugging and analyzing it and, of course, following up with support teams for resolution. To be effective in test environment risk assessment and issue resolution, we must know the application architecture, functionalities, workflows, and interconnecting components well. Stubs and virtualization techniques can be handy during test execution when one is familiar with the component-level details and how to use them. While investigating environment issues, the development and infrastructure teams often seek the testing team's input, so lending a hand with specifics will mean a faster turnaround.

Dependencies - wheels within wheels: Another party-pooper can be the dependencies that may impact testing well before we even run the performance test. Multi-tiered enterprise computing systems have dependencies on each tier and layer, within and outside the enterprise boundaries. In addition to the functionalities, there are other factors at play that may affect the performance test results, such as high resource consumption by another process hosted on the same infrastructure, execution of batch jobs, or parallel test runs by another team. An outage in the environment during the test run can force you to reschedule the test. That is why it is important to gather information about all the possible dependencies that may impact test execution during the planning stage itself. It is always good to document these issues as one comes across them, for reference in future test cycles.

Stay in touch: The performance testing team cannot operate in a vacuum. Team members must establish proper communication with the development, infrastructure, functional and integration testing, and release management teams right from the strategy phase to synchronize test preparation as well as execution activities. If you are using a shared test environment, the test schedule should be published well in advance. Notifications must be sent to the teams concerned before running a test so they can bring up the servers, mount monitors, clear logs, and keep the environment stable during test execution. A calendar that blocks shared computing resources can keep all stakeholders posted on test execution dates and reduce retest efforts significantly. A small tip: keep your contact information handy and up to date.

Think ahead: Being well-prepared is half the battle won. For testing, this means thinking about possible environment failures and looking for workarounds well before the actual test execution. While preparing test estimates, don't forget to factor in unknown environment issues that could adversely impact the effort and the schedule; keeping some buffer as a percentage of the overall estimate can save you a lot of grief later. It is also important to prioritize critical business transactions for the performance test so that, if some functionalities flop during the planned test window, a run on a subset of the transactions can still provide meaningful insights into application performance. Finally, remember that time is your most precious resource: if the test environment becomes completely unavailable during the planned test window, use that time effectively on activities such as offline reporting or knowledge management, and promptly schedule the test for the next available window.
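The buffer advice above is simple arithmetic; here is a minimal sketch (the helper name and the 15% default are my own illustration, not a recommendation from the post):

```python
def buffered_estimate(base_days: float, buffer_pct: float = 15.0) -> float:
    """Pad a base effort estimate with a contingency percentage
    for unknown environment issues."""
    return round(base_days * (1 + buffer_pct / 100.0), 1)

print(buffered_estimate(40))        # 46.0 person-days with the default 15% buffer
print(buffered_estimate(40, 25.0))  # 50.0 person-days with a heavier 25% contingency
```

Whatever percentage you choose, keep it visible in the estimate so stakeholders can see the contingency rather than having it silently folded into task durations.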

So these are a few best practices to keep in mind while preparing for a performance test cycle. You can use these to build your own set of rules specific to the challenges and constraints of your set-up. The bottom line for a successful testing cycle is to keep tabs on incidents and work through issues smartly.

August 6, 2014

Is User Experience measurable ?

Many organizations look for tangible numbers to justify the effort and money spent on improving the user experience of websites or applications.

Of course, there have to be valid reasons to continue the investment in user experience and to survive the reviews done intermittently on the value it provides.

Today, it is not enough to get only a subjective confirmation from users on overall design acceptance and satisfaction.

What does it mean when users say a website is 'easy to use', 'it is good', 'that was easy', or 'I saved a lot of time'?

User experience can be measured by quantifying user feedback, and the effort put into the design can thereby be validated:

1)  Qualitative feedback

This type captures user feedback on usability or user experience statements. A Likert scale is used, typically with ratings on a scale of 1-7 or 1-5. User experience statements could be:
a) 'I found the design simple to comprehend'
b) 'I could locate where I am on pages while doing my tasks'

2)  Quantitative feedback

This type of feedback brings in more numbers to the table for everyone to see some tangible progress/issues on design.

Here are different ways a design team can get active, measurable user feedback on their design.


2.1) SUS Score:

The one number that must be mentioned is the System Usability Scale (SUS) score.
This number indicates whether the design in progress is acceptable to a set of users and whether the design is going in the right direction. The higher the score, the higher the users' acceptance of the design. For example, a score of 85 out of 100 means users are positive about the new design and will be keen to work with it.
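The post does not show how the SUS number is derived, so here is the standard scoring rule for Brooke's ten-item SUS questionnaire as a short Python sketch: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered (negatively worded) items contribute 5 minus their rating, and the sum is scaled by 2.5 to give a 0-100 score.

```python
def sus_score(responses):
    """Score a ten-item SUS questionnaire (each response on a 1-5 Likert scale)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses, each from 1 to 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# A maximally positive sheet (5 on odd items, 1 on even items) scores 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that the score is a single composite number: it tells you whether acceptance is trending up across iterations, not which specific design element caused a low rating.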

2.2) Task Completion ratio:

Another useful number signifies how many of the critical tasks given during usability test sessions a user is able to complete.

If users can complete 80% of the tasks, some tweaks are needed for the remaining 20%, but in general the design is going in the right direction.


2.3) Time for Task completion

A user taking longer than expected to complete a specific task signifies a problem with the way information is laid out, the naming conventions used, or visual clarity.
The point is not to know the exact time in milliseconds, but to get an overall impression of whether the task is getting difficult for users to complete.


2.4) Number of Errors

If a user is given 5 tasks and comes across 8 errors or issues while performing them, there is a problem with the design.

A minimal number of errors can serve as a baseline for a design. A larger number should send the designers back to the whiteboard to analyze what went wrong.
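The three quantitative measures above (task completion ratio, time on task, and error count) are simple tallies over session records. A minimal sketch, with illustrative field names and made-up sample data of my own:

```python
from statistics import mean

# One record per task attempt from a usability session (sample data, purely illustrative).
attempts = [
    {"task": "find product", "completed": True,  "seconds": 42, "errors": 1},
    {"task": "add to cart",  "completed": True,  "seconds": 30, "errors": 0},
    {"task": "checkout",     "completed": False, "seconds": 95, "errors": 3},
]

completion_ratio = sum(a["completed"] for a in attempts) / len(attempts)
avg_seconds = mean(a["seconds"] for a in attempts)
total_errors = sum(a["errors"] for a in attempts)

print(f"completed {completion_ratio:.0%}, avg {avg_seconds:.0f}s on task, {total_errors} errors")
```

Tracked across design iterations, these tallies show whether each revision actually moves the design in the right direction.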


These are some of the numbers that can provide insight into what is going well with a design and which issues still need to be worked out.

The quality of the user experience can thus be measured by both qualitative and quantitative feedback.

January 20, 2014

Social Media, Cloud, Analytics and Mobility (SCAM)

Social Media, Cloud, Analytics and Mobility: these are four common buzzwords we hear today, and they are very much inter-related as well! Social media allows instantaneous interactions and the sharing of news, photos, videos, and more. From a technical perspective, this requires elastic, omnipresent storage capability, which the cloud provides. The moment something is on the cloud, it can be big - big data. Small data can be hosted locally; if data is big, the cloud is a good medium, and the data can be leveraged for analytics, which facilitates informed decision-making. For the end user, all this should be omnipresent, available at their fingertips - and mobility facilitates that.

Continue reading "Social Media, Cloud, Analytics and Mobility (SCAM)" »

November 14, 2013

Are User feedback streams considered during Design ?

On the occasion of World Usability Day today, I wonder how many users are still struggling with interfaces to perform their intended tasks, and whether their voices were heard while those interfaces were being designed, be it a website or an application targeted at laptops, tablets, or handheld devices.

The answer would still be inclined towards a 'yes': there might still be a good number of unhappy users. With so many sites and applications undergoing redesign, change, and content updates every day across geographies, and for so many different target users, there is a real probability that the target users in scope were never identified or available to give feedback during the design of the interfaces.

In a given project, where does the process of user feedback start and where does it end? What is the best phase of design to have such feedback sessions with users?  

The following are the possible phases in which to hold user feedback sessions as part of an iterative design process:

1)  Wireframe/concept level (paper, wireframing tool)

2)  Visual design level (static JPEG images with look and feel and branding)

3)  HTML prototype level

4)  User Acceptance Test

5)  Live version

Notably, each phase provides different type of feedback on design in progress.

For example, the concept/wireframe phase gives user feedback primarily on the basic site structure, navigation elements, information content, and the naming conventions of menus.

The visual design phase provides feedback on look and feel, branding, visual affordance, color, and style, in addition to the information architecture.

It is recommended to hold the first of these user feedback sessions during the wireframe phase, simply because it is easy to change the fundamentals of the design quickly and with the least effort while it is still taking shape in a wireframing tool or even a paper prototype.

The longer these sessions are delayed into the later stages of design, the more team members (visual designers, HTML developers) need to redo their work, and the more cost it adds to the project budget.

I believe that as the term 'usability' becomes widely used and understood, and as usability practice is evangelized across organizations in different domains, user feedback will form one of the core inputs during design for achieving a better user experience.

October 10, 2013

Preparing ourselves for Application Security Testing


Haven't we all, as functional testers, done 'Authentication' and 'Authorization' testing? Almost every application demands some flavor of these tests. So, are we not already doing 'Application Security Testing'?

Let's explore the extra mile we need to traverse in each phase of the SDLC to say confidently that the applications we are testing are secure.


Continue reading "Preparing ourselves for Application Security Testing " »

September 15, 2013

Different BPM Products: What difference does it make from a testing perspective?

This blog is the third in a three-part series on validating BPM implementations. Here, we focus on the common products available in the market and their typical use in enterprise implementations, from a testing perspective.


According to a recent Forrester report on BPM (The Forrester Wave: BPM Suites, Q1 2013), BPM suites are set to take center stage in digital disruption in 2013. The disruptive forces of change include technical, business, and regulatory changes. It is a well-established fact that the key to any change management or strategy implementation is to start small, think big, and move fast! When it comes to validating BPM products, the story is not much different, except that we typically see testing organizations fail at scaling and moving fast. Business process validation might be a success in silos or at an application/project level; however, when it comes to enterprise processes or integration involving several systems across LOBs, we just cannot move at the same pace. One key reason is the lack of understanding of the overall BPM picture at hand.


Continue reading "Different BPM Products: What difference does it make from a testing perspective?" »

July 29, 2013

Increasing agility via Test Data Management


Do test data requirements need to be captured as part of the functional requirements or the non-functional requirements? Is test data management a costing exercise or business-critical? Since the testing team provisions test data through whatever mechanisms are at hand, do we need a test data management team?


Continue reading "Increasing agility via Test Data Management" »

July 25, 2013

Performance Modeling - Implementation Know-Hows

As an extension to my previous blog, 'Performance modeling & Workload Modeling - Are they one and the same?', I would like to share a few insights into the implementation know-hows of performance modeling for IT systems.

Performance modeling for a software system can be implemented in the Design phase and/or the Test phase, with slightly different objectives in each. In the Design phase, the objective is to validate quantitatively that the chosen design and architectural components meet the required SLAs for a given peak load. In the Test phase, on the other hand, the objective is to predict the performance of the system for future anticipated loads and for the production hardware infrastructure.

Continue reading "Performance Modeling - Implementation Know-Hows" »

June 13, 2013

Testing for an Agile Project - Cool breeze in Summer


For a software testing professional, working on an agile project is no less than a breeze of cool air in the hot summer. The point I am trying to drive home is that for a tester who regularly works on projects run in the traditional model, the agile model is a very welcome change. Let me tell you why...

Continue reading "Testing for an Agile Project - Cool breeze in Summer" »

May 30, 2013

Back to school! - Determine Optimum Number Of Sprints In Agile Engagements using Mathematics

Adoption of the Agile methodology for software development has risen thanks to benefits such as absorbing late requirement changes and the early availability of a first version. However, the decision to adopt Agile needs to balance project priorities and characteristics (such as a lack of upfront requirement clarity) against effort and cost overruns. In certain scenarios, the effort required to develop a product or application is greater in Agile than in the Waterfall model, and an incorrect number of planned sprints can easily lead to effort overruns. Several program metrics can inform both the decision to adopt Agile and the right number of sprints to keep effort in check.


Continue reading "Back to school! - Determine Optimum Number Of Sprints In Agile Engagements using Mathematics" »

March 7, 2013

Test Tool Selection for Functional Automation - The missing dimension

"How do you select the appropriate test tool for your functional testing?"

"As per the listed features, the automation tool we invested in seems to be right. But, the % of automation we could really achieve is very low. What did we overlook at the time of tool selection?"

"Both my applications are of the same technology. We achieved great success in automation with one of them and failed miserably with the other. What could be the reason?"

Continue reading "Test Tool Selection for Functional Automation - The missing dimension" »

January 29, 2013

What tool fits best - Standard or Tailored?


It is back to business after the quiet of the holiday season, and the cold weather and the flu are making it increasingly difficult to manage schedules. During the holidays, I was reminiscing about some of the interesting conversations with client organizations over the past year, and a few of those are still fresh in my mind. Here's one interesting question that has come up time and again, and something I have been grappling with myself.


"Which is better - a single tool (or toolset from a single vendor) that addresses most of your testing needs or specialized tools that completely meet each of the varied needs of testing?"



Continue reading "What tool fits best - Standard or Tailored?" »

November 30, 2012

Recommended structure for Organizational Security Assurance team


Security defects are sensitive by nature, are always raised as top-priority tickets, and are costlier than functional and performance defects. Beyond the business impact, there is damage to the company's image, the cost of lost data, loss of end-user confidence, and exposure to compliance and legal issues. With such high levels of risk associated with security defects, it is surprising that many organizations do not have an internal structure for security assurance.


Internal security assurance is needed for any organization to increase security awareness across the enterprise, to put a structure in place for dealing with various security compliance aspects, and to use that structure to strengthen build and test processes. Setting clear goals, establishing a reporting structure, defining activities and enlisting performance measurement criteria all help the security assurance team function smoothly. To know more about a team structure capable of providing an enterprise-wide security assurance service for Web applications, read our POV titled "3-Pillar Security Assurance Team Structure for ensuring Enterprise Wide Web Application Security" at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/security-assurance-team.pdf.

November 5, 2012

Big Data: Is the 'Developer testing' enough?


A lot has been said about the What, the Why and the How of Big Data. Considering the technical aspect of Big Data, can these implementations really be production-ready with just the developers testing them? As I probe deeper into the testing requirements, it is clear that 'Independent Testers' have a greater role to play in the testing of Big Data implementations. All the arguments in favor of 'Independent testing' hold equally true for Big Data based implementations. In addition to the 'Functional Testing' aspect, the other areas where 'Independent Testing' can be a true value add are:

· Early Validation of Requirements

· Early Validation of Design

· Preparation of Big Test Data

· Configuration Testing

· Incremental load Testing

In this blog, I will touch upon the listed additional areas and what should be the focus of 'Independent Testing'.

Continue reading "Big Data: Is the 'Developer testing' enough?" »

July 16, 2012

Testing BIG Data Implementations - How is this different from Testing DWH Implementations?

Whether it is a Data Warehouse (DWH) or a BIG Data Storage system, the basic component that's of interest to us, the testers, is the 'Data'. At the fundamental level, the data validation in both these storage systems involves validation of data against the source systems, for the defined business rules. It's easy to think that, if we know how to test a DWH we know how to test the BIG Data storage system. But, unfortunately, that is not the case! In this blog, I'll shed light on some of the differences in these storage systems and suggest an approach to BIG Data Testing.

Continue reading "Testing BIG Data Implementations - How is this different from Testing DWH Implementations?" »

June 6, 2012

Cloud Migration Testing Approach

While interacting with a stakeholder who wanted to move his production website from its existing physical infrastructure onto a private cloud, I understood that his primary focus was to leverage the cloud from an infrastructure standpoint, which would potentially involve configuration changes for capacity planning. No changes were being made to the code or the architecture of the website. In such scenarios, cloud migration testing is essential to ensure that the website's performance, functional flow, data, and access control security privileges remain intact.

Continue reading "Cloud Migration Testing Approach" »

March 12, 2012

Overcoming challenges with Over-utilized systems with Service Virtualization & Cloud

The unavailability of environments for QA purposes is a very common challenge faced by most organizations. This is because a lot of delay is associated with acquiring, installing and setting up QA infrastructure, and with gaining access to external and dependent systems. In my recent article (http://www.infosys.com/IT-services/independent-validation-testing-services/Pages/virtualized-systems.aspx), I give a detailed review and help businesses understand how they can overcome these challenges with Service Virtualization along with cloud adoption. Let me know if this paper helped you, and do share your feedback.

March 9, 2012

The Right Cloud Based QA Environment for your Business

I can clearly see that most enterprises are keen on cloud adoption, based on my interactions with them. But the first thing that perplexes them is how to go about evaluating and determining the appropriate cloud deployment that fits their business needs.

In an attempt to address these concerns, my latest POV talks about the various factors that need to be gauged for this decision, such as the QA infrastructure requirements, the existing infrastructure availability, the application release calendar and the budget appetite. To know more, please click here: http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/cloud-based-QA-environment.pdf. As always, I look forward to your views and feedback.

March 6, 2012

Overcoming challenges associated with SaaS Testing

Today's tough economic environment has put a lot of pressure on organizations to deliver business applications faster and at lower costs. The rapid growth of the cloud, coupled with the constraints of the current economic environment, has led to the growing adoption of SaaS based applications. SaaS based applications help organizations focus on their core business rather than on non-core activities like managing hardware, building applications and maintaining them. However, the adoption of SaaS demands comprehensive testing to reap all the benefits associated with it. In my earlier paper, I identified and described the challenges associated with SaaS testing (http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/saas-testing.pdf).

Continue reading "Overcoming challenges associated with SaaS Testing" »

January 20, 2012

Testing for cloud security - What is the data focus of QA teams (Part 2/3)

In my earlier blog on testing for cloud security (http://www.infosysblogs.com/testing-services/2011/12/testing_for_cloud_security-_wh.html), I discussed the security concerns of cloud adoption from an infrastructure standpoint. Now, let us take a look at what the focus of cloud security testing would be from a data perspective. Enterprises are highly concerned about the security of their data in the cloud. They are well aware that any sort of data security breach could lead to non-compliance, resulting in expensive lawsuits that could cause long-term damage to the overall credibility of the organization.

Continue reading "Testing for cloud security - What is the data focus of QA teams (Part 2/3)" »

December 14, 2011

Testing for cloud security- What is the infrastructure focus of QA teams (Part 1/3)

One of the biggest barriers to cloud adoption is security concerns. Any enterprise that wants to migrate to cloud-based environments needs to ensure comprehensive cloud testing, encompassing infrastructure, software and the platform, in order to validate the security of the cloud, cloud-related applications and data. I believe that cloud adoption is a radical change for any enterprise to make, and the move from physical to virtual access poses several challenges from a security standpoint. To start, let us take a look at what security testing needs to focus on at the infrastructure level, since this is the first step on the path towards successful cloud adoption.


Any enterprise subscribing to the cloud cannot depend entirely on the cloud service provider's contract for the security of the cloud infrastructure; QA teams also need to validate the security of the cloud from the infrastructure layer itself. Once the desired computing power is allocated along with the software, QA teams need to scan cloud instances for existing security vulnerabilities, malware and threats. This helps detect security flaws such as unpatched operating systems at the infrastructure layer. It is also important to check whether adequate security measures are in place, such as user access control, privilege-based access and security policies for governing the QA infrastructure itself. Lastly, the encryption of cloud instances needs to be validated, since there are security threats involved in recovering previously deleted data from unencrypted cloud instances.

December 13, 2011

What to expect from Test Automation of Oracle Applications?

I just finished putting together a response for a client trying to understand the value of automation during testing of Oracle Applications. Essentially, the client was trying to determine whether automation is a worthwhile investment and what returns to expect. I figured this topic to be a subject of much discussion, given the widespread implementation of Oracle Applications.

Here are a few points one might want to keep in mind while considering functional automation in a strategy to test an Oracle Applications implementation:

Continue reading "What to expect from Test Automation of Oracle Applications?" »

December 6, 2011

Looking for the First Step on the Cloud Adoption Path?- Cloud Based QA Environment

Some of the prime features of the cloud, such as on-demand provisioning, elasticity, resource sharing, constant availability and security, help address many challenges related to QA environments: poor utilization of environments, unavailability, lack of environment-oriented skill sets, budget constraints for QA infrastructure setup and multi-vendor coordination situations. Overcoming these challenges with the cloud helps improve the efficiency of QA teams, which eventually has a positive impact on an organization's business outcomes, with higher-quality applications at lower risk.


I believe that the QA environment is the perfect place for an organization to begin its cloud journey before actually moving live applications to the cloud. By moving the QA environment to the cloud, organizations can see immediate advantages such as increased asset utilization, reduced proliferation, greater agility in servicing requests and faster release cycle times.


To find out more about the detailed benefits associated with cloud-based QA environments, read my latest POV at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/begin-cloud-adoption.pdf. Do share your comments and feedback; I look forward to them.

November 3, 2011

Collaborative Testing Effort for Improved Quality

Collaboration amongst the business, development and testing teams can reduce risk during the entire software development and testing lifecycle and considerably improve the overall quality of the end application. As a testing practitioner, I believe that testing teams need to begin collaboration at an earlier stage, as described below, rather than the conventional collaboration during the test strategy phase:

·         During the requirement analysis phase the business/product teams need to collaborate with the development teams to validate the requirements.

·         The test data needs to be available earlier, and the testing teams need to collaborate with business/product teams to validate the test data for accuracy and completeness and to check that it is in sync with the business requirements spelled out.

·         Collaborate with the development team and share the test data which can be used in the unit/integration testing phases.

·         Collaborate again with the business teams to formulate a combined acceptance test strategy which would help reduce time to market.

·         Collaborate with the development team to review the results of unit testing/integration testing and validate them.

·         Collaborate with business/product teams to validate the test results of the combined acceptance testing.

Testing at each lifecycle stage has its own set of challenges and risks. If potential defects are not detected early, they escalate further down the SDLC. However, an experienced and competent test practitioner can identify these defects at the stage where they originate and address them there. Below are some examples which reinforce this fact.


·         Good static testing of the functional, structural, compliance and non-functional aspects of an application during the requirements phase can prevent 60% of defects from cascading down to production.

·         Similarly, making all the required test data (as specified by the business requirements) available as early as the end of the requirements analysis phase injects a sense of testing early in the lifecycle, which improves test predictability.

·         Planning ahead for performance, stability and scalability testing during the system design phase can help reduce the cost of potential defects incurred later on. Also, proactive non-functional testing (as required by the business) contributes significantly to faster time to market.

·         Test modeling during the test preparation phase helps avoid the tight coupling of the system that is being tested with the test environment. This eventually helps in achieving continuous progressive test automation.

·         Collaboration with the development teams ensures that they have used and benefited from the test data shared by the testing teams. This collaboration helps the testing teams validate the architecture, design and code by following simple, practical in-process validations through static testing of the functional, structural and non-functional aspects of the application.

·         Mechanisms which help predict when to end testing are a key requirement during execution. One such mechanism is a stop-test framework based on an understanding of the application, carved around the defect detection rate.

All the approaches described above let testers save time and focus more on collecting the right metrics and maintaining dashboards during test execution. They also ensure that testing is not limited to just one phase but is a fine thread that runs across the entire SDLC, improving quality and reducing costs and time to market for all business applications.
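The stop-test mechanism mentioned in the last bullet can be sketched roughly as follows; the threshold, window and daily defect counts are hypothetical, and a real framework would also weigh defect severity and coverage.

```python
# Sketch of a stop-test rule driven by defect detection rate:
# stop once defects/day stays below a threshold for several
# consecutive days. Threshold and data are hypothetical.

def should_stop_testing(daily_defects, threshold=2, stable_days=3):
    """Return True once the detection rate stays below `threshold`
    for `stable_days` consecutive days."""
    streak = 0
    for defects in daily_defects:
        streak = streak + 1 if defects < threshold else 0
        if streak >= stable_days:
            return True
    return False

# A cycle where detection tapers off vs. one where it does not:
print(should_stop_testing([9, 7, 5, 4, 1, 1, 0]))  # True
print(should_stop_testing([9, 7, 5, 4, 3, 2, 2]))  # False
```

The consecutive-days requirement guards against a single quiet day (a holiday, a blocked environment) triggering a premature stop.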

The benefits of this collaborative approach are many. I have listed a few benefits based on my collaborative team experiences:

·         De-risks the entire software development process by embedding testing as an inherent part of each stage of the SDLC process.

·         Defects are found early in the life cycle which reduces the total cost of fixing the defect at a later stage. The cost ratio between finding the defect at the requirements stage vs finding the same at the production stage is 1:1000.

·         Shortens the time to market through a built-in self-corrective mechanism at each stage of the SDLC.

September 26, 2011

Performance Testing for Online applications - The Cloud Advantage

Organizations have finally realized that building brand loyalty online contributes significantly to the overall brand value of the organization. In order to achieve this brand loyalty in the online space, organizations need to focus on two key elements - user experience and application availability.


Organizations can improve their online end user experience by conducting usability testing and by taking feedback from users to uncover potential usability issues. Usability testing helps identify deviations from usability standards and provides improved design directions as part of its iterative design process.


Uninterrupted application availability can be achieved by focusing on the performance aspects of the business application. To do so, the prime focus needs to be on performance throughout the application life cycle stages, right from requirements gathering, understanding the business forecasts, accounting for seasonal and peak workloads, capacity planning for production and ensuring right disaster recovery strategies like multiple back-ups across geographies, etc. All these need to be further coupled with the right performance validation approach.


Performance testing should not only focus on simulating the user load. It should also simulate the critical business transactions and resource-intensive operations, all under realistic patterns of usage. While certifying applications for performance, testing teams need to ensure that the user load factor takes into consideration the growth projections for at least the next five years, along with the peak seasonal user hits. This helps the organization ensure that the application can scale to handle not only peak traffic for the current year, but also online customer traffic for the next five years.
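The load-sizing arithmetic above can be sketched as follows, with hypothetical numbers for the current peak and growth projection:

```python
# Size the certified test load for peak traffic plus five years of
# projected growth. All inputs are hypothetical examples.

def projected_peak_load(current_peak_users, annual_growth, years=5):
    """Compound the current peak user load by the yearly growth projection."""
    return current_peak_users * (1 + annual_growth) ** years

current_peak = 10_000   # concurrent users at seasonal peak (hypothetical)
growth = 0.15           # 15% projected yearly growth (hypothetical)

target = projected_peak_load(current_peak, growth)
print(f"Certify performance for ~{target:,.0f} concurrent users")
```

At 15% yearly growth the five-year target is roughly double the current peak, which is why certifying only against today's traffic leaves so little headroom.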


While all this sounds good, the common client concern with such preparation is the need to set up production-like performance environments for thorough performance testing of online business applications. Setting up such an environment requires a huge CAPEX investment and, worst of all, the environment remains underutilized once the performance testing exercise is complete. Leveraging the cloud can help organizations quickly and effectively set up production-like performance environments and convert this CAPEX requirement to OPEX. This pay-as-you-go model of testing, in the form of cloud-based environments and tools, is the modern way for an organization to be cost-effective in the current economic scenario and achieve thorough, end-to-end performance testing of online business applications.


However, organizations need to realize that moving an application to the cloud does not mean access to infinite resources. Most organizations make this assumption while moving to the cloud and this can prove very costly. Whether an application is on a cloud or an on-premise application, it still needs to be designed to diligently handle application and availability failures.  Even in the cloud, the organization needs to sign up for specific computing power, a certain amount of storage power for the anticipated peak user load, etc. Any wrong forecasting on the mentioned factors or in the traffic increase pattern can, and will, result in application unavailability for users. Further, whether on the cloud or not, a disaster recovery back up plan is a must, that too a multi-geo one. This would help avert any business disruption in the event of any outage in a particular geography.

September 7, 2011

Enabling Effective Performance Testing for Mobile Applications

The mobile performance testing approach is broadly similar to other performance testing approaches. We just need to break the approach down to ensure all facets relevant to performance testing are noted and taken care of.

Understanding technical details on how mobile applications work

This is the primary step. Most mobile applications use a protocol (WAP, HTTP, SOAP, REST, IMPS or custom) to communicate with the server from wireless devices. These calls are transmitted via various network devices (e.g., routers and the gateways of the wireless service provider or ISP) to reach the mobile application server.

Performance test tool selection

Once we know the nitty-gritty of how the mobile application works, we need to select or develop performance tools which mimic the mobile application client traffic and record it from the mobile client or a simulator. There are several tools available in the marketplace to enable this - HP's LoadRunner, CloudTest, iMobiLoad, etc. Besides this, the mobile application provider will not have control over network delays; however, it is still very important to understand how network devices and bandwidths would impact performance and the end-user response time for the application in question. The Shunra plugin for HP LoadRunner, or Antie SAT4(A), have features that mimic various network devices and bandwidths.

Selecting the right monitoring tool


Once we have zeroed in on the load generating tool, we now need monitoring tools to measure client and server performance.

We can use DynaTrace, SiteScope or other APM (Application Performance Monitoring) tools to measure server-side performance. These tools capture and display, in real time, performance metrics such as response times, bandwidth usage, error rates, etc. If monitoring is in place on the infrastructure side, then we will also be able to capture and display metrics such as CPU utilization, memory consumption, heap size and process counts on the same timeline as the performance metrics. These metrics help us identify performance bottlenecks quickly, eliminating their possible negative impact on the end-user experience.

The performance of the mobile app client is also critical due to resource limitations with respect to CPU capacity, memory utilization, device power/battery capacity, etc. If the mobile application consumes a lot of CPU and memory, it will take more time to load on devices, which in turn significantly impacts speed and the user's ability to multitask on the same device. If the application also consumes a lot of power/battery, that further reduces user acceptance of the application. For this, app plugins can be developed to measure and log mobile client performance as well. We can install plugins on mobile devices and encourage users to use them while load is being simulated. Possible tools that can be used are WindTunnel, TestQuest, Device Anywhere, etc. The plugins can capture performance data, and the same can be sent to a central server for analysis.


In a nutshell, with the right performance test strategy and tools in place, we can ensure effective mobile application performance testing. This ensures that the organization is able to deliver high-performance, scalable apps to businesses, which positively impacts top-line growth.

August 18, 2011

The Importance of Service Level Agreements (SLAs) in Software Testing

It is pretty well understood in the software industry that testing is a specialized area which helps organizations reduce risk and derive greater business value across the entire software development lifecycle. However, many organizations continue to struggle with figuring out the best way to define service-level agreements (SLAs) and the outcomes that can govern testing relationships. Through my experience over the years, I believe it is extremely important for customers to define SLAs upfront in order to ensure 100% alignment of goals between service provider and customer, and to accelerate trust in relationships, especially with first-time partners.

Before we go on to define the SLAs, it is important to define the Key Result Areas (KRAs). These are broad-level areas where the SLAs will be measured, and they could be in areas like governance, process, resources/staff and transition. Once these are defined, we can define the SLAs within each KRA. It is important to choose SLAs which are relevant to the engagement (managed service, co-sourced, staff augmentation) or the type of testing (functional/automation/performance/SOA etc.). A common mistake made while defining SLAs is not defining the criticality of each SLA. This matters because not all SLAs need the same level of criticality; some are more relevant than others, so we can use a classification like critical, high, medium or low. Once the level of criticality is assigned to the SLAs, we need to decide how they will be measured. I have invariably seen that customers are unsure about the measurement of SLAs, but deciding the tools and the methodology of calculation for measuring the SLAs is imperative. Finally, decide how often the SLAs will be captured (release-wise, monthly or quarterly) and how they will be reported (spreadsheet, SharePoint, document).

In any new engagement where SLAs are defined for the first time, there will invariably be questions about the targets like how do we determine these targets? In such situations, it's always important to define a "Nursery Period". The purpose of this "Nursery Period" is to benchmark the targets for those SLAs where a period of demonstration is required before it can be set. At the end of this exercise, all SLAs should be specific, quantifiable and measurable.

The commercial framework for a risk and reward model is the key component of the SLA definition process. Before delving into commercials, it is important to decide how the SLA scores will be computed. Each individual SLA should be measured and a weighted score determined based on the SLA's criticality weighting. The individual weighted scores should then be averaged, and the final average weighted score used to calculate the commercial "Debits" or "Credits". To make this less complicated, it may make sense to include only the critical and high SLAs in determining the score. Working out the modalities (frequency, process of payments) of "Debits" or "Credits" is the last leg in the definition of the R&R model. My recommendation is to implement separate governance for the risk and reward model to facilitate a collaborative and transparent relationship on this key aspect. The governance framework should have clearly defined procedures for issue resolution and escalation to allow both parties to work efficiently through issues inherent in a risk and reward agreement.
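A minimal sketch of the weighted SLA scoring described above might look as follows; the SLA names, criticality weights and debit/credit bands are hypothetical choices, not a prescribed scheme.

```python
# Weighted SLA scoring: each SLA's attainment (0-100) is weighted by
# its criticality, then averaged. Weights, SLAs and bands are hypothetical.

CRITICALITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def sla_score(slas):
    """Criticality-weighted average of per-SLA attainment scores."""
    total_weight = sum(CRITICALITY_WEIGHT[c] for _, c, _ in slas)
    weighted = sum(score * CRITICALITY_WEIGHT[c] for _, c, score in slas)
    return weighted / total_weight

slas = [
    ("Defect leakage", "critical", 95),
    ("Test execution productivity", "high", 88),
    ("On-time delivery", "critical", 100),
]

score = sla_score(slas)
outcome = "credit" if score >= 95 else "debit" if score < 85 else "neutral"
print(f"score={score:.1f} -> {outcome}")
```

Restricting the list to critical and high SLAs, as suggested above, is just a matter of filtering the input before scoring.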

For SLAs to remain relevant, it is important that they are reviewed regularly - every quarter, I think. SLAs that no longer serve their purpose need to be eliminated, and the bar should be raised for SLAs which are consistently met for consecutive periods.

Lastly, it is extremely important to create awareness about the SLAs amongst internal and external stakeholders and project participants, so that everybody understands the SLAs and their objectives. Communicating the SLAs is critical and essential to the successful execution and completion of the testing assignment at hand.

July 28, 2011

Service Virtualization - Completing the Cloud Story

Organizations that have applications in production are required to have at least 4-5 different sets of pre-production environments - such as System Testing, Performance Testing, User Acceptance Testing and Automated Regression Testing environments - to ensure 100% validation of the different sets of requirements associated with an application. This in all probability increases the CAPEX budget for the organization. Organizations typically consolidate, virtualize and share these infrastructures for validating applications across different Lines of Business (LoBs). But this exercise is also bound to carry significant OPEX, due to the costs incurred in having dedicated teams/personnel to manage the environments, rental costs and infrastructure costs. Even in these setups, testing teams are constrained by situations like waiting for access to expensive test tool licenses, legacy systems and external/dependent systems. In order to overcome these issues in traditional testing environments, it is natural for us to look at virtualizing external/dependent systems using techniques like Service Virtualization.

Let us consider a scenario where we have a Payments Processing Engine (PPE) which is currently undergoing changes and is hosted in a traditional QA environment. This PPE system needs to talk to two major external systems, a Legacy system and a Data warehouse, which are not currently available and are out of scope for testing. If the organization is to test the PPE system end-to-end, it will need to acquire access to the external systems. Unavailability is not the only constraint here: access to the legacy system is expensive and is made available in a 2-hour time window only, and the Data warehouse system is not available in the pre-production environment at all. With such constraints and dependencies on external systems, delays in time-to-market and increased CAPEX requirements are bound to bring down the overall testing efficiency. The way out for organizations faced with such situations is to adopt Service Virtualization and virtualize all external/dependent systems, like the Legacy and Data warehouse systems in this example.

Today's market dynamics force businesses to be more cost-effective, agile and scalable to service ever-changing market demands. The advent of cloud computing has made it possible for organizations to achieve all of this, in addition to helping them move from a CAPEX to an OPEX business model. Though the move to the cloud brings sizable benefits and cost savings, it does not answer the question of dependencies on external systems. Organizations would need to spend huge amounts on setting up cloud images for these large external systems, making the entire process unfeasible. So, how can organizations do away with the issue of external system dependencies in a cloud environment? This is where Service Virtualization comes in. With Service Virtualization, organizations can create virtual models of external dependent systems and bring them to the cloud as Virtual Services (VSE), with 24/7 availability and low cost.

Let us consider a scenario to understand the applicability of Service Virtualization in a cloud environment. Suppose we have an Order Management System (OMS), hosted in a cloud-based environment, that is undergoing changes. This OMS needs to talk to three major external systems - Mainframe, ERP and Databases - that are not on the cloud and are out of scope for testing. If the organization is to test the OMS along with the three external systems, it will need to spend huge amounts setting them up in the cloud. This will result in higher CAPEX for the organization, which could very well blunt the cloud benefits of cost saving and optimized IT spending. With Service Virtualization, the organization can host the OMS application in a virtual machine in the cloud, while the external/dependent systems - Mainframe, ERP and Databases - can be modeled and used as Virtual Services (VSE) implementations in the cloud. Thus, by applying Service Virtualization, all the external/dependent systems are provisioned at a fraction of the overall external system setup costs. With Service Virtualization, organizations can achieve goals of elastic capacity consumption. They can also cut down the significant wait times associated with infrastructure acquisition, installation and setup, and with accessing external/dependent systems, from months or weeks to a few minutes.
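As a toy sketch of the idea (not any particular Service Virtualization product), a virtual service is essentially a stand-in that returns canned responses for an unavailable dependent system, such as the ERP above. The endpoint path and payload below are hypothetical.

```python
# Toy virtual service: an HTTP stub returning canned responses in place
# of a real dependent system. Endpoint and payload are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"/erp/order/42": {"orderId": 42, "status": "SHIPPED"}}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED.get(self.path, {"error": "not stubbed"})).encode()
        self.send_response(200 if self.path in CANNED else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), VirtualService)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The system under test now calls the stub instead of the real ERP:
url = f"http://127.0.0.1:{server.server_port}/erp/order/42"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(data)
server.shutdown()
```

Commercial tools add record-and-playback, protocol support for mainframes and message queues, and data-driven response models, but the contract is the same: the application under test talks to the virtual endpoint exactly as it would to the real system.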

Thus, with Service Virtualization, organizations can achieve their overall objective of moving to the cloud while remaining responsive and relevant to ever-changing market dynamics and demands.

March 1, 2011

What is in the recently released HP ALM 11 version?

That HP is the market leader in the Quality Assurance and Testing tools space, with a 50-60% market share, is no secret. If version 11 of the suite of products released recently is any indication, HP intends to keep it that way for a long time to come. Below are my notes from one of their latest road shows that I attended.


In this post I comment on what Quality Center 11 (QC 11), and specifically the HP Sprinter feature within it, has to offer us.

Continue reading "What is in the recently released HP ALM 11 version?" »

December 24, 2010

Why Do Testing Metrics Programs Fail?

Test Management Guide Series - Topic 3


1.0  Introduction


We all know that statistical data is collected for metric analysis to measure an organization's success and to identify continuous improvement opportunities, yet not many organizations succeed in meeting this objective.

Senior management loves to see metrics but fails to understand why teams are unable to produce meaningful metric trends that provide clear, actionable steps for continuous improvement. It was surprising to see many client senior executives ask me for help in establishing meaningful metrics programs for their organizations. Several questions popped into my mind: What makes a testing metrics program such a challenge? We collect and present tons of data, so what exactly is missing? Even though everyone right up to the CIO wants a quick view of various metrics to understand the progress made, why do so many testing organizations fail to produce them in a meaningful way? Metrics definitions and techniques are available from multiple sources, so why do organizations still struggle to collect, analyze and report them?

After thinking about these questions, I asked myself a few fundamental questions and looked at several metrics reports. I was not surprised to find the following:

·         Most testing metric reports were internally focused; they tried to show how well the testing organization was operating, with no actionable next steps

·         Metric reports contained several months of data with minimal analysis findings and no actionable items for each stakeholder

·         99% of the action items identified related only to actions taken by the testing organization

2.0  Revisiting the Golden Rules of Testing Metric Management:


While conducting a detailed study of these reports, I felt it worth re-establishing a few golden rules of metric management that we all know but often fail to implement.

Rule #1: The metrics program should be driven and reviewed by executive leadership at least once a quarter. In reality, very few organizations have a CIO dashboard to understand the reasons for quality and cost issues, despite hearing repeated escalations from business partners about production defects, performance issues and delayed releases.

a)         Metrics collection is an intense activity that needs the right data sources and acceptance by the entire organization (not just testing). Unless it is driven by senior leadership, metrics collection will yield limited actionable items

b)         It requires alignment from all IT and business organizations to collectively make improvements

Rule #2: Ensure all participating IT and business organizations are in alignment with the metrics program. In practice, misalignment shows up in several ways:


·         The testing organization shows that it met 100% of its schedule-adherence milestones, yet the project was delayed by 6 months

·         Testing effectiveness may be reported as 98%, while several production defects resulted in a couple of million dollars of production support activity, caused by issues that were not in the testing organization's control

·         Business operations and UAT teams focus on current projects, while metrics provide trending information; all teams should be aware of the improvement targets set for the current year or current release


Rule #3: Ensure the necessary data sources are available and data collection processes are in place to guarantee uniform data collection.

Uniform standards are not followed among IT groups, which makes data collection a challenge:

·         Effort data is frequently erroneous

·         The schedule changes multiple times over the life cycle of the project

·         Defect resolution is a constant source of conflict

·         Defect management systems are not uniformly configured


·         There is no infrastructure or process to view and analyze production defects

·         More than 40% of defects are due to non-coding issues, but the defect management workflow does not capture this

·         Requirement defects are identified in the testing phase, making them 10 times more expensive to fix, but there is no means to identify and quantify these trends

Rule #4: Ensure metrics are trended and analyzed to identify areas of improvement. Metrics data is garbage unless you trend it and analyze it to identify action items. Trend analysis should help to:

·         Identify improvement opportunities for all participating IT groups

·         Set improvement targets

·         Plot graphs that give meaningful views

·         Compare performance against industry standards

While it is not always easy to follow all four golden rules of testing metrics, an attempt to comply with them will significantly improve the success of a testing metrics program.

3.0  What should I look for in my testing metrics?


While I continued to look for the reasons behind metrics program failures, I realized that most testing organizations select metrics that are internally focused, and that analysis is carried out only to identify issues within the testing organization or to defend its actions. Below is an attempt to summarize suggestions on how to make better use of metrics data.

For each metric below, the prevailing perception and its underlying causes are listed first, followed by the suggested action items.

Metric: Testing Effectiveness

Perception: The testing organization reports effectiveness as 98%, yet many production defects are reported

·   Absence of a process to collect production data

·   System testing and UAT executed in parallel

·   Lack of effective requirement traceability

·   Lack of an end-to-end test environment to replicate production issues

·   Extensive ambiguity in business requirements, resulting in production defects due to unclear requirements

Suggested Action Items:

·   Publish the testing organization's effectiveness and the project's testing effectiveness separately to clearly highlight the overall project issues

·   Establish a process for the testing team and production support team to analyze every production defect and take the necessary corrective action

·   Associate a $ value lost with every defect in production to get senior management's attention on infrastructure and requirements management improvements



Metric: Test Case Effectiveness

Perception: There are no industry-standard target recommendations by testing type, so few action items are identified, although data is reported regularly

·   Difficult to set targets

·   Lack of actionable decision points based on increases or decreases in test case effectiveness

·   The RTM is not prepared, so the quality of the test cases and their coverage is a concern

Suggested Action Items:

·   Based on test case preparation and execution productivity, set targets for test case effectiveness. This ensures the right effort is spent on test planning and execution to achieve the target testing effectiveness

·   If test case effectiveness is below threshold for 3 consecutive releases, optimize your test bed and eliminate test cases that are not yielding any defects

·   If test case effectiveness is below threshold, validate the applicability of risk-based testing

·   If test case effectiveness is above threshold, unit testing might be an issue; recommend corrective action

Metric: Schedule Adherence

Perception: The project is delayed by 6 months, but the testing team has met its schedule

·   Project managers re-baseline the schedule after delays in every life cycle stage; testing reports schedule delays only when they are caused by testing issues

·   Testing wait time increases with schedule delays in each stage, and testing cost increases with it; testing does not quantify these wait times

·   Testing teams do not use historic wait times in estimates, resulting in testing budget overruns and an increasing testing-to-development spend ratio

Suggested Action Items:

·   Track schedule milestones across life cycle stages and collect metrics for delays at any milestone

·   Create action triggers if the total delayed milestones cross 10% of the overall schedule milestones

·   Calculate the additional QA spend due to missed schedule milestones and report it in the metrics

·   Establish operational level agreements between the various teams to identify schedule adherence issues
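The 10% milestone trigger suggested above is simple to automate. A sketch in Python, assuming each milestone is recorded as a (planned, actual) pair of day numbers:

```python
def delayed_milestone_ratio(milestones):
    """milestones: list of (planned_day, actual_day) pairs across all life cycle stages."""
    delayed = sum(1 for planned, actual in milestones if actual > planned)
    return delayed / len(milestones)

def schedule_trigger(milestones, threshold=0.10):
    """True when delayed milestones exceed the 10% action threshold."""
    return delayed_milestone_ratio(milestones) > threshold
```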


Metric: Requirement Stability

Perception: Senior management believes that achieving requirement stability is a myth and that requirements will continue to change for various reasons, because 90% of the industry has faced this issue for decades

·   Requirement stability is a broad measure with several contributing parameters (ambiguity, mistakes in articulation, incompleteness, addition of new requirements, edits, deletions, etc.)

·   The individual parameters are not captured and reported in separate categories by life cycle phase. Requirements can change due to design issues, testability issues, cost of development and implementation, operational challenges, changes in mandates, organizational policies, etc.

·   The effect on each testing phase is not quantified

·   Lack of forums to discuss requirement stability index issues and improvements

·   Lack of a requirement management tool to automate traceability and versioning

Suggested Action Items:

·   Report the # of times requirement review signoff from the test team was missed

·   Report test case rework effort due to requirement deletions, edits and additions

·   Report additional test case execution effort due to requirement deletions, edits and additions

·   Report the # of defects due to requirement issues (missing, ambiguous, incomplete) and quantify the effort needed by development and testing to correct them

·   Quantify the SME bandwidth required to support testing activities; if requirements are elaborate, SME bandwidth needs should reduce over time

·   Report the # of CRs added and their effort to measure scope creep

·   Report the # of times a requirement change was not communicated to the testing team
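There is no single standard formula for a requirement stability index; a common simple form, sketched under the assumption that churn (additions, edits, deletions) is counted against the baselined requirement set:

```python
def requirement_stability_index(baseline_count, added, edited, deleted):
    """Share of the baseline untouched by churn; 1.0 means fully stable."""
    churn = added + edited + deleted
    return round(1.0 - churn / baseline_count, 2)
```

Reporting this per life cycle phase, rather than once per release, is what makes the per-parameter trends above visible.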






Metric: Defect Rejection Ratio and Other Defect Metrics

Perception: Senior management has no idea what to do with this data at the end of the release

·   Reporting at the end of the release gives no opportunity to make course corrections in the ongoing project

·   Metrics like defect severity, defect ageing and defect rejection need immediate corrective action

·   Threshold points for these defect metrics have not been established, and automated trigger points calling for action are not defined

Suggested Action Items:

·   Report these metrics on a weekly basis rather than at the end of the release

·   Create automated trigger points, e.g. if the # of critical defects goes beyond 5, an action point is triggered
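The automated trigger points suggested above can be sketched directly. The limits below (5 critical defects, 14 days of ageing) are illustrative; the critical-defect rule is the one named in the text, and the ageing rule is an assumed example of the same pattern:

```python
def defect_triggers(defects, critical_limit=5, max_age_days=14):
    """defects: list of dicts with 'severity' and 'age_days' fields.

    Returns the list of action points triggered this week."""
    actions = []
    critical = [d for d in defects if d["severity"] == "critical"]
    if len(critical) > critical_limit:
        actions.append("escalate: critical defect count exceeded")
    if any(d["age_days"] > max_age_days for d in defects):
        actions.append("review: defect ageing threshold breached")
    return actions
```

Run weekly against the defect management system's export, this replaces the end-of-release report with in-flight course correction.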



Metric: Test Case Preparation & Execution Productivity

Perception: Senior management has no idea what to do with this data at the end of the release

·   Reporting at the end of the release gives no opportunity to make course corrections in the ongoing project

·   Difficult to set targets due to the difficulty of establishing a testing unit of measure

Suggested Action Items:

·   Report these metrics on a weekly basis rather than at the end of the release

·   Create automated trigger points

·   Report reductions in test execution productivity due to:

o   environment downtime

o   wait time due to delays in builds

o   wait time due to delays in issue resolution

o   rework effort during test case preparation

o   rework due to lack of understanding and application knowledge
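The productivity-loss breakdown above reduces to a simple percentage once each cause is tracked in hours. A sketch in Python; the category names mirror the list above, and how hours are attributed to each is an assumption each team must define:

```python
def execution_productivity_loss(total_hours, downtime, build_wait,
                                issue_wait, rework):
    """Percentage of the execution window lost, broken out by cause."""
    lost = downtime + build_wait + issue_wait + rework
    return round(100.0 * lost / total_hours, 1)
```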


4.0  Conclusion


The table above provides insight into the complexity involved in collecting, analyzing and reporting metrics. There are clearly two types of metrics:

·         Metrics that impact the overall project and are hence very important to the organization's senior management

·         Metrics that are internally focused on the testing organization

Testing Effectiveness, Schedule Adherence and Requirement Stability help identify issues that impact entire projects and result in project delays, production defects and budget overruns, all of which matter greatly to the CIO. The CIO metrics dashboard should include these metrics and the reasons behind any failures. As the quality goalkeeper for the entire organization, the testing organization should clearly identify actionable improvements for all stakeholders if the metrics program is to succeed.

To keep tabs on the effectiveness and efficiency of the testing organization itself, internally focused metrics such as defect severity, defect ageing, # of defects, testing productivity, cost of quality, % automated and many more are important. Clear steps should be defined to identify actionable items. These action items are all internal to the testing organization and should be used to set internal improvement targets and initiatives.



May 17, 2010

Black or white? Or is it grey that matters?

In the past few months, I have been having conversations with clients on the right test architecture and strategy for testing transaction processing systems, especially in the financial services domain. As some of these discussions progressed to specific problem areas, I realized a few things:

·          All of these organizations have traditionally approached testing with a black box approach and are facing challenges in isolating the points of failure in their transaction flows

·          Most of their transaction processing systems, once monoliths, have evolved to build layers of processing and have wrapped themselves with a service interface, while the approach to testing continues to treat the systems as monoliths

·          A fair amount of automation has been attempted, but most of it is focused on the user interface and is hence dependent on UI automation tools, primarily HP QuickTest Pro and IBM Rational Functional Tester

·          The realization that data is a critical factor in testing came quite late, and all of these organizations are trying to put in place a test data management strategy and the tools around it.


Given this background, most of these organizations are asking the all-important question: "how do I redesign my test strategy to ensure quality in my modern day applications?" 

Continue reading "Black or white? Or is it grey that matters?" »
