Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of this profession.


March 31, 2019

RPA Performance Testing

In today's rapidly changing technology landscape, groundbreaking trends are emerging every day. Some of the key trends driving financial services industry imperatives are:

1. Robotic Process Automation (RPA)
2. AI and digital assistants such as chatbots
3. Blockchain
4. Big Data

RPA has created a lot of buzz in the industry, and organizations are reaping immense benefits by implementing it. As per McKinsey, "110-140 million FTEs could be replaced by automation tools and software by 2020". RPA implementations need strong testing support, because failures caught in the later stages of development can be very expensive. One of the challenges organizations face is the identification of bottlenecks and hotspots; as per the IBM World Testing Report, 65% of organizations face challenges related to performance testing.

[Image: Performance testing challenges]

While organizations are reaping RPA benefits, it is equally important to ensure that the performance of RPA processes is up to the mark and meets the 3S mantra: speed, scalability and stability.

Before delving deeper into RPA performance testing challenges and solutions, let's understand the typical RPA landscape.

  • RPA landscape

[Diagram: RPA components and the surrounding application landscape]

As seen from the diagram above, RPA possesses immense capability for integration with a varied landscape. It can be easily integrated with legacy, web-based, API-based, mainframe and many other applications. Bots also promote reuse by "exposing" their learnings to a shared library that can be used by other bots. RPA interacts with different systems via screen scraping, emails, OCR, APIs, etc., replicating user actions.

  • Performance Testing areas

Having understood the landscape, let's focus on the key areas that performance testing should cover.

1. Capacity related issues when concurrent jobs are scheduled by robots

2. Tasks completed per bot in a given time

3. Licensing and bot utilization -

  • Licenses - Monitors total number of acquired robot licenses
  • Robot utilization vs. capacity - Monitors the percentage of acquired robot licenses that are utilized in production 

4. Hourly/daily variability in robot usage (see the sketch after this list)

5. Elastic scalability - dynamically scaling hundreds of robots up and down to ensure RPA meets user demand

6. Complete ecosystem performance - along with RPA processes, we need to focus on each application in the ecosystem.
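
As an illustration of points 3 and 4, the snippet below computes robot utilization against licensed capacity and the hourly spread in robot usage. It is a minimal sketch assuming the run log and license count shown here; in practice these figures would come from the RPA orchestrator's console or reports.

```python
# Illustrative robot run log: (hour of day, robots active in that hour).
# In practice these figures come from the RPA orchestrator's console/reports.
run_log = [(9, 14), (10, 18), (11, 20), (12, 12), (13, 9), (14, 16), (15, 19)]
ACQUIRED_LICENSES = 25  # total robot licenses owned (assumed figure)

for hour, active in run_log:
    utilization = active / ACQUIRED_LICENSES * 100
    print(f"{hour:02d}:00  active={active:2d}  utilization={utilization:.0f}%")

peak = max(active for _, active in run_log)
trough = min(active for _, active in run_log)
print(f"Hourly variability: peak {peak} vs trough {trough} robots "
      f"({(peak - trough) / ACQUIRED_LICENSES:.0%} of licensed capacity)")
```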


  • Challenges faced

Now that we understand the focus areas, let's look at the inherent performance testing challenges faced by RPA:

1. Dissimilar technologies: As seen from the RPA landscape, each application involved in RPA execution may belong to a different technology. We need to ensure that each component meets its performance requirements both in isolation and in the end-to-end (E2E) ecosystem.

2. Performance testing tool availability: The diverse landscape adds complexity, and no single performance testing tool can support the varied needs of the ecosystem. For the RPA layer there is no record-and-playback mechanism available, while for RPA back-end systems we have to identify appropriate COTS/commercial tools based on protocol support, via proofs of concept, knowledge sharing, etc.

3. Test environment: The performance testing environment may not be an exact production replica due to cost or other factors. We need to plan a realistic workload that caters to the scaled-down environment and any other dependencies, so that the desired results can still be achieved within the ecosystem.

4. Monitoring solutions: Similar to the tool availability challenge, only a narrow set of monitoring solutions exists to monitor the platform, detect performance issues and analyze bottlenecks. We have to explore COTS/open source tools to cover the varied technology landscape.

5. Continuous delivery pipeline: Current RPA solutions are mostly commercial, and RPA engineers have few open source options available due to proprietary binary file formats. This will likely change as RPA adopts open source standards; the Infosys AssistEdge RPA community edition is certainly a revolutionary step in that direction.

6. Unavailability of RPA back-end / interacting systems: Since the complete RPA ecosystem is a complex one, there is a chance that one of the interfacing systems may be performing poorly or be temporarily down.

How do we overcome these challenges? What strategy should we adopt? The solution lies in sociability testing.

  • Sociability Performance Testing

Sociability testing focuses on the core RPA process and any systems interacting with RPA. Refer to the diagram below.

[Diagram: Sociability performance testing across the RPA ecosystem]

  • Key aspects to look at -

1. Tools and technology - The tools used will vary and can be a combination of open source and COTS products. We need to assess the complete technology landscape and consider two separate areas here: RPA versus the other IT systems.
For RPA there is no dedicated performance testing tool, but we can collect critical stats by observing the monitoring console, e.g. process run time, number of records processed, computing units used, license usage, etc. The monitoring console is therefore currently our best bet for fine-tuning RPA processes.
For the other IT systems, we can explore open source tools such as JMeter or COTS tools such as Micro Focus Performance Center, NeoLoad, etc.
The key is to test the E2E ecosystem to get accurate stats and stable systems.
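
As a hedged illustration of load testing the surrounding IT systems, the sketch below uses Locust, an open source Python load testing tool, as an alternative to JMeter. The host and endpoints are hypothetical placeholders; substitute the real back-end services that your bots call.

```python
# rpa_backend_load.py
# Run with: locust -f rpa_backend_load.py --host https://backend.example.com
from locust import HttpUser, task, between


class BackendApiUser(HttpUser):
    """Simulates the API traffic an RPA bot generates against a back-end system."""
    wait_time = between(1, 3)  # think time between requests, in seconds

    @task(3)
    def fetch_account(self):
        # Hypothetical read-heavy endpoint (weighted 3x).
        self.client.get("/api/accounts/12345")

    @task(1)
    def submit_payment(self):
        # Hypothetical write endpoint, exercised less often.
        self.client.post("/api/payments", json={"accountId": "12345", "amount": 250.0})
```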

2. Utilizing a strong APM - An APM tool such as Dynatrace or AppDynamics needs to be installed to get detailed system metrics and transaction response times for the downstream IT systems. The APM tool can also help in baselining transactions. These tools can be used to monitor the infrastructure on which RPA is hosted as well as the back-end/interfacing systems.
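
To make the baselining idea concrete, here is a minimal sketch that derives p50/p90 baselines from response time samples exported from a monitoring or APM tool. The transaction names and values are illustrative assumptions.

```python
import statistics

# Response time samples (ms) per transaction, exported from the monitoring/APM tool.
# The transaction names and values below are illustrative only.
samples_ms = {
    "Login": [420, 450, 510, 430, 470, 440, 455],
    "CreateCase": [910, 980, 1020, 890, 950, 1005, 930],
}

for txn, values in samples_ms.items():
    p50 = statistics.median(values)
    p90 = statistics.quantiles(values, n=10)[8]  # 90th percentile cut point
    print(f"{txn}: baseline p50={p50:.0f} ms, p90={p90:.0f} ms")
```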

3. Test data - The RPA processes themselves can be used to create the required test data, i.e. the system under test is also leveraged to automate test data setup.
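
A complementary option is to seed the bots' input queue with synthetic work items. The sketch below assumes the bots read work items from a CSV queue, which is an assumption about the setup; adapt it to however your orchestrator feeds work to the robots.

```python
import csv
import random

# Generate 500 synthetic payment work items for the bots' input queue.
random.seed(42)  # reproducible data set

with open("payment_workitems.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["work_item_id", "account_id", "amount", "currency"])
    for i in range(1, 501):
        writer.writerow([
            f"WI-{i:05d}",
            f"ACC-{random.randint(10000, 99999)}",
            round(random.uniform(10, 5000), 2),
            random.choice(["USD", "EUR", "GBP"]),
        ])

print("Generated 500 synthetic work items in payment_workitems.csv")
```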

4. Service virtualization - Service virtualization using tools such as CA Service Virtualization, Parasoft Virtualize, etc. can help emulate the behavior of various interacting components. It may not be possible to leverage this solution in all situations, but it should help in cutting down the testing cycle wherever possible.

5. Establishing a CoE - A performance testing Center of Excellence (CoE) will play a crucial role, since multiple teams are involved in E2E testing. Establishing proper processes and governance models will ensure testing is done in minimal time and at lower cost.

To summarize, RPA is itself an automation process and is largely scriptless, so scripting it again with another automation tool may not work. Monitoring is therefore our focus area, along with designing a realistic workload to test in pre-production. It is similar to a batch run: the workload is initiated by RPA itself, while other tools are used to monitor performance.

March 28, 2019

Service Virtualization using Mock Server

THE BEGINNING

Service virtualization is a technique for integrating a mock server into a test suite to remove dependencies on real back-end systems or third-party systems from the test environment. It is an ideal solution for Test Driven Development (TDD) and Business Driven Development teams who want to quickly test the application and its API services to find the major problems early.

Service virtualization is best suited to microservices-based, service-oriented and cloud-based architectures, and it is an important component of the DevOps toolchain.

Problem Statement

Consider a compact view of a microservices-based architecture in which the application communicates with real back-end systems through a number of API calls to receive responses. For instance, in banking applications, some of the important REST API calls are accounts, payments, transactions, etc.

Here is the list of problems with this kind of application infrastructure:

- There is no dedicated environment for automation testing, UAT and performance testing. Because the environment is shared between all the teams, this causes delays.

- The environment is frequently down due to deployment releases and server configuration issues.

- Because the data differs between automation testing and performance testing, test data setup is also a big challenge for the teams.

- Tests are brittle rather than robust, which means no reusability; teams are therefore not able to achieve full test coverage due to environmental issues.

 Implementation of Service Virtualization

In a recent assignment, our QA team was struggling with test coverage gaps in automation testing, unexpected environmental issues, performance-related issues and more with the real back-end systems, because those systems belonged to third-party vendors and were not accessible to our teams.

These problems left our teams blocked and unable to perform any testing, leading to unexpected delays in production releases that impacted the project schedule and delivery.

To overcome these challenges, we created and implemented a mock-server-based virtualization solution.

Proposed Solution
a.   Introduction of Solution

- This is a WireMock server / virtual service based environment model, and one way of solving the dependency issues above. Virtual services or mocks allow you to decouple testing from the real back-end systems and provide an independent environment to the different testing teams. With this in place, the problems described above were resolved and the teams were unblocked.
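
As a minimal sketch of how such a virtual service is set up, the snippet below registers a stub for a hypothetical accounts API on a WireMock standalone instance (assumed to be already running locally on port 8080) via its admin REST API, and then calls the stub as a client would. The endpoint and payload are illustrative assumptions, not the project's actual services.

```python
import requests

# Assumes a WireMock standalone instance is already running on localhost:8080.
WIREMOCK_ADMIN = "http://localhost:8080/__admin/mappings"

# Register a stub for a hypothetical accounts endpoint via WireMock's admin API.
stub = {
    "request": {"method": "GET", "urlPath": "/api/accounts/123"},
    "response": {
        "status": 200,
        "headers": {"Content-Type": "application/json"},
        "jsonBody": {"accountId": "123", "balance": 1520.75, "currency": "USD"},
    },
}
requests.post(WIREMOCK_ADMIN, json=stub).raise_for_status()

# The test suite now calls the virtual service instead of the real back end.
print(requests.get("http://localhost:8080/api/accounts/123").json())
```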

b. Application of Solution

Here are the advantages of using service virtualization over the traditional approach:

- Test coverage improved to nearly one hundred percent and unexpected environment issues were avoided, so test quality improved.

- All the QA teams use a similar environment, independent of the other teams.

- Test data can be set up easily and handled in an optimized way as per the business requirements.

- Test development is robust, with far fewer issues.

- There are no environment deployment and configuration issues.

- The service virtualization model is more agile than traditional models.

- There is little or no cost in developing and implementing the mock server virtualization.

- It is flexible enough to fit any kind of application architecture.

- It lets testers act as developers by manipulating the output responses according to their needs.

- It is a very quick solution to the issues related to the real back-end environments.

- This approach reduced the manual effort and time by 90%, making it a very effective solution for the business.

 Future Direction / Long-Term Focus

- Service virtualization is one such approach. Especially for large software projects, this practice can dramatically reduce cost.

- Enhancing the practical reusability of service virtualization further reduces future development effort.

- Similar testing practices can be implemented for other business needs, such as cloud-based and service-oriented architectures.

Results / Conclusion

We believe this kind of approach will help teams accomplish various upcoming engagements and produce remarkable results.



March 25, 2019

Role of Artificial Intelligence in Performance testing and Engineering


A typical performance test starts with analyzing the application UI and creating the test scripts. Users then hit the application server, and the load testing tools generate dashboards indicating response time, throughput, CPU utilization, memory utilization, etc.

In the era of AI (Artificial Intelligence) powered software, performance engineers should be able to answer questions like the following during the early stages of application design: What should we expect once the application is in production? Where are the potential bottlenecks? How do we tune application parameters to maximize performance?

Critical applications need a mature approach to performance testing and monitoring. AI can act as the intelligent part, the brain, of the performance testing process. Routine tasks like test design, scripting and implementation can be handled using AI, so that test engineers can focus on the creative side of software testing.

One reasonable use case for AI in performance testing (PT) is codeless automation scripting. Writing performance scripts using Natural Language Processing (NLP) can make the scripting task much easier: the computer learns from the data given to it rather than from explicit programming (a simplified sketch follows the list below). These are some aspects of a solution empowered by AI/ML (Artificial Intelligence / Machine Learning) in performance testing:

  • A testing environment developed using ML will have advanced capabilities in terms of self-healing and intuitive dashboarding; using deep learning algorithms, corrections can be handled automatically.
  • Test flows are recorded and can be tested using data, with no coding required in most scenarios.
  • Reusable functions and objects can be generated and grouped using semi-supervised learning. Scenarios are flow based, so the implementation is transparent to the user.
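
The snippet below is a deliberately simplified, rule-based sketch of the codeless scripting idea: plain-English test steps are mapped to HTTP calls. A real NLP-driven tool would use trained language models; the base URL and paths here are hypothetical.

```python
import re
import requests

# Toy "codeless" step interpreter: maps plain-English test steps to HTTP calls.
# A real NLP-driven tool would use trained language models; this rule-based
# sketch only illustrates the idea. The base URL and paths are hypothetical.
BASE_URL = "https://app.example.com"

STEP_RULES = [
    (re.compile(r"open the (\w+) page", re.I), lambda m: ("GET", f"/{m.group(1).lower()}")),
    (re.compile(r"search for (\w+)", re.I), lambda m: ("GET", f"/search?q={m.group(1)}")),
]

def run_step(step: str) -> None:
    for pattern, build_request in STEP_RULES:
        match = pattern.search(step)
        if match:
            method, path = build_request(match)
            response = requests.request(method, BASE_URL + path)
            print(f"{step!r} -> {method} {path} -> HTTP {response.status_code}")
            return
    print(f"No rule matches step: {step!r}")

for step in ["Open the accounts page", "Search for payments"]:
    run_step(step)
```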

Yet another use case is performance test modelling. AI's pattern recognition strength can extract relevant patterns during load testing, which is very useful for modelling the performance process. The performance test model consists of the algorithms used, from which the AI learns based on the given data. The ability of AI to anticipate future load problems helps in creating the performance test model efficiently: it deals with a lot of data and can predict system failures. Once the system data has been analyzed, the performance test model can be created based on the observed system behavior.
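
As a small illustration of anticipating future load problems from observed data, the sketch below fits a simple polynomial trend to measured p95 response times and extrapolates to higher user counts. The figures and the 3-second threshold are invented for the example; a production solution would use far richer models and data.

```python
import numpy as np

# Observed load test results: concurrent users vs. p95 response time (seconds).
# The figures and the 3.0 s threshold are invented for this example.
users = np.array([50, 100, 200, 400, 600])
p95_seconds = np.array([0.8, 0.9, 1.2, 2.1, 3.4])

# Fit a simple quadratic trend as a stand-in for a model that "learns from the data".
model = np.poly1d(np.polyfit(users, p95_seconds, deg=2))

for target_users in (800, 1000):
    predicted = model(target_users)
    verdict = "likely SLA breach" if predicted > 3.0 else "within SLA"
    print(f"{target_users} users -> predicted p95 {predicted:.2f} s ({verdict})")
```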

Another area is SLA design. SLAs should be measurable, attainable, simple, realistic and time bound, but most SLAs are not designed like this; that is a basic limitation of human-driven processes. Once AI takes on this role, the situation changes. It can track all the affected areas, feed back into the monitoring system with fine granularity, analyze the complexity of the system and suggest an appropriate SLA. For example, if a module has 1,000 lines of code, an SLA of 500 milliseconds might be suggested. Because AI can detect working trends in a system directly, the SLA can be fine-tuned in real time as system performance changes.
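
To illustrate the real-time fine-tuning idea in miniature, the sketch below derives a suggested SLA from a rolling window of observed response times (95th percentile plus a safety margin). The window size, margin and sample values are assumptions made for the example.

```python
from collections import deque
import statistics

# Adaptive SLA sketch: keep a rolling window of recent response times and derive
# the suggested SLA from the observed 95th percentile plus a safety margin.
WINDOW_SIZE = 200   # number of recent samples to consider (assumed)
MARGIN = 1.2        # 20% headroom over the observed p95 (assumed)

recent = deque(maxlen=WINDOW_SIZE)

def record(sample_ms: float) -> float:
    """Record one response time sample and return the current suggested SLA in ms."""
    recent.append(sample_ms)
    if len(recent) < 20:  # not enough data yet to suggest anything
        return float("nan")
    p95 = statistics.quantiles(recent, n=20)[18]  # 95th percentile cut point
    return p95 * MARGIN

suggested = float("nan")
for sample in [120, 135, 150, 180, 145, 160] * 10:  # illustrative samples
    suggested = record(sample)

print(f"Suggested SLA after {len(recent)} samples: {suggested:.0f} ms")
```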

Monitoring tools like Dynatrace and AppDynamics have introduced AI into their platforms, which helps in identifying bottlenecks across multiple application tiers in the early stages of software development. They can analyze the application and predict performance defects at the code level. Open source tools like WebPageTest, GTmetrix and YSlow pinpoint specific problems, such as server request issues, and help engineers solve them quickly. Automation tools like Test.ai can also be useful for collecting performance metrics for your application.

The role of AI in every phase of performance testing and engineering has proved very beneficial and is the future of performance testing. Using AI will make tasks like scripting and monitoring highly impactful and help deliver real-time results very quickly. I believe that, in the future, the role of AI in performance testing will be a game changer!