Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


August 25, 2016

Agile and testing: What banks need to know

Author: Gaurav Singla, Technical Test Lead

Agile -- a software development methodology whose manifesto was published in 2001 -- is now being used across various banks to cater to their IT project needs. The basic principles of the agile methodology are as follows:

• Rapid and effective response to change
• Frequent and regular customer collaboration
• Individuals and interactions valued over processes and tools
• Working software valued over comprehensive documentation

Most banking software applications require testing for the following critical objectives:

  1. Ensuring that the banking software complies with the guidelines and regulations set by the regulatory authorities of its operating region
  2. Making sure that systems are user-friendly and cater to the evolving needs of today's banking customers
  3. Changing existing product offerings and introducing new products in the market across self-assisted channels like internet banking and mobile banking

In regulatory compliance, the testing phase is considered to be one of the most important stages in the software's development. Banks' IT teams need to make sure that all changes made to the systems not only comply with regulatory guidelines, but are also implemented within the time frame provided by the bank, in accordance with the regulatory deadlines.

Traditionally, banks have implemented changes using the 'waterfall' model, which certainly caters to their needs, but can be ineffective at incorporating requirement changes during the project execution phase and at forecasting timelines.

The solution lies in adopting the agile model for implementing changes -- especially in regulatory compliance projects. Testing in an agile framework caters to all the critical needs of a regulatory-compliance project:

1. Effective handling of changes in requirements:
Regulatory-compliance projects require banks to align their internal systems as per the guidelines issued by the regulatory authorities. These requirements are collated by the business team at the very start of the project, so that the testing team can work accordingly right from the start. Such projects are generally long-term and there are always chances of a sudden change in requirements if regulatory authorities make changes to the guidelines.

Under such circumstances, it becomes very difficult for the testing team to understand the new requirements (in the context of the existing ones) and make corresponding changes to the test scenarios.

The agile framework addresses this problem by ensuring that test scenarios are regularly discussed with the business and development teams. Any change in requirements can immediately be brought into the project, and the testing team can either plan a new sprint for it or accommodate it in the current sprint. This ensures that scenarios and test conditions stay aligned with the changed requirements, and that the project or product goes live with all requirements, including those that emerged during execution.


2. Effective handling of timelines:
When such projects are implemented in a waterfall model, the business team needs to baseline all the requirements; the development team has to make the changes; and in the end, the testing team needs to test all the changes -- all within a very short go-live window. Testing teams are generally left with very little time and face pressure to sign off on the project, and last-minute updates often lead to defects being spotted at the end. The agile methodology helps the testing team forecast timelines in such cases: the team can foresee timelines by looking at the product backlog, and daily stand-ups and retrospective meetings help it adhere to them.

There are similar issues with change requests for improving the user interface for clients, or for developing new product offerings across multiple self-assisted channels like internet banking, mobile banking, ATMs, and POS (point of sale).

The projects are generally tracked with very tight timelines so as to achieve a competitive advantage over other market players. In this context, the waterfall model is always constrained by the lengthy process of first gathering all the requirements, then making changes in the product, and finally taking it to market. Often, it so happens that by the time such a product goes live, other market players have already enabled their products with the same capabilities.

The timely implementation of such projects can be ensured through the agile methodology. Project teams can groom the backlog and plan incremental changes to the product, so that the most critical and highest revenue-generating modules are implemented in the earliest sprints. This allows the bank to implement product changes in even shorter time frames, using smaller sprints to deliver the changes and make the product ready for market.

In a nutshell, project implementations in an agile framework can ensure that banks implement regulatory projects and develop innovative product offerings in an effective and timely manner.

IT service providers should consider agile one of the preferred testing methodologies when offering testing services to banking clients, and should enable their employees to adopt it. Infosys is an IT solutions and consulting services provider with a focus on implementing projects and products using the agile methodology. We are well experienced in handling complex projects in the banking sector, across geographies such as the Americas, Europe, APAC, and several other regions. We have also developed our own certification program for employees, in order to strengthen their agile skills and ensure that 'agile' is not used merely as a fashion statement, but as a mature practice adding significant value to organizations.


August 16, 2016

Culture is everything in Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

We are living in a burgeoning knowledge economy, where anyone can access information from anywhere with a device that fits into their pocket. We live in a hyper-connected world via the World Wide Web. Today, business is not the 19th-century one-way 'producer-consumer' relationship; it is a two-way conversation. An effective business model is not about 'finding customers for your products' but about 'making products for your customers'.

How does this model fit into performance engineering? And how do performance engineers provide insight and value, and grow the business accordingly?

Isaac Asimov once said, "The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom." This holds true in performance testing (PT) as well.

As an independent PT quality assurance (QA) team, we plan and execute load tests and report the results to developers. That works well as long as changes to the application stack are infrequent and the application is not continuously evolving its codebase in response to end-user interaction.

With the introduction of low-cost mobile devices and cheaper bandwidth, a further wave of web users (around two billion) is expected by 2020. With more users interacting with applications (web and mobile) on these devices, it is not only about scalability, but also about generating insights into application behavior at scale. That is the demand of the 21st-century economy, and the challenge (or opportunity) for performance engineering.

Building a product is the business need, but building the right culture in the team is the human need that drives business in this fast-changing world.

How do we empower our team to create such a sustainable and learning culture?

Let's borrow an idea from human psychology and apply it from a performance engineering point of view.
To paraphrase Steve Jobs: "Everything around you that you call life was made up by people no smarter than you. Once you realize that, you'll never be the same again."

In other words, people are the critical factor in any domain of engineering. In an organization, typically 'extrinsic motivators' (bonus, rewards, recognition, etc.) are used to drive innovation and build a similar culture. However, things are changing. With all kinds of information just a click away, people in any organization are considering 'intrinsic motivators' as well. These include:

  1. Mastery
  2. Autonomy
  3. Purpose

1. Mastery
Mastery is not only about becoming an expert on the bits and atoms of a tool or technology, but about understanding how those tools can help you learn about the domain and its users. Nowadays, design thinking is involved in every process of development. To me, it's not new; it's just another name for solving a problem quickly and effectively, with empathy for the real user.

When we engage in performance engineering, we get requirements, business transaction lists, and access to tools. We then test and report performance analysis results using tools like log analyzers (Splunk or ELK), application performance management (APM) tools (AppDynamics, Dynatrace), and live test monitors (Graphite and Grafana with JMeter).
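As a minimal illustration of that live-monitoring plumbing, the sketch below pushes a single response-time sample to Graphite's plaintext protocol (a Python sketch; the host name is a placeholder, and port 2003 is Graphite's standard plaintext listener):

    import socket
    import time

    def send_metric(path, value, host="graphite.example.com", port=2003):
        """Send one sample using Graphite's plaintext protocol: 'path value timestamp'."""
        message = f"{path} {value} {int(time.time())}\n"
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(message.encode("ascii"))

    # Example: report the response time of a login transaction during a load test.
    send_metric("loadtest.login.response_ms", 230)

Grafana can then chart the loadtest.* series live while the test is still running.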

What if, in addition to performance analysis, we could also learn:

  • How our work impacts the code deployment process
  • How the application is being used to drive business
  • How important the application is within the entire application stack
  • How its performance impacts other applications

This knowledge of the business domain brings mastery in understanding the impact of performance engineering practices on overall business operations. And with better information on both business processes and consumer behavior, better decisions on performance can be taken and applied.

2. Autonomy
Autonomy stems from the fact that human needs compound every day: nobody wants tomorrow to be the same as today; people want better every day. This is where automation comes into play in our work culture. It is a basic human drive to strive for better and to give repetitive actions a modular architecture.

We usually define our performance engineering practices as a combination of defined steps in application lifecycle management (ALM). In the agile model, we define steps like stakeholder interaction, test design and execution, and reporting, all in an iterative fashion. But in today's consumer-driven economy, change is the only constant. With new people joining the economy, business-model needs constantly evolve, and the key to success is adaptability to change while ensuring better performance.

While working on the same things every day, anyone can come up with an idea to improve something. An innovative culture needs to capture those ideas and provide a platform to quickly try them out and see if they work. Innovation is not magic, and it doesn't happen at one exact moment; ideas for innovation can come to us at any time and from anywhere. The question is: do we have a platform or medium to capture those ideas and, most importantly, try them out on dummy applications to see their full potential?

The concept of DevOps comes from this thought. DevOps is not just about collaboration between development and operations management; it involves every stakeholder responsible for running the application, as well as continuous improvement by learning patterns from user experience.
DevOps in performance engineering (PerfOps) creates a platform that automatically provisions routine activities, so that engineers can take full responsibility for gathering wisdom from the collected performance knowledge.

3. Purpose
Purpose-driven work can improve results many times over compared with task-driven work that lacks a defined end goal. What if we could integrate the two? Purpose in performance engineering can be divided into two categories:

  • General purpose: This includes the individual's career perspective and the impact of business growth on the client. PE is not limited to web apps or data warehouse applications; it is growing rapidly in areas ranging from wearable technologies to the Internet of Things around every physical object. Performance engineering as a discipline is not limited to what we do today. Its potential lies in the near future, where everything is connected at such scale and speed that it will penetrate every field.

  • Daily purpose: How do we share our best practices across teams so that we do our best every day? There's a saying in Silicon Valley -- 'fail fast, fail often' -- to grow faster. But in fact, it's not about failing; it's about learning faster. At the end of the day, performance testing is about identifying bottlenecks and improving your system's performance. If your load testing tool lets you check the scalability of your system in terms of the number of synthetic users, but does not help you much with analyzing results, quickly identifying performance bottlenecks, and pinpointing the location of the problem, then perhaps you're not fully achieving the goal of performance testing. You might then consider integrating an application performance management (APM) tool for visibility down to the code level, and a log analyzer for live visualization of your system while the test is running.

What if we don't have these tools in our project?

Most of these commercial tools provide free-trial versions for a limited period, and many popular performance profilers are available as open source.
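Even without commercial tooling, a first-pass bottleneck analysis can be scripted by hand. Here is a minimal sketch, assuming a JMeter results file saved in CSV format with its default 'label' and 'elapsed' header columns; it ranks transactions by 95th-percentile response time:

    import csv
    from collections import defaultdict

    def p95(values):
        """Nearest-rank 95th percentile."""
        ordered = sorted(values)
        return ordered[max(0, int(0.95 * len(ordered)) - 1)]

    # Group elapsed times (in ms) by transaction label.
    samples = defaultdict(list)
    with open("results.jtl", newline="") as f:
        for row in csv.DictReader(f):
            samples[row["label"]].append(int(row["elapsed"]))

    # Transactions with the worst p95 are the first bottleneck candidates.
    for label in sorted(samples, key=lambda k: -p95(samples[k])):
        print(f"{label}: p95 = {p95(samples[label])} ms over {len(samples[label])} samples")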

So the real question is: How do you create a real demo using these tools and show (not only tell) clients the benefits of adopting this trend?

We can leverage cloud computing to test our assumptions rapidly, because rapid prototyping needs freedom from development-environment complexities such as provisioning servers and third-party libraries. Public cloud solutions such as AWS and Azure can be used to create a rapid-prototyping test platform, to try out available PE solutions and then integrate them into the existing toolchain if they meet both technical and business criteria.
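As a minimal sketch of such a throwaway rig (assuming Python with the boto3 library, configured AWS credentials, and a placeholder AMI ID), a prototype load generator can be provisioned and discarded in a few lines:

    import boto3

    # Assumes AWS credentials are already configured; the AMI ID is a placeholder.
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-xxxxxxxx",    # hypothetical image pre-baked with JMeter
        InstanceType="t2.medium",  # small, cheap box for a prototype load generator
        MinCount=1,
        MaxCount=1,
    )

    instance = instances[0]
    instance.wait_until_running()  # block until the box is up
    print("Prototype load generator running:", instance.id)

    # Tear the box down once the experiment is over, so costs stay near zero.
    instance.terminate()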

In the next blog, we will discuss building a PerfOps framework that can be part of a pragmatic performance engineering culture.


The following blog elucidates this framework further:
Performance engineering in Agile landscape and DevOps

August 3, 2016

Crowd Testing: A win-win for organizations and customers

Authors: Manjunatha Gurulingaiah Kukkuru, Principal Research Analyst; Swati Sucharita, Senior Project Manager

Software is evolving with every passing day to accommodate newer prospects for its usage in day-to-day life.

Some of the key factors for new software to succeed in today's marketplace include:

  1. Faster time-to-market
  2. Collective responsibility of the software's quality, taken up by the development and quality assurance (QA) teams along with the product / application's owner
  3. A strong market survey conducted in order to understand competing products, imitate end user scenarios, and simulate end users in the test environment

Even in today's rapidly changing scenarios, traditional testing with dedicated testers continues to be integral to ensuring product quality. However, given the challenges posed by modern customers, a traditional testing team is insufficient to ensure a quality product, as the team operates within the confines of the organization's processes and shares the mindset of the developers or the product's creators. It is to address these challenges that crowdsourced testing is now being leveraged by large independent software vendors as well as services companies.

Crowd testing brings diversity to testing techniques, works with low-cost testing devices, and ensures better test coverage across multiple geographic regions.

The role and impact of using a platform for crowd testing

Crowd testing's success depends on ensuring better test coverage, along with device and platform diversity. Combined, these factors result in high-quality products, faster time-to-market, short release cycles, the ability to execute in parallel with the normal project delivery schedule, and an economical process with elastic resource utilization and on-demand testing.

While these success factors encourage us to look at crowd testing as an important and complementary part of our testing services, there are multiple challenges surrounding test coverage, motivation of the 'crowd', the quality of findings, and consistency throughout the testing phase.

In order to bring the crowd together and execute testing while mitigating these key challenges, an easily accessible platform is essential. The platform should provide the ability to monitor test coverage and consistency, with different metrics, throughout the testing phase. Additionally, providing continuous ratings for the crowd's contributions, visible to peers in the test cycle, can create a sense of inclusion among the testers.

Infosys Crowd Testing platform and our success stories

The Infosys Crowd Testing platform is our in-house innovation that addresses all the challenges we've discussed so far. It provides features to define the scope of testing along with various details like problem statements, prioritized test scenarios / use cases, timelines, and contact information. The platform has discussion forums where the crowd can share their experiences. It also provides different metrics to track test coverage across the multiple devices used by the crowd, as well as standard metrics to monitor testing progress and review its findings. Additionally, we have integrated gamification into the platform, which publishes a leaderboard with 'reputation points' scored by testers and validators. This serves to keep the crowd motivated throughout the testing phase.
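As an illustrative sketch only (hypothetical tester names and point values, not the platform's actual implementation), the reputation leaderboard boils down to ranking testers by accumulated points:

    from collections import Counter

    # Hypothetical events: (tester, points) awarded for validated defects and reviews.
    events = [("asha", 10), ("ravi", 5), ("asha", 7), ("mei", 12), ("ravi", 9)]

    reputation = Counter()
    for tester, points in events:
        reputation[tester] += points

    # Publish the leaderboard: highest reputation first.
    for rank, (tester, points) in enumerate(reputation.most_common(), start=1):
        print(f"#{rank} {tester}: {points} reputation points")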

We performed crowd tests for an external-facing application by leveraging our bench resources, who were well versed in Infosys testing methodologies, along with an external crowd, which in just four days identified 30 defects / possibilities for improvement across the diverse array of devices used for testing. This created an opportunity for Infosys to participate in a global request for proposal (RFP), in which we were selected as one of two partners to deliver crowd testing services. We have also successfully adopted this concept in the delivery practice for a leading telecom client, where we performed user interface and usability testing of the application by leveraging Infosys talent as the crowd, fixing all issues well before the application went into production.

Conclusion and call-to-action

In most cases, crowd testing is understood as working with freelancers or with a testing organization whose business model involves pooling a crowd of testers. This, however, is only one form of crowd testing, typically called 'public' crowd testing. There are other creative ways of crowd testing that include leveraging the crowd right within an organization by:

  • Sourcing people with relevant skills, who otherwise might not be directly working on the same project. This is typically called 'private' crowd testing
  • Building a pool of end users with a mix of internal resources and external / private MVPs, who can continuously provide feedback in all phases of product development. This is called a 'hybrid model'

Many organizations fail to reap the benefits of their diverse internal talent pool due to reluctance to accept new and emerging trends and resistance to transformation. With crowd testing, however, organizations can build a pool of talent with diverse wisdom, which can help sustain them through changing scenarios.

August 2, 2016

Pragmatic Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

When you hear the term 'performance engineering,' what's the first thing you think of? The famed battle of 'performance testing versus performance engineering'? Or, because we are engineers, do you immediately think of making performance testing a better quality assurance process?

The dictionary says that to 'engineer' means to 'skillfully arrange for something to occur'; by that definition, we could say that performance testing (PT) carried out within an agile scrum or sprint is performance engineering (PE).

However, this is just the tip of the iceberg for performance engineering as a domain. So, here's what I think (or rather, believe) about being a performance engineer:

"Response time, throughput, and resource utilization" are the major key performance indicators (KPIs) for a web system to be scalable and reliable. However, it is not enough neither to just report the information with a visualization nor mathematically calculate the Work Load Model (WLM) for next product release based on these KPIs only.

What we report through this process is information; it is not yet knowledge that is directly applicable.
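One small step from raw KPIs toward applicable knowledge is relating them mathematically. The sketch below applies Little's Law for a closed system (a standard workload-modeling identity, not tied to any particular tool) to size the virtual-user load for a target throughput:

    def concurrent_users(throughput_tps, response_time_s, think_time_s):
        """Little's Law for a closed system: N = X * (R + Z)."""
        return throughput_tps * (response_time_s + think_time_s)

    # Example: sustaining 50 transactions/sec at a 0.8 s response time with
    # 9.2 s of user think time requires about 500 concurrent virtual users.
    print(concurrent_users(50, 0.8, 9.2))  # -> 500.0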

Performance engineering is an end-to-end process -- from design (pre-SDLC), through the entire software development life cycle (SDLC), aimed at ensuring quality management.

Instead of looking at PE as a specific domain, we can think of it as part of the business model. Business is not just about delivering the end product to customers; it is a collaboration, an organization of people working to achieve a predictable goal. As performance engineers, we do business in terms of time: our resource is the budget (we can only spend what we have), and our goal is customer satisfaction in terms of application performance, that is, speed and reliability.

Thus, PE can't be a single process confined to one phase of product development. Instead, it is a process that starts on day one and continues forever, always making things better.

  • PE in design phase: In the design phase, the designer or architect shapes the interface and the system architecture, while the performance engineer establishes performance budgets (reflecting the input of resources and the output of services for each unit of user interaction on the web app) in light of the system's design. Thus, instead of the developer jumping straight into a narrow development phase, collaboration between the designer, the performance engineer, and the developer can lay the path to a sustainable and scalable design.

  • PE in development: This phase introduces continuous integration (CI) and version control into the usual 'load testing' scenario. When code is pushed to a version control system, it triggers a build on Jenkins (the CI server), which in turn runs a load testing script with the required parameters (synthetic users, duration, etc.) in the development environment. Thus, developers get continuous, real-time reports on performance without worrying about how to use load testing tools, and can make incremental, pragmatic changes to the code right from the early phase (a sketch of such a CI gate follows this list).

  • PE in QA: The QA phase involves designing test case scenarios and testing the system at scale (synthetic-user load testing) to verify its readiness for critical situations in a production-like environment.

  • PE in application monitoring: Here, intelligent bots / agents can be injected into the codebase or the application environment. These agents collect system performance data and continuously send it to an analysis server. This is the basic principle behind application performance management (APM) tools like Dynatrace, AppDynamics, New Relic, and open-source Java virtual machine (JVM) / non-JVM agents. With RESTful integration, the knowledge from these tools can provide deeper, continuous, code-level performance analysis across the application's iterative development.

  • PE in operations management: Automation makes the process faster, and introducing extreme automation into application operations management yields insights for performance optimization. It is the performance engineer's responsibility to ensure that gathering and analyzing most of the performance data is fully automated, and to make this engineered automation available as a service or API for faster integration with the system.

  • PE in analysis and reporting: This phase draws on logs that reveal patterns in the system when it is tested against a simulated real-time user load. These patterns help predict and optimize resource utilization. Using log analysis tools like Splunk, ELK (Elasticsearch-Logstash-Kibana), or basic UNIX / PowerShell utilities, trends can be extracted from historical data and presented as continuous trend visualizations. With the rapid growth of digitalization, user-generated data keeps growing in volume and becoming real-time. Real-time performance analysis and prediction can be performed by introducing machine intelligence (machine learning or deep learning models) to derive accurate knowledge. This way, the workload model (WLM) for the next feature release can be more accurate, as it captures behavioral trends from performance data rather than being calculated from statistical formulae alone.
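As a minimal sketch of the CI gate described in the development-phase bullet above (assumptions: the jmeter binary is on the PATH, a test plan named loadtest.jmx, JMeter configured to write CSV results with headers, and an illustrative 500 ms average-response-time budget), a script like this could run on every push and fail the build when the budget is blown:

    import csv
    import subprocess
    import sys

    # Illustrative performance budget, agreed upon in the design phase.
    BUDGET_AVG_MS = 500

    # Run JMeter in non-GUI mode; -n, -t, and -l are standard JMeter CLI flags.
    subprocess.run(["jmeter", "-n", "-t", "loadtest.jmx", "-l", "results.jtl"], check=True)

    # Average the 'elapsed' column of the CSV-format results file.
    with open("results.jtl", newline="") as f:
        elapsed = [int(row["elapsed"]) for row in csv.DictReader(f)]
    average_ms = sum(elapsed) / len(elapsed)

    print(f"Average response time: {average_ms:.0f} ms (budget: {BUDGET_AVG_MS} ms)")
    if average_ms > BUDGET_AVG_MS:
        sys.exit(1)  # a non-zero exit fails the Jenkins build, gating the change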

All of the processes mentioned above can be combined into a single framework (not just a tool) to engineer performance throughout application development, building better performance for end-user engagement and, in turn, a better consumer-driven business.

The real question, however, is: What do we need to do to make this pragmatic PE framework a reality in our teams?

To me, if the performance engineer's target is to provide the best end-user experience -- in terms of application performance -- then, from a design-thinking or human-centered-design perspective, a cultural shift is required.

I'd say: think of PE as a culture or a mindset, rather than a set of principles or tools.

To bring innovation to PE, we need empathy and collaboration. Almost everything in business has become data-driven; as the saying goes, "If you can't measure it, you can't manage it." Data certainly provides valuable insight, but it doesn't tell the whole story. Collaboration among all stakeholders creates greater insight from performance data and greater value for the overall system. When every person in the organization understands the feelings of the end users, they build better systems using performance data and targeted user behavior. Empathy towards the end user plays a big role in performance engineering.

In a fast-changing world, speed matters; and to get speed, every part of the system must work in symphony.

Think of a situation where:

  1. A performance test is conducted by simulating a heavy-load scenario
  2. PT tools and APM provide a detailed analysis
  3. Developers immediately receive the performance feedback and make iterations as required
  4. With each change in the code, an automated performance suite with unit tests is triggered on a CI/CD server like Jenkins
  5. Visualization dashboards built from real-time streaming data show how real or synthetic users in the test environment are using the application, so business owners can make data-driven decisions on features without a long debate in the war room
  6. If the analyzed performance metrics meet the agreed performance budget, the code moves into production
  7. If not, the entire process goes through another iteration

In this automated performance engineering framework, performance at scale can be analyzed and achieved in a faster, more pragmatic manner.

In the next blog, we will discuss 'performance as a culture': building a platform or test environment on the cloud (AWS / Azure) to rapidly test solutions, find the best available solution at the moment, and then integrate it or improve on top of it.