Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of this profession.


June 27, 2016

Three Generations streaming in a Network

Author: Hemalatha Murugesan, Senior Delivery Manager

"Are you using an iPhone 6s?" asked my 80-plus-year-old neighbor as we rode the apartment lift. I responded, "Nope, Samsung," and enquired what help he needed. He wanted assistance with using the various apps, as he had just received the iPhone 6s as a gift. "Sure, why not? I will come over to your place," I winked and concluded.

I walked back to my apartment smiling, because we live in an environment where different groups are trying to embrace technology's influence, with no escaping it. As I see it, there are three generations getting on board with digitization at a rapid pace. The generation born before the 1960s used the old telephone, booked trunk calls, sent telegrams, paid bills standing in queues, planned trips to the bank and the grocery store, had personal doctors visiting home, and relied on the local kirana shop. The generation born between 1970 and 2000 is embracing the change, adapting to it, and building it as well. The generation born after 2000 has no clue how things were done in the past, as they were born with gadgets at their fingertips. In a nutshell, every generation today is caught between the old ways of delivering and experiencing services and catching up with rapid change, lest they become obsolete.

So what has changed over the last five decades or so? All three generations now require a unique, "personal" end-user experience: experiences they have built over years of transactions, and which they expect to be consistent across every channel as they embrace digitalization. With this massive explosion, over 20 billion connected devices are expected by 2020, fuelling industry growth, each with a browser and a Wi-Fi or cellular connection. With devices going out of date so quickly, the platform, the OEM, or the operating system version does not matter: each and every device will have a web browser, which is itself getting more feature-rich every day in a highly competitive, short-shelf-life world.

So today's network carries all three generations and must provide a different yet individually personalized end-user experience to each. What this means is that applications must be resilient, high performing, and highly available, servicing every personalization under all forms of network streaming (4G, 3G, 2G and beyond) and on heterogeneously hosted systems. Any fluctuation in the application's performance has the end user abandoning the app instantly, only to google their way to the next service provider, i.e., the competitor.

What today's new-age shoppers across generations demand is not just the new norm; they seek absolute individuality. Customers today are digital-savvy, omni-channel, and hyper-connected. They expect instant gratification and want to be constantly engaged with ever newer experiences, yet remain unsatisfied. They have to be treated as special, and social media can wreak havoc on a firm if even a single user is left unsatisfied.

A successfully performing application is one that is resilient to fluctuations under diverse conditions, in both its network availability and its performance. It needs to plan for degraded conditions and keep working through them, and constant production monitoring is crucial. A keen ear to feedback from the diverse generations and user groups is essential to survive in a narrowly fought competition, and any impact cascades to the brand, to trust, and to market capitalization.

Firms need to ensure that they cater not just to diverse geographies and ethnic groups, but also to every generation: those of the past, those of the present, and those born into the present. Their applications must not only be highly available and performant under adverse conditions, but also deliver personalized, individual experiences at the same speed.

June 23, 2016

Cost effective non-functional validation for web applications

Author: Rohini Mukund Sathaye, Group Project Manager

There are several myths about non-functional testing: that only highly technical testers can carry out this type of testing, that the cost is very high, and that the ROI is not favorable. This blog talks about simple techniques that are cost effective yet help validate non-functional requirements.
Any software system is bounded by functional and non-functional requirements. Non-functional requirements define 'ability' characteristics such as scalability, reliability, availability, and usability, along with other quality properties like security, performance, and exception-handling capabilities.

In this blog, a set of testing techniques is proposed that enables cost-effective validation of non-functional requirements for web-based applications. The key requirements considered are performance/scalability, availability, usability, accessibility, and security.

1. Scalability/Performance Validation -

Performance validation is done to check whether system performance, in terms of application response time and server utilization metrics, is acceptable under normal, peak, and projected workloads. Scalability indicates whether the applications/systems can scale up to the workload anticipated from business growth over the next 5-10 years without QoS degradation.

Cost Effective Techniques to measure performance and validate scalability:

Option 1 - Measure single-user performance with free tools such as HTTPWatch, YSlow, etc. If the application does not perform well for one user, it will never perform well under load. Identify the pages that demand optimization, then drill down to the individual objects/queries to optimize.

Option 2 - Run a simple incremental load test using open-source tools such as JMeter or LoadUI rather than COTS tools, which are very expensive. Multiple options are available for web-based applications, web services/REST APIs, and mobile performance testing. Analyze the incremental load test results to identify the problematic transactions and components (a minimal sketch of the stepped-load idea follows below).

Additional cost reduction can be achieved by further reducing scripting effort with additional utilities (e.g., the BlazeMeter Chrome extension along with JMeter) and data setup effort with freeware utilities like Databene Benerator.
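
To make the idea of a stepped-up load test concrete, here is a minimal Python sketch, offered purely as an illustration and not as a substitute for JMeter or LoadUI. The target URL is a placeholder assumption; point such a script only at a test environment you own.

```python
# Minimal sketch of an incremental (step-up) load test in plain Python.
# It only illustrates stepping up concurrent users and watching response
# times; a real test would use JMeter/LoadUI with proper pacing and assertions.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
import urllib.request

TARGET_URL = "https://example.com/"   # placeholder: use your own test environment

def hit_once(_):
    start = time.time()
    with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
        resp.read()
    return time.time() - start

for users in (1, 5, 10, 20):          # incremental load steps
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(hit_once, range(users)))
    print(f"{users:>3} users -> avg {statistics.mean(timings):.2f}s, "
          f"max {max(timings):.2f}s")
```

A step at which the average or maximum response time degrades sharply points to the transactions and components worth drilling into.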

2. Availability Testing
Availability is the percentage of time for which the system is operational. MTBF and MTTR are its two important ingredients. Mean Time Between Failures (MTBF) is the average duration for which the application stays up before a failure occurs. Mean Time to Repair (MTTR) is the mean time to restore an application once it goes down (it has components such as Mean Time to Discover, Mean Time to Isolate, and Mean Time to Repair). MTTR does not include planned downtime such as upgrades, maintenance, and deployment activities.

Availability = MTBF / (MTBF + MTTR) 

Availability, MTBF, and MTTR can be calculated by measuring application uptime and downtime over a period of time.
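
As a quick illustration of the formula above, the following Python sketch derives MTBF, MTTR, and availability from a set of outage records; the observation window and outage durations are made-up sample values.

```python
# Minimal sketch: compute MTBF, MTTR and availability from outage records
# observed over a window. The numbers below are sample data, not real logs.
observation_hours = 30 * 24          # e.g., a 30-day measurement window
outage_hours = [0.5, 2.0, 0.25]      # downtime of each failure, in hours

failures = len(outage_hours)
total_downtime = sum(outage_hours)
total_uptime = observation_hours - total_downtime

mtbf = total_uptime / failures       # Mean Time Between Failures
mttr = total_downtime / failures     # Mean Time to Repair
availability = mtbf / (mtbf + mttr)  # equivalent to uptime / total window

print(f"MTBF: {mtbf:.1f} h, MTTR: {mttr:.2f} h, Availability: {availability:.4%}")
```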

Cost Effective Techniques to measure Availability:

Option 1 - Write a simple macro or script to parse HTTP/application logs. Several free utilities, such as the IIS log parser, are available to extract this data.

Option 2 - Use monitoring tools with agents running on the individual application boxes. This way, it is possible to measure not only application downtime but other hardware failures as well.

Option 3 - Write a simple CRON job to ping the web service at regular intervals and record the HTTP responses (a minimal sketch follows).
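
A minimal sketch of Option 3, assuming a hypothetical health-check URL and log file path; the cron schedule shown in the comment is one common choice, not a requirement.

```python
# Minimal sketch: ping a web service and append the HTTP status and response
# time to a CSV. Schedule it with cron, e.g.
#   */5 * * * * /usr/bin/python3 /opt/monitor/ping_service.py
# The URL and file path below are placeholders.
import csv
import time
from datetime import datetime
import urllib.request
import urllib.error

URL = "https://example.com/health"    # placeholder endpoint
LOGFILE = "availability_log.csv"

start = time.time()
try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        status = resp.status
except urllib.error.HTTPError as err:
    status = err.code
except Exception:
    status = "DOWN"
elapsed = round(time.time() - start, 3)

with open(LOGFILE, "a", newline="") as f:
    csv.writer(f).writerow([datetime.now().isoformat(), status, elapsed])
```

Availability, MTBF, and MTTR can then be derived from this CSV exactly as in the calculation above.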

3. Usability Validation
Usability validation is carried out to find out whether the built application/product is user friendly. It also indicates whether users are comfortable with the application/product, based on parameters like layout, navigation, and content. With the advent of mobility and responsive web design (RWD), it is imperative to validate usability.

Cost Effective Techniques to validate usability:

Option 1 - Use one of the several open-source/freeware utilities available for usability testing, e.g., UserPlus, Usabilla, etc. These tools give ready-made recommendations to fix UX issues.

Option 2 - Manually check some of the key aspects. In the case of an eCommerce website, the following are simple checks that any tester can carry out (a minimal automated sketch for one of them follows the list):
  1. Check the complexity of the checkout process
  2. Check if enough security checks are there while entering the credit card/payment information
  3. Check the final cost of the product and compare with expected cost
  4. Check the currency of the price
  5. Check if information related to product return policy, shipment process, contact information is readily available and easily accessible
  6. Check if the associated images provide enough information about the product so that purchase of the same is encouraged
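
For example, check 5 above can be partially automated. The sketch below assumes a hypothetical storefront URL and illustrative link keywords; it simply verifies that return, shipping, and contact links are present on the page.

```python
# Minimal sketch: verify that return-policy, shipping and contact links are
# present on an eCommerce page. The URL and keywords are illustrative only.
import urllib.request
from html.parser import HTMLParser

PAGE_URL = "https://example-shop.test/"   # placeholder storefront
EXPECTED_KEYWORDS = ("return", "shipping", "contact")

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.append(dict(attrs).get("href") or "")

with urllib.request.urlopen(PAGE_URL, timeout=15) as resp:
    parser = LinkCollector()
    parser.feed(resp.read().decode("utf-8", errors="ignore"))

for keyword in EXPECTED_KEYWORDS:
    found = any(keyword in link.lower() for link in parser.links)
    print(f"{keyword:<10} link present: {found}")
```
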
4. Accessibility Validation
Accessibility validation is a subset of usability validation. Multiple accessibility guidelines are available, specific to regions; some examples are the Section 508 standard, the W3C WCAG guidelines, etc. Multiple freeware tools, including browser extensions, are available for this: http://usabilitygeek.com/10-free-web-based-web-site-accessibility-evaluation-tools/

Key manual checks are:
  • Keyboard shortcuts for every button on the screen (including up/down arrows and standard Windows shortcuts)
  • Compatibility with screen readers like JAWS
  • High-contrast settings for the application

5. Security Validation
Security testing is performed to assess the system's susceptibility to unauthorized internal or external access. There are several black-box manual techniques for security validation (a minimal scripted illustration follows the list):
  1. Brute-force: a trial and error mechanism employed to crack passwords
  2. Insufficient authentication: To check if the anonymous user is able to access sensitive information without appropriate access
  3. Session prediction: To impersonate a session or user by predicting the session value
  4. Cross site scripting: To check if it is possible to execute an embedded malicious script on client machine
  5. Buffer overflow: To identify invalid memory referencing through input validation
  6. SQL Injection: To check if unauthorized users can get access to the database through inputs that are not properly validated
  7. Directory indexing: To check if directory listing is forbidden
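
As a scripted illustration of items 2 and 7, the Python sketch below probes placeholder paths for directory listings and for unauthenticated access to protected URLs. The base URL and paths are hypothetical, and such probes should only ever be run against systems you are authorized to test.

```python
# Minimal sketch for two black-box checks: directory indexing (item 7) and
# insufficient authentication (item 2). All URLs/paths are placeholders.
import urllib.request
import urllib.error

BASE = "https://example-app.test"          # placeholder application under test
DIR_PATHS = ["/images/", "/uploads/", "/logs/"]
PROTECTED_PATHS = ["/admin/", "/reports/export"]

def fetch(path):
    try:
        with urllib.request.urlopen(BASE + path, timeout=10) as resp:
            return resp.status, resp.read().decode("utf-8", errors="ignore")
    except urllib.error.HTTPError as err:
        return err.code, ""
    except Exception:
        return None, ""

for path in DIR_PATHS:
    status, body = fetch(path)
    listed = status == 200 and "Index of" in body   # common directory-listing marker
    print(f"Directory indexing suspected at {path}: {listed}")

for path in PROTECTED_PATHS:
    status, _ = fetch(path)
    # Without credentials we expect 401/403 or a redirect; a plain 200 is a red flag
    print(f"Unauthenticated request to {path} returned HTTP {status}")
```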

Conclusion 
There are several myths about non-functional testing, and one of them is the high cost of carrying it out. However, with the afore-mentioned simple yet effective methods, non-functional testing can be carried out in a frugal way for any web-based application throughout the lifecycle.


June 16, 2016

Mobile Performance Testing - is it possible?

Author: Rekha Manoharan, Project Manager

We are now living in a digital world where we all pursue digitization and mobile technology. Everyone wants information to be available in a jiffy. Can we then imagine a world without mobile phones in this era? I don't think so! It has become part and parcel of our lives.

As stated in a white paper by Cisco titled 'Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update', mobile data traffic has grown 4,000-fold over the past ten years and almost 400-million-fold over the past 15 years.

Cisco forecasts 30.6 exabytes per month of mobile data traffic by 2020.

Coming to the performance testing of mobile applications, you may well have the following questions in mind:
  • Can mobile applications be performance tested?
  • What are the challenges faced during mobile performance testing?
  • How is it different from regular performance testing? What are the factors to be considered?
  • What is the client expectation in this new era? 
Yes, mobile applications can be performance tested, and there are different user scenarios to consider: accessing a native mobile application, viewing the desktop version of a web application on a mobile browser, and viewing mobile websites.

There are various tools available in the market today to performance test mobile applications. To name a few: HP LoadRunner, Neotys NeoLoad, IBM RPT, Silk Performer, SOASTA CloudTest Mobile, JMeter, MonkeyTalk, RadView WebLOAD, etc. Most of these tools provide options for selecting different types of phones, network latency, bandwidth, etc., which help in customizing the script as per requirement. For example, in HP LoadRunner, select the Mobile TruClient protocol and, in the recording options, choose the iPhone device, a user agent such as Mozilla/5.0, and the display size. HP TruClient is a new browser-based virtual user generator that supports next-generation applications.

The major challenge in mobile performance testing is that application speed depends on the network, bandwidth, the phones we use, and our geographical location. We may have observed the same application working faster in some places and slowing down in others. Users have high expectations when it comes to speed, which in turn depends greatly on the factors mentioned above.

Remember to consider the following factors while performance testing mobile applications (a minimal scripted illustration follows the list):
  • Interruptions: Interruptions could range from SMS to network outage
  • Spike load test: Typically test with two to three times of peak load. With the reach of social networking sites as well as business and marketing trends, anything could spark a sudden surge in mobile traffic
  • Different types of mobile networks, phones, latency, bandwidth, signal strength
  • Different types of browsers
  • Different geographical locations as the application can be accessed from anywhere in the world
  • Different operating systems - iOS, Android, Windows, BlackBerry
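
The sketch below, which assumes a placeholder URL and example user-agent strings, simply times the same page as requested by different device profiles. It only varies the User-Agent header; real mobile performance testing would also shape bandwidth and latency using the tools listed earlier.

```python
# Minimal sketch: measure response time of one page under different mobile
# user agents. URL and agent strings are examples, not recommendations.
import time
import urllib.request

URL = "https://example.com/"   # placeholder: a test environment you own
USER_AGENTS = {
    "iPhone":  "Mozilla/5.0 (iPhone; CPU iPhone OS 9_3 like Mac OS X)",
    "Android": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N)",
    "Desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
}

for name, agent in USER_AGENTS.items():
    req = urllib.request.Request(URL, headers={"User-Agent": agent})
    start = time.time()
    with urllib.request.urlopen(req, timeout=30) as resp:
        size = len(resp.read())
    print(f"{name:<8} {time.time() - start:.2f}s, {size} bytes")
```
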
Consumer expectations have forced businesses to launch mobile websites and apps to retain their market share and avoid losing out to competitors. Consumers have high expectations when it comes to mobile apps, with 78% expecting apps to load as fast as, or faster than, a mobile website, and 80% demanding that mobile apps launch in less than three seconds. Hence, app performance matters more to customer satisfaction than a flashy site that does not open half the time.


June 8, 2016

Golden rules for large migration

Author: Yogita Sachdeva, Group Project Manager

In my experience of working with large banks, I have worked on small programs related to acquisitions and mergers, as well as on large-scale upgrades and migrations. I often wondered what the actual tie-breaker for a large program was, and racked my brain to figure out what it takes to make a large program run. I realized that smaller programs generally get delivered successfully on the strength of the team's technical skills. Large programs, however, are normally meant to deliver a business strategy as big as the creation of a new bank. A large program encompasses a group of related projects, managed in a coordinated manner, to obtain benefits and to optimize cost control.

Programs may include elements of related work outside the scope of the discrete projects within them. Some projects within a program can deliver useful, incremental benefits to the organization before the program itself has been completed. Program management also involves coordinating and prioritizing resources across projects, managing links between projects, and overseeing the overall costs and risks in the program. In short, programs deliver strategic outcomes.

Typical challenges in large migration programs

The complexity of large projects requires particular attention to be directed towards planning the project, developing and delivering the solution, selecting team members, and sustaining a high-performing team over the long haul.

In general, programs encompass typical challenges like:

  • Continuously evolving requirements
  • Multiple and parallel cycles
  • Constant pressure due to crunched timelines
  • Multiple stakeholders
  • Bringing together the right team

Of the various elements that make long-duration projects complex, the most significant are the inevitable changes in the business environment, which necessitate adjustments to virtually every element of the project. Knowing this, a successful project leadership team evolves to practice situational project leadership, adapting and modifying its approach to accommodate the inevitable changes. In most programs, requirements keep changing right up to execution. Gone are the days when requirements were frozen before the design phase started. However, without requirements frozen beforehand, there is a chance that the initial estimated effort overshoots by a considerably higher percentage, which in turn impacts the profit margin and resource coverage. As the expectation in today's world is to absorb the overshot effort with 100% coverage, a workable solution is to maintain a detailed explanation of what was done earlier and how it is being done currently. Getting alignment on the "what" helps to justify the "how", which in turn becomes extremely helpful for controlling the cost.

Theoretically, we need to maintain a balanced role ratio with the right skill set. But in a large program, we also need to maintain the harmony of the team. The success of a large program resides in a well-connected, well-united, and well-integrated team. Any lapse in skill can be compensated for with upskilling, and role maturity comes with work experience; but there is no ground rule for harmony. For a successful program, one needs to maintain harmony by making sure positivity is ingrained in the team and any negativity is overcome. In my opinion, a large program should be treated like a tree: any branch that is not growing in harmony should be removed as soon as it is discovered. This can be done successfully if the root is strong; otherwise, that branch can sometimes become so powerful that it infects the root itself.

As the sayings go, "Ignorance is bliss" and "Incomplete knowledge is dangerous", and both hold true in large programs as well. A large program generally involves delivery from multiple groups: the business analysis team, development team, design team, QA team, defect management team, release management team and, last but not least, the implementation team. If the flow of information is not uniform, it creates a gap, and a lot of hidden overhead effort goes into bridging that gap. Identifying the right stakeholder is the most suitable solution for this. The definition of the 'right stakeholder' is very important and goes beyond any designation or role; it is purely tied to the right responsibility of the stakeholder. Adherence to this 'right stakeholder through right responsibility' matrix ensures the right flow of information to each layer responsible for delivery.

Changes in schedules are a rampant issue in large programs. A delay in delivery by any one team starts a chain of delays, and the team at the tail end is the one most affected. Communication with the clients and the team needs to happen periodically. We generally get into the mode of pressurizing the team at the tail end while ignoring the original cause of the delay. We strengthen communication by keeping the clients informed that there is a delay and that we shall recover, but at that time we do not seem to bother much about the team at the tail end. Adherence to a schedule is not about fitting boxes into a plan; it actually starts at the requirement stage. A delayed requirement should mandatorily be accompanied by a revised schedule.
Rigorous risk management preempts challenges and seizes new opportunities.

Conclusion
For large programs, the ability to adapt is the difference between success and failure. A systematic, reliable approach increases confidence and accuracy. It also helps in overcoming roadblocks in a seamless manner.

June 7, 2016

Role of Validation in Data Virtualization

Author: Kuriakose KK, Senior Project Manager

How can I see the big picture and take an insightful decision with attention to details now?

Jack, the CEO of a retail organization with stores across the world, is meeting his leadership team to discuss the disturbing results of the Black Friday sale. He enquires why they were unable to meet their targets, and his leaders promptly offer reasons: missed sales, delayed shipping, shipping errors, overproduction, sales teams not selling where market demand exists, higher inventory, etc. Jack is disturbed by these answers and, on further probing, understands that most of these are judgment errors.

A judgment error is not something he can go back and explain to his shareholders. Returning the retail brand to growth has been his top priority. Jack has the best team in the market and his product line is superb; so what is going wrong? He gets into further deep-dive sessions with his leaders and understands that everyone has a different view of things. Even though his organization has consolidated customer information in its data warehouse, it only holds key attributes of the customers visiting the stores. Information about online customers, however, is spread across different systems, segregated by brand, campaign, etc. The way information is stored and leveraged differs between stores and online sales. There is no cross-selling happening today, which by itself could increase the company's sales by at least 8-10%. Similar issues exist for core functions like products, sales, and inventory. Furthermore, some lines of business have stale or outdated data.

Jack sees the need for a common view of all enterprise information, across all functions, throughout the organization. He is also aware that they have been consistently investing in BI projects to integrate information from multiple applications across different functions, which is a very time-consuming process, and that the existing BI system carries huge monthly maintenance expenses. He asks Tim to look into the matter and come up with reasons why their BI reports are unable to provide a common view of the system when they were designed for that very purpose.

Tim, the chief architect, is well known in the organization for solving complex problems with simple and economical solutions. After a week, Tim comes up with the following reasons:

  • No integration with social media data and insights
  • Even though customer feedback and surveys are recorded in the system, they are not integrated in a manner that the business can leverage when they need to
  • No centralized view to pull all relevant data corresponding to a core data element on demand
  • No way of changing the source data on demand, without staging and post-processing
  • No just-in-time data availability for the business's real-time or near-real-time needs

Tim summarizes by saying that the current challenges are with data complexity, disparate data structures, multiple locations, latency, and completeness.

How to get a consolidated view?

Jack requires a solution that can seamlessly bring the abstracted data out of the complex data architecture, exposing a common data model layer which can then adapt as per his needs. He also needs:

  • A business representation of data in this data model, enabling the business to become partially independent
  • The ability to carry out certain data integrations independently
  • Quick availability of data, when needed, and in the required format

Jack's needs can be addressed with the help of data virtualization, which employs a layered architecture, using a combination of physical and virtual data stores, depending on parameters like performance, storage availability, etc. Most leading data virtualization solution providers in the industry, such as Denodo, Cisco, SAS, IBM, and Informatica, use data integration techniques to ensure consistent data access, supporting complex disparate data sources and structures across various locations.

In today's data-centric world, where having the right data at the right time is key to successful decision-making, data virtualization addresses four key challenges:

Speed: Traditional methods of receiving data in a specific format have long cycle times, in terms of raising a CR with IT teams, followed by requirement gathering, impact analysis, integration, unit testing, system testing, production deployment, etc. Data virtualization, however, can integrate data from disparate data sources and formats into a single data layer, thus providing a unified view with limited / no data replication. 

Quality: Data virtualization provides users access to high-quality data through functions like data standardization, cleansing, transformation, enrichment, and validation.

Control: With data virtualization, data doesn't have to be replicated across instances and can instead be maintained in a single repository, which helps users maintain better control over it.

Cost: Data virtualization also helps organizations move away from the practice of maintaining multiple copies of the same data, thus enabling businesses to become more independent, reduce cycle time for report generation, and thereby bring down costs.

Conquering the challenges of data virtualization testing

Tim knows very well that no one will accept his solution without validation from the QA team. He reaches out to Mike, who heads the QA transformation and consulting team. To Tim's surprise, Mike already has a solution ready and informs him that it is very much an extension of a strategy he already uses in the BI world.

According to Mike, data virtualization is great for business users as it hides the complexity involved in generating complex reports. However, validating it is complex and can be expensive. Thus, he suggests that the following parameters be taken into consideration:

Test strategy

Requirement: A detailed test strategy based on requirements, covering:

  • Data migration testing
  • Integration testing
  • Web services
  • Data virtualization testing
  • Report testing
  • Security testing

Recommendations:

  • Test-driven development: Very useful for developing complex reports with multiple integration checkpoints, so that the user can check every component and integration independently against the final, broken-down business outcome expected
  • Runtime monitoring of data: Provides valuable insights to testers for tuning their test cases to match the real needs of the systems
  • Reliability testing: Based on the policies defined in fault models for individual / composite data and services, set up tests to validate scenarios like invalid / unexpected data and timing conditions such as deadlock, concurrency, etc.
  • Regression testing: Efficient regression test cases and testing can help reduce the cost of retesting
  • Risk-based testing approach: Helps decide which critical components should be validated extensively with 100% record validation, which reduces the probability of failure and ensures that business-critical reports are accurate. Higher weightage is given to components that are business-critical, high-usage, or failure-prone

Test planning

Requirement: Detailed requirement capturing and a validation procedure as part of test planning

Recommendation: Develop a tracker-based validation system to check for:

  • Migration of data
  • Integration of data sources
  • Report validations
  • Metric validations
  • Key SLA validations

Skill set

Requirement: A specialized team of testers with skills in:

  • Data testing, with hands-on experience in disparate data sources
  • Web service testing
  • Performance testing
  • Detailed understanding of business and data flow

Recommendation: Cross-enable the team on these skills during the test planning phase, based on recommendations received from the test strategy phase

Staffing model

Requirement: Initial heavy loading of resources to test individual components, followed by a small team specializing in end-to-end testing at the end

Recommendation: A core-flex staffing model to support the heavy loading at the start

Test process

Requirement: Given the disparate data sources and multiple integration checkpoints, a new set of test process assets is required at every test life cycle stage

Recommendation: Customize the test process with key focus on:

  • Integration checkpoints
  • Test data availability
  • Report visualization

A data analyst, in collaboration with a test process engineer, will help develop a strong testing process

Tooling

Requirement: A framework and toolset that can support the various testing needs of data virtualization

Recommendation: Identify tools supporting the various needs of data virtualization, preferring open-source tools that suit your testing needs. Also work on building automated regression suites that can validate the core business entities.


Mike also recommends the following types of testing:

Data acquisition testing

This involves validating the acquisition of data from multiple data sources in different formats. It is a complex validation procedure, as it can also involve semi-structured or unstructured data, along with structured data. As part of testing, various validation checks like data extraction, filtering, completeness, and consistency have to be carried out.

Data migration testing

Migration of data from multiple data sources is covered here. Level of complexity goes up in scenarios where large volumes of data or data transformations are involved.

Data virtualization testing

A separate test sub-network replicating the actual implementation will help in true validation of the system. A check needs to be carried out to validate support for all required configurations and operating systems at the server and client ends. We also need to emulate a global network and validate scenarios like delays in data availability.

Data quality testing

It is important to validate the quality of the data stored, as it is the basis for all business decision-making. Along with traditional data quality checks such as schema, metadata, lookup, format, data structure, pattern, and statistical checks, we also need to check the quality of data in terms of:

  • Its business usage; like whether the data satisfies all the key business rules
  • If the data is transformed and organized in a format that can provide quality information to the business for not just today's need but also for future needs

Data integration testing

Integration and end-to-end testing to validate that, post the data virtualization implementation, disparate data sources and systems act as one. Validations should cover data completeness (record count checks between source and target), removal of duplicate records after integration between systems, and the ability to correctly identify matching records across data sources. Data integrity checks validate data consistency between source and target, along with lookups, aggregates, and expression transformations.
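
A minimal sketch of these reconciliation checks using Python and pandas; the column names and the small inline sample frames are illustrative assumptions standing in for data pulled from the source systems and the virtualized layer.

```python
# Minimal sketch: source-to-target reconciliation - record counts, duplicate
# checks and orphan/mismatch detection. Sample data is illustrative only.
import pandas as pd

source = pd.DataFrame({"customer_id": [1, 2, 3, 4, 4],
                       "region": ["EU", "US", "US", "APAC", "APAC"]})
target = pd.DataFrame({"customer_id": [1, 2, 3, 4],
                       "region": ["EU", "US", "US", "APAC"]})
# In practice these frames would be loaded from the source systems and the
# virtualized/target layer instead of being defined inline.

print("Record counts - source:", len(source), "target:", len(target))
print("Target free of duplicates:", not target["customer_id"].duplicated().any())

# Orphan and consistency checks: every target record should match a source record
merged = target.merge(source.drop_duplicates("customer_id"),
                      on="customer_id", how="left", suffixes=("_tgt", "_src"))
orphans = merged["region_src"].isna().sum()
matched = merged.dropna(subset=["region_src"])
mismatches = (matched["region_tgt"] != matched["region_src"]).sum()
print("Orphan target records:", orphans, "| attribute mismatches:", mismatches)
```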

Report testing

Validate the reporting system keeping the future needs of the business in mind, covering UI navigation, filters, prompts, and data correctness in reports. Reports also need to be validated for browser compatibility.

Security testing

We also need to ascertain that information is accessible only to the people authorized to view it. Unauthorized access to data can lead to issues like privacy breaches, non-compliance with regulations, financial irregularities, litigation, etc.

Performance testing

A core benefit of data virtualization is quicker access to data when it is needed. Hence, all SLAs need to be validated in detail against the actual production load of data. Key metrics like throughput, latency, etc. need to be tracked closely.

Business usability testing

Business entity validation: Validate for accuracy of business entities with all data validation checks satisfying business rules. Involves data checks like duplicate, record format, consistency, accuracy, referential integrity.

Operational accuracy: Accuracy of reports in terms of parent report data tying with the drill-down data and the ability to reconcile with key business metrics.

Take regular feedback from the business during SIT rather than waiting until UAT. This will help you develop systems and reports that do not just meet the technical specifications but are also more business-friendly.


Business is happy and growing

Jack and his leaders are happy with the solution, as the data virtualization implementation and the QA validations carried out to ensure accuracy of the reports have helped them address data-related challenges in making the right decisions. Jack's business can now:

  • Help new business lines integrate data with the existing data warehouse, at limited cost and in a short cycle time
  • Integrate structured and unstructured data
  • Integrate real-time data with an application and a data warehouse
  • Have a 360-degree view of customers based on data across various systems

 Conclusion

Many organizations fail to reap benefits from their diverse data sources due to their reluctance to accept new trends in the data space. With data virtualization, business users can economically access data from disparate data sources on a need basis. At the same time, a validation procedure is also required, with the right set of strategies, tools, and practices, to enable tomorrow's integration and reporting needs.

June 1, 2016

Predictive Analytics Changing QA

 Author: Pradeep Yadlapati, AVP

Today's mobile economy is changing the way enterprises do business. A recent survey indicates that the mobile ecosystem generates 4.2% of the global GDP, which amounts to more than US $3.1 trillion of added economic value. It is no surprise that organizations are fast embarking on digital transformations.

The pervasiveness of devices is altering interaction as well as business models. Customers expect a seamless experience across different channels. Everyone wants one-touch information and they expect applications to display preferences and facilitate quicker and smarter decisions. 

A cartoon strip published recently captured these dynamic interactions very well. It showed a street vendor organizing his fruits to indicate how people who purchase mangoes also buy apples and grapes. This is the impact of data and analytics - personalized interactions. And this personalization is changing how business models operate, creating a reverse cycle in the sale of goods and services.

Besides enabling agility, the digital revolution underscores the need for a superior user experience. This is critical in the light of studies showing that the average customer tends to shift to a different provider if their response time is over 3 seconds. Thus, to retain customers, one must provide an unparalleled user experience with lightning-quick responsiveness.

Consider the last time you were dissatisfied with a service/product. Typically, you would express dissatisfaction through online posts on social media and experience some level of gratification for being listened to and empathized with. Today, the urge to share experiences is more prevalent - and much easier - than ever before. 

To keep pace with these changing interaction and business models, enterprises want to know: 

  • How can they listen to customers faster to improve their services?
  • How do they build resilient systems through continuous listening?
  • What self-learning systems do they implement to gain accurate insights into what customers want?
  • How do they ensure that testing delivers high quality and a better user experience?

The answer to all these questions lies in data. Data yields actionable insights about customers that can be leveraged by testing teams.

The recommended approach is to apply multi-dimensional analytics on four data sources to get accurate insights. Enterprises typically analyze defects to understand failure rates, pass rates, closure times, turnaround times, etc. While some departments, such as marketing, analyze social media to understand customer sentiment, the most valuable source of insights is machine logs - and this is where enterprises should focus their efforts.


Let us explore the four ways that enterprises can leverage effective testing to gain a competitive edge and create relevant user experiences that ensure customer delight and loyalty.




1. Listen to your Customer
According to Bill Gates, "Your most unhappy customers are your greatest source of learning." In an age where every sentiment has a digital footprint, companies can understand and change customer sentiment easily through active listening.

Say, for instance, you purchase a Wi-Fi extender from an online retailer and it is delivered earlier than expected, allowing you to get connected sooner than planned. You may express your satisfaction through positive online reviews. Alternatively, if you were unhappy with your experience, your likely course of action would be to visit the retailer's social media page and express your dissatisfaction.

Social media analytics can track customer reviews and classify them as 'positive' or 'negative'. Negative reviews provide valuable information on functional, performance, security, and other issues. While several enterprises already conduct such sentiment analysis, they often do not share these insights with enterprise IT owners or the managers of online and mobile testing teams. Sharing insights about the factors that impact user experience enables testing teams to proactively address issues by creating new test cases, automating scenarios, and building a comprehensive repository.
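
As an illustration of how negative reviews might be routed to testing teams, here is a minimal Python sketch using a naive keyword-based classifier. The review text, keyword lists, and issue categories are hypothetical stand-ins for whatever social media analytics pipeline an enterprise actually uses.

```python
# Minimal sketch: classify reviews as positive/negative and tag likely issue
# areas so they can feed a testing backlog. Keyword lists are illustrative
# assumptions, not a production sentiment model.
NEGATIVE_WORDS = {"slow", "crash", "error", "broken", "timeout", "refund"}
ISSUE_KEYWORDS = {
    "performance": {"slow", "lag", "timeout"},
    "functional": {"crash", "error", "broken"},
    "security": {"fraud", "breach", "password"},
}

def classify_review(text):
    words = set(text.lower().split())
    sentiment = "negative" if words & NEGATIVE_WORDS else "positive"
    issues = [cat for cat, kws in ISSUE_KEYWORDS.items() if words & kws]
    return sentiment, issues

reviews = [
    "App keeps showing a timeout error during checkout",
    "Delivery was early, very happy with the service",
]
for review in reviews:
    sentiment, issues = classify_review(review)
    print(f"{sentiment:<9}{issues} <- {review}")
```

Reviews tagged with an issue category can then be turned into new test cases or automated scenarios for the corresponding team.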

2. Learn from failures
Every enterprise has a repository of defects captured during each release/sprint. These defects indicate parts of an application that have failed, helping enterprises to evolve smarter testing techniques based on accurate data.

Let us take the example of a bank rolling out a new core banking platform using an agile methodology. Each sprint has defects logged in the application lifecycle management (ALM) tool against the user story, backlog, area of failure, etc. Since a larger set of functionalities increases the risk of regression failures, enterprises must identify regression-heavy sprints. Here, machine-learning algorithms can mine the defect data and perform predictive modelling to reveal failure patterns, which can then be fed into visualization tools such as Tableau, QlikView, etc., to visualize defects by sprint and module. Such visualizations help businesses identify vulnerable modules and choose whether to regress them or retain error-free functionalities as they are.
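
To make this concrete, here is a minimal Python/pandas sketch that pivots defect records by sprint and module and flags regression hot spots. The inline frame and the column names (sprint, module, defect_type) are hypothetical stand-ins for an ALM export.

```python
# Minimal sketch: mine an ALM defect export to spot regression-heavy modules
# per sprint. Column names and inline data are illustrative assumptions.
import pandas as pd

defects = pd.DataFrame({
    "sprint": ["S1", "S1", "S2", "S2", "S2", "S3"],
    "module": ["payments", "login", "payments", "payments", "search", "login"],
    "defect_type": ["regression", "new", "regression", "regression", "new", "regression"],
})
# In practice: defects = pd.read_csv("alm_defect_export.csv")

pivot = (defects[defects["defect_type"] == "regression"]
         .groupby(["sprint", "module"]).size()
         .unstack(fill_value=0))
print(pivot)   # feed into Tableau/QlikView, or inspect directly

hot_modules = pivot.sum().sort_values(ascending=False)
print("Regression hot spots:", hot_modules[hot_modules > 1].index.tolist())
```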

With defect analytics, enterprises can easily prioritize what to test and the sequence of testing based on vulnerability, while significantly reducing the cost of testing.

3. Insights from incidents
Customer service representatives (CSRs) who handle on-call issue resolutions often capture valuable information during their conversations. Typically, incident management teams analyze the root causes using ITSM tools, thereby gaining information on how to curtail problem scenarios. However, as a direct interface with the customer, CSRs are privy to insightful suggestions from customers on what impacts their experience and how problems may occur in production.

Recently, I faced an issue using an online application to recharge my travel card from my savings account. Despite entering the correct details, the transfer was unsuccessful. On calling the customer care number, I discovered there were several issues with the application, and while the customer representative could not understand why the application malfunctioned, he captured my suggestions for a level 2 or level 3 support expert to analyze.

Root-cause analyses of incidents are critical to discovering how IT can prevent incidents in production by understanding failures and proactively creating test cases to address them in the future. Organizations can create utilities that continuously read incident records, classify them into categories (such as functional, regression, performance, etc.), create test cases, and feed these into a repository. Creating test cases for all boundary scenarios gives businesses a constant feedback loop that tracks production activity.

4. Predict application performance
From the time an application is developed, it generates a variety of logs related to application, database, app server, web server, etc. Each log captures details about failed code components, error causes, etc.

By analysing these logs, businesses can get information about areas of failure such as modules, code components, database requests, memory overflows, etc. Further, machine-learning algorithms can continuously learn from these logs and predict application performance, which can be viewed through visualisation tools that offer hot-spot views of potential failures in each module/code component. Testing thus becomes more effective: vulnerable areas are understood, sequenced appropriately, and subjected to risk-based testing. Coupled with powerful machine learning, this approach helps testing teams predict the performance of an application before it reaches testing.
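
As a minimal illustration of mining logs for failure hot spots, the Python sketch below counts errors per component from a few sample log lines. The log format and component names are assumptions; a real pipeline would stream actual application and server logs and feed a learning model rather than a simple counter.

```python
# Minimal sketch: rank components by error count from application logs.
# The "TIMESTAMP LEVEL component message" layout is an assumed format.
import re
from collections import Counter

LOG_LINES = [
    "2016-06-01T10:00:01 ERROR payments NullPointerException in settle()",
    "2016-06-01T10:00:05 INFO  search query served in 120ms",
    "2016-06-01T10:01:12 ERROR payments DB connection pool exhausted",
    "2016-06-01T10:02:40 ERROR login session token expired unexpectedly",
]
pattern = re.compile(r"^\S+\s+ERROR\s+(\S+)\s+(.*)$")

errors_per_component = Counter()
for line in LOG_LINES:              # in practice, iterate over open("app.log")
    match = pattern.match(line)
    if match:
        errors_per_component[match.group(1)] += 1

for component, count in errors_per_component.most_common():
    print(f"{component}: {count} errors")   # candidates for risk-based testing
```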

Conclusion:
The new paradigm of digital-first creates unique opportunities for testing teams to leverage multi-level predictive analytics and get insights that were previously unavailable. Predictive analytics revolutionizes the role of testing, making it a powerful contributor to the end-user experience. To enable testing success, businesses should leverage machine-learning algorithms and enable rich visualisations for better business decision-making about potential issues, thereby delivering an unparalleled user experience.