Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


April 28, 2016

Manage tests, the automation way

Author: Swathi Bendrala, Test Analyst 

Most software applications today are web-based. To keep pace with competition and highly demanding processes within the enterprise, these web-based applications undergo frequent updates, either to add new features or to incorporate new innovations. While these updates are necessary, the cost of rolling them out matters too.

For faster testing, the testing team inevitably has to depend on automation. Among the various commercial and open source tools available in the test automation world today, Selenium is the most popular and widely used. It integrates with a wide range of tools to form a robust framework covering compilation, continuous integration, bug tracking, and reporting, to name a few.

HP ALM (formerly HP Quality Center) is an HP offering that is much more than a test management tool; it is used for managing requirement documentation, test planning, test execution, and so on.

To get the best of both worlds, we propose an integration framework that reduces the overall cost of automation by using open source tools and brings the best of both capabilities to the automation testing of a web application.

Integrating Selenium Webdriver with HP Application Lifecycle Management (ALM)

A web interface is the most common way of accessing ALM, but third-party tools need APIs. Among the different ways of communicating with ALM, this post discusses HP ALM Open Test Architecture (OTA), a COM-based API for working with ALM from different programming languages, including Java. OTA lets testers customize their testing in ways that are not possible through the web interface, and it allows any third-party tool to integrate with ALM without going through the user interface. The entry point for this API is the TDConnection object, which exposes the ALM functionality to any COM-aware programming language. Using it, we can fetch data from the ALM database, such as defect details, or update status details in ALM from an external script. The diagram below shows the different ways of interacting with the ALM server: through the browser interface and from third-party tools via the OTA client.

[Figure: Browser interface and OTA client access to the ALM server]
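
As an illustration, the snippet below is a minimal connection sketch in Java. It assumes the com4j-generated OTA bindings (packaged here as hpqc.jar) expose a ClassFactory and an ITDConnection interface for the TDConnection coclass; the package and method names are indicative and may differ depending on how the bindings were generated.

    // Minimal ALM connection sketch using hypothetical com4j-generated OTA bindings
    import com.hp.qc.ClassFactory;      // assumed package name for the generated bindings
    import com.hp.qc.ITDConnection;

    public class AlmConnector {
        private ITDConnection tdConnection;

        // Connect to the ALM server and open the given domain and project
        public void connect(String serverUrl, String user, String password,
                            String domain, String project) {
            tdConnection = ClassFactory.createTDConnection(); // instantiate the COM object
            tdConnection.initConnectionEx(serverUrl);         // point to the ALM server
            tdConnection.login(user, password);               // authenticate
            tdConnection.connect(domain, project);            // open the domain / project
        }

        // Release the connection cleanly
        public void disconnect() {
            tdConnection.disconnect();
            tdConnection.logout();
            tdConnection.releaseConnection();
        }
    }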

The Integration

Let us look at how the actual integration is done. The entire process is shown in the diagram below. On the local machine, we need the HP ALM client and the Selenium test scripts, which in turn use the script containing the ALM integration logic. On the ALM server, we need a test lab containing the test plan, with the script that calls the Selenium script added to it. Whenever required, the Selenium script can then be triggered directly from ALM, and the status is updated in ALM at the end of the test. The sections below - before scripting, scripting phase, and triggering the test from ALM - shed more light on the process.

[Figure: Selenium-ALM integration setup across the local machine and the ALM server]

Before Scripting

The following files are required before we proceed with scripting the actual integration code:

  • Selenium-Server-Standalone-[version].jar
  • Com4j.jar (the Open Test Architecture of ALM is accessed through DLLs; OTAClient.dll, which is created when ALM is accessed from a machine for the first time, is the one used with Java to connect to the ALM server. Com4j is the jar file used to generate the Java files from the DLL, typically through its tlbimp type-library importer)
  • Hpqc.jar (the jar file created from the Java class files generated by Com4j; this is required to set up communication with the ALM components)

DLL registration

DLL registration is required to create an instance of the object and to tell COM which DLL should be loaded for instance creation. This can be done using the regsvr32 command, a command-line utility for registering and unregistering DLLs (for example, running regsvr32 OTAClient.dll from an administrator command prompt).

HP Client installation

  • From the ALM Add-ins section of the HP website, download and install the ALM Client MSI Generator
  • Provide the ALM server address and generate the MSI (if required, you may change the output folder location)
  • In the output folder, run the generated installer to install the HP ALM client

Ant setup

Ant is required to resolve your dependencies automatically at run time. It can be used to script builds that run from the command line, and it allows software to be moved between different environments without the cumbersome and time-consuming issues related to compilation, path setup, and so on. You may refer to the manual below for Ant setup:

Reference: http://ant.apache.org/manual/install.html

Scripting Phase

Pre-requisites:

  • ALM server URL, login credentials, project and domain details should be known before proceeding with the scripting
  • Create a VAPI-XP test in the ALM test plan and add it to the test lab
  • Create a Java project in an IDE such as Eclipse and include the above-mentioned jars in the build path

Scripting:

  • Create a Java script (ALM connectivity) for connecting to, disconnecting from, and performing all other functions in ALM
  • Create a test script that captures the actual application test scenarios and calls the ALM connectivity script (a simplified sketch follows this list)
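
For illustration, below is a simplified sketch of such a test script. It drives the browser with Selenium WebDriver and delegates ALM communication to the AlmConnector class sketched earlier; the server URL, credentials, project details, and the application under test are placeholders, and writing the result back to a run in ALM (through OTA objects such as the test set and run factories) is only indicated, not shown.

    // Simplified Selenium test that reuses the AlmConnector sketch above
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginPageTest {
        public static void main(String[] args) {
            // Connect to ALM before the test starts (placeholder details)
            AlmConnector alm = new AlmConnector();
            alm.connect("http://almserver:8080/qcbin", "almUser", "almPassword",
                        "DEFAULT", "DemoProject");

            WebDriver driver = new FirefoxDriver();
            String status = "Passed";
            try {
                // Actual application test scenario
                driver.get("http://application-under-test/login");
                if (!driver.getTitle().contains("Login")) {
                    status = "Failed";
                }
            } finally {
                driver.quit();
            }

            // The status would then be written back to the corresponding run in ALM
            // through the OTA API before disconnecting
            System.out.println("Test status: " + status);
            alm.disconnect();
        }
    }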

Building the project:

Create a build.xml file for the project and build it using Ant

Triggering test from ALM

There are different options for running a test. One possibility is to run an individual script, or the entire build file, through command-line execution (using shell commands) in the Test Script section of the Test Plan in ALM - for example, an Ant invocation such as ant -buildfile build.xml (the exact command and target depend on your build file).

Why OTA CLIENT?

Out of the different ways of integrating Selenium with ALM, the REST API and the OTA client are the most popular and widely used. The main advantage of the OTA client is that it exposes more of ALM's functionality, and the object-based nature of the API is easier to work with. The REST API currently provides only CRUD operations for the core entities.

Key benefits of the integration

The key benefits of this integration are:

  • With the integration of ALM and Selenium in place, we expect a potential reduction in cost compared to using QTP with ALM
  • Open source functional automation tools such as Selenium, which can generate automation tests in the scripting language of our choice, have good online support
  • Traceability can be put in place with ALM's built-in traceability feature
  • Having an Ant build in place and triggering the Selenium scripts from ALM using the Ant build brings in automation of test execution in the true sense
  • With this setup, where Selenium scripts are run from ALM using the Ant build, the dependency on script developers for test execution is eliminated, ensuring that the scripts can be triggered by anyone, not necessarily the automation test engineers

Conclusion

Software test automation has always been considered critical in major software companies, but it is often considered too expensive. More often than not, the tools used in the testing life cycle are chosen after a lot of analysis, where cost is one of the key factors. Though UFT supports web UI automation, as does Selenium, the demand for Selenium in the market is higher since UFT comes at a cost. Without doubt, ALM is one of the leading test management tools in the market, so most teams look for an integration of Selenium and ALM. Considering the support that HP currently provides for OTA and the functionality it offers, OTA is the best option for integrating Selenium with ALM.

In addition, in several teams, there is a heavy dependency on automation test engineers to run the automation scripts, generate automation test reports, and so on. With this integration in place, manual test engineers who are familiar and comfortable with test management tools such as ALM can also trigger the automation tests. This reduces the dependency on, and effort of, the automation test engineers, who can use the time saved to automate more test cases. By using the Ant build, any changes made to the script dependencies by different automation test engineers at different points in time are taken care of without any changes to the ALM script call, enabling testers to manage tests the automation way.

April 26, 2016

Validate to bring out the real value of visual analytics

Author: Saju Joseph, Senior Project Manager

Every day, enterprises gather tons of data streaming in from all directions. The challenge lies in taking this huge volume of data, sometimes unstructured in nature, synthesizing it, quantifying it, and increasing its business value. One way to achieve this is by moving from traditional reporting to analytics.

And a valuable form of analytics is visual analytics, which helps enterprises visualize their strategic and operational goals and drive future financial success. The innate tendency of humans to draw insights from visual representations makes data visualization powerful and visual analytics an effective tool in business decision making.

Visual analytics differs from traditional BI reports in three key ways:

  • Traditional BI reports indicate the status at a specific point in time; visual analytics reports display progress over time towards specific enterprise goals
  • Traditional BI reports translate raw data into information; visual analytics reports transform information into insights
  • Traditional BI reports follow a push approach, where reports are passively pushed to users; visual analytics reports follow a pull approach, allowing end users to both enter and retrieve data from enterprise systems to answer specific business questions

Visual analytics - Validate to enhance credibility

Visual analytics operates with three major goals in mind:

  • Efficiently communicate the right information to the right user
  • Enable better decision making
  • Tell the story

Analysts need assurance about the reports presented to them. Specifically, they need to know how accurately a report showcases the results, and to account for that information when deciding how to use the report. Establishing such credibility is not easy. There are two ways to enhance the credibility of analytics reports:

  • Transparency - Describes how the analytic model / algorithm is built
  • Reliability - Describes how well it replicates reality

A validation process in visual analytics helps achieve these two goals and enhances the credibility of analytics reports. But the strategies applied for validating analytics reports should go over and above those applied for validating traditional BI reports, because analytics reports transform information into insights in addition to what traditional BI reports are meant to achieve.

Strategies for visual analytics validation

In general, the following four strategies can be applied for validating analytics reports to impart user confidence in them:

  • Expert validation
  • Predictive validation
  • External validation
  • Cross validation

[Figure: Strategies for visual analytics validation]

Expert validation: In this phase of verification, experts in the problem area evaluate the model / algorithm used in the analytics report - its structure, the data sources used, the problem formulation, the assumptions made, and the expected results. Expert validation is a subjective evaluation, and the process increases the credibility and acceptance of the report results.

Predictive validation: In this process, the analytics report results are generated by prospectively varying input attributes. This is the most desirable type of validation, predicting what will happen in the future. Sensitivity analysis can be performed to explore how results change with variations in the input parameters.

External validation: This method makes use of the huge amount of historical data available within enterprises before new analytics reports are implemented. A strategy is devised to feed the historical data in as report inputs and to compare the results with real-world outcomes. It is important to perform multiple validations that crisscross a wide range of scenarios. This validation tests the analytics report's ability to calculate actual outcomes. External validation results are limited by the extent to which data sources are available and by the types of data within those sources.

Cross validation: This is the most crucial part of the validation process - validating the analytics report's results independently against a similar / comparable model / algorithm addressing the same problem. The differences in the results and their causes are then examined. A high degree of dependency among the models / algorithms reduces the value of cross validation.

Analysts and senior management use analytics reports in areas such as consumer analytics, experience analytics, risk analysis, and fraud analysis for better data-based decision making, better enablement of key strategic initiatives, a better sense of risk, and the ability to react to changes. Applying proper strategies for analytics report testing increases the credibility and transparency of the reports and establishes quality standards through sound practices. It also gives information on how accurately the reports predict the outcome of interest, so this can be accounted for when deciding how to use the report results.

In the world of visual analytics, validating the data represented and performing eye-candy tests done as part of traditional BI report testing are just not enough. Visual analytics reports are developed to help the decision maker when the questions are too complex. The team responsible for validating analytics reports should strive to ingrain these strategies in their visual analytics process. A well-validated analytics report can provide invaluable insights that cannot be obtained otherwise.

April 14, 2016

Is Performance Testing really Non-Functional Testing?

Author: Navin Shankar Patel, Group Project Manager

The world of testing has evolved over the years and, like most evolutions in the technology world, it has spawned a plethora of methodologies and philosophies. We now have super specializations in the testing world - UI testing, data services testing, service virtualization, etc. However, some beliefs remain unchanged, especially the belief that the testing universe is dichotomous - characterized by functional and non-functional testing.

This duality of the testing universe has its roots in the traditional way of application development, in which the software development life cycle has requirements, design, development, and testing as distinct and sequential activities. The requirements are broadly classified into functional and non-functional requirements. Functional requirements broadly cater to needs like screens, workflows, and reports, and non-functional requirements to elements like performance, security, etc. In the traditional world, functional testing precedes non-functional testing (NFT), and more often than not, non-functional testing is the last activity to be performed before deployment. The unsaid but commonly acknowledged view is that functional testing has more significance in this bimodal world of testing. In this view, if the application screen shows the required details, we adjudge that the functional verification is complete and then move on to see if the screen response time is within the desired limits.

In practice, NFT and performance testing are treated synonymously. But is performance testing really non-functional anymore? If 'functional' is defined as something the customer needs, can we say that the customer's performance expectations have less significance?

It is being increasingly acknowledged that customer experience is of paramount importance in the digital world; it has almost become mission critical. Gone are the days when it was sufficient to validate the screen elements or the fields of a report in a batch process, with performance testing treated as an afterthought. With a generation of users indulged by the lightning speed of a Google or an Amazon, and customer experience centered on the speed of application performance, it is passé to treat performance testing as non-functional. It is, in fact, the most desirable customer requirement - the primary functional requirement!

In fact, the traditional definitions and processes of performance testing may not be applicable at all in the changing scenario. How does one go about defining a performance testing process for a Big Data-driven real-time analytics application? Do we just test for the so-called functional aspects and leave the performance testing to a later stage? When the term 'real-time' itself signifies performance, is it not imperative that we test the performance aspects in parallel with the traditional functional features?

The change in thinking about performance testing is necessitated not only by technological disruption in the form of Big Data or IoT, but also by rapidly changing software delivery processes. With many corporations adopting DevOps / Agile methods, the release cycles of enterprise software are shrinking. In the older world, Performance Engineering (PE) and Performance Testing (PT) occupied the two extremes of a development cycle: PE focused on engineering aspects and pre-emption of issues, while PT focused on post-mortem and validation aspects. With the changed dynamics and reduced release cycles, performance testing is undergoing a 'shift-left', where many PT activities are moving much earlier in the life cycle. It may not seem exaggerated to claim that PE and PT will eventually converge. It can be said that quality assurance is moving towards quality engineering!

In this light, IT organizations may need to fundamentally change the way they approach performance testing:

1) Treat non-functional requirements on par with functional requirements
2) Create Early Performance Testing (EPT) strategies for projects with longer gestation cycles
3) Create PT strategies to enable shift-left and integrate elements of PT into unit testing and SIT
4) Leverage and integrate tools like Application Performance Monitoring (APM) in unit testing and SIT phases
5) Redefine performance metrics and performance testing processes in newer areas like Big Data and IoT