Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.

January 29, 2013

What's new in HP Quality Center 11?

Anyone who has been even remotely associated with software testing is sure to have come across HP Quality Center (QC) at some point. Such is QC's penetration that it has a profound impact on the way testing is conducted in an enterprise. As far as quality assurance professionals are concerned, the extent to which QC is utilised, and the way in which it is used, often reflects the maturity of the testing processes adopted in the enterprise.

As they say, Rome was not built in a day! Likewise, QC did not evolve into its current form in just one major upgrade. From the not-so-user-friendly thick client Test Director, QC has evolved immensely through periodic releases, and today it is a point of pride for the majority of 'black box' testers.

The latest release, HP QC 11, which ships as part of the ALM 11 suite, brings major feature upgrades, and test managers need to appreciate both the new features and their impact on current testing processes. My whitepaper, "HP Quality Centre 11 - What's new?", is written from a test manager's point of view and evaluates the new features that would interest end users. It can be accessed at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/HP-quality-center11.pdf.

What tool fits best - Standard or Tailored?

It is back to business after the quiet of the holiday season, and the cold weather and the flu are making it increasingly difficult to manage schedules. During the holidays, I was reminiscing about some of the interesting conversations I had with client organizations over the past year, and a few of those are still fresh in my mind. Here is one interesting question that has come up time and again, and one I have been grappling with myself.

"Which is better - a single tool (or toolset from a single vendor) that addresses most of your testing needs or specialized tools that completely meet each of the varied needs of testing?"

Several folks I have spoken to have gone for a diverse set of tools, each meeting the needs of its area of testing completely or close to it. Others, however, look at this from an enterprise-wide, long-term perspective and point to the challenge of integrating these diverse tools and maintaining that integration through changes over time (tool upgrades, application changes).

I find merit in both arguments, so I am taking a middle position. I would suggest considering the following parameters while making that decision.

· Extent of usage (how much are you planning to use the tool?): The testing tools landscape has changed significantly over the years, and there are many niche tools with capabilities that go deep rather than broad - so you might as well take advantage of those deep capabilities in your testing. I would recommend this especially if usage of the tool is going to be widespread across your testing teams (as a rule of thumb, say >50%).

· Long-term viability (how long are you going to use the tool?): Look beyond the short term and check whether the tool vendor has a long-term roadmap and the organizational capability to execute it. In other words, will you get adequate support from the product vendor going forward?

· Technical integration capabilities (what are you going to use the tool with?): This is critical to the enterprise, as it can become a management nightmare if integration between the tools is not seamless. For example, if your automation tool does not integrate with your test management and defect management tools, a significant portion of your automation benefits will be eaten up by the manual task of integrating them (a minimal sketch of such glue code follows this list).

· Team skills (who is going to maintain the tool?): This is often overlooked by most clients I work with, especially since product vendors have perfected the art of making clients believe that testing tool implementations are as easy as falling off a log and that their product support teams can handle any future changes. I have found this aspect quite challenging, and my take is that you should go for a diverse toolset with several integration points ONLY if you have a dedicated (and centralized) team of tools specialists who can maintain the integrated toolset.
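
To make the integration point above concrete, here is a minimal sketch of the kind of glue code that keeps an automation tool, a test management tool, and a defect tracker in sync. Everything here is a placeholder: the endpoint URLs, payload fields, and credentials are hypothetical and do not correspond to any vendor's actual API, though most commercial tools expose REST interfaces that follow a broadly similar pattern.

# Hypothetical sketch: pushing automation results into a test management
# tool and raising defects for failures over REST. All URLs, field names
# and credentials below are illustrative placeholders, not a real API.
import requests

TM_BASE = "https://testmgmt.example.com/api"   # hypothetical test management endpoint
DM_BASE = "https://defects.example.com/api"    # hypothetical defect tracker endpoint
AUTH = ("svc_automation", "secret")            # use a credentials vault in practice

def publish_result(test_id: str, status: str, log: str) -> None:
    """Record one automated test's outcome in the test management tool."""
    resp = requests.post(
        f"{TM_BASE}/runs",
        json={"testId": test_id, "status": status, "executionLog": log},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    # On failure, raise a defect automatically instead of re-keying it by hand.
    if status == "FAILED":
        requests.post(
            f"{DM_BASE}/defects",
            json={"summary": f"Automated test {test_id} failed",
                  "description": log, "severity": "medium"},
            auth=AUTH,
            timeout=30,
        ).raise_for_status()

if __name__ == "__main__":
    publish_result("TC-1042", "FAILED", "Login page timed out after 60s")

If glue like this has to be hand-built and re-tested for every tool pair at every upgrade, the integration overhead quietly eats into the very benefits each specialized tool was chosen for.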

So there are my two cents on finding the best-fit testing tool; I am eager to hear what others think.

January 25, 2013

Communicating the value delivered by testing - An organizational necessity

More often than not, testing is perceived as a cost rather than a value-add. This can perhaps best be attributed to the testing organization's inability to articulate the true value or impact testing brings to the business. That inability, in turn, stems from the fact that today's testing teams lack a solid framework for clearly mapping testing metrics to the organization's key business objectives.

Most existing frameworks provide a mechanism to map QA metrics to engineering impacts such as quality, cost, and productivity improvements. The issue arises when testing organizations try to use the same frameworks to extend that mapping from engineering impact to business levers such as revenue increase and cost reduction.

Let me elaborate!

I had a client, a bank to be precise, that was upgrading an existing software product in order to retain support from the product vendor. The bank was generating revenue of approximately 8 MUSD per annum using this product, and the client's internal systems (IS) team knew that if support was lost, that revenue would be at risk. From a business standpoint, however, the stakeholders could not have cared less whether the IT upgrade was done or not; they were clear that the product was important to the business, and all they wanted was for the upgrade to be done quickly and at minimal cost. To achieve this, the QA team identified the accelerators and enablers to use and came up with a strategy to automate the test data mining activity. By executing that plan, the QA team helped the business roll out the upgraded version of the product on a very aggressive cycle time.

While executing the program, however, the team unearthed some functional defects that had been present in the system all along - potential bombs waiting to go off in production. To prevent these from seriously disrupting the business, the test team automated the entire data mining activity, making it possible to find the data combinations that caused these defects to surface in functional testing.
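
To give a flavour of what such test data mining automation can look like, here is a minimal sketch: it scans a set of records and keeps one representative record per distinct combination of a few key attributes, so that functional testing exercises every combination present in the data. The field names and sample rows are hypothetical; the bank's actual mining logic was of course far richer.

# Hypothetical sketch of automated test data mining: select one representative
# record per distinct combination of key attributes so that functional tests
# cover every combination present in the data. Fields and rows are illustrative.
records = [
    {"id": 1, "account_type": "savings",  "currency": "USD", "status": "active"},
    {"id": 2, "account_type": "savings",  "currency": "USD", "status": "dormant"},
    {"id": 3, "account_type": "checking", "currency": "EUR", "status": "active"},
    {"id": 4, "account_type": "savings",  "currency": "USD", "status": "active"},
]

KEY_FIELDS = ("account_type", "currency", "status")

def mine_test_data(rows):
    """Return one representative row for each distinct key-field combination."""
    seen = {}
    for row in rows:
        combo = tuple(row[f] for f in KEY_FIELDS)
        seen.setdefault(combo, row)   # keep the first row seen per combination
    return list(seen.values())

for row in mine_test_data(records):
    print(row)   # records 1, 2 and 3 survive; record 4 duplicates 1's combination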

Now, while the process of testing the product upgrade was fairly straightforward, the testing team's proactive unearthing of latent defects in the older version of the software definitely drew a 'wow' response from the business. This is where I believe a value articulation framework becomes absolutely necessary - a framework that captures the value delivered by test teams beyond the usual elements of cost, quality, and productivity improvement.

A value articulation framework helps test teams convert an improved-quality impact into a concrete business impact statement. In the example above, with such a framework, the team would be able to clearly analyze the loss the bank would have incurred had these defects rolled into production. That kind of analysis helps the test team clearly establish the business value it delivers to the organization.
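
As a back-of-the-envelope illustration of the kind of translation such a framework enables, the sketch below converts a latent defect into an estimated loss avoided, using the 8 MUSD annual revenue figure from the story above. The failure probability and outage duration are illustrative assumptions, not client data.

# Hypothetical value articulation: translate a defect found in testing into an
# estimated business loss avoided. The probability and outage figures are
# illustrative assumptions, not actual client data.
ANNUAL_REVENUE = 8_000_000          # revenue flowing through the product (USD)

def loss_avoided(prob_of_production_failure, outage_days_if_it_fires):
    """Expected revenue loss avoided by catching the defect before go-live."""
    daily_revenue = ANNUAL_REVENUE / 365
    return prob_of_production_failure * outage_days_if_it_fires * daily_revenue

# e.g. a latent defect judged 40% likely to fire, causing a 3-day disruption:
print(f"Estimated loss avoided: ${loss_avoided(0.4, 3):,.0f}")  # ~ $26,301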

Articulating value in business terms is no longer a good-to-have for test teams; it is mandatory. A good value articulation framework is an essential piece, without which test teams will continue to struggle to justify their value to the business.

Key Challenges in Test Automation

One of the questions I encounter frequently is, 'Do we need test automation?' The answer is not always a yes; in some cases, the challenges outweigh the benefits, and the knack lies in identifying the right candidates for automation. Test automation is meant to reduce test execution effort by executing test cases automatically. Sounds simple, doesn't it? Hang on - in reality, it is a herculean task with many obstacles.

To execute test cases automatically, we first need to put in a lot of effort to create automation scripts using a suitable tool. So here comes the foremost challenge: choosing the right tool for test automation. In my experience, there is no single tool that fits all applications; we need to consider the application technology, the custom controls used, ease of use, cost, support from the tool vendor, and so on before we can zero in on the test automation tool to be used.
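
One way to make that evaluation less subjective is a weighted scoring matrix over exactly these criteria. In the sketch below, the weights and the 1-5 scores are illustrative placeholders that each organization would calibrate for itself.

# Hypothetical weighted scoring matrix for shortlisting an automation tool.
# Criteria mirror those in the paragraph above; weights and 1-5 scores are
# illustrative and should be calibrated per organization.
CRITERIA_WEIGHTS = {
    "technology_support": 0.30,   # does it drive your application stack?
    "custom_controls":    0.20,   # can it recognize your custom UI controls?
    "ease_of_use":        0.15,
    "cost":               0.15,   # licence + infrastructure, scored inversely
    "vendor_support":     0.20,
}

candidate_scores = {
    "Tool A": {"technology_support": 5, "custom_controls": 3, "ease_of_use": 4,
               "cost": 2, "vendor_support": 5},
    "Tool B": {"technology_support": 4, "custom_controls": 4, "ease_of_use": 3,
               "cost": 4, "vendor_support": 3},
}

def weighted_score(scores):
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for tool, scores in sorted(candidate_scores.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{tool}: {weighted_score(scores):.2f}")   # Tool A: 4.00, Tool B: 3.65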


The second challenge is deciding what to automate. Typically, the tests that need to be run frequently are the ideal candidates: build acceptance tests, tests that need to be run against different sets of data, tests that require a degree of precision difficult to achieve manually, and tests that do not require manual intervention (such as plugging in new hardware or inserting a card). It is better to start with the low-hanging fruit; a quick break-even check, sketched below, helps identify it.
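
The break-even check compares the one-time scripting effort against the per-run saving. The effort figures in this sketch are illustrative; the shape of the formula is the point.

# Hypothetical break-even check for an automation candidate: how many runs
# before scripting effort pays for itself? All effort figures are illustrative.
def breakeven_runs(script_hours, manual_hours_per_run, maint_hours_per_run):
    """Runs needed before automation saves effort overall."""
    saving_per_run = manual_hours_per_run - maint_hours_per_run
    if saving_per_run <= 0:
        return None   # automation never pays off for this test
    return script_hours / saving_per_run

# e.g. 16h to script a test that takes 2h manually and 0.25h to maintain/run:
runs = breakeven_runs(script_hours=16, manual_hours_per_run=2.0,
                      maint_hours_per_run=0.25)
print(f"Pays off after about {runs:.1f} runs")   # ~9.1 runs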

The third challenge is determining a suitable automation framework. The scope and purpose of testing, the type of functionality to be tested, the kinds of actions to be performed, and the types of validation required will all shape this choice. The framework should enable even a manual tester to execute automation scripts. Maintainability, reliability, and performance are the key factors in designing a custom framework. It should minimize maintenance effort: even if the control properties of the application change, the effort required to update the scripts should be minimal. It should report reliable test results and be robust enough to handle unexpected events during test execution. Finally, it should be capable of running tests in parallel, so that a larger number of tests can be executed in a short span of time. A minimal sketch of the maintainability idea follows.
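
The sketch below is a minimal keyword-driven framework built around a central object repository: test steps refer to logical control names, so when a control's properties change, only one repository entry needs updating and every script that uses it keeps working. The driver is a stub standing in for a real UI automation tool, and all names are hypothetical.

# Hypothetical keyword-driven sketch: a central object repository maps logical
# control names to locators, so a UI change means one repository edit rather
# than edits to every script. The "driver" below is a stub for a real tool.
OBJECT_REPOSITORY = {
    "login.username": "id=txtUser",      # if the app changes this id,
    "login.password": "id=txtPass",      # fix the locator here only
    "login.submit":   "xpath=//button[@type='submit']",
}

def driver_action(action, locator, value=None):
    """Stand-in for a real automation driver (e.g. a Selenium wrapper)."""
    print(f"{action.upper():8} {locator}  {value or ''}".rstrip())

KEYWORDS = {
    "type":  lambda name, value: driver_action("type", OBJECT_REPOSITORY[name], value),
    "click": lambda name, _=None: driver_action("click", OBJECT_REPOSITORY[name]),
}

# A test case is just data: (keyword, logical control name, optional value).
login_test = [
    ("type",  "login.username", "qa_user"),
    ("type",  "login.password", "s3cret"),
    ("click", "login.submit",   None),
]

for keyword, target, value in login_test:
    KEYWORDS[keyword](target, value)

Because the test case itself is plain data, even a manual tester can author or run it without touching the framework internals.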


I will discuss each of these in more detail in my upcoming posts. Watch this space for ways to overcome your automation challenges.

January 3, 2013

The future demands that testers broaden their areas of expertise

While scanning a list of best practices for traditional projects recommended by one of the top technology research firms, I realized that quite a few of them, like early involvement in the SDLC or early automation, have already been implemented in projects executed by Infosys. To me, this broadly meant we have already adopted what others foresee as a future trend. It also inspired me to think about the possible forms that software testing might take in the near future.

In a world where every penny spent is expected to deliver realized value, the near-future reality, to me, looks like this:

1. Economic uncertainty across the world (with the possible exception of Brunei or Madagascar) will force governments and corporations to cut costs and squeeze more output from what they spend. From a software testing perspective, this may mean the testing team no longer limits itself to regular testing but also forays into other SDLC aspects - test data management, test environment maintenance, reporting performance issues, and ensuring optimal leverage and usage of testing tools - as part of the testing scope.

2. Development and testing teams will be held accountable both for the number of bugs/defects identified and for the number that slip through.

3. With more and more software testing tools available as freeware, commercial tools may be priced based on benefits realized rather than on usage.

4. All facets of business, from banking to gaming, will embrace mobile applications, resulting in quicker development and deployment and, in turn, shorter yet effective testing cycles.

To sum up, the preceding points indicate a common direction for software testing: the testing team will be made accountable for more activities in the SDLC than ever before, increasing the need for testers to equip themselves with enhanced domain and technical skills.