Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


September 27, 2011

Infosys and Co-operative Banking Group @Iqnite UK 2011

 

Hi there! A little over a week remains before the UK leg of the Iqnite conference (http://www.iqnite-conferences.com/uk/index.aspx) begins. I can't begin to explain how excited I am to be presenting again at this premier conference for software testing professionals. For me, the conference is a great opportunity to learn about the latest and most relevant QA practices from other QA professionals. My association with the event goes back to 2009, when I presented a session on Progressive Test Automation. This year I am teaming up with Paul Shatwell of the Co-operative Banking Group to present a session on "The Co-operative Banking Group's Approach to a One-Test Service Transformation".

 

Wondering what "One-Test Service" is?

"One-Test Service" in simple words is the centralization of QA activities. This could either be the integration of existing QA teams, an outcome of a merger between 2 business entities, or, consolidation of QA activities within the same organization. In all the cases the final objective is the same - establishing a central QA organization to manage all the software testing needs of an organization while introducing standardization and innovation. Needless to say, centralized QA helps improve the quality of software applications and time-to-market. The session being presented by Paul and me deals with the former - integration of existing QA teams. However the approach is also applicable in cases of consolidation of QA activities within the same organization.   

 

We look forward to interacting with you at the event and would love to answer any questions you may have about centralization of QA.

 

To see the entire programme schedule, please visit http://www.iqnite-conferences.com/uk/programme/programme.aspx. I am looking forward to the keynote by Jason Taylor on Managed Testing Services, which sounds quite interesting. Let's see how that goes. For now, I need to focus on my own presentation!

 

Watch out for my next post which will have updates from the event!

Transforming 'Testing Program Metrics' to 'Business Program Metrics'

Software program metrics are a necessary evil today. Be it around quality, productivity, efficiency, coverage, scope or defects, organizations need to collect these data points to measure and analyze the success of any given program. While this sounds very straightforward, it is not. Most of the time, the data collection exercise remains just that, an exercise, with very little time spent on analysis to identify clear areas of improvement that would ensure program or project success. So, how do you ensure that testing metrics are defined correctly and, more importantly, are linked to business outcomes, so that all parties involved take them seriously?

 

For the purpose of this discussion, let's take the example of defining metrics for a QA/testing program. While defining the program metrics for a QA project, the following factors need to be taken into consideration:

 

· Number of Metrics: The number of metrics needs to be apt for the size of the project.
· The Larger Purpose: The metrics need to serve a larger purpose of improvement, rather than existing for their own sake.
· Informative: Are the defined metrics giving meaningful information to the stakeholders?
· Stakeholder: Is each metric mapped to one or more stakeholders?
· Systems & Tools: Does the project team have the tools and systems in place to capture the data needed to compute the metrics?
· Benchmark: Does each metric have a benchmark?
· Definition & Measurement: Do the definition and measurement methodology of each metric remain constant throughout the project?
· SLA: Do the metrics cover the critical SLAs of the project?
· Project Success: Are the success criteria of the project mapped to the metrics?
· Stakeholder Satisfaction: Is every stakeholder involved in the project satisfied with at least one metric?

 

After addressing these questions, the next recommended step is converting the testing program metrics into business program metrics. This can be a challenging task, but it is very important and relevant to the stakeholders involved, especially the client manager and the IT manager, for ensuring the success of the QA program in question. The objective of any metric collected is to track and analyze data and to use that analysis for further improvements. For this to be achieved, the testing metrics need to be linked to a possible business benefit of the project.

 

To illustrate how testing program metrics can be linked to business benefit outcomes, let us consider the example of the "Test Case Execution Productivity" program metric.

 

Definition: The number of test cases executed by a QA team in a cycle during a defined time period

Usage: Measures the number of test cases that can be completed by the team

Formula: # of test cases executed / total time spent in execution (a quick worked example follows this definition)

Trend/Analysis: A high value for this metric signifies a good understanding of the application and domain, and the possibility of shorter execution times

Business Value: Indicates faster time to market
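
As a quick worked example of the formula, here is a minimal Python sketch; the figures are invented purely for illustration:

```python
# Illustrative only: compute test case execution productivity for one cycle.
# The numbers below are made up for the example.
test_cases_executed = 240          # test cases completed in the cycle
total_execution_hours = 60.0       # total effort spent on execution, in person-hours

productivity = test_cases_executed / total_execution_hours
print(f"Execution productivity: {productivity:.1f} test cases per person-hour")
# -> Execution productivity: 4.0 test cases per person-hour
```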

 

In order to link the test case execution productivity metric to a business benefit, the following points need to be considered and addressed (a small illustrative calculation follows the list):

 

1. How do the stakeholders define test case execution productivity?
2. Is the client or stakeholder really interested in the number of test cases executed?
3. How many test cases should be written for a requirement or scenario?
4. How many of those test cases are mapped to the complexity of the scenario?
5. What if the number of test cases executed is high, but the defect count is low?
6. What is the execution productivity when the defect count is higher than expected?
7. What if productivity is high but the coverage of the functionality is low?
8. Is there a tool to increase productivity, and is the team actually using it or focusing on manual execution?
9. Can risk-based testing be introduced during execution, instead of just focusing on the end result of high productivity?
10. Is the scenario coverage during execution mapped to the test coverage and test execution productivity? Is there a close correlation between them?
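
One way to act on these questions, offered here only as an illustration rather than a standard formula, is to stop reporting raw productivity in isolation and weight it with scenario coverage and defect yield. In the hypothetical sketch below (the weighting scheme and all figures are assumptions), a fast but shallow execution cycle ends up scoring lower than a slower, broader one:

```python
# Illustrative sketch: combine raw execution productivity with scenario
# coverage and defect yield so the metric says something about business risk.
# The weighting scheme and sample figures are assumptions for this example.

def business_adjusted_productivity(executed, hours, scenarios_covered,
                                   scenarios_total, defects_found, defects_expected):
    raw_productivity = executed / hours                        # test cases per hour
    coverage = scenarios_covered / scenarios_total             # 0.0 - 1.0
    defect_yield = min(defects_found / defects_expected, 1.0)  # capped at 1.0
    # Scale raw productivity down when coverage or defect yield is poor.
    return raw_productivity * coverage * defect_yield

# Cycle A: very fast, but shallow coverage and few defects found.
print(round(business_adjusted_productivity(300, 50, 40, 100, 5, 20), 2))   # 0.6
# Cycle B: slower, but broad coverage and close-to-expected defect yield.
print(round(business_adjusted_productivity(200, 50, 90, 100, 18, 20), 2))  # 3.24
```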

 

Depending on the type of project and the nature of the technology, this simple testing program metric, "test case execution productivity", can reveal a great deal from a measurement standpoint within a project.

 

To summarize, the detailed analysis of any testing program metric should lead to business benefit and value ("time to market" in this case) to ensure the success of the program. Without this linkage, adding metrics becomes a theoretical exercise with no real value delivered to the overall project or to the stakeholders involved.

September 26, 2011

Performance Testing for Online applications - The Cloud Advantage

Organizations have finally realized that building brand loyalty online contributes significantly to the overall brand value of the organization. In order to achieve this brand loyalty in the online space, organizations need to focus on two key elements - user experience and application availability.

 

Organizations can improve their online end user experience by conducting usability testing and by taking feedback from users to uncover potential usability issues. Usability testing helps identify deviations from usability standards and provides improved design directions as part of its iterative design process.

 

Uninterrupted application availability can be achieved by focusing on the performance aspects of the business application. The prime focus needs to be on performance throughout the application life cycle: from requirements gathering and understanding business forecasts, to accounting for seasonal and peak workloads, capacity planning for production, and ensuring the right disaster recovery strategies, such as multiple back-ups across geographies. All of this needs to be coupled with the right performance validation approach.

 

Performance testing should not only focus on simulating user load. It should also simulate critical business transactions and resource-intensive operations, all under realistic patterns of usage. While certifying applications for performance, testing teams need to ensure that the user load factor takes into consideration growth projections for at least the next five years, along with peak seasonal hits (a rough back-of-the-envelope calculation is sketched below). This helps the organization ensure that the application scales to handle not only this year's peak traffic, but online customer traffic for the next five years.
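
As a rough sketch of how those growth and seasonality assumptions translate into a target load figure, consider the following; the growth rate, planning horizon and peak multiplier are placeholders to be replaced with the organization's own forecasts:

```python
# Illustrative sketch: derive a target concurrent-user figure for performance
# testing from current traffic, a growth forecast and a seasonal peak factor.
# All inputs are placeholder assumptions for the example.

current_peak_concurrent_users = 2_000   # observed peak today
annual_growth_rate = 0.20               # 20% year-on-year growth forecast
planning_horizon_years = 5              # certify for at least five years out
seasonal_peak_multiplier = 3.0          # e.g. holiday-sale spike vs. normal peak

projected_users = current_peak_concurrent_users * (1 + annual_growth_rate) ** planning_horizon_years
target_test_load = projected_users * seasonal_peak_multiplier

print(f"Projected peak in {planning_horizon_years} years: {projected_users:,.0f} users")
print(f"Target load for the performance test: {target_test_load:,.0f} users")
```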

 

While all this sounds good, the common client concern with such preparation is the need to set up production-like performance environments to enable thorough performance testing of online business applications. Setting up such an environment requires a huge CAPEX investment and, worst of all, the environment remains underutilized once the performance testing exercise is complete. Leveraging the cloud can help organizations quickly and effectively set up production-like performance environments and convert this CAPEX requirement into OPEX. This pay-as-you-go model of testing, in the form of cloud-based environments and tools, is a modern way for an organization to stay cost effective in the current economic scenario while achieving thorough, end-to-end performance testing of online business applications.

 

However, organizations need to realize that moving an application to the cloud does not mean access to infinite resources. Most organizations make this assumption while moving to the cloud, and it can prove very costly. Whether an application runs in the cloud or on premises, it still needs to be designed to handle application and availability failures diligently. Even in the cloud, the organization needs to sign up for specific computing power and a certain amount of storage for the anticipated peak user load. Any wrong forecast of these factors, or of the traffic growth pattern, can and will result in application unavailability for users. Further, whether in the cloud or not, a disaster recovery backup plan is a must, and a multi-geography one at that; this helps avert business disruption in the event of an outage in a particular geography.

September 7, 2011

Enabling Effective Performance Testing for Mobile Applications

The mobile performance testing approach/strategy is very similar to other performance testing approaches. We just need to break the approach down to ensure that all facets relevant to performance testing are noted and taken care of.


Understanding the technical details of how mobile applications work

This is the primary step. Most mobile applications use a protocol (WAP, HTTP, SOAP, REST, IMPS or custom) to communicate with the server from wireless devices. These calls are transmitted via various network devices (e.g., routers and the gateways of the wireless service provider or ISP) to reach the mobile application server.
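
To make this concrete, here is a minimal sketch of what a scripted "mobile client" call can look like at the HTTP level: just a request carrying a mobile User-Agent header. The endpoint is a placeholder; real traffic would of course target the application server and travel over the carrier's network:

```python
# Illustrative only: a single HTTP request shaped like mobile client traffic.
# The URL is a placeholder; a real test would target the application server.
import urllib.request

MOBILE_USER_AGENT = ("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X) "
                     "AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148")

req = urllib.request.Request(
    "https://example.com/",                       # placeholder endpoint
    headers={"User-Agent": MOBILE_USER_AGENT},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.status, len(resp.read()), "bytes")
```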


Performance test tool selection

Once we know the nitty-gritty of how the mobile application works, we need to select or develop performance tools that mimic the mobile application client traffic and record it from the mobile client or a simulator. There are several tools available in the marketplace to enable this - HP LoadRunner, CloudTest, iMobiLoad, etc. Besides this, the mobile application provider will not have control over network delays; however, it is still very important to understand how network devices and bandwidths impact performance and the end user response time for the application in question. The Shunra plugin for HP LoadRunner, or Antie SAT4(A), has features that mimic various network devices and bandwidths.
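
Where a commercial tool is not yet in place, a rough home-grown sketch like the one below can illustrate the two ideas in this step: driving a number of concurrent "mobile" clients against the server, and adding an artificial delay to stand in for constrained mobile bandwidth. It is in no way a substitute for LoadRunner or the other tools mentioned, and the endpoint, client count and delay values are assumptions:

```python
# Illustrative load-generation sketch: N concurrent "mobile" clients, each with
# an artificial delay standing in for mobile network latency/bandwidth.
# Endpoint, client count and delay values are placeholder assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "https://example.com/"       # placeholder application URL
CLIENTS = 25                            # concurrent virtual mobile users
SIMULATED_NETWORK_DELAY_S = 0.3         # crude stand-in for 3G-like latency

def one_client(client_id: int) -> float:
    time.sleep(SIMULATED_NETWORK_DELAY_S)              # pretend network hop
    start = time.perf_counter()
    req = urllib.request.Request(ENDPOINT, headers={"User-Agent": "LoadSim/0.1 (Mobile)"})
    with urllib.request.urlopen(req, timeout=15) as resp:
        resp.read()
    return time.perf_counter() - start                  # server + transfer time

with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    timings = list(pool.map(one_client, range(CLIENTS)))

print(f"avg response: {sum(timings) / len(timings):.3f}s, worst: {max(timings):.3f}s")
```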

Selecting the right monitoring tool

 

Once we have zeroed in on the load-generating tool, we need monitoring tools to measure client- and server-side performance.


We can use DynaTrace, SiteScope or other APM (Application Performance Monitoring) tools to measure server-side performance. These tools capture and display, in real time, performance metrics such as response times, bandwidth usage and error rates. If monitoring is also in place on the infrastructure side, we can capture and display metrics such as CPU utilization, memory consumption, heap size and process counts on the same timeline as the performance metrics. Together, these metrics help us identify performance bottlenecks quickly, before they have a negative impact on the end user experience.
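
The kind of figures such an APM dashboard charts can also be summarised from the load script's own samples. A minimal sketch follows, with sample data invented purely for the example:

```python
# Illustrative summary of the kind of figures an APM dashboard would show,
# computed here from response-time samples collected by a load script.
# The sample data is invented for the example.
import statistics

response_times_ms = [180, 210, 195, 2300, 220, 205, 250, 1900, 230, 215]
errors = 2                      # e.g. HTTP 5xx responses observed
total_requests = len(response_times_ms)

print(f"median: {statistics.median(response_times_ms):.0f} ms")
print(f"95th percentile: {statistics.quantiles(response_times_ms, n=20)[-1]:.0f} ms")
print(f"error rate: {errors / total_requests:.0%}")
```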
   

The performance of the mobile app client is also critical because of resource limitations in CPU capacity, memory, device power/battery capacity, etc. If the mobile application consumes a lot of CPU and memory, it will take longer to load on devices, which in turn significantly impacts speed and the user's ability to multitask on the same device. If the application also drains a lot of power/battery, user acceptance of the application will drop. To address this, app plugins can be developed to measure and log mobile client performance as well. We can install such plugins on mobile devices and encourage users to use them while load is being simulated. Possible tools include WindTunnel, TestQuest, Device Anywhere, etc. The plugins can capture performance data and send it to a central server for analysis, as sketched below.
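
As a sketch of what such a plugin might report back, the snippet below packages a few client-side readings as JSON and posts them to a central collection endpoint; the URL and field names are hypothetical and not any particular tool's API:

```python
# Illustrative sketch of a client-side metrics beacon: package device readings
# as JSON and post them to a central collection server for later analysis.
# The endpoint and field names are hypothetical, not any particular tool's API.
import json
import urllib.request

sample = {
    "device_id": "test-device-01",
    "cpu_percent": 42.5,              # readings would come from the device APIs
    "memory_mb": 310,
    "battery_drain_pct_per_hr": 6.8,
    "screen": "account_summary",
}

req = urllib.request.Request(
    "https://metrics.example.com/collect",        # placeholder collector URL
    data=json.dumps(sample).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req, timeout=10)   # uncomment once a collector exists
```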

 

In a nutshell, with the right performance test strategy and tools in place, we can ensure effective mobile application performance testing. This, in turn, ensures that the organization is able to deliver high-performance, scalable apps to the business, which positively impacts top-line growth.