Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


July 31, 2013

IT Operations that beleaguered on-premise testing

In my previous blog, I mentioned how delays and overhead costs proved to be a burden while conducting on-premise testing. But these are not the only problems. Legacy datacenter operations, too, have added to the anguish. Let's take a look at the issues I encountered in some of my engagements because of clients' legacy infrastructure:

  • Lack of Agility
  • Lack of Automation
  • Availability of testing environments

Lack of Agility

There is a growing business affinity towards agile testing owing to its distinctive advantages, such as:

  • Faster testing process
  • Accuracy
  • Scalability

With the advent of Cloud computing, organizations can follow agile methodology without these hurdles. So where did legacy datacenters lag behind, and where has cloud-based testing delivered? Here are some salient points.

Agile testing is known for faster testing processes. It has made an impact in parallel testing across multiple environments, multiple application build versions and multiple releases. In these scenarios, we require multiple test servers, each with different prerequisite software, to be ready for use with minimum delay. This is easily achievable in a cloud environment because of its flexibility to accommodate new servers within a very short period of time. Legacy datacenters, however, involve long processes and considerable effort to get new servers operational and ready for the testers.

Another characteristic of agile testing is accuracy, which is directly related to test coverage. The more tests the testing team carries out, the more accurate the software delivery. In a complex, multi-release scenario, testers need to test in diversified environments, and provisioning such test environments at short notice is a challenge in a legacy datacenter. It has often been observed that, under release pressure, test coverage shrinks, which has a direct bearing on the quality of deliverables. Thanks to cloud computing, this challenge is now a thing of the past.

Agile testing requires highly scalable infrastructure. Unlike the legacy datacenter, where a single hardware box is usually provided for the development, staging and testing phases, cloud computing can provide the testing team with multiple servers carrying multiple builds and applications, resulting in faster time to market.

Lack of Automation

It takes a long time to create a test environment in a legacy datacenter. On the cloud, things move faster through automation. While creating a new test server manually can take many hours, the cloud uses templates to make one ready in a few minutes. A template is an image of a server that can be saved and stored in a common repository known as a "service catalogue". When new requirements come in, testers can themselves deploy a new server from the stored image (template) and start working on it within minutes. This not only improves test efficiency but also reduces cycle time.

During a bug-fixing exercise, when a particular environment needs to be replicated for several testers, a single reference server template can be created and then used for multiple deployments. Several test environments with the same configuration would be ready within minutes.

Similarly, during the release cycles of an application, a build gets created every week, or sometimes every day, and testers are required to do sanity/smoke testing of these builds. Creating a template for each application build and deploying a test server from it for each tester is a far more efficient way of working than manually configuring servers in a traditional datacenter. The whole deployment process can be automated using simple scripts. Manual intervention in creating test environments becomes minimal on the cloud, which provides a clear edge over the manual legacy datacenter approach.
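The blog does not name a specific cloud platform or tool, but as one possible illustration, here is a minimal Python sketch using AWS EC2 via boto3, where a pre-baked machine image plays the role of the saved template. The AMI ID, instance type and tester names are placeholders, not the author's actual setup.

```python
# Illustrative sketch: launch test servers from a saved server image ("template").
# Uses AWS EC2 via boto3 purely as an example; IDs and names are placeholders.

import boto3

def deploy_test_server(ami_id, instance_type="t2.medium"):
    """Launch one test server from a pre-baked image and wait until it is running."""
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=ami_id,              # the "template" stored in the service catalogue
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    return instance_id

if __name__ == "__main__":
    # One test server per tester for today's build (the AMI ID is a placeholder).
    build_ami = "ami-0123456789abcdef0"
    for tester in ["tester-a", "tester-b", "tester-c"]:
        server_id = deploy_test_server(build_ami)
        print(f"{tester}: test server {server_id} is ready")
```

A script like this can be wired into the build pipeline so that a fresh test server per tester is available minutes after each build is published.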

Availability of testing environments

A test server crash in a datacenter pulls in a number of stakeholders - the infrastructure team, the applications team and the testing team - to work on an urgent basis till the server is up and running again. This demands a concentrated effort from all the teams: the server vendor has to be contacted, the faulty hardware has to be repaired or replaced, and the applications have to be restored to their latest configuration and state before the test teams can validate the environment and resume work. Application clustering can be a workaround, but only in certain cases. This whole process becomes unnecessary on the cloud, where redundancy is configured at both the application and hardware levels, so that in case of a failure the test environment keeps running or is restarted without any manual intervention.

Conclusion

All these issues were deeply frustrating before our exposure to Cloud computing, which has brought about a paradigm shift in testing. In my forthcoming blogs, I will share exactly those features that have made IT operations so agile and automated.

July 29, 2013

Increasing agility via Test Data Management

 

Do test data requirements need to be captured as part of the functional requirements or the non-functional requirements? Is test data management a cost exercise or business critical? Since the testing team already provisions test data through some mechanism or other, do we need a test data management team?

IT organizations are often plagued with questions such as these: should we invest in setting up a test data management framework and procuring tools? Does it really help bring the necessary efficiency to application/product releases? Why do we need a test data management team coupled with a sound framework and well-deployed processes? All this and more is addressed in the article "Test Management - Left Shift for Business Agility", which provides insights into its inevitability and what organizations must do.

http://www.infosys.com/IT-services/independent-validation-testing-services/features-opinions/Pages/test-data-management.aspx

July 25, 2013

Performance Modeling - Implementation Know-Hows

As an extension to my previous blog, titled 'Performance modeling & Workload Modeling - Are they one and the same?', I would like to share a few insights about the implementation know-hows of performance modeling for IT systems in this post.

Performance modeling for a software system can be implemented in the design phase and/or the test phase, and the objectives in these two phases differ slightly. In the design phase, the objective is to validate quantitatively whether the chosen design and architectural components meet the required SLAs for a given peak load. In the test phase, on the other hand, the objective is to predict the performance of the system under anticipated future loads and on the production hardware infrastructure.

In either case, performance modeling can be done using analytical models or simulation techniques. In my view, analytical models are easier to use and less expensive (with acceptable deviations) than simulation software, which has to be either built or purchased from the marketplace. I suggest using Queuing Network Models (QNMs), an analytical technique, as they are time-tested and mathematically proven. QNMs are widely used in telecommunications, operations research and traffic engineering, to name a few fields.

Before we dive into the nitty-gritty of QNMs, I would like to bring out a simple point: a software system is a network of resources/service centers, with each service center associated with a queue. To model application performance, it is necessary to consider all such resources - CPU, memory, disk and network - at the different application layers, i.e. web/app/DB/EIS.

The following are the primary steps in a performance modeling exercise using QNMs/analytical models:

1. Identify the layers of the given application that are significant from a processing standpoint. The exact layers depend on the application domain, its functionality and its end users. For instance:
      • The application server and database server are the significant layers for a retail banking application
      • The web server is the primary processing layer for a marketing campaign application

2. Identify the compute-intensive resources, such as CPU, memory, disk and network, within each layer. For example:
      • For the database layer, CPU and disk are the most compute-intensive resources
      • Application and web servers are predominantly CPU and memory intensive

3. Categorize the application workload as 'open' or 'closed' and choose the right QNM. A system is said to be 'open' if incoming requests arrive from an effectively unbounded population, with no fixed limit on their number. A system is said to be 'closed' if only a finite user or transaction load is permitted. In general, batch systems are closed in nature and OLTP systems are open.

4. Capture the 'service demand' of each resource by running one or two simple-to-moderate load tests. Service demand is an attribute of a resource and is constant; it does not change with different concurrent user/transaction loads (see the note on the Service Demand Law after this list).

5. Using the QNM, one can predict the performance of the software system by changing the inputs. Below are the typical inputs and outputs for a performance model (a small worked sketch follows after this list):
      a. Inputs - arrival rate of requests (user/transaction load, incoming messages, etc.)
      b. Outputs - residence time, wait time and utilization
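For step 4, one standard way to derive service demand from such a load test is the Service Demand Law from operational analysis (the notation here is mine, not the blog's):

\[
D_i = \frac{U_i}{X_0}
\]

where \(U_i\) is the measured utilization of resource \(i\) during the test and \(X_0\) is the system throughput (completed transactions per unit time) over the same interval. Because utilization and throughput rise together with load, their ratio stays roughly constant, which is why service demand can be treated as a fixed attribute of the resource.

To make step 5 concrete, here is a minimal sketch of an open QNM solver in Python. It treats each resource as a single queueing service center, applies the Utilization Law (U = X·D) and the standard open-network residence-time formula R = D/(1 - U); the resource names, service demands and arrival rate are hypothetical, not taken from the blog.

```python
# Minimal open QNM solver: each resource is a queueing service center with a
# fixed service demand (seconds of service per transaction). Inputs are the
# service demands (step 4) and the arrival rate; outputs are utilization,
# residence time and wait time per resource (step 5).

def solve_open_qnm(service_demands, arrival_rate):
    """Predict per-resource utilization and residence time for an open QNM."""
    results = {}
    for resource, demand in service_demands.items():
        utilization = arrival_rate * demand           # Utilization Law: U = X * D
        if utilization >= 1.0:
            raise ValueError(f"{resource} saturates at {arrival_rate} req/s")
        residence = demand / (1.0 - utilization)      # residence time at an open queueing center
        results[resource] = {
            "utilization": utilization,
            "residence_time": residence,
            "wait_time": residence - demand,          # queueing delay only
        }
    return results

if __name__ == "__main__":
    # Hypothetical service demands (seconds per transaction) captured from a load test
    demands = {"web_cpu": 0.010, "app_cpu": 0.030, "db_cpu": 0.020, "db_disk": 0.040}
    predictions = solve_open_qnm(demands, arrival_rate=15.0)   # 15 transactions/second
    for name, r in predictions.items():
        print(f"{name}: U={r['utilization']:.0%}, R={r['residence_time'] * 1000:.1f} ms")
    total_response = sum(r["residence_time"] for r in predictions.values())
    print(f"Predicted end-to-end response time: {total_response * 1000:.1f} ms")
```

Changing the arrival rate in such a model shows how utilization and response time grow as the anticipated load increases, which is exactly the prediction exercise described in step 5.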

 

In both the design and test phases, the above steps are common, but with a slight difference. In the design phase, one does not have the luxury of building the entire system with all the external systems and third-party interfaces. Hence, it is advisable to develop a prototype or proof of concept (PoC) of a few use cases that represent the major, critical end-user actions. In the test phase, however, data can be captured from a completely developed system, making the results more accurate. The ideal approach is to implement performance modeling in the design phase and fine-tune it with more accurate data points in the test phase and thereafter.

To conclude, implementing performance modeling depends on the SDLC phase, the objective and the complexity of the application under consideration. In my next blog, I will share the implementation challenges and the benefits of performance modeling.

July 22, 2013

Next-Gen QA - Output To Outcome Based Engagement Models and KPIs

 

Continuing the Next-Gen QA series, we take a look at the fifth and final parameter that will help in the paradigm shift - OUTPUT TO OUTCOME BASED ENGAGEMENT MODELS AND KPIs.

The success and value of QA organizations will be measured not merely by inward, output-focused testing KPIs such as the number of defects raised and schedule/effort adherence; it will also be measured by business outcomes and by how QA helps IT and the business achieve goals such as those listed below:

  • Stability of systems in production, without show-stoppers and major outages
  • Reduction in the maintenance budget of subsequent releases
  • Reduction in testing effort/cost over time through increased productivity
  • Reduction in time to market for new products/services
  • 100% regulatory compliance

In the future, one can expect to see new pricing and engagement models oriented toward business outcomes. Metrics analysis and benchmarking against industry standards will be done on a continuous basis. QA organization goals will be derived from IT/business goals to make them more relevant and purposeful.

 

You can access the other blogs in this Next-Gen QA series at:

  • QA Basic to QA Advisory Service
  • Isolated Tools to Integrated Solutions
  • Inward Development to Outward Business Focus
  • Independent to Optimization
  • Next-Gen QA - Five Paradigm Shifts that can change the game

July 12, 2013

Accelerating Business Intelligence through Performance Testing of EDW Systems

Enterprise Data Warehouse (EDW) systems help businesses make better decisions by collating massive amounts of operational data from disparate systems and converting it into a format that is current, actionable and easy to comprehend. Gartner has identified performance as one of the key differentiators for data warehouses. With data warehouses growing in size, meeting performance and scalability requirements has been an incessant challenge for enterprises. EDW performance testing uncovers bottlenecks and scalability issues before go-live, thereby reducing performance-related risks and scalability concerns. EDW performance testing covers the following key aspects:

  • Performance of jobs to Extract, Transform and Load (ETL) data into the EDW
  • Performance during report generation and analytics
  • Scalability of the data warehouse
  • Stability and resource utilization

However, there are a few challenges with EDW performance testing, namely:

  • Complexity due to the huge volume of data
  • Load simulation challenges due to self-service and ad-hoc reporting and analytics
  • Interdependency on interfaces and external systems

To address these challenges, one must start with a clear performance testing strategy, i.e. defining objectives such as benchmarking, troubleshooting, scalability and capacity assessment of the EDW. Once the objectives are set, the next step is to model the EDW workload. The workload can be a mix of six types of transactions: batch load, reporting, online analytical processing, real-time load, data mining and operational BI. After the workload is determined, EDW performance characterization is done by simulating the workload and analyzing the results.
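As a small illustration of the workload-modeling step, the sketch below splits an overall transaction rate across the six workload types according to an assumed mix; the percentages and the total rate are hypothetical and would, in practice, come from production statistics or business projections.

```python
# Hypothetical EDW workload model: split an overall transaction arrival rate
# across the six workload types by their expected share of the mix.
# The shares below are illustrative, not measurements from a real system.

WORKLOAD_MIX = {
    "batch_load": 0.05,
    "reporting": 0.40,
    "olap": 0.25,
    "real_time_load": 0.10,
    "data_mining": 0.05,
    "operational_bi": 0.15,
}

def per_type_rates(total_rate_per_hour):
    """Return the target arrival rate (transactions/hour) for each workload type."""
    assert abs(sum(WORKLOAD_MIX.values()) - 1.0) < 1e-9, "mix must sum to 100%"
    return {name: total_rate_per_hour * share for name, share in WORKLOAD_MIX.items()}

if __name__ == "__main__":
    for workload, rate in per_type_rates(total_rate_per_hour=12000).items():
        print(f"{workload}: {rate:,.0f} transactions/hour")
```

These per-type rates can then be fed into the load-generation tool of choice to simulate the mixed workload during performance characterization.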

For ETL, the performance at each stage of the ETL process has to be measured and verified against the SLAs; the slowest part of an ETL process is usually the database load phase. Metrics such as total load time, number of records updated per hour, number of feeds consumed per run and server resource utilization have to be collected and analyzed. An incremental approach, in terms of volume, load and coverage, should be adopted while strategizing the performance testing.
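As a toy illustration of collecting and checking such metrics, the snippet below derives records-per-hour throughput from a batch-load window and compares it against an assumed SLA; all the figures are made up for the example.

```python
# Toy ETL throughput check: derive records/hour from a measured batch window
# and compare it against an assumed SLA target. All values are illustrative.

from datetime import datetime

def records_per_hour(records_loaded, start, end):
    """Throughput of the load phase in records per hour."""
    elapsed_hours = (end - start).total_seconds() / 3600.0
    return records_loaded / elapsed_hours

if __name__ == "__main__":
    start = datetime(2013, 7, 12, 1, 0, 0)    # batch window start (illustrative)
    end = datetime(2013, 7, 12, 3, 30, 0)     # batch window end
    throughput = records_per_hour(9_000_000, start, end)
    sla_target = 3_000_000                    # assumed SLA: records per hour
    status = "meets" if throughput >= sla_target else "misses"
    print(f"ETL load throughput: {throughput:,.0f} records/hour ({status} the SLA)")
```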

Load, scalability and endurance tests are important for EDW performance validation. Load testing measures and validates the performance of business-critical reports and dashboards. Scalability testing validates whether the underlying hardware has sufficient capacity to handle growth, and it detects bottlenecks when the EDW is subjected to large data volumes and many concurrent users. Endurance tests help detect stability, memory-leak and resource-consumption issues.

Superior EDW performance for decision support depends on the response time of data-fetch queries and analytics. Performance testing validates whether the hardware and software are tuned optimally for the expected response times, and it confirms whether the EDW can withstand the anticipated future load, thereby ensuring business continuity.

An EDW aggregates data from various operational sub-systems, such as sales, marketing and customer support, each of which can impact EDW performance. Testing uncovers any performance issues caused by these sub-systems.

Vendors who supply the software and hardware for building the EDW claim various performance characteristics for their products. Since IT infrastructure varies from enterprise to enterprise, there can be inconsistencies. Testing helps benchmark EDW performance and take corrective actions before go-live.