Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of this profession.


March 29, 2013

Next-Gen QA - Inward Development to Outward Business Focus

In my first two blogs in this series (1 & 2), I listed the five key areas that will enable QA organizations to play a vital role in the client's IT optimization journey. Taking this discussion forward, in this blog we will look at the second parameter that drives this paradigm shift - INWARD DEVELOPMENT TO OUTWARD BUSINESS FOCUS.

QA organizations need to shift their focus from inward development to a more outward, business (UAT) focus. They must earn credibility with the business by involving domain experts in requirements gathering and functional specification analysis upfront. Alternatively, they can devise innovative engagement models to leverage the domain knowledge of business users.

Listed below are a few measures that will help establish this credibility:


·  Enable domain experts from QA to act as a bridge between IT development and business.

·  Complement the goals and efforts of the QA and UAT organizations through better collaboration, training and knowledge-sharing.

·  Verify requirements and functional specifications thoroughly; defect detection should start with these documents.

·  Reduce time to market by optimizing test scenarios, thereby easing implementation and enabling early detection of critical business defects.

·  Devise new engagement models, tools and solutions to manage expectations and enhance collaboration.

·  Identify automation tools and solutions that are easy to understand and use from a business user's perspective. The use of model-based testing and functional automation tools should be simplified with this in mind.

These measures will help identify critical business defects upfront. They will also significantly reduce overall time to market by cutting down the time and effort required for ST/SIT/UAT.

In my next blog, we will talk about another key parameter - ISOLATED TOOLS TO INTEGRATED SOLUTIONS.


See you soon.


March 21, 2013

The Testing Standard journey session @ QUEST 2013

Join Allyson Rippon, Executive Global IT Manager, UBS, and me at this year's QUEST 2013 QAI Conference & Expo in Chicago, where we will share our Testing Standard journey.

For more information on my session and the conference schedule, please visit the following link: http://www.qaiquest.org/2013/conference/the-standardization-journey-how-to-centralize-your-testing-standards/

What is a Testing Standard anyway?

In everyday life, standards have a profound positive influence on all of us. Be it assuring a level of safety, ensuring quality or providing consistency in experience, standards affect all parts of our life. From defining the safety requirements for a child seat in your car to adopting universal standards for measuring time, standards have come to define how advanced our civilisation is.


In the field of Software Testing, we are beginning to see a major emphasis on implementing a Testing Standard for IT functions. This is particularly true in regulated environments such as financial services organisations, where QA teams are expected to be directly accountable to several internal controls and external regulatory requirements. However, geographically distributed teams, varied SDLCs, technologies, tools and team autonomy complicate the implementation cycle.


How can QA managers faced with such challenges achieve their goal? How can they define a Testing Standard that is fit for purpose? How should they go about implementing it across the organisation? How can they stay focused throughout the process to ensure it delivers the expected benefits? Can they learn from the experience of people who have already done it?


Hope to see you there to find out more. 

March 20, 2013

Outsourcing the Test Function - Being Prepared

Organizations today tend to jump into outsourcing the testing function expecting significant cost savings, reduced cycle times and higher quality, without knowing exactly how to achieve these goals. Having worked on outsourced testing engagements both as the client and as the vendor, I have observed common stumbling blocks and challenges, and can share lessons learned that will help you avoid them, or at least be better prepared for them.

I will begin with things to consider and do before you start the RFP process. Once your organization has committed to proceeding, I will identify key items to address during the RFP process. What happens after you have signed the contract? The final section will take you from transition to steady state and on to continuous improvement.

To learn about these items and more, come join me and other quality professionals in Chicago for the QAI Quest Conference & Expo, April 15-19. For more information on my session, please proceed to http://www.qaiquest.org/2013/conference/testing-outsourcing-challenges-views-from-both-sides/.

Hope to see you in Chicago!


March 13, 2013

Next-Gen QA - Independent to Optimization


In my last blog, I listed the five key areas that will enable QA organizations to play a vital role in the client's IT optimization journey. Taking this discussion forward, in this blog (the second in the series) we will look at the first parameter that drives this paradigm shift - INDEPENDENT TO OPTIMIZATION.


What is "Test Optimization"? Let me illustrate this with a case study from my experience. 


A leading banking IT organization analyzed its IT spend across all phases of the SDLC for a few of its large technology programs (fresh development). The analysis revealed an interesting pattern: the cost of testing, across all phases of the SDLC, accounted for nearly 60% of the overall program cost.


Further analysis indicated that the testing effort had far exceeded development and other efforts primarily because the Program Managers had blindly factored in every type of testing - Unit Testing, System Testing, System Integration Testing, Regression Testing, Performance Testing, Security Testing, User Acceptance Testing, Build Validation, Operational Acceptance Testing, Production Readiness Testing, and so on - while estimating for the program. In short, the Program Manager was merely acting as an aggregator, consolidating the estimates shared by different teams. Deep-diving into the testing scope and estimates revealed that each team was assuming a pre-determined scope and deploying ready-made estimation frameworks and test strategies. No one was in a position to challenge the testing scope, strategy or estimates. As a result, inherent redundancies crept in and testing activities were duplicated across all phases of the SDLC.


The development team factored in effort for Unit Testing and System Testing. The independent testing teams then factored in effort for System Testing and SIT, because they considered themselves truly independent and did not want to rely on development testing. The User Acceptance Team factored in effort all the way from test planning to test case preparation, because they did not trust the business capabilities of the other teams. The Operational Acceptance Team estimated effort for running all batch jobs without even understanding why they needed to be run. As always, the Program and Business Heads challenged the overall estimates, but they were never in a position to challenge the testing scope or identify the duplication and redundancies in testing activities from a holistic view. Through this analysis, the organization clearly underscored the need for optimization: reducing duplication and redundancy and building efficiencies across all phases of the SDLC through multiple quality gates.


Today, more than ever, IT organizations and businesses are looking to QA to take on the role of an optimizer and lead this journey. Next-generation QA will be all about how QA organizations gear themselves up to play this optimizer role more effectively.


Testing effort, cost, schedule, efficiency and quality all fall under the umbrella of Test Optimization, which in certain cases may conflict with or compromise Independent Testing. QA organizations must be prepared to manage such conflicts through effective QA governance/assurance, internal controls and audits. As Test Optimization calls for improvements in both upstream (requirements, design, development) and downstream (User Acceptance Testing - UAT) phases, QA organizations will be expected to collaborate with development and business teams to realize this goal. Shift-left and shift-right (UAT and business) approaches will gain more prominence.


Listed below are a few measures that will help QA organizations achieve test optimization:


·  Prepare a comprehensive end-to-end (E2E) Test Strategy/Approach covering all test phases, from unit testing through UAT to production readiness.

   o  From a Test Optimization perspective, this is critical to reduce time to market, eliminate redundancies, focus on progressive testing and define KPIs for the different teams.

·  Focus on Test Governance (besides Test Delivery) for E2E Test Assurance.

   o  This enables assurance of E2E test process compliance across multiple stakeholders (development and business). The QA organization must play an audit/assurance role for the testing activities carried out by other teams and multiple vendors.

·  Define and measure KPIs for upstream phases (business, development and design teams) to drive home the message that "quality is everyone's responsibility". Similarly, measure the QA organization's efficiency through stringent KPIs in UAT and production. (A simple illustration of one such KPI follows this list.)

   o  This is a prerequisite for relying on multiple quality gates.

   o  This will drive the overall quality and performance management of the technology organization as a whole.

·  Look for innovative practices, processes and controls to play the governance and collaborator role effectively while retaining an independent QA stature and controls.
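
As a simple, hypothetical illustration of the kind of quality-gate KPI referred to above, the sketch below computes defect removal efficiency (DRE) per gate from defect counts. The gate names and numbers are invented for the example and are not drawn from any real program; Python is used purely for illustration.

```python
# Illustrative sketch: defect removal efficiency (DRE) per quality gate.
# DRE of a gate = defects found at that gate / (defects found at that gate
# + defects that escaped to any later gate). Gate names and counts are
# invented numbers, purely for the example.

GATES = ["Requirements review", "Design review", "Unit testing",
         "ST/SIT", "UAT", "Production"]

# Hypothetical defect counts detected at each gate for one release.
defects_found = {
    "Requirements review": 12,
    "Design review": 15,
    "Unit testing": 40,
    "ST/SIT": 39,
    "UAT": 14,
    "Production": 5,
}

def dre_per_gate(found):
    results = {}
    for i, gate in enumerate(GATES):
        caught_here = found.get(gate, 0)
        escaped = sum(found.get(later, 0) for later in GATES[i + 1:])
        if caught_here + escaped:
            results[gate] = round(100.0 * caught_here / (caught_here + escaped), 1)
    return results

if __name__ == "__main__":
    for gate, dre in dre_per_gate(defects_found).items():
        print(f"{gate:<20} DRE = {dre}%")
```

Tracking a gate-level metric like this over successive releases is one way to show whether the multiple quality gates are actually containing defects upstream.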


These measures will help the technology organization remove redundancies and improve the overall efficiency and quality.


In my next blog, we will talk about another key parameter - Inward Development to Outward Business Focus.


See you soon.


March 7, 2013

Test Tool Selection for Functional Automation - The missing dimension

"How do you select the appropriate test tool for your functional testing?"

"Going by its listed features, the automation tool we invested in seems right. But the percentage of automation we could actually achieve is very low. What did we overlook at the time of tool selection?"

"Both my applications are built on the same technology. We achieved great success in automation with one of them and failed miserably with the other. What could be the reason?"

You could keep adding questions to this list.

Do these questions sound familiar? There is plenty of literature on the internet that compares different automation tools and helps you decide which is best. The tools' 'Read Me' notes also list the operating systems and technologies each tool is compatible with. We do take all of these inputs into consideration while arriving at the right tool for our application. Despite all this, where do we go wrong? The answer lies in the dimension we miss while selecting the tool.

What is this missing dimension? It is detailed information about the specific application interfaces we will be testing with the tool. We check the tool's compatibility with the operating system (e.g., tool XXX works only on Windows, while tool YYY works on Unix/Linux) and with the technology the application interfaces are built on (e.g., tool AAA supports ActiveX controls, tool BBB supports Java). But the detail we fail to investigate is whether all the GUI objects of that particular technology are directly supported by the test tool.

Whether we succeed in automation largely depends on the test tool's compatibility with the objects in our application screens. Full tool support for all screen objects translates directly into high script-preparation productivity and greater ROI.

A test tool might support the underlying technology of the application, but no tool can guarantee 100% support for each and every GUI object of that technology. The reason is that, apart from the default GUI objects the technology provides for building application screens, programmers may invent their own objects (also called 'custom objects') and build the screens with them. These are the kinds of objects for which no tool can provide built-in functions.

Some tools allow us to map a custom object to a standard object and proceed using the built-in functions of that standard object. Often, these mappings do not work properly for the tests to be automated. In such cases the automation engineer can sometimes write a custom function to work with these objects. At other times it may not be possible to work with them at all, through either mapping or custom code. These are the moments when we wonder whether we selected the right tool!
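
To make the idea of a 'custom function' concrete, here is a minimal sketch using Selenium WebDriver in Python. It is an illustration only, not tied to any particular commercial tool, and the widget, CSS selector and URL are hypothetical.

```python
# Minimal sketch of a "custom function" for a hypothetical custom widget
# that the standard element APIs are assumed not to drive directly.
# Selector, URL and event names are illustrative assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By


def set_custom_datepicker(driver, css_selector, iso_date):
    """Set the value of a custom date-picker by scripting the DOM directly,
    because send_keys() is assumed not to work for this widget."""
    widget = driver.find_element(By.CSS_SELECTOR, css_selector)
    driver.execute_script(
        """
        arguments[0].value = arguments[1];
        // Fire the events the application is assumed to listen for.
        arguments[0].dispatchEvent(new Event('input', { bubbles: true }));
        arguments[0].dispatchEvent(new Event('change', { bubbles: true }));
        """,
        widget,
        iso_date,
    )


if __name__ == "__main__":
    driver = webdriver.Chrome()
    driver.get("https://example.com/booking")          # placeholder URL
    set_custom_datepicker(driver, "#travel-date", "2013-03-07")
    driver.quit()
```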

So, what is the best way to take care of this dimension while arriving at the right test tool for your applications? One method is a 'Static Check'; another is a 'Dynamic Check'.

Static Check:

A 'Static Check' involves gathering information from the development team about the kinds of GUI objects used in the application, and then cross-checking the tool documentation to see whether the tool supports those objects. The success of this technique depends on the communication channel between the development and testing teams, so it is better suited to agile projects where both teams work hand in hand.

Dynamic Check:

A 'Dynamic Check' requires running the tool systematically across all the application screens to check whether it can capture every object. This is better suited to applications that are already developed, where automation targets regression.
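
As a rough sketch of how a dynamic check might be approximated for a web application, the script below inventories the objects on each screen and flags element types that a record-and-playback tool may not recognize out of the box. It assumes Selenium WebDriver in Python; the URLs and the list of 'standard' tags are illustrative assumptions, not a definitive method.

```python
# Rough sketch of a "dynamic check": inventory the objects on each screen
# and flag those a test tool may not recognize out of the box.
# URLs and the notion of "standard" tags are simplifying assumptions.
from collections import Counter
from selenium import webdriver
from selenium.webdriver.common.by import By

STANDARD_TAGS = {
    "a", "button", "input", "select", "textarea", "table", "tr", "td",
    "div", "span", "img", "label", "form", "ul", "li", "option",
}

SCREENS = [
    "https://example.com/login",       # placeholder screens
    "https://example.com/dashboard",
]

def inventory_screen(driver, url):
    """Count every element on the screen and separate out non-standard tags."""
    driver.get(url)
    tags = Counter(e.tag_name.lower() for e in driver.find_elements(By.XPATH, "//*"))
    custom = {t: n for t, n in tags.items() if t not in STANDARD_TAGS}
    return tags, custom

if __name__ == "__main__":
    driver = webdriver.Chrome()
    for url in SCREENS:
        tags, custom = inventory_screen(driver, url)
        print(f"{url}: {sum(tags.values())} objects, "
              f"{sum(custom.values())} possibly custom -> {sorted(custom)}")
    driver.quit()
```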

Use the information from both checks wisely, along with the other parameters (cost, OS compatibility, technology compatibility, database support, ease of maintenance, ease of reporting, etc.), to arrive at the final decision.

While it is not possible to foresee, at the time of tool selection, every challenge the GUI objects might pose, a conscious analysis of how well various test tools handle different types of objects helps us judge their effectiveness objectively. Make an informed choice, and there will be no confusion regarding the tool selection.

March 3, 2013

Performance Modeling & Workload Modeling - Are they one and the same?


I have frequently encountered scenarios in which clients, managers and developers use the terms 'Performance Model' and 'Workload Model' synonymously. In many cases, I found that what they actually meant was the end users' usage patterns of the application's business processes and the corresponding application load. I kept wondering whether these terms have lost their meaning over time, or whether it is simply a lack of clarity about what each term specifically stands for. In this blog, I attempt to draw a clear distinction between them, because they represent two completely different concepts, each with its own meaning in the Application Performance Management (APM) space. In general terms, a mathematical or scientific 'model' represents a system with certain inputs and specific outputs.

The purpose of creating 'Performance Models' in the APM discipline during the design phase is to represent a given IT system/application as a black box that takes inputs such as user/transaction load and produces outputs such as response time, throughput and system utilization. This allows the application design and architecture to be evaluated, giving better insight into the application's performance and scalability before it is implemented. In my experience, many performance problems found in the test and deploy phases of enterprise applications are actually rooted in a few design flaws that are difficult, and sometimes impossible, to change at that late stage, leaving the application non-compliant with its performance SLAs.

To ensure that performance is taken care of right from the design phase, and not as an afterthought, 'Performance Modeling' - the process of creating performance models - should be adopted to model the system's performance, and the models should be continuously refined in later phases of the SDLC. Performance models are used to evaluate design and architectural trade-offs in the design phase, based on a scientific approach and measurements, before investing time and money in a full-blown implementation that could end in disaster if the design does not scale. Performance Modeling can be implemented using techniques such as analytical models, simulation or extrapolation, chosen according to the client scenario, cost-effectiveness and time constraints. The outcome is typically a performance prediction that can be refined with more data captured from the application/system as it gets developed.
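
As a minimal illustration of the analytical-model technique mentioned above, the sketch below uses a basic open single-queue (M/M/1) approximation to predict utilization and response time from an assumed arrival rate and service demand. The numbers are invented for illustration; real performance models are considerably richer.

```python
# Minimal sketch of an analytical performance model: an open M/M/1 queue
# approximation for a single bottleneck resource. All numbers are invented
# purely for illustration.

def predict(arrival_rate, service_demand):
    """Return (utilization, response time in seconds) for an M/M/1 queue."""
    utilization = arrival_rate * service_demand
    if utilization >= 1.0:
        return utilization, float("inf")   # resource saturates at this load
    response_time = service_demand / (1.0 - utilization)
    return utilization, response_time


if __name__ == "__main__":
    service_demand = 0.050                     # assumed 50 ms of service demand per transaction
    for arrival_rate in (5, 10, 15, 18, 21):   # assumed transactions per second
        utilization, response_time = predict(arrival_rate, service_demand)
        if response_time == float("inf"):
            print(f"{arrival_rate:>2} tps  utilization {utilization:.0%}  saturated")
        else:
            print(f"{arrival_rate:>2} tps  utilization {utilization:.0%}  "
                  f"predicted response time {response_time * 1000:.0f} ms")
```

Even a toy model like this makes the non-linear growth in response time near saturation visible at design time, which is exactly the kind of insight performance modeling aims to provide before implementation.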

On the other hand, a 'Workload Model' represents the critical business processes of a given IT system, their peak usage volumes (in terms of user or transaction load) and their percentage mix, which together form the basis for evaluating the application's performance under concurrency. For systems already in production, workload models can generally be built from web server logs or usage logs. For systems being built from scratch, the inputs for the workload model need to be captured from the application's business and technical stakeholders. If the workload model does not represent the true nature of the concurrent usage anticipated in production, the performance metrics captured from performance tests (load/stress/endurance/volume) might look compliant with the performance SLAs while the application faces severe performance problems once rolled out. Hence, workload models form the basis of the performance tests used to evaluate the system's performance.
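
To make this concrete, here is a rough sketch of how a workload model might be derived from a web server access log for a system already in production. The log format, the file name and the mapping of URL prefixes to business processes are assumptions for the example.

```python
# Rough sketch: deriving a workload model (peak hour volume and business
# process mix) from a web server access log in common log format.
# The file name and the URL-to-business-process mapping are assumptions.
import re
from collections import Counter, defaultdict

LOG_LINE = re.compile(r'\[(\d{2}/\w{3}/\d{4}):(\d{2}):\d{2}:\d{2}.*?\] "(?:GET|POST) (\S+)')

# Hypothetical mapping of URL prefixes to business processes.
BUSINESS_PROCESS = {
    "/login": "Login",
    "/accounts": "View Account",
    "/payments": "Make Payment",
    "/reports": "Generate Report",
}

def classify(url):
    for prefix, name in BUSINESS_PROCESS.items():
        if url.startswith(prefix):
            return name
    return "Other"

def build_workload_model(log_path):
    per_hour = defaultdict(Counter)          # (date, hour) -> business process counts
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.search(line)
            if match:
                date, hour, url = match.groups()
                per_hour[(date, hour)][classify(url)] += 1

    peak_key, peak_counts = max(per_hour.items(), key=lambda kv: sum(kv[1].values()))
    total = sum(peak_counts.values())
    mix = {name: round(100.0 * n / total, 1) for name, n in peak_counts.items()}
    return peak_key, total, mix

if __name__ == "__main__":
    peak, volume, mix = build_workload_model("access.log")   # placeholder log file
    print(f"Peak hour {peak}: {volume} transactions")
    for process, pct in sorted(mix.items(), key=lambda kv: -kv[1]):
        print(f"  {process:<15} {pct}%")
```

The peak-hour volume and percentage mix produced this way are the inputs that load, stress, endurance and volume tests would then be scripted against.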

To summarize, a Workload Model represents the usage pattern of the application, whereas a Performance Model represents the application's performance characteristics or behavior for inputs such as user or transaction load, improving the predictability of its performance.

So the next time you are in a discussion with a business or technical group about the strategy or action plan to improve the performance of an IT system, be clear on what the requirement is - Performance Modeling or Workload Modeling? Or is it something else entirely?