Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


August 27, 2013

Testing of Business Process Implementations

 

There is no well-defined framework or solution when it comes to a testing strategy for BPM implementation. The testing methodology, to a great extent, depends on the specific implementation (available points of interception, data protocols, transport protocols, traceability of business and design requirements, etc.). In other words, not all BPM implementations are identical in nature. However, there are two major focus points when it comes to testing a BPM implementation:

(a)    Service level validation

(b)   Business Process / Integration validation

Service level validation focuses on validating the service as a standalone entity. Tools exist that facilitate automated validation of the functionality behind the endpoint URL. Some of the common tools are IBM RIT, CA LISA, HP Service Test, Parasoft SOATest, and the Infosys IP Middleware Testing Solution. Experience from several engagements and projects shows that the focus should not be only on the functional aspects of the service, but also on the performance, security and governance aspects. This helps shift the entire validation process left.
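The idea can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of checking a service as a standalone entity across functional, performance and governance dimensions; the service call is stubbed (no real endpoint is assumed), and all names are illustrative, not from any of the tools named above.

```python
import time

def call_order_service(order_id):
    """Stub for an HTTP call to a hypothetical order-lookup endpoint."""
    return {"status": 200, "body": {"orderId": order_id, "state": "CONFIRMED"}}

def validate_order_service(order_id, sla_seconds=2.0):
    start = time.monotonic()
    response = call_order_service(order_id)
    elapsed = time.monotonic() - start

    return {
        # Functional: correct payload for the requested order
        "functional": response["status"] == 200
                      and response["body"]["orderId"] == order_id,
        # Performance: response time within the agreed SLA
        "performance": elapsed <= sla_seconds,
        # Governance: response carries the mandatory attributes
        "governance": {"orderId", "state"} <= response["body"].keys(),
    }

print(validate_order_service("ORD-1001"))
```

In practice the commercial tools above bundle these checks (plus security scanning) behind a UI; the point of the sketch is that a single service test can assert more than functional correctness.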

Business Process validation focuses on orchestration testing, or choreographic validation of composite services. It does not stop at validating the routing logic alone; there is much more. Business Process validation is often not done through the Presentation tier, which means the test requirements, tools and strategy will not be the same as those used in functional Presentation-tier testing. Key advantages of process-layer testing are:

(a)    Early Defect Detection

(b)   Reduced defect analysis effort
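To make the white-box idea concrete, here is a simplified, hypothetical sketch of validating a two-step order orchestration at the process layer. Rather than driving the presentation tier, the test intercepts the message after each step and asserts on it directly, so a failure pinpoints the defective step. All function names and rules are illustrative assumptions.

```python
def route_order(order):
    """Routing step: high-value orders go to the manual-approval queue."""
    order["queue"] = "manual-approval" if order["amount"] > 1000 else "auto-fulfil"
    return order

def enrich_order(order):
    """Enrichment step: attach a warehouse based on the order's region."""
    order["warehouse"] = {"EU": "WH-AMS", "US": "WH-DAL"}[order["region"]]
    return order

# Interception point 1: assert on the message right after routing
msg = route_order({"id": "ORD-7", "amount": 1500, "region": "EU"})
assert msg["queue"] == "manual-approval", "routing logic defect"

# Interception point 2: assert again after enrichment
msg = enrich_order(msg)
assert msg["warehouse"] == "WH-AMS", "enrichment logic defect"
```

A black-box test through the UI would only see the combined end result; here each assertion isolates one step of the flow.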

The graph below shows the defect detection pattern for two similar projects, one where white-box testing of the Business Process layer was done (Project A) and one where it was not (Project B). We can see that in Project B (black-box testing), testing is halted during the initial days by one or two critical middle-tier defects.

 

Figure 1: Defect Detection Pattern

The defects found in BPM testing are 'pinpoint' defects: they are isolated, and the defective flow or attribute is pinpointed. This is different from black-box testing, where the majority of middle-tier defects manifest to the end user as 'page not found' or 'service not available'.

To conclude, Business Process Management implementation requires a different testing strategy. The test strategy, to a great extent, depends on the underlying technology architecture. Plugging in third-party BPM products from vendors like IBM, TIBCO and Pega helps eliminate redundant development effort to a great extent (more on the different BPM products in my next blog). Industry-specific products are also available from most of these vendors. Nevertheless, validation of the integrated middle tier is of utmost importance, as the architectural DNA of every enterprise is different.

 

August 21, 2013

Performance Modeling - Implementation Challenges and Benefits

 

Continuing from my previous blog, "Performance modeling - Implementation Know-Hows", I would like to highlight the implications of not doing performance modeling, the implementation challenges, and the benefits.

 

"Okay, so what will happen if I do not implement Performance Modeling in the Design phase?" is one of the most frequent questions I hear from business stakeholders and application managers, especially when planning for application performance management.

 

Well, here are my views on the implications of not doing performance modeling:

 

1.     The predictability of the application's performance will be low.

·        No numbers to back the choices made in Design and Architecture - one has to wait until the Test phase to find out whether the system really meets its performance and scalability SLAs.

 

2.     The individual product-specific benchmarks and industry-standard benchmarks (SPECj, TPC-C, TPC-H, to name a few) are not helpful because:

·        The workload used for industry-standard benchmarks typically covers basic operations and is not representative of production-like workloads (in general, these benchmarks are better suited to choosing between two different hardware platforms).

·        Most enterprise applications are custom-built using multiple products/stacks from different vendors and will have performance characteristics different from the individual benchmarks.

 

3.     In general, Design and Architecture reviews are qualitative in nature and hence need to be backed by numbers, especially when implementing systems that are business-critical and carry financial penalties for performance SLA non-compliance.

 

So, to make informed decisions and avoid the performance and scalability risks that arise from the Design and Architecture of a software system, it is a good idea to undertake a Performance Modeling exercise.
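As a minimal illustration of what such a model can look like, the sketch below uses the basic open single-queue approximation from queueing theory: utilization U = X * S (arrival rate times service demand) and response time R = S / (1 - U). Real models cover multiple resources and layers; the service demand and workload numbers here are assumed, purely for illustration.

```python
def predict(arrival_rate, service_demand):
    """Single-resource queueing approximation: R = S / (1 - U), U = X * S."""
    utilization = arrival_rate * service_demand
    if utilization >= 1.0:
        return utilization, float("inf")  # resource saturated, queue grows unbounded
    response_time = service_demand / (1.0 - utilization)
    return utilization, response_time

# Service demand measured from a Design-phase prototype (assumed): 20 ms of CPU
for rate in (10, 25, 45):  # current, peak, and 'what-if' future workloads (req/s)
    u, r = predict(rate, 0.020)
    print(f"{rate} req/s -> utilization {u:.0%}, response time {r * 1000:.1f} ms")
```

Even this toy model makes the non-linearity visible: response time roughly doubles between 10 and 25 req/s but grows eight-fold by 45 req/s, which is exactly the kind of early, quantitative signal a Design review alone cannot give.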

 

At this point, I would like to highlight a few of the challenges encountered while implementing performance modeling:

 

1.     In the Design phase, it is sometimes not possible to build a prototype that represents critical user operations - for example, when an application depends heavily on external systems.

 

2.     The accuracy of performance models depends on the accuracy of the service demand measurements and on which layers are chosen for modeling:

·        Especially on shared hardware, measuring service demand is a tough task unless it is well coordinated with the other application teams.

·        Modeling some resources, such as the network, is not straightforward - one has to make assumptions about network bandwidth and congestion and model them as 'Delay Centers' with a fixed delay, which is not an accurate representation of a real-world network.

·        It is difficult to model some aspects of applications, such as caching and application-level queuing. Experience with different systems and applications helps in deciding which components to model and which not to.

 

3.     The tendency to be as accurate as possible makes the model heavy, complex and time-consuming to implement - for instance, with advances in server infrastructure, physical memory has become inexpensive, so modeling the 'memory' resource might not be required at all.

 

However, the benefits of implementing performance modeling are significant, and it is worthwhile to deal with some of these challenges. Let's take a look.

 

1.     One can assess performance of Design & Architecture early in SDLC and predict performance & scalability for current and future workloads.

 

2.     Surprises with respect to application performance can be minimized due to early performance validation.

 

3.     Models built in the Design phase can be reused with data captured from the Test phase, which can further be used to plan hardware capacity needs.

 

4.     It saves load-simulation licensing costs by avoiding load tests with very high virtual-user loads (5x, 10x, etc.) - scenarios that are rare in practice but for which application teams would still like to have data.

 

To conclude, performance modeling helps in understanding an IT system's performance, with a certain level of accuracy, right from the Design phase. However, the trade-off between the implementation challenges and the benefits should be reviewed thoroughly before planning a Performance Modeling exercise.

 

 

August 20, 2013

Business Process Management (BPM) Implementation - An Introduction

 

Given the nature of BPM implementations, no silver bullet exists for testing them. In this post, we will first look at how Business Process Implementations occur at enterprises, and then define a testing methodology for them.

The business world has come a long way from the days when IT or computing power was used only for accounting and administrative purposes. From being a mere facilitator, IT now runs the business in many sectors such as Banking, Telecom and Retail.

Let us take an example to understand how IT has changed over the years. Today, when a customer wishes to buy a television set, he browses the available brands, specifications and prices online, places an order for the chosen model, and makes the payment via credit card. This process invokes several other flows internally.

a)      As the order is placed, the inventory is checked, a 'unit' is blocked, and the inventory count is reduced accordingly.

b)      During this process, if the inventory stock falls below a threshold, an order might be placed with the distributor for more units.

c)       The credit card information entered by the user is validated through another gateway. A web service might return the authenticity, available balance, etc. for the credit card.

d)      When the shipping mode is selected by the user, an order is placed with the external shipping agency.

e)      Based on the shipping mode selected, the workflow might place the order immediately or once every two days, consolidating all orders.

f)       The shipping company, in turn, initiates a set of services based on this new request, and so on.

This integrated workflow is an unattended, automated form of B2B integration. It is very different from a customer visiting a shop, asking for a TV, and the shopkeeper fetching one from his store.
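The steps above can be sketched in code. This is a highly simplified, hypothetical rendering of the order workflow, with every external interaction stubbed out; a real BPM implementation would express this as an orchestration definition in a BPM product, not hand-written code, and all names and thresholds here are assumptions.

```python
INVENTORY = {"TV-42": 5}
REORDER_THRESHOLD = 3

def place_order(model, card_valid, shipping_mode):
    events = []
    # (a) block a unit and reduce the inventory count
    INVENTORY[model] -= 1
    events.append("unit-blocked")
    # (b) reorder from the distributor if stock falls below the threshold
    if INVENTORY[model] < REORDER_THRESHOLD:
        events.append("distributor-reorder")
    # (c) validate the card via a (stubbed) external payment gateway
    if not card_valid:
        return events + ["payment-declined"]
    events.append("payment-authorized")
    # (d)/(e) ship immediately or queue for the consolidated batch shipment
    events.append("ship-now" if shipping_mode == "express" else "ship-batched")
    return events

print(place_order("TV-42", card_valid=True, shipping_mode="express"))
```

Each branch in this toy flow corresponds to an integration point with an external system, which is exactly where the testing effort described later needs to concentrate.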

There are several methods of implementing the integration workflow, and real-world workflows can be far more complex than the example above. The automated workflow makes the entire transaction much faster and more convenient for the end user and the merchant. However, the fact that it is unattended and automated requires it to function correctly. The more complex the workflow ecosystem, the more integration points surface, and thus the greater the chances of failure. A defect in the integration tier can result in loss of goodwill as well as revenue.

BPM products today embed such industry-specific integration solutions and are available as Commercial-Off-The-Shelf (COTS) products. Many enterprises plug such reusable integration products into their architectural landscape. Testing this orchestration is of utmost importance. We will see how to test a BPM implementation in the next blog.

August 9, 2013

Testing Center of Excellence - Value articulation to business is the key

The recent TechValidate survey of Infosys TCoE clients, across a spectrum of industries, corroborates that 'efficiency and cost' (63%) are the primary reasons for setting up a TCoE. This comes as no surprise, as most IT organizations are moving toward articulating the business value of the services they offer. However, almost everyone faces challenges in articulating the business value of technical offerings. Some of the challenges I have faced in the testing world are:

·         Unlike development, testing is an activity that does not produce a tangible outcome

·         Testing is still considered a small subset of the gamut of activities in software development

·         Lack of industry data to derive the business benefits

 

I think we can overcome some of these challenges by shifting our focus from the testing activity to product quality. Let me share with this community my experience from a recent assignment related to value articulation of a testing service. We were standardizing the defect management capability for one of our clients. Defects were neither formally recorded nor tracked by the application teams. As part of the standardization effort, we formalized the defect attributes and defect metrics, and the application teams started using these metrics in various status reports. As a next step, we used industry data to convert the defect metrics into 'cost avoided due to defect detection before production'. The industry data we used was the difference in the cost of fixing a defect in testing versus in production. This data was very well received by management and senior leadership. Every project has a business case to justify the effort; however, this additional cost information helped management appreciate the effort the testing team spent on the various types of testing to ensure product quality.
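The conversion itself is simple arithmetic, sketched below. The cost figures are purely illustrative placeholders; in the actual engagement, industry data on the relative cost of fixing a defect in testing versus in production was used.

```python
COST_FIX_IN_TESTING = 500      # assumed average cost per defect (currency units)
COST_FIX_IN_PRODUCTION = 5000  # assumed average cost had it leaked to production

def cost_avoided(defects_found_in_testing):
    """Cost avoided = defects caught early * (production cost - testing cost)."""
    return defects_found_in_testing * (COST_FIX_IN_PRODUCTION - COST_FIX_IN_TESTING)

print(cost_avoided(40))  # 40 defects caught in testing -> 180000 avoided
```

The exact multiplier matters less than the framing: the same defect counts that were previously buried in status reports become a cost-avoidance number that business stakeholders can act on.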

The above example demonstrates how we can showcase tangible outcomes of testing - outcomes that inform business decisions and can improve business KPIs. I am certain there are many more such outcomes that are relevant to the business and can open new avenues for cost savings and improved quality. We can extend this approach to other TCoE capabilities such as Risk Based Testing, Early Defect Detection, and Root Cause Analysis of production issues.

Do share your experience and thoughts on this subject.