

Black or white? Or is it grey that matters?

In the past few months, I have been having conversations with clients about the right test architecture and strategy for testing transaction processing systems, especially in the financial services domain. As some of these discussions progressed to the specific problem areas, I realized a few things:

- All of these organizations have traditionally taken a black box approach to testing, and are facing challenges in isolating the points of failure in their transaction flows

- Most of their transaction processing systems, once monoliths, have evolved to build up layers of processing and have wrapped themselves in service interfaces, while the approach to testing continues to treat the systems as monoliths

- A fair amount of automation has been attempted, but most of it is focused on the user interface and hence dependent on UI automation tools, primarily HP QuickTest Pro and IBM Rational Functional Tester

- The realization that data is a critical factor in testing has come quite late, and all of these organizations are now trying to put in place a test data management strategy and the tools around it.


Given this background, most of these organizations are asking the all-important question: "How do I redesign my test strategy to ensure quality in my modern-day applications?"

While trying to answer this question for them, I had to face a bigger question: "Is the black box approach to testing no longer relevant in modern-day applications?" While I was tempted to answer with a quick "NO", I did some introspection, and this is what I found.

The black box approach has allowed us to abstract away the application's anatomy and focus on its functional behavior. Many a time, this abstraction relieves the tester of the complexities of design and architecture and helps certify the quality of the application by validating its behavior. This was the right thing to do when systems were monoliths, home-grown, and built in a limited time period by a finite set of people. But today, when development cycles have become increasingly shorter and the focus has shifted to "buy/build and integrate", a black box approach fails to address the gaps between components in terms of capabilities and scope of operations. The fact that each of these components is developed by a different team, with a differing level of understanding of the application's functionality, compounds the problem.

So, given this new reality, what options do we have? Should we turn to a white box approach, which inspects each and every element of the application's code, programming constructs, and design? I wouldn't hesitate to scream "NO" here.

I feel that we need to seek a middle path: one that focuses on functional behavior while being cognizant of the underlying structural elements and their interactions. This "grey-box" approach is based on validating each of the elements in the functional flow as a self-contained entity and ensuring functional correctness in each of them. It is also based on inspecting the data that flows into and out of these functional elements and ensuring that it conforms to the expected structure and content.
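To make this concrete, here is a minimal sketch, in Python, of such a data-flow check at the boundary of a single functional element. The element, the message fields, and the rules are purely illustrative assumptions, not a prescription:

# Grey-box check at a component boundary: the element is exercised as a
# black box, but the message it emits is inspected for the structure and
# content the downstream element expects. All names are illustrative.

EXPECTED_FIELDS = {"trade_id": str, "amount": float, "currency": str}

def validate_interface_message(message: dict) -> list:
    """Return a list of conformance problems in an outgoing message."""
    problems = []
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in message:
            problems.append("missing field: " + field)
        elif not isinstance(message[field], expected_type):
            problems.append(field + " has unexpected type")
    # Content-level rules, beyond structure
    if message.get("amount", 0) <= 0:
        problems.append("amount must be positive")
    if len(message.get("currency", "")) != 3:
        problems.append("currency must be a 3-letter code")
    return problems

outgoing = {"trade_id": "T-1001", "amount": 250.0, "currency": "USD"}
print(validate_interface_message(outgoing) or "conforms")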

Let me attempt to summarize this approach in a set of layman's steps; a short code sketch after the list illustrates them:

(i) Look under the hood: In a complex transaction flow, breaking the flow down into its elements is the first basic step. The idea is to detail each logical element of the process flow and treat it as a black box, with pre-specified inputs and expected outputs from its processing. The tester should focus on understanding what goes into and what comes out of each of these processing elements.


(ii) Automate for efficiency: Once each of the building blocks in the validation flow has been laid out, each needs to be automated. The intent is to allow each of the validations to be performed repeatedly without adding to the effort of testing.


(iii) Integrate for completeness: Integrate each of these automated elements and ensure that the interfacing data between them is compatible. The integration layer could well be a test management tool.


(iv) Enrich test data for coverage: Treat data separately. Create test data through synthetic creation mechanisms or by extracting it from production. Ensure comprehensive data sets to achieve the desired test data coverage.
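Here is the sketch promised above: a hypothetical three-element flow in which each element is laid out with its input contract (step i), validated by a repeatable script (step ii), chained with a compatibility check on the data handed between elements (step iii), and driven by an enriched data set (step iv). Every element, field, and value here is an assumption for illustration only:

# A hypothetical three-element transaction flow. Each element can be
# tested in isolation against its contract, then the chain is run with
# the interfacing data checked for compatibility at every hand-off.

def capture(order: dict) -> dict:
    return {**order, "status": "captured"}

def enrich(order: dict) -> dict:
    return {**order, "fee": round(order["amount"] * 0.01, 2)}

def settle(order: dict) -> dict:
    return {**order, "status": "settled"}

# Each element paired with the fields its input must carry (its contract)
PIPELINE = [
    (capture, {"order_id", "amount"}),
    (enrich,  {"order_id", "amount", "status"}),
    (settle,  {"order_id", "amount", "fee"}),
]

def run_flow(order: dict) -> dict:
    for element, required_fields in PIPELINE:
        missing = required_fields - order.keys()
        # Step (iii): interface data between elements must be compatible
        assert not missing, element.__name__ + " missing: " + str(missing)
        order = element(order)
    return order

# Step (iv): drive the same automated flow with an enriched data set
TEST_DATA = [
    {"order_id": "O-1", "amount": 100.0},
    {"order_id": "O-2", "amount": 0.01},  # boundary value
]

for row in TEST_DATA:
    print(run_flow(dict(row)))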


So, what would be the right approach to testing a multi-tier transaction processing system built on a Service Oriented Architecture: black, white, or grey? The simple answer is "all of them". A good test strategy would leverage each of these approaches at different stages of the software's lifecycle and combine them to ensure quality.

What would such a test strategy look like?  That is for us to explore in the next discussion.


Good to see the meaninglessness of the old "meanings" called out...

The types of problems we see can't even be addressed under a general umbrella of white, grey, and black. I feel the days when we could even use these terms are gone. Nowadays we deal only with business problems and the requirements... So most of my approaches are centred on business requirements: functional, non-functional, system, and derived requirements.

I take requirements as the yardstick and strategize for each area.

E.g.: Say, for performance testing:

- Take all the KPI commitments from the requirements and prioritise them for focus
- Recognise the real-life scenarios that demand performance and fix the KPIs of interest
- Derive the SUT architecture to cover those scenarios
- Identify tools that can generate just the load needed to derive the KPIs

E.g.: If it is an app server, it is the TPS; if an OSS server, the EVPS; if a web server, the RPS. If it is OEM box testing, say a switch running OSPF, take the smallest possible denomination of signalling broadcasts, i.e. flooding, and focus on tools that can generate that KPI's load.

- Identify the key performance metrics (KPMs); I am going to look at those only. I put more effort into deriving KPMs than into identifying each transaction and diagnosing it.

E.g.: request failures, average latency, errored responses, unsatisfied requests
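As an illustration only, here is one way such KPMs might be derived from raw load-test results, assuming a simple list of response records rather than any particular tool's output format:

# Deriving the KPMs above from raw results. The record format is an
# assumption; a request with no status is one that never completed.

results = [
    {"latency_ms": 120, "status": 200},
    {"latency_ms": 450, "status": 200},
    {"latency_ms": None, "status": None},  # request never completed
    {"latency_ms": 95, "status": 500},     # errored response
]

completed = [r for r in results if r["status"] is not None]
failed = [r for r in results if r["status"] is None]
errored = [r for r in completed if r["status"] >= 500]

kpm = {
    "request_failures": len(failed),
    "errored_responses": len(errored),
    "avg_latency_ms": sum(r["latency_ms"] for r in completed) / len(completed),
}
print(kpm)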

In summary, we say that X business problems are addressed by capabilities with a set of KPIs, which are stressed for performance by tools A, B, and C and measured by KPMs E, F, and G.

I get traceability to requirements, I get reusability of methods and tools, I get a benchmark, I become smart :) and next time I am predictive...

Hope to see more from you...
