Black or white? Or is it grey that matters?
In the past few months, I have been having conversations with clients about the right test architecture and strategy for transaction processing systems, especially in the financial services domain. As some of these discussions progressed to specific problem areas, I realized a few things:
· All these organizations have traditionally taken a black box approach to testing and are facing challenges in isolating the points of failure in their transaction flows
· Most of their transaction processing systems, once monoliths, have evolved to build layers of processing and have wrapped themselves in a service interface, while the approach to testing continues to treat them as monoliths
· A fair amount of automation has been attempted, but most of it is focused on the user interface and hence dependent on UI automation tools, primarily HP QuickTest Professional and IBM Rational Functional Tester
· The realization that data is a critical factor in testing has come quite late, and all of these organizations are trying to put in place a test data management strategy and the tools around it.
Given this background, most of these organizations are asking the all-important question: "How do I redesign my test strategy to ensure quality in my modern-day applications?"
While trying to answer this question for them, I had to face the bigger question: "Is the black box approach to testing no longer relevant in modern-day applications?" While I was tempted to answer with a quick "NO", I did some introspection, and this is what I found.
The black box approach has allowed us to abstract the application's anatomy and focus on its functional behavior. Many a time, this abstraction relieves the tester of the complexities of design and architecture and helps certify the quality of the application by validating its behavior. That was the right thing to do when systems were monoliths, home-grown, and built in a limited time by a finite set of people. But today, when development cycles have become increasingly shorter and focus has shifted to "buy/build and integrate", a black box approach fails to address the gaps between components in terms of capabilities and scope of operations. The fact that each of these components is developed by a different team, with a differing level of understanding of the application's functionality, compounds the problem.
So, given this new reality, what options do we have? Should we turn to a white box approach that inspects each and every element of the application code, programming constructs, and design? I wouldn't hesitate to scream "NO" here.
I feel that we need to seek a middle path; one that focuses on the functional behavior while being cognizant of the underlying structural elements and their interactions. This "grey-box" approach is based on validating each of the elements in the functional flow as a self-contained entity and ensuring functional correctness in each of them. It is also based on inspecting the data flows into and out of these functional elements and ensuring that they conform to expected structure and content.
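To make this concrete, here is a minimal sketch of what grey-box validation of a single functional element could look like. The element (`post_payment`), its field rules, and the starting balance are hypothetical illustrations, not any real system's contract; the point is treating the element as a black box while inspecting the structure of the data flowing in and out.

```python
# Grey-box validation sketch: the element itself is opaque, but the data
# crossing its boundary is checked against an expected structure.
# "post_payment" and its rules are invented for illustration.

def post_payment(txn):
    """Stand-in for a real processing element: posts a debit against an account."""
    return {"account": txn["account"], "balance": 1000 - txn["amount"], "status": "POSTED"}

INPUT_RULES = {"account": str, "amount": (int, float)}
OUTPUT_RULES = {"account": str, "balance": (int, float), "status": str}

def conforms(record, rules):
    """Check that a record carries at least the expected fields with the right types."""
    return set(record) >= set(rules) and all(
        isinstance(record[field], typ) for field, typ in rules.items()
    )

def validate_element(element, txn, input_rules, output_rules):
    """Treat the element as a black box; inspect data going in and coming out."""
    assert conforms(txn, input_rules), "input does not match expected structure"
    result = element(txn)
    assert conforms(result, output_rules), "output does not match expected structure"
    return result

result = validate_element(post_payment, {"account": "ACC-42", "amount": 250.0},
                          INPUT_RULES, OUTPUT_RULES)
print(result["status"])  # -> POSTED
```

In a real transaction flow, the same boundary checks would wrap each processing element, so a malformed message is caught at the interface where it first appears rather than three hops downstream.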
Let me attempt to summarize this approach in a set of layman's steps:
(i) Look under the hood: In a complex transaction flow, breaking the flow down into its elements is the first basic step. The idea is to detail each logical element of the process flow and treat each of these elements as a black box, with pre-specified inputs and expected outputs. The tester should focus on understanding what goes into and comes out of each of these processing elements.
(ii) Automate for efficiency: Once each of the building blocks in the validation flow has been laid out, each of them needs to be automated. The intent is to allow each validation to be performed repeatedly without adding to the effort of testing.
(iii) Integrate for completeness: Integrate each of these automated elements and ensure that the interfacing data between these elements is compatible. The integration layer could well be a test management tool.
(iv) Enrich test data for coverage: Treat data separately. Create test data through synthetic creation mechanisms or by extracting it from production. Ensure comprehensive data sets to achieve the desired test data coverage.
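Steps (ii) through (iv) can be sketched in a few lines of code. The two processing elements (`authorize`, `settle`), their approval rule, and the data shapes below are all hypothetical; the sketch shows each element's validation automated, the elements integrated with interface checks between them, and the data enriched with boundary values plus random variation.

```python
# Sketch of steps (ii)-(iv): automate each element, integrate the chain
# with interface checks, and enrich coverage with synthetic data.
# The elements and the 5000 approval limit are invented for illustration.

import random

def authorize(txn):
    """Element 1: approve or decline based on amount (illustrative rule)."""
    return {**txn, "approved": txn["amount"] <= 5000}

def settle(auth):
    """Element 2: settle only approved transactions."""
    return {**auth, "settled": auth["approved"]}

# Each entry: (element, fields expected on the way in, fields expected on the way out).
PIPELINE = [
    (authorize, {"account", "amount"}, {"account", "amount", "approved"}),
    (settle, {"account", "amount", "approved"},
     {"account", "amount", "approved", "settled"}),
]

def run_pipeline(txn):
    """Integrate: pass data through each element, checking the interface each time."""
    data = txn
    for element, expected_in, expected_out in PIPELINE:
        assert expected_in <= set(data), f"{element.__name__}: missing input fields"
        data = element(data)
        assert expected_out <= set(data), f"{element.__name__}: missing output fields"
    return data

def synthetic_transactions(n, seed=0):
    """Enrich: fixed boundary amounts first, then random variation for breadth."""
    rng = random.Random(seed)
    boundaries = [0.01, 4999.99, 5000, 5000.01]
    return [{"account": f"ACC-{i}",
             "amount": boundaries[i] if i < len(boundaries)
             else rng.uniform(0.01, 10000)}
            for i in range(n)]

results = [run_pipeline(txn) for txn in synthetic_transactions(8)]
print(sum(r["settled"] for r in results), "of", len(results), "settled")
```

In practice, the integration layer would more likely be a test management or orchestration tool than a Python loop, and the synthetic generator would be driven by the data coverage model from step (iv), but the division of responsibilities is the same.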
So, what would be the right approach to test a multi-tier transaction processing system built on a service-oriented architecture: black, white, or grey? The simple answer is "all of them". A good test strategy would leverage each of these approaches at different stages of the software's lifecycle and combine them to ensure quality.
What would such a test strategy look like? That is for us to explore in the next discussion.