Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


May 24, 2010

How lean is too lean? - Making testing lean!

One of the topics that comes up often in my interactions with customers is how their current testing can be made leaner, smarter and more cost effective. While most of these customers agree that testing is a necessity, they are worried about its cost. Some of them have gone ahead and cut their testing staff and budgets, and this has adversely impacted the quality and timelines of their products and services. Can organizations go too far with cost and people cuts?

The need to change the services delivered, in order to thrive in the new world, is becoming more and more relevant in today's business environment, and choosing a path that will be appropriate in the long run is important for these customers. From my experience, there are a few levers that are central to lean thinking for testing services and can make your testing leaner, smarter and more cost effective. These levers can help you retain this crucial function and help your organization deliver high-quality products and services on time.

1. Improving upstream quality - The testing team needs to focus on enhancing upstream quality. Use early-lifecycle validation strategies to capture the "end-user experience" early in the SDLC and avoid rework.

2. Predictably delivering a program right the first time (and eliminating expensive scrap and rework) - Mitigate risk through the use of an independent validation team. By testing independently, this team can give early indicators of a program's potential risks, avoiding costly delays and poor quality.

3. Better alignment with business - Validate the technology against the processes outlined by the business, using techniques such as requirements validation, model-based testing and performance simulation testing. This improves the ability to understand, and therefore deliver, a high-quality end-user experience.

4. Reuse - Use pre-configured test platforms and reuse test assets. Pre-packaged, business-model-based testing solutions reduce dependency on package expertise, and test case generation tools can automatically generate test scripts from a pre-defined business model (see the sketch after this list). This saves time and improves the accuracy of testing.

5. Automation, end to end - Extend test automation beyond test execution to areas such as test case design. Automate progressively rather than for regression only, and automate testing across all tiers and technologies. This will deliver faster time-to-market and reduced TCO.
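To make the test-generation idea in lever 4 concrete, here is a minimal sketch in Python: a business process is modelled as a small state machine, and test scripts are derived by covering every transition. The model, its states and its actions are all hypothetical; commercial model-based testing tools work from far richer models.

```python
# Minimal sketch of model-based test generation: model a business process
# as a state machine and derive one test case per transition, so every
# edge is exercised. The "order" model below is purely illustrative.

from collections import deque

MODEL = {
    "Cart":      [("checkout", "Payment")],
    "Payment":   [("pay_ok", "Confirmed"), ("pay_fail", "Cart")],
    "Confirmed": [("ship", "Shipped")],
    "Shipped":   [],
}

def generate_test_cases(model, start):
    """Breadth-first walk that yields one test case (an ordered list of
    actions) per previously unseen transition."""
    cases, seen = [], set()
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        for action, target in model[state]:
            edge = (state, action, target)
            if edge in seen:
                continue
            seen.add(edge)
            cases.append(path + [action])       # one generated test script
            queue.append((target, path + [action]))
    return cases

for i, case in enumerate(generate_test_cases(MODEL, "Cart"), 1):
    print(f"Test case {i}: {' -> '.join(case)}")
```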

Conclusion

Cutting your testing budget may provide immediate short-term gains, but choosing a long-term path and implementing these lean themes in the context of your business will result in true business value and make your services leaner, smarter and more cost effective.

 

May 18, 2010

Business Process Validation to Increase Customer Satisfaction

Recently, I was going through the NAIC (National Association of Insurance Commissioners) website (www.naic.org) and came across a list of the top 10 Closed Confirmed Consumer Complaints. The site explains confirmed closed complaints as those that have been investigated by the state insurance department and given a resolution code, meaning the department determined that an insurer or its representatives violated a state or federal law, or a term or condition stated in the insurance contract. In simple words, closed confirmed complaints are those that violated a law or the spirit of the insurance contract, and they can result in serious customer satisfaction issues and loss of credibility.

According to the NAIC table, the top three reasons identified for confirmed closed complaints are denial of claims, delays in claim handling, and unsatisfactory claim settlements or offers. Together, these three reasons account for over 50% of the total complaints.

This is definitely not a pretty picture for insurance companies, especially in this day and age of social networking, where a dissatisfied customer is much more than just bad publicity. These dissatisfied customers have the potential to drive away new business and to shake the confidence of existing repeat customers. Conversely, earnest attempts to address the issues these customers face get noticed quickly and may well result in applause and greater business.

As an insurer, you need to be proactive in identifying similar issues in your own company and fixing them immediately, rather than waiting for the regulator to uncover them for you. Considering that many large insurers have automated substantial parts of their business processes, I began speculating whether it is possible to develop a framework for proactive business process validation. If our objective is to differentiate ourselves from the competition by being the best in customer satisfaction, then we should proactively be doing the following:

1.    List our top 10 closed confirmed consumer complaints (to start, take the top 10 list of confirmed closed complaints from the previous year).

2.    Conduct an end-to-end business process validation to identify the root cause of each pain point mentioned in the consumer complaints. For example, if the pain point is 'Claim Handling - Delays', the root cause uncovered could be that external partners (claims adjusters) are not integrated into the process, leading to delays.

3.    Upgrade the business processes and supporting applications to ensure that these root causes are eliminated. For example, we can web-enable or mobile-enable the claims interface so that external claims adjusters can access the system directly.

4.    Repeat the business process validation to ensure that the solution achieves the desired results.

If these four steps are done correctly, then we should see a substantial reduction in the number of complaints against our organization next year, especially in the 'Claim Handling - Delays' category.
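As a simple illustration of how step 4's "desired results" could be checked, here is a minimal sketch in Python that compares confirmed closed complaint counts by category, year over year. Every category name and count below is hypothetical.

```python
# Minimal sketch: did business process validation pay off? Compare
# confirmed closed complaint counts by category, year over year.
# All categories and counts are hypothetical sample data.

complaints_before = {"Claim Handling - Delays": 120,
                     "Claim Handling - Denial": 95,
                     "Unsatisfactory Settlement/Offer": 80}
complaints_after = {"Claim Handling - Delays": 45,
                    "Claim Handling - Denial": 90,
                    "Unsatisfactory Settlement/Offer": 78}

for category, before in complaints_before.items():
    after = complaints_after[category]
    reduction = (before - after) / before * 100
    print(f"{category}: {before} -> {after} ({reduction:.1f}% reduction)")
```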

 

The key element in this approach is end-to-end business process validation, and it needs to be driven by internal or external business SMEs, supported by the testing team and business analysts. If we, as the insurer, are able to reduce the number of complaints the following year, then this will have been a worthwhile process to follow.

May 17, 2010

Black or white? Or is it grey that matters?

In the past few months, I have been having conversations with clients on the right test architecture and strategy for testing transaction processing systems, especially in the financial services domain. As some of these discussions progressed to specific problem areas, I realized a few things:

• All these organizations have traditionally approached testing with a black box approach and are facing challenges in isolating the points of failure in their transaction flows.

• Most of their transaction processing systems, once monoliths, have evolved to build up layers of processing and have wrapped themselves with a service interface, while the approach to testing continues to treat the systems as monoliths.

• A fair amount of automation has been attempted, but most of it is focused on the user interface and hence dependent on UI automation tools, primarily HP Quick Test Pro and IBM Rational Functional Tester.

• The realization that data is a critical factor in testing has come quite late, and all of these organizations are trying to put in place a test data management strategy and the tools around it.

Given this background, most of these organizations are asking the all-important question: "How do I redesign my test strategy to ensure quality in my modern-day applications?"

While trying to answer this question for them, I had to face the bigger question: "Is the black box approach to testing no longer relevant for modern-day applications?" While I was tempted to answer with a quick "No", I did some introspection, and this is what I found.

The black box approach has allowed us to abstract away the application's anatomy and focus on its functional behavior. Often, this abstraction relieves the tester of the complexities of design and architecture and helps certify the quality of the application by validating its behavior. This was the right thing to do when systems were monoliths, home-grown and built over a limited time period by a finite set of people. But today, when development cycles have become increasingly shorter and the focus has shifted to "buy/build and integrate", a black box approach fails to address the gaps between components in terms of capabilities and scope of operations. The fact that each of these components is developed by a different team, with a differing level of understanding of the application's functionality, compounds the problem.

So, given this new reality, what options do we have? Should we turn to a white box approach that inspects each and every element of the application's code, programming constructs and design? I wouldn't hesitate to scream "No" here.

I feel that we need to seek a middle path; one that focuses on the functional behavior while being cognizant of the underlying structural elements and their interactions. This "grey-box" approach is based on validating each of the elements in the functional flow as a self-contained entity and ensuring functional correctness in each of them. It is also based on inspecting the data flows into and out of these functional elements and ensuring that they conform to expected structure and content.

Let me attempt to summarize this approach in a set of layman's steps (a small sketch follows the steps):

(i) Look under the hood: In a complex transaction flow, breaking the flow down into its elements is the first basic step. The idea is to detail each logical element of the process flow and treat each of these elements as a black box, with pre-specified inputs and expected outputs from the processing. The tester should focus on understanding what goes into and comes out of each of these processing elements.

 

(ii) Automate for efficiency: Once the building blocks in the validation flow have been laid out, each of them needs to be automated. The intent is to allow each validation to be performed repeatedly without adding to the testing effort.

 

(iii) Integrate for completeness: Integrate each of these automated elements and ensure that the data passed between them is compatible. The integration layer could well be a test management tool.

 

(iv) Enrich test data for coverage: Treat data separately. Create test data through synthetic creation mechanisms or extract it from production. Ensure comprehensive data sets to achieve the desired test data coverage.
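To tie steps (i) and (ii) together, here is a minimal sketch in Python that treats each element of a transaction flow as a self-contained black box with pre-specified inputs and expected outputs, and inspects the data handed from one element to the next. The elements, field names and checks are all hypothetical.

```python
# Minimal grey-box harness sketch: each processing element in a transaction
# flow is validated as a self-contained black box, and the data handed from
# one element to the next is inspected. Elements and fields are illustrative.

def validate_order(payload):          # element 1: order capture
    assert {"order_id", "amount"} <= payload.keys(), "missing order fields"
    return {**payload, "status": "validated"}

def post_payment(payload):            # element 2: payment posting
    assert payload["status"] == "validated", "order not validated"
    assert payload["amount"] > 0, "non-positive amount"
    return {**payload, "status": "paid"}

def settle(payload):                  # element 3: settlement
    assert payload["status"] == "paid", "payment not posted"
    return {**payload, "status": "settled"}

# The flow is laid out as (element, expected output status) pairs so each
# hand-off can be checked, not just the final result.
FLOW = [(validate_order, "validated"), (post_payment, "paid"), (settle, "settled")]

def run_flow(initial_payload):
    payload = initial_payload
    for element, expected_status in FLOW:
        payload = element(payload)
        # Grey-box check: inspect the data flowing out of each element.
        assert payload["status"] == expected_status, f"{element.__name__} broke the flow"
        print(f"{element.__name__}: OK -> {payload['status']}")
    return payload

run_flow({"order_id": "ORD-1", "amount": 250.0})
```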

 

So, what would be the right approach to test a multi-tier transaction processing system built on a Service Oriented Architecture: black, white or grey? The simple answer is "all of them". A good test strategy would leverage each of these approaches at different stages of the software lifecycle and combine them to ensure quality.

What would such a test strategy look like?  That is for us to explore in the next discussion.

May 11, 2010

The Changing face of Testing

Testing teams have long been viewed by IT departments as insurance, a way to assure themselves and their business partners about what is being delivered. Over the years, IT departments have spent ever more time and money trying to ascertain the delivery-worthiness of code.

More than ever, business teams are asking how testing teams could deliver better insights into, and greater value from, what is being produced by development teams. The argument is that if testing teams could serve as quality gates throughout the development lifecycle, there would be fewer surprises towards the end, and fewer tradeoffs and compromises between inadequate functionality and faster time-to-market.

Several testing teams are fast morphing into Quality Assurance teams and are introducing newer verification techniques early in the software development lifecycle. These include:

• Requirements and business blueprinting reviews to identify incomplete requirements

• Architectural audits to determine scalability and compliance issues

• Code quality analysis to detect memory leaks and redundant code (a small redundancy-detection sketch follows this list)
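As a flavour of that last technique, here is a minimal sketch in Python that flags redundant code by detecting structurally identical function bodies. Real code quality analyzers (including memory leak detectors) are far more sophisticated; the sample source and the normalization below are deliberately simplistic.

```python
# Minimal redundant-code detector: normalize away identifier names, then
# flag functions whose ASTs become identical. Illustrative only.

import ast
import copy
from collections import defaultdict

SOURCE = '''
def total(xs):
    s = 0
    for x in xs:
        s += x
    return s

def sum_all(values):
    t = 0
    for v in values:
        t += v
    return t
'''

class NameNormalizer(ast.NodeTransformer):
    """Rename every identifier to "_" so functions that differ only in
    variable names compare as structural duplicates."""
    def visit_Name(self, node):
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)

    def visit_arg(self, node):
        node.arg = "_"
        return node

def find_duplicate_functions(source):
    shapes = defaultdict(list)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            normalized = NameNormalizer().visit(copy.deepcopy(node))
            normalized.name = "_"               # ignore the function name too
            shapes[ast.dump(normalized)].append(node.name)
    return [names for names in shapes.values() if len(names) > 1]

for group in find_duplicate_functions(SOURCE):
    print("Possibly redundant functions:", ", ".join(group))
```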

Quality Assurance teams are also strengthening how they capture and articulate the business value they deliver. These efforts include:

• Progressive test automation to maximize the ROI of automation development

• Integrated set of measures and consistent reporting on various quality, reliability and availability metrics

• Translation and articulation of quality and process engineering metrics into business outcomes, e.g. business availability, throughput, support costs, etc.

The key questions, however, are:

• "Do these early Quality Assurance techniques work?"

• "Is a higher emphasis on delivering quality products really resulting in greater stability and scalability?"

• "Are organizations really spending less time and money overall, across all stages of the software development lifecycle, than before?"

• "How successful are testing teams in articulating the business value delivered by QA? What roadblocks do they face in doing this, and how could these be overcome?"

May 5, 2010

The 5W and 1H methodology of Metrics Reporting

Without knowing what to improve or what to deliver, it is extremely difficult to manage projects, and project management in software testing is no exception. A metrics-driven approach eases the life of the project manager in handling requirements with regard to people, client expectations, quality, productivity, releases, and so on. However, there is a school of thought that while software metrics have evolved, the interpretation of metrics is yet to evolve. A shift is taking place from having and reporting process-driven metrics to reporting value-added metrics. Hence the clarity of metrics, and the way one sees their value, has a strong influence on the making of critical decisions.

Take the example of releasing a business-critical application. What does it take for a project manager to sign off on a release? What data points does he or she have to validate to certify that the release is successful? While test coverage, defects, etc. are just a few of the dozen things to check, the list of data points can be exhaustive, depending on the impact of a failed release. The point is that there should never be room for interpretation in metric data. If the release has 300 defects, the QA manager should be in a position to defend that it is good to go, or to place the release on hold. How will the number 300 alone support the release decision? Will the manager look at just the number of defects, or at more information, to assure the quality of the release?

Metrics are derived from data points, and data points are what we see and collect in projects. Every attribute of the project and of the people working on its tasks is a data point (requirements, requirement changes, defects, effort, time, productivity, cost, and the activities associated with test cases and test data are a few examples).

However, the issues in current metrics schemes are:

  1. Inability to gather data due to a lack of tools or process
  2. Measurement of the data is too complex
  3. Too many metrics versus too few metrics
  4. The right metrics versus metrics of little or no value
  5. Insufficient intelligence in analyzing data and metrics, leading to wrong interpretations

While the first three points are normally managed in most top organizations through the use of tools and the setting up of process and methodology, the challenge for a working project manager is identifying the right metrics and analyzing the data.

So how does one arrive at the right metrics, the ones that make sense for the project and the stakeholders? The issue is that metrics are prescribed by the process or governance body, and the typical project manager takes everything that is recommended for the project. Hence, in most situations, the project manager either has too many metrics to manage or ends up leaving necessary data points out. One way to check the use and need of a metric is to apply the 5W and 1H methodology (What-Why-When-Who-Which-How). To understand this approach, an example is illustrated below.

1. Take a metric, say "# of defects", that is recommended to be tracked in a project

2. Apply the first W to it - "What"

  • What is the significance of "# of defects"?
  • What does the "# of defects" mean for the project?
  • What does the "# of defects" mean for the client and the release?
  • What data is required to gather that # of defects?
  • What tool is required to gather this metric?
  • What will different stakeholders do with the "# of defects"?
  • What is the definition of defects as per the client and project standards?
  • What are the variations of defects classification?
  • What is the benchmark for "# of defects"?

3. Apply the next W - "Why"

  • Why is "# of defects" important for the different stakeholders?
  • Why is "# of defects" related to the business outcome of the project?
  • Why is "# of defects" mapped to the success of the project/release?
  • Why is "# of defects" high or low?

4. Apply the third W - "When"

  • When is "# of defects" measured?
  • When is "# of defects" critical?
  • When can "# of defects" vary beyond limits?
  • When can the "# of defects" value put the project at risk?

5. Apply the fourth W - "Who"

  • Who is interested in "# of defects"?

6. Apply the fifth W - "Which"

  • Which are the sources of "# of defects"?
  • Which factors can influence "# of defects"?

7. Now apply the last question - the H "How"

  • How is the "# of defects" to be tracked?
  • How is the "# of defects" measured in the system?
  • How is the "# of defects" going to impact the downstream process?
  • How is the "# of defects" going to influence the stakeholders?
  • How is the "# of defects" computed and defined?

Using the 5W-1H method, one can work through these questions and arrive at the metric and its interpretation and relevance for the project. The example above shows an illustrative list of questions for one metric; the same exercise can be repeated for any metric. It is also recommended to map each metric to a dollar value and thus bring it closer to business value, as the sketch below illustrates. Metrics-driven project management is superior, as long as the right metrics are chosen.
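For instance, mapping "# of defects" to a dollar value might look like the minimal sketch below; all the cost figures and phase weightings are hypothetical.

```python
# Minimal sketch: translate a raw metric ("# of defects") into a dollar
# figure a stakeholder can act on. All figures are hypothetical.

# Hypothetical average cost to fix a defect, by the phase in which it is found.
COST_PER_DEFECT = {"requirements": 100, "system_test": 1_000, "production": 10_000}

defects_found = {"requirements": 40, "system_test": 25, "production": 3}

def defect_cost_exposure(counts, cost_table):
    return sum(counts[phase] * cost_table[phase] for phase in counts)

exposure = defect_cost_exposure(defects_found, COST_PER_DEFECT)
print(f"Total defects: {sum(defects_found.values())}")      # 68
print(f"Estimated cost exposure: ${exposure:,}")            # $59,000
```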

Business Value of Testing - The 4R Framework

In my interactions with CXOs on the subject of Testing and QA in their organizations, the most common question I get asked is "What is the business value of Testing and QA activities for my organization?" Like any other non-trivial, non-mathematical and non-axiomatic question, there obviously is not a single right answer. Unless, of course, you consider the answer "It depends on your situation", which in my opinion is the right answer to every management question that can be asked!

Over the years, what I have come to realize is that for most organizations the answer will be one (or more) of the "R's" in what I call the "4 R's of testing":

  • Risk Management: The classic example of this "R" is the banking industry. If you have billions of dollars of payments flowing through your systems every day, wouldn't you make sure that your payment system is thoroughly tested? The risks here are not just operational, but also reputational and fiduciary.
  • Regulation and Compliance: A great example is the pharmaceutical industry, where specific regulations around CSV (Computer System Validation) mandate by law a level of testing on any computer system.
  • Richer Customer Experience: E-commerce testing in the retail industry is such a hot and growing area precisely because of this. The switching cost on the web is practically zero for a customer, so can you really afford a buggy front end? Or, for that matter, can you afford a buggy system being used by your CEO, CFO or the board?
  • ROI: Yes, that's true. Today, many applications have moved from the back and mid office to the front office, which means they have become revenue-generating applications (for example, a trading system for a brokerage house) rather than expense-reduction applications. Upfront testing improves important parameters like system uptime and performance; after all, the cost of a revenue-generating application going down for a few hours can run into millions, while the cost of testing it is in the thousands.

The business value of testing hence really depends on the value that your organization attributes to each of the 4 R's. After all, Value = Benefits - Cost, and for each "R" the benefits perceived by the CXO are really determined by the organizational culture around risk and regulation, and by organizational preferences on customer experience and long- versus short-term ROI. A back-of-the-envelope sketch of the ROI case follows.
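As an illustration of Value = Benefits - Cost for a revenue-generating application, consider the sketch below; every figure in it is hypothetical.

```python
# Back-of-the-envelope ROI of testing a revenue-generating application.
# Every figure is hypothetical, for illustration only.

revenue_per_hour = 2_000_000       # $ earned per hour the application is up
outage_hours_avoided = 4           # outage hours testing is assumed to prevent
testing_cost = 250_000             # $ spent on upfront testing

benefits = revenue_per_hour * outage_hours_avoided
value = benefits - testing_cost    # Value = Benefits - Cost

print(f"Benefits: ${benefits:,}")          # $8,000,000
print(f"Testing cost: ${testing_cost:,}")  # $250,000
print(f"Net value: ${value:,}")            # $7,750,000
```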

So the answer to the originally posed question still remains "it depends", but perhaps it can now be expanded to "it depends on the value that your organization attributes to risk management, regulatory compliance, customer experience and TCO reduction".

Any guesses on what is the second most frequently asked question I get from CXOs regarding Testing and QA?