Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


February 28, 2013

Next-Gen QA - Five Paradigm Shifts that can change the game

 


 

IT professionals who started their careers in the 1980s or early 1990s can easily understand how testing services have emerged as one of the key pillars of IT service organizations. If there is any industry or service that has grown as a result of the 'inefficiencies' in the earlier phases of the Software Development Life Cycle (SDLC), it is undoubtedly "testing". The fact that testing has been growing steadily - at 15% CAGR and with an expected market size of $29 billion by 2016 - implies that the rise in inefficiency levels continues.

 

The growth story of independent testing in the past decade has been phenomenal. This growth is primarily attributed to ineffective quality gates at the early stages of the SDLC necessitating independent testing, which has been positioned as a panacea for all the ills of poor quality. Now the stage is set for Next-Gen QA, where client organizations are expected to optimize the overall testing effort, cost and duration through a holistic approach. As organizations are pushed to extract greater value from every dollar spent, there is an urgent need, across industries, to look for optimization opportunities in testing by identifying and removing redundancies across the SDLC.

 

While IT organizations understand the imperatives for establishing an independent/central QA organization to realize their business goals, they are also looking for agility and innovation to optimize their overall testing effort, cost and quality without compromising the independent stature of the QA organization. Further, the quality and value of testing will be measured by what accrues to the business and to overall IT, in the form of tangible business outcomes and system stability in production. This expectation has increased the responsibilities and multiplied the complexities for QA/testing organizations.

 

QA organizations are expected to play a pivotal role in meeting this challenge and helping client IT organizations in their optimization journey. To rise to this challenge, I believe that QA organizations have to think radically and should not hesitate to embrace innovative approaches. Some of these approaches will challenge the basic premise of independent testing and will demand a paradigm shift in the following five critical areas:

 

1.    Independent to Optimization

2.    Inward Development to Outward Business Focus

3.    Isolated Tools to Integrated Solutions

4.    QA basic to Advisory Service

5.    Output to Outcome-based Engagement Models and KPIs

 

This shift necessitates developing a new breed of test professionals with diversified skill sets, or adopting innovative resourcing models to supplement the skills needed to meet these challenges.

 

Over the next few weeks, through a series of blogs, I will take you through each of these key areas and share my perspective on how QA organizations can gear up for the paradigm shift in each of them.

 

See you soon

February 11, 2013

8 Key Test Data Management Challenges

Today's uncertain business environment is marked by budget cuts, intense competition, job cuts and much more. In such a scenario, addressing Test Data Management (TDM) with the right approach will not only save significant effort and cost but also bring business agility, reduced time to market, better reliability and coverage of the test cases, and lower test environment costs. As per analysts, the effort spent on TDM ranges anywhere between 12% and 14% of the overall testing effort, and for data-intensive applications it can go beyond 21%. That is a phenomenal amount of effort and time spent on TDM activities that can be addressed.

Following are the eight key challenges with respect to Test Data Management:

  • Lack of awareness of Test Data Management - Often, the testing teams themselves provision the test data they require, resulting in improper coverage and, in turn, production defects. It is observed that over 10% of the defects raised in production are due to data issues that could easily have been caught during the various testing phases
  • Lack of standardization - Different teams request data in different formats for different types of testing - system testing, data warehouse testing, performance testing, UAT, etc. - with no standard data request form or provisioning process, resulting in longer test cycle times
  • Poor data quality and data integrity - The lack of a streamlined process and an inconsistent approach to test data refresh result in poor data quality and integrity. With complex, heterogeneous systems, multiple file formats and multiple touch points, an inappropriate process leads to serious data quality and integrity issues
  • Regulatory compliance - With increased adherence to regulations such as PCI DSS, the Gramm-Leach-Bliley Act (GLBA), Basel II, the Dodd-Frank Act, Solvency II, etc., test data needs to be handled so that the sensitivity of the data is maintained and compliance requirements, which can also be geo-specific, are met
  • High storage cost - Storage, license and maintenance costs are high because different teams take full production copies. Due to the lack of reuse, redundant data sets and production clones are spread across various test environments, increasing the overall CAPEX and OPEX
  • Absence of traceability - There is no traceability from test data to test cases to business requirements, making it difficult to judge the test data coverage for a particular test case
  • Reduced efficiency - With no standard processes, teams work in silos performing manual data engineering, data provisioning, data mocking, etc., which is inefficient because there is no plan for reusing test data artifacts or optimizing data and environments
  • Huge effort spent on TDM - A significant amount of effort and time is spent moving huge volumes of production data to various environments for different types of testing. Test data identification, extraction and conditioning consume as much as 12-14% of the testing life cycle effort, and at times much more

With proper TDM processes and frameworks in place, most of the above challenges can be addressed, resulting in better coverage as well as faster time to market.
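To make this concrete, below is a minimal Python sketch of one typical TDM framework building block: deterministic masking of sensitive columns before a production extract is handed to a test environment. The file and column names are purely illustrative assumptions, not part of any specific tool; in practice a dedicated TDM or masking product would usually perform this step.

```python
import csv
import hashlib

# Hypothetical set of sensitive columns that must be masked before the
# production extract reaches a test environment (illustrative only).
SENSITIVE_COLUMNS = {"ssn", "card_number", "email"}

def mask_value(value: str, salt: str = "tdm-demo") -> str:
    """Deterministically mask a value: the same input always produces the
    same token, so referential integrity across tables is preserved."""
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return "MASKED_" + digest[:12]

def mask_extract(in_path: str, out_path: str) -> None:
    """Copy a CSV extract, masking every sensitive column on the way."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for column in SENSITIVE_COLUMNS & set(row):
                if row[column]:
                    row[column] = mask_value(row[column])
            writer.writerow(row)

if __name__ == "__main__":
    # For example, mask a customer extract before loading it into QA.
    mask_extract("customers_prod.csv", "customers_test.csv")
```

Because the masking is deterministic, the same customer identifier or email maps to the same masked token in every table, which keeps relationships intact while addressing the sensitivity and data integrity concerns listed above.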

February 10, 2013

ETL, ELT, ETLT... How to devise the best test strategy for your data warehouse applications?


The success of a data warehouse system doesn't merely depend on how intelligently it can mine data; more often than not, it also depends on how quickly it can serve the user's needs. Traditional data warehouse systems, or ETL (Extract, Transform, Load), were designed with user output in mind, and the input data had to be processed every time a user report needed to be generated. Modern methods, however, are more radical. They focus on creating cleaner input data so that reports can be created 'on the go' to fit the purpose, rather than taking a one-size-fits-all approach. ELT (Extract, Load, Transform) is the modern method of designing data warehouse systems.

In the current scenario, more and more organizations are redesigning their data warehouse platforms to ensure the correctness, completeness and appropriateness of the data they provide to their external and internal stakeholders. Testing a data warehouse system that is being redesigned is a challenge in itself, especially if it involves a change in the process, such as ETL to ELT or ETL to ETLT (Extract, Transform, Load, Transform). There is a need to devise a test strategy for testing the existing data, table structures, relationships between data models, integrity of backup databases and the underlying processes. At the same time, it is very important not to ignore the bigger picture, i.e., ensuring the correct data 'output' to the stakeholders. I have implemented this strategy in some complex data warehouse programs and realized that an effective and binding test strategy is extremely critical for the success of those systems. If I have got you interested in the topic, you can read my latest POV at http://www.infosys.com/IT-services/independent-validation-testing-services/features-opinions/Pages/data-warehouse-testing.aspx.
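To give a flavour of what such a strategy looks like at the level of an individual check, here is a small Python sketch of source-to-target reconciliation, comparing row counts and a simple column checksum between a staging table and a warehouse table. The table and column names, and the in-memory SQLite database, are illustrative assumptions; the same pattern applies whether the transformation happens before the load (ETL), after it (ELT), or both (ETLT).

```python
import sqlite3

def reconcile(conn, source_table: str, target_table: str, key_column: str) -> dict:
    """Compare row counts and a checksum of a key column between a
    source (staging) table and a target (warehouse) table."""
    cur = conn.cursor()

    def count_and_sum(table):
        cur.execute(f"SELECT COUNT(*), COALESCE(SUM({key_column}), 0) FROM {table}")
        return cur.fetchone()

    src_count, src_sum = count_and_sum(source_table)
    tgt_count, tgt_sum = count_and_sum(target_table)
    return {
        "row_count_match": src_count == tgt_count,
        "checksum_match": src_sum == tgt_sum,
        "source": (src_count, src_sum),
        "target": (tgt_count, tgt_sum),
    }

if __name__ == "__main__":
    # Illustrative in-memory example; a real test would point at the actual
    # staging area and warehouse after each load or transformation step.
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE stg_orders (order_id INTEGER, amount REAL);
        CREATE TABLE dw_orders  (order_id INTEGER, amount REAL);
        INSERT INTO stg_orders VALUES (1, 10.0), (2, 20.5);
        INSERT INTO dw_orders  VALUES (1, 10.0), (2, 20.5);
        """
    )
    print(reconcile(conn, "stg_orders", "dw_orders", "order_id"))
```

Repeating the same reconciliation at every hop, before and after each transformation, is what keeps the test strategy anchored on the correctness of the final output rather than on any single process step.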

 

February 8, 2013

Testing Times Need More than Time-Tested Actions

Business lore has it that the LED might never have been invented had a Japanese corporation not pulled the plug on a blue light-emitting diode project. This only steeled the resolve of genius innovator Shuji Nakamura, who secretly persisted despite budgetary obstacles and laboratory explosions.
 
History is replete with stories of maverick leaders who were spurred to success by those who didn't see it their way. These leaders are the greater for it because they dared to dream against all odds, break the rules, and disrupt a tangible today to build an unknown tomorrow. They knew they could fail, but they took their chances. Many were motivated not by considerations of personal gain but by a sense of responsible leadership, a belief that it was their job as a leader to light new pathways when the old ones were overgrown.
 
Click here to read the complete blog by Manish Tandon.

Why the big buzz around left shift in testing?

Let's face it. We are in a world of cut-throat competition where, for most organisations, increasing revenue, reducing costs, weighing margins and so on are regular exercises. A similar world exists in the testing market. So, given the very nature of this regimen, can we afford to ignore the demands of the market?

You must have already guessed that the answer is NO. But look at the whole thing more closely and you'll be surprised. The demands are simple and logical. How can management be confident of progress at every stage of a project? Why can't a project be put back on course mid-way if there are issues with the deliverables? And, most importantly, how can implementation delays be avoided and costs saved?

Now let's go deeper and analyse the origin of these demands. About 70% of defects are induced during the Requirements and Design phases. Take this one step further and picture the downstream impact. Not a pleasant picture, is it? The resulting problems are more prominent in large-scale projects. To add to this, imagine discovering the bad state of a project mid-way through integration testing.

The costs of corrective actions, rework by stakeholders, low team morale, regaining management support and coping with delivery pressure are a few of the after-effects. Makes you wonder, doesn't it? What if I could find these defects early and avoid the unnecessary trouble? And you're right! After all, even chefs are considerate enough to first check the pie mix rather than wait till it's fully baked. So why not do the same for million-dollar projects? Wouldn't the savings be worth working for? No doubt, a big yes. Needless to say, the advantages of taking preventive action are manifold - well-timed corrections, risk reduction, on-schedule implementation, cost savings, and so on.

Left shift in testing takes care of these ailments as they occur and ensures that there are no, or very minimal, after-effects. Simply put, left shift is about improving upstream quality. Detecting defects early is the key. This can be accomplished by static testing of requirements and design, assuring code quality and strengthening system testing, among other things.
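As a small illustration of what static testing of requirements can look like, here is a hedged Python sketch that flags ambiguous wording in requirement statements before design or coding starts. The list of terms and the sample requirements are illustrative assumptions, not an industry checklist; real reviews would combine such automated scans with manual inspection.

```python
import re

# Illustrative words that often signal ambiguous, untestable requirements.
AMBIGUOUS_TERMS = ["should", "could", "fast", "easy", "user-friendly",
                   "flexible", "approximately", "as appropriate", "etc"]

def review_requirements(requirements):
    """Return (requirement number, offending term, text) for every
    requirement that contains an ambiguous term."""
    findings = []
    for number, text in enumerate(requirements, start=1):
        for term in AMBIGUOUS_TERMS:
            if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
                findings.append((number, term, text))
    return findings

if __name__ == "__main__":
    # Hypothetical requirement statements used only to demonstrate the check.
    sample = [
        "The system shall respond to a search query within 2 seconds.",
        "The reports page should load fast and be user-friendly.",
    ]
    for number, term, text in review_requirements(sample):
        print(f"REQ-{number}: ambiguous term '{term}' -> {text}")
```

The check itself is trivial, but it runs before a single line of application code exists, which is exactly where the left shift delivers its savings.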

In my upcoming blogs, I will discuss approaches for identifying defects early. I will also discuss certain practices that can provide significant value with little effort. Till then, think on this quote from Plutarch: "To make no mistakes is not in the power of man; but from their errors and mistakes the wise and good learn wisdom for the future."

February 5, 2013

Usability Testing: Why is it beneficial to test early?

Often, when I am conducting a usability testing session, participants ask me what the ideal window is to perform usability testing in a project. Incidentally, this is the same query I have received in the past from some clients as well, during the usability testing strategy phase.

Usability testing has been an integral part of the design process for a long time now because of the upfront benefits it offers. Primarily, it provides direct user feedback, which leads to deeper user insights related to the design.
It is one of those rare occasions in the entire project where one has the opportunity to interact directly with users and get their feedback.

Usability testing can be carried out at any stage of the design process: initial wireframes, visual design, HTML prototype or live website/application. Also, there can be more than one round of usability testing, depending on the project scenario.

The best time/window for usability testing in a project is during the initial design stage, due to the following benefits it provides:

Quick to Iterate
Testing early ensures that design changes are quick to incorporate. Early-stage testing captures feedback on global and main navigation, terminology and overall page structure.
The wireframes are not yet designed in detail (they could be just a mock-up of static images linked together) and hence can quickly be changed based on user feedback and validation.

Less Rework
A change in design at the wireframe level (initial design stage) will always take less effort than changing a fully developed HTML page at a later development stage, by which time even the visual design has been incorporated.

Getting the Design Right
I mentioned in my previous blog on the business value of usability testing that early testing helps to get the design right at the initial stage itself, simply because users provide feedback on the overall design concept and information architecture at the wireframe level.
These wireframes undergo modifications based on user feedback, get aligned to most of the user expectations and form a strong foundation for the design. The majority of design issues are taken care of at the initial stage itself.

User Acceptance
Early testing helps in understanding the acceptance level of representative users during the design process and also provides further design direction.
Once designers gain confidence in their design because it has been through an initial feedback session with users, delivering a final design best suited to end users, in line with usability standards, becomes a fairly achievable target.

Cost Effective
During the initial stages, it is very cost effective to make design iterations at the wireframe level rather than making changes during later stages of development.
The cost of involving the visual design team and developers in page iterations at later stages of development is thus saved.

Usability testing is a powerful technique for discovering usability issues. Including it early helps fix major usability issues in the early stages, allowing the design to mature progressively.