Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emerging technologies that will shape the future of the profession.


November 22, 2016

A.E.I.O.U of New Age Quality Engineering

Author: Srinivas Yeluripaty, Sr. Industry Principal & Head, IVS Consulting Services

In today's digital world, 'change' is the only constant, and organizations are grappling with ways to meet the ever-changing expectations of key stakeholders, especially ubiquitous consumers. With the mobile economy transforming GDP, globalization creating "Global One" customers, the payments industry moving from "Cashless" to "Card-less" to "Contactless" transactions, and an ever-growing emphasis on security and compliance, expectations of IT are being reshaped significantly. To achieve that pace and flexibility, organizations are increasingly adopting agile methods and DevOps principles.

Continue reading "A.E.I.O.U of New Age Quality Engineering" »

November 21, 2016

SAP Test Automation approach using HP UFT based solution

Author: Kapil Saxena, Delivery Manager

Problem statement
Most businesses that run on SAP, or plan to implement it, must consider multiple factors, as their entire business runs on this backbone. Their major worries are testing effectiveness, preparedness, cost, and time to market. The Infosys SAP Testing Unit has answers to all four, which have been well implemented and proven, but I am reserving this blog for the last two.

Continue reading "SAP Test Automation approach using HP UFT based solution" »

November 18, 2016

Darwin and world of Digital transformations

Author: Shishank Gupta - Vice President and Delivery Head, Infosys Validation Solutions

When Charles Darwin proposed the theory of 'survival of the fittest', I wonder if he imagined its applicability beyond life forms. Since the advent of the internet, the bargaining power of consumers has been steadily increasing, and product and service providers often find themselves playing catch-up to provide the best product features bundled with the best consumer experience. What would Darwin's advice to product companies in today's digital world be?

Continue reading "Darwin and world of Digital transformations" »

October 6, 2016

Testing the Internet of Things Solutions

Author: Tadimeti Srinivasan, Delivery Manager

The Internet of Things (IoT) is a network of physical objects (devices, vehicles, buildings, and other items) that are embedded with electronics, software, sensors, and network connectivity to collect and exchange data.

Continue reading "Testing the Internet of Things Solutions" »

Infosys view on relevance of Continuous Validation in DevOps Journey

Author: Prashant Burse, AVP-Senior Delivery Manager

Rapid digitization is forcing organizations to develop increasingly consumer-driven enterprise strategies. This, in turn, is forcing business and IT organizations to respond with greater agility to ever-changing consumer needs. IT organizations are hence shifting from traditional waterfall-based delivery models to Agile and, eventually, to DevOps.

Continue reading "Infosys view on relevance of Continuous Validation in DevOps Journey" »

October 3, 2016

Evolution of mobile devices and Impact on Performance

Author: Yakub Reddy Gurijala, Senior Technology Architect

In the last decade, mobile devices have evolved from point-to-point communication, such as phone calls and SMS, to smart devices with advanced operating systems capable of running native applications. These changes have created a lot of opportunities and challenges for online businesses and application developers.

Continue reading "Evolution of mobile devices and Impact on Performance" »

September 30, 2016

Starwest 2016- Infosys is a Platinum sponsor

Author: Pradeep Yadlapati - Global Head of Consulting, Marketing, Alliances, and Strategy 

Starwest 2016 - As the curtains rise, we are looking forward to another year of exciting conversations with customers, prospects, and partners. The conference provides a fantastic platform for us and a great way to stay connected with practitioners, evangelists, and product vendors.

Continue reading "Starwest 2016- Infosys is a Platinum sponsor " »

September 27, 2016

Trends impacting the test data management strategy

Author: Vipin Sagi, Principal Consultant

Test data management (TDM) as a practice is not new; it has been around for years. But only in the last decade has it evolved at a rapid pace, with mature IT organizations ensuring that it is integrated into the application development and testing lifecycles. The key driver for this has been technology disruption, compelling IT organizations to deliver software faster, with high quality and at low cost.

Continue reading "Trends impacting the test data management strategy" »

September 26, 2016

Hadoop based gold copy approach: An emerging trend in Test Data Management

Author: Vikas Dewangan, Senior Technology Architect


The rapid growth in the volumes of both structured and unstructured data in today's enterprises is leading to new challenges. Production data is often required for the testing and development of software applications in order to simulate production-like scenarios.

Continue reading "Hadoop based gold copy approach: An emerging trend in Test Data Management " »

September 13, 2016

Lift the curb on Performance Engineering with Innovation

Author: Sanjeeb Kumar Jena, Test Engineer

In my previous two blogs, I discussed bringing pragmatism and a cultural mindset change to performance engineering teams. In this blog, we will evaluate the outcome of these shifts during the transformation journey from performance testing (limited to the quality assessment phase of software applications) to performance engineering (covering the entire lifespan of software applications to ensure a higher return on investment).

Continue reading "Lift the curb on Performance Engineering with Innovation" »

August 25, 2016

Agile and testing: What banks need to know

Author: Gaurav Singla, Technical Test Lead

Agile -- a software development methodology formalized in 2001 -- is now being used across various banks to cater to their IT project needs. The basic principles of the agile methodology are as follows:

Continue reading "Agile and testing: What banks need to know" »

August 16, 2016

Culture is everything in Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

We are living in a burgeoning knowledge economy, where anyone can access information from anywhere with a device that fits into their pocket. We are living in a hyper-connected world via the World Wide Web. Today, business is not the 19th century one-way 'producer-consumer' relationship; it's a two-way conversation. An effective business model is not about 'finding customers for your products' but about 'making products for your customers'.

Continue reading "Culture is everything in Performance Engineering" »

August 3, 2016

Crowd Testing: A win-win for organizations and customers

Author: Manjunatha Gurulingaiah Kukkuru, Principal Research Analyst
            Swati Sucharita, Senior Project Manager

Software is evolving with every passing day to accommodate newer uses in day-to-day life.

Continue reading "Crowd Testing: A win-win for organizations and customers" »

August 2, 2016

Pragmatic Performance Engineering

Author: Sanjeeb Kumar Jena, Test Engineer

When you hear the term 'performance engineering,' what's the first thing you think of? Is it the famed battle of 'performance testing versus performance engineering'? Or, because we are engineers, do you immediately think of making performance testing a better quality assurance process?

Continue reading "Pragmatic Performance Engineering" »

July 27, 2016

The three ingredients for a perfect test strategy

Authors: Gayathri V, Group Project Manager; Gaurav Gupta, Senior Project Manager

Last week, we won a testing proposal for a mandatory program that cuts across multiple applications. Although we, the 'pursuit team,' were celebrating the win, a voice in our head incessantly kept saying, "Winning is just one part of the game; but delivering such huge programs on time is always a tall order!"

Continue reading "The three ingredients for a perfect test strategy" »

July 26, 2016

Performance engineering in Agile landscape and DevOps

Author: Aftab Alam, Senior Project Manager

Over the last couple of years, one of the key shifts in the software development process has been to move away from the traditional waterfall approach and instead, embrace newer models like DevOps. One of the main goals of development and operations (DevOps) is to monetize investments as soon as possible. In the traditional waterfall model, UI mockups are all business owners (investors) have access to, before agreeing to invest.

Continue reading "Performance engineering in Agile landscape and DevOps" »

July 18, 2016

Four approaches to big data testing for banks

Author: Surya Prakash G, Delivery Manager

Today's banks are a stark contrast to what they were a few years ago, and tomorrow's banks will operate with newer paradigms as well, thanks to technological innovation. With each passing day, these financial institutions face new customer expectations and increasing interaction through social media and mobility. As a result, banks are changing their IT landscape as a priority, which entails implementing big data technologies to process customer data and open up new revenue opportunities. A few examples of such trending technology solutions include fraud and sanctions management, enhanced customer reporting, new payment gateways, customized stock portfolios based on searches, and so on.

Continue reading "Four approaches to big data testing for banks" »

June 27, 2016

Three Generations streaming in a Network

Author: Hemalatha Murugesan, Senior Delivery Manager

"Are you using an iPhone 6s?" asked my 80-plus-year-old neighbor as we rode the apartment lift together. I responded, "Nope, Samsung," and enquired what help he needed. He wanted assistance with using the various apps, as he had just received the iPhone 6s as a gift. "Sure, why not? I'll come over to your place," I winked and concluded.

Continue reading "Three Generations streaming in a Network" »

June 8, 2016

Golden rules for large migration

Author: Yogita Sachdeva, Group Project Manager

In my experience of working with large banks, I have worked on small programs related to acquisitions and mergers, as well as on voluminous upgrades and migrations. I often wondered what the real tie-breaker for a large program was, and what it takes to make one run. I realized that smaller programs generally get delivered successfully on the strength of the team's technical skills. Large programs, however, are normally meant to deliver a business strategy as big as the creation of a new bank. A large program encompasses a group of related projects, managed in a coordinated manner, to obtain benefits and optimize cost control.

Continue reading "Golden rules for large migration" »

June 7, 2016

Role of Validation in Data Virtualization

Author: Kuriakose KK, Senior Project Manager

How can I see the big picture and make an insightful decision, with attention to detail, right now?

Jack, the CEO of a retail organization with stores across the world, is meeting his leadership team to discuss the disturbing results of the Black Friday sale. He asks why they were unable to meet their targets, and his leaders promptly cite missed sales, delayed shipping, shipping errors, overproduction, sales teams not selling where market demand exists, higher inventory, and so on. Jack is disturbed by these answers and, on further probing, understands that most of them stem from judgment errors.

Continue reading "Role of Validation in Data Virtualization" »

June 1, 2016

Predictive Analytics Changing QA

 Author: Pradeep Yadlapati, AVP

Today's mobile economy is changing the way enterprises do business. A recent survey indicates that the mobile ecosystem generates 4.2% of the global GDP, which amounts to more than US $3.1 trillion of added economic value. It is no surprise that organizations are fast embarking on digital transformations.

The pervasiveness of devices is altering interaction as well as business models. Customers expect a seamless experience across different channels. Everyone wants one-touch information and they expect applications to display preferences and facilitate quicker and smarter decisions. 

Continue reading "Predictive Analytics Changing QA" »

May 31, 2016

Performance Testing in the Cloud

Author: Navin Shankar Patel, Group Project Manager

If a layperson, frozen in time for 10 years, suddenly wakes up and eavesdrops on a conversation between CIOs, he/she might assume that a group of weather forecasters are conversing. That is because the entire discussion is centered on 'cloud' and is interspersed with a lot of mostly unintelligible words.

Continue reading "Performance Testing in the Cloud" »

April 28, 2016

Manage tests, the automation way

Author: Swathi Bendrala, Test Analyst 

Most software applications today are web-based. To keep pace with competition and highly demanding processes within the enterprise, these web-based applications undergo frequent updates, either to add new features or to incorporate new innovations. While these updates are necessary, the amount spent to roll them out matters too.

Continue reading "Manage tests, the automation way" »

April 26, 2016

Validate to bring out the real value of visual analytics

Author: Saju Joseph, Senior Project Manager

Every day, enterprises gather tons of data streaming in from all directions. The challenge lies in taking this huge volume of data, sometimes unstructured in nature, synthesizing it, quantifying it, and increasing its business value. One way to achieve this is by moving from traditional reporting to analytics.

Continue reading "Validate to bring out the real value of visual analytics" »

April 14, 2016

Is Performance Testing really Non-Functional Testing?

Author: Navin Shankar Patel, Group Project Manager

The world of testing has evolved over the years and like most evolutions in the technology world, it has spawned a plethora of methodologies and philosophies. We now have super specializations in the testing world - UI testing, data services testing, service virtualization, etc. However, some beliefs remain unchanged. Especially the belief that the testing universe is dichotomous - characterized by functional and non-functional testing.

Continue reading "Is Performance Testing really Non-Functional Testing?" »

January 14, 2016

Crowdsourced Testing: Leveraging the power of a crowd to do the testing

Author: Harsh Bajaj, Project Manager

What is crowdsourcing?

The term 'crowdsourcing' was coined in 2006 when Jeff Howe wrote an article for Wired Magazine, "The Rise of Crowdsourcing." Crowdsourcing is a combination of two words: crowd and outsourcing. The objective is to get work done by an open-ended crowd in the form of an open call.

Continue reading "Crowdsourced Testing: Leveraging the power of a crowd to do the testing" »

December 28, 2015

Elevating QA to CXO

Author: Harleen Bedi, Principal Consultant

With rapidly evolving and emerging technology trends such as cloud, mobile, big data, and social, QA is now viewed as a key component in the modernization and optimization agenda of any CXO. This is supported by the World Quality Report, which reveals that application quality assurance and testing now accounts for almost a quarter of IT spending.

Continue reading "Elevating QA to CXO" »

December 14, 2015

Ensure the Quality of Data Ingested for True Insights

Author: Naju D. Mohan, Delivery Manager

I sometimes wonder whether it is man's craze for collecting things that is driving organizations to pile up huge volumes of diverse data at unimaginable speeds. Amidst this rush to accumulate data, the inability to derive value from the resulting heap is causing a fair amount of pain and a lot of stress for business and IT.

Continue reading "Ensure the Quality of Data Ingested for True Insights" »

September 18, 2015

Assuring quality in Self Service BI

Author: Joye R, Group Project Manager

Why self-service BI?

Organizations need access to accurate, integrated, and real-time data to make faster and smarter decisions. But, in many organizations, decisions are still not based on BI, simply because IT systems struggle to keep up with business demands for information and analytics.

Self-service BI provides an environment where business users can create and access a set of customized BI reports and analytics without any IT team involvement.

Continue reading "Assuring quality in Self Service BI" »

September 7, 2015

Role of Open Source Testing Tools

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

Quality organizations are maturing from quality control (testing the code) to quality assurance (building quality into the product) to quality management. In addition to bringing quality upfront and building it into the product, quality management also includes introducing DevOps principles into testing and optimizing the testing infrastructure (test environments and tools).

Continue reading "Role of Open Source Testing Tools" »

August 21, 2015

Are We Prepared to Manage Tomorrow's Test Data Challenges?

Author: Sunil Dattatray Shidore, Senior Project Manager

As tomorrow's enterprises embrace the latest technology trends, including SMAC (Social, Mobile, Analytics, Cloud), and adopt continuous integration and agility, it is imperative to think of more advanced, scalable, and innovative ways to manage test data in non-production environments, whether for development, testing, training, POC, or pre-production purposes. The real question is: have we envisioned the upcoming challenges and complexity in managing test data, and are we prepared and empowered with the right strategies, methodologies, tools, processes, and skilled people in this area?


Continue reading "Are We Prepared to Manage Tomorrow's Test Data Challenges?" »

August 17, 2015

Extreme Automation - The Need for Today and Tomorrow

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

We have read about the success of the 'New Horizons' spacecraft and its incredible journey to Pluto. This is extreme engineering, pushing human limits to the edge. Similarly, when we hear that the automobile industry assembles one additional car every 6 minutes, we are quite amazed at the level of automation that has been achieved.

Continue reading "Extreme Automation - The Need for Today and Tomorrow" »

August 3, 2015

Three Stages of Functional Testing 3Vs of big data

Author: Surya Prakash G, Group Project Manager

By now, everyone has heard of big data. These two words are heard widely in every IT organization and across different industry verticals. What is needed, however, is a clear understanding of what big data means and how it can be applied in day-to-day business. The concept of big data refers to huge amounts of data, often petabytes in scale. With ongoing technology changes, this data forms an important input for making meaningful decisions.

Continue reading "Three Stages of Functional Testing 3Vs of big data" »

Balancing the Risk and Cost of Testing

Author: Gaurav Singla, Technical Test Lead

A lot of things about banking software hinge on how and when it might fail and what impact that will create.

This drives all banks to invest heavily in testing projects. Traditionally, banks have been involved in testing software modules from end-to-end and in totality, which calls for large resources. Even then, testing programs are not foolproof, often detecting minor issues while overlooking critical ones that might even dent the bank's image among its customers.

Continue reading "Balancing the Risk and Cost of Testing" »

July 21, 2015

Is your testing organization ready for the big data challenge?

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

Big data is gaining popularity across industry segments. From being limited to lab research in niche technology companies, it has moved to wide commercial use. Many mainstream organizations, including global banks and insurance organizations, have already started using (open source) big data technologies to store historical data. While this is the first step towards value realization, we will soon see these platforms being used to process unstructured data as well.

Continue reading "Is your testing organization ready for the big data challenge?" »

July 14, 2015

Automation - A new measurement of client experience

Author: Rajneesh Malviya, AVP - Delivery Head - Independent Validation Solutions

A few months ago, I was talking to one of my clients who visited us on the Pune campus. She shared how happy she was with the improved client visit process. She was given a smart card like any of our employees and was registered as a visitor, and with that she could move from one building to another without much hassle. She no longer had to go through the manual entry process at each building. Like an employee, she could use her smart card at the turnstile to enter and exit our buildings, and at the same time her entry was recorded as per compliance needs. As she had been on our campus before, she could clearly experience the difference brought about by automation.


Continue reading "Automation - A new measurement of client experience" »

July 6, 2015

Automated Performance Engineering Framework for CI/CD

Author: Aftab Alam, Senior Project Manager, Independent Validation and Testing Services

with contribution from Shweta Dubey

Continuous integration is an important part of the agile development process. It is getting huge attention across the software development lifecycle (SDLC) as a way to deliver business features faster and with confidence.

Most of the time it is easy to catch functional bugs using a test framework, but performance testing also requires scripting knowledge along with load testing and analysis tools.
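
As a rough illustration of the idea (not the specific framework described in this post), a lightweight response-time gate like the one below could run as a stage in a CI pipeline after deployment to a test environment. The URL, sample size, and threshold are hypothetical placeholders.

```python
# Illustrative sketch only: a simple response-time gate for a CI job.
# A real pipeline would typically drive a proper load testing tool instead.
import statistics
import sys
import time
import urllib.request

URL = "http://test-env.example.com/health"   # hypothetical test-environment endpoint
SAMPLES = 20                                 # small probe, not a load test
P95_BUDGET_MS = 500                          # assumed performance budget

def measure_once(url):
    """Time a single request in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

def main():
    timings = sorted(measure_once(URL) for _ in range(SAMPLES))
    p95 = timings[int(0.95 * (len(timings) - 1))]
    print(f"median={statistics.median(timings):.0f}ms p95={p95:.0f}ms budget={P95_BUDGET_MS}ms")
    # A non-zero exit code fails the CI stage, stopping the pipeline on a regression.
    sys.exit(0 if p95 <= P95_BUDGET_MS else 1)

if __name__ == "__main__":
    main()
```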

Continue reading "Automated Performance Engineering Framework for CI/CD" »

June 30, 2015

Transforming Paradigms - From Quality Assurance to Business Assurance

Author: Srinivas Kamadi, AVP - Group Practice Engagement Manager.

Today's digital landscape is riddled with disruptive forces that are transforming business models and industries alike. The proliferation of social channels and the continuous creation of big data are fuelling this transformation and heating up global competition. Forces such as Social, Mobile, Analytics and Cloud (SMAC) and the Internet of Things (IoT) are now critical to delivering omni-channel experiences. These digital imperatives guide how businesses engage with their customers, employees, and stakeholders. Customers demand 24/7 connectivity and free-flowing access to information. This has made it incumbent upon companies to deliver superior customer experience in an agile fashion.

Continue reading "Transforming Paradigms - From Quality Assurance to Business Assurance" »

June 29, 2015

New Age of Testing - The WHAT, WHY and HOW?

Author: Mahesh Venkataraman, Associate Vice President, Independent Validation Services.

While testing has always been important to IT, the last decade has seen it emerge as a discipline in its own right. Hundreds of tools have been developed and deployed, commercially as well as 'openly'. New methodologies have been formulated to test the latest business and technology transformations. IT organizations today recognize testing as a critical function that assures the readiness of a system to go live (or a product to be released to the market).

Continue reading "New Age of Testing - The WHAT, WHY and HOW?" »

March 26, 2015

Accessibility Compliance: What Developers need to know and when?

At times, accessibility testing defects get the development team's attention only at the tail end of a release. Prior to this, functional testing is the priority, since the basic functionality of the system must work to get through integration and User Acceptance Testing (UAT).

It is important for the development team to understand the core concepts of accessibility as well as the exact code-level changes to be made while fixing defects. And this needs to happen well in advance, during the initial stage when requirements are signed off.

This understanding could happen in several ways. 

Continue reading "Accessibility Compliance: What Developers need to know and when?" »

October 27, 2014

Mobile Native App- Real User Experience Measurement



As we all know, testing the server and making sure the server side is up and running does not guarantee a good end-user experience. There are several SaaS solutions to measure the client-side performance of websites and mobile apps, such as SOASTA mPulse, TouchTest, WebPagetest, Dynatrace UEM, etc. How can we leverage the same techniques to measure the salesperson's experience with a mobile POS application before releasing new features, or monitor salesperson app usage and behavior just as we do real-user experience analysis for websites?

Continue reading "Mobile Native App- Real User Experience Measurement" »

September 24, 2014

Accessibility Compliance: What different User groups look for?

Accessibility compliance is gaining more strength across organizations due to legal mandates.

The Web Content Accessibility Guidelines (WCAG 2.0) are referred to as the broad guideline across geographies; they take into consideration the major physical impairments and how to meet the needs of users who have them. For accessibility compliance to be achieved successfully in a program, it is vital that the teams engaged and working together are well aware of their program-specific accessibility guidelines.

Continue reading "Accessibility Compliance : What different User groups look for ?" »

September 2, 2014

Usability Testing: What goes into user recruitment and test session readiness?

Usability testing is meant to help discover design issues and provide appropriate directions for further improving the overall user experience. But these sessions would not be possible without the right set of users to review and provide feedback.

Getting the right users
At times, it is challenging to get the right users for test sessions. For many organizations where a usability process for design has only recently been adopted, there is still little clarity on:

  • whether usability testing is required (business impact)
  • if required, what type of users would participate in the test sessions
  • whether such users can be identified and made available for the sessions
  • if available, which specific locations need to be considered.

 

Continue reading "Usability Testing: What goes into user recruitment and test session readiness?" »

August 8, 2014

Is the environment spoiling your party?

 

I frequently come across performance testing projects entangled in cost and time overruns - the culprit usually being environment issues. Since we can't wish away the environment and the issues that come with it, the next best thing is to be better prepared by figuring out the pitfalls and addressing them proactively.

Since a stable test environment is critical for test script development, load simulation and bottleneck analysis in the performance testing life cycle stages, let's take a good look at what to watch out for when we prepare for a testing cycle.

Know thy application: Any environment issue during the test execution phase is like the proverbial spanner in the works. It should come as no surprise if the performance testing team has to spend significant effort debugging and analyzing it and, of course, following up with support teams for resolution. To be effective in test environment risk assessment and issue resolution, we must know the application architecture, functionalities, workflows, and interconnecting components well. Stubs and virtualization techniques can be handy during test execution when one is familiar with the component-level details and how to use them. While investigating environment issues, the development and infrastructure teams often seek the testing team's input - so lending a hand with specifics will mean a faster turnaround.

Dependencies - wheels within wheels: Another party-pooper can be the dependencies that may impact testing way before we even run the performance test. Multi-tiered enterprise computing systems have these dependencies on each tier and layer, within and outside the enterprise boundaries. In addition to the functionalities, there are other factors at play that may impact the performance test results. These could include high resource consumption by another process hosted on the same infrastructure, execution of batch jobs, or parallel test runs by another team. An outage in the environment during the test run can force you to reschedule the test. That's why, it is all important to gather information about all the possible dependencies that may impact the test execution during the planning stage itself. It is always good to document these issues as one comes across them for reference in future test cycles.

Stay in touch: The performance testing team cannot operate in a vacuum. Team members must establish proper communication with the development, infrastructure, functional and integration testing, and release management teams right from the strategy phase to synchronize test preparation as well as execution activities. The test schedule should be published well in advance if you are using a shared test environment. Notifications must be sent to the teams concerned prior to running a test, so that they can bring up the servers, mount monitors, clear logs, and keep the environment stable during test execution. A calendar that blocks shared computing resources can keep all stakeholders up to date on test execution and reduce retest efforts significantly. A small tip: keep your contact information handy and up to date.

Think ahead: Being well-prepared is half the battle won. For testing, this means to think about possible environment failures and look for workarounds well before the actual test execution. While preparing test estimates, don't forget to factor in unknown environment issues that could adversely impact the effort and the schedule. Keeping some buffers as a percentage of the overall estimate can save you a lot of grief later. It is also important to prioritize critical business transactions for the performance test so that, if some functionalities flop during the planned test window, a test run on a subset of the transactions can provide meaningful insights into application performance. Finally, remember that time is your most precious resource. So, if the test environment becomes completely unavailable during the planned test window, utilize that time effectively in activities such as offline reporting or knowledge management and promptly schedule the test in the next window available.

So these are a few best practices to keep in mind while preparing for a performance test cycle. You can use these to build your own set of rules specific to the challenges and constraints of your set-up. The bottom line for a successful testing cycle is to keep tabs on incidents and work through issues smartly.

August 6, 2014

Is User Experience measurable?

Many organizations look for tangible numbers to justify efforts and money spent on improving user experience of websites or applications.

Of course, there have to be valid reasons to continue the investment in user experience and to withstand the reviews done intermittently on the value it provides.

Today, it is not enough to get only a subjective confirmation from users on overall design acceptance and satisfaction.

What does it mean when users say a website is 'easy to use', 'it is good', 'that was easy', or 'I saved a lot of time'?

User experience can be measured by quantifying user feedback, and the effort put into the design can thereby be validated:

1)  Qualitative Feedback
This type captures user feedback on usability or user experience statements. A Likert scale is used to capture such feedback; typically, ratings on a scale of 1-7 or 1-5 are captured. User experience statements could be:
a) 'I found the design simple to comprehend'
b) 'I could locate where I am on pages while doing my tasks'

 
2)  Quantitative feedback

This type of feedback brings more numbers to the table, letting everyone see tangible progress or issues with the design.

Here are different ways a design team can get active, measurable user feedback on their design.

 

2.1) SUS Score:

The one number that needs to be mentioned is the System Usability Scale (SUS) score.
This number indicates whether the design in progress is acceptable to a set of users and whether the design is going in the right direction. The higher the score, the higher the users' acceptance of the design. Example: a score of 85 out of 100 means users are positive about the new design and will be keen to work with it.
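
The standard SUS questionnaire has ten 1-5 Likert items with alternating positive and negative wording, and the published scoring rule maps the answers to a 0-100 score. A minimal sketch of that calculation (the example responses are made up):

```python
# Minimal sketch: computing a System Usability Scale (SUS) score for one respondent.
# Assumes the standard 10-item questionnaire answered on a 1-5 Likert scale.

def sus_score(responses):
    """responses: list of 10 integers (1-5), in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly 10 responses, each between 1 and 5")
    total = 0
    for item, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded, even-numbered items negatively worded.
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# Hypothetical, fairly positive respondent: prints 85.0
print(sus_score([5, 2, 4, 1, 5, 2, 4, 1, 4, 2]))
```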
 

2.2) Task Completion ratio:

Another number that signifies how many of the critical tasks a user is able to complete, out of the number of tasks given to perform during usability test sessions.

If 80% of the tasks can be completed by users, it means some tweaks are needed for the remaining 20%, but in general the design is going in the right direction.

 

2.3) Time for Task completion

A user taking longer than the expected time to complete a specific task signifies a problem with the way the information is laid out, the naming conventions used, or the visual clarity.
The goal is not to know the exact time in milliseconds, but to get an overall impression of whether the task is proving difficult for users to complete.

 

2.4) Number of Errors

If a user is given 5 tasks and comes across 8 errors or issues while performing or completing them, there is a problem with the design.

A minimal number of errors can baseline a design; a higher number should send the designers back to the whiteboard to analyze what went wrong.
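
As a small illustration of how these last three numbers can be pulled out of a test session, the sketch below summarizes a handful of hypothetical task records (the task names, times, and error counts are invented for the example):

```python
# Illustrative only: summarizing one usability session from simple task records.
tasks = [
    {"name": "login",       "completed": True,  "time_s": 40,  "expected_s": 30, "errors": 1},
    {"name": "search item", "completed": True,  "time_s": 55,  "expected_s": 60, "errors": 0},
    {"name": "add to cart", "completed": True,  "time_s": 80,  "expected_s": 45, "errors": 3},
    {"name": "checkout",    "completed": False, "time_s": 120, "expected_s": 90, "errors": 4},
    {"name": "view order",  "completed": True,  "time_s": 25,  "expected_s": 30, "errors": 0},
]

completion_ratio = sum(t["completed"] for t in tasks) / len(tasks)
total_errors = sum(t["errors"] for t in tasks)
slow_tasks = [t["name"] for t in tasks if t["time_s"] > t["expected_s"]]

print(f"Task completion ratio: {completion_ratio:.0%}")        # 80%
print(f"Errors across {len(tasks)} tasks: {total_errors}")     # 8
print(f"Tasks over expected time: {', '.join(slow_tasks)}")    # login, add to cart, checkout
```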

 

These are some of the numbers that can provide insights into what is going well with a design and which issues still need to be worked out.

The quality of user experience can thus be measured through both qualitative and quantitative feedback.

January 20, 2014

Social Media, Cloud, Analytics and Mobility (SCAM)

Social Media, Cloud, Analytics and Mobility: These are 4 common buzzwords that we hear today. They are indeed very much inter-related as well!  Social media allows instantaneous interactions, sharing of news, photos, videos etc. From a technical perspective, this requires elastic omnipresent storage capability.  Cloud provides this for the Social Media. The moment something is on cloud, it can be big - big data. Small data can be hosted locally. If data is big, cloud is a good medium and the data can be leveraged for analytics. This facilitates informed decision making. For an end user, this should be omnipresent, thus available at fingertips. Mobility facilitates that.

Continue reading "Social Media, Cloud, Analytics and Mobility (SCAM)" »

November 14, 2013

Are User feedback streams considered during Design ?

On the occasion of World Usability Day today, I wonder whether many users are still struggling with interfaces to perform their intended tasks, and whether their voice was heard while those interfaces were being designed, be it a website or an application targeted at laptops, tablets, or handheld devices.

The answer would still lean towards a 'yes': there may still be a good number of unhappy users. With so many sites and applications undergoing redesign, change, and content updates every day, across geographies and for so many different target users, there is a real probability that the target users in scope were never identified or available to give feedback during the design of the interfaces.

In a given project, where does the process of user feedback start and where does it end? What is the best phase of design to have such feedback sessions with users?  

The following are the possible phases in which to hold user feedback sessions as part of an iterative design process:

1) Wireframe/concept level (paper, wireframing tool)

2) Visual design level (static JPEG images with look and feel and branding)

3) HTML prototype level

4) User Acceptance Test

5) Live version

Notably, each phase provides a different type of feedback on the design in progress.

For example, the concept/wireframe phase gives user feedback primarily on the basic site structure, navigation elements, information content, and the naming conventions of menus.

The visual design phase provides feedback on 'look and feel', branding, visual affordance, color, and style, in addition to the information architecture.

It is recommended to hold such user feedback sessions first during the wireframe phase, simply because it is easy to change the fundamentals of the design based on user feedback quickly and with the least effort, while the design is still taking shape in a wireframing tool or even as a paper prototype.

The later these sessions are arranged in the design process, the more team members (visual designers, HTML developers) need to redo their work, and the more cost it adds to project budgets.

I believe that as the term 'usability' becomes more widely used and understood, and as usability practice is evangelized across organizations in different domains, user feedback will become one of the core inputs to design for achieving a better user experience.

October 10, 2013

Preparing ourselves for Application Security Testing

 

Haven't we all, as functional testers, done 'authentication' and 'authorization' testing? Almost every application demands some flavor of these tests. So, are we not already doing 'application security testing'?

Let's explore the extra mile we need to traverse in each phase of the SDLC before we can say confidently that the applications we are testing are secure.

 

Continue reading "Preparing ourselves for Application Security Testing " »

September 15, 2013

Different BPM Products: What difference does it make from a testing perspective?

This blog is the third in a three-part series on validating BPM implementations. Here, we focus on the common products available in the market and their use in enterprise implementations from a testing perspective.

 

According to a recent Forrester report on BPM (The Forrester Wave: BPM Suites, Q1 2013), BPM suites are set to take center stage in digital disruption in 2013. The disruptive forces of change include technical, business, and regulatory changes. It is a well-established fact that the key to any change management or strategy implementation is to start small, think big, and move fast! When it comes to the validation of BPM products, the story is not much different, except that we typically see testing organizations fail when it comes to scaling and moving fast. Business process validation might be a success in silos or at an application/project level; however, when it comes to enterprise processes or integrations involving several systems across LOBs, we just cannot move at the same pace. One key reason is the lack of understanding of the overall BPM picture at hand.

 

Continue reading "Different BPM Products: What difference does it make from a testing perspective?" »

July 29, 2013

Increasing agility via Test Data Management

 

Do test data requirements need to be captured as part of the functional requirements or the non-functional requirements? Is test data management a costing exercise or business critical? Since the testing team already provisions test data through whatever mechanisms are available, do we even need a test data management team?

 

Continue reading "Increasing agility via Test Data Management" »

July 25, 2013

Performance Modeling - Implementation Know-Hows

As an extension to my previous blog, 'Performance modeling & Workload Modeling - Are they one and the same?', I would like to share a few insights about the implementation know-how of performance modeling for IT systems in this post.

Performance modeling for a software system can be implemented in the design phase and/or in the test phase. The objectives of performance modeling in these two phases are slightly different. In the design phase, the objective is to validate (quantitatively) whether the chosen design and architectural components meet the required SLAs for a given peak load. In the test phase, on the other hand, the objective is to predict the performance of the system for future anticipated loads and for the production hardware infrastructure.
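
To give a flavor of the kind of quantitative check involved (the post does not prescribe this particular model), Little's Law relates concurrent users N, throughput X, response time R, and think time Z as N = X * (R + Z). A minimal sketch, with made-up peak-load figures:

```python
# Minimal sketch, not the author's method: using Little's Law, N = X * (R + Z),
# to check what throughput a design must sustain to meet a response-time SLA.

def required_throughput(concurrent_users, sla_response_time_s, think_time_s):
    """Requests/sec the system must sustain so N users each see the SLA response time."""
    return concurrent_users / (sla_response_time_s + think_time_s)

# Hypothetical peak-load figures for illustration
n_users = 500        # anticipated concurrent users at peak
sla_r = 3.0          # required response time (seconds)
think = 8.0          # average user think time (seconds)

x_needed = required_throughput(n_users, sla_r, think)
print(f"Design must sustain about {x_needed:.1f} req/s to meet a {sla_r}s SLA at {n_users} users")
```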

Continue reading "Performance Modeling - Implementation Know-Hows" »

June 13, 2013

Testing for an Agile Project - Cool breeze in Summer

 

For a software testing professional, working on an agile project is no less than a breeze of cool air in the hot summer. The point I am trying to drive home is that for a tester who regularly works on projects run in the traditional model, the agile model is a very welcome change. Let me tell you why...

Continue reading "Testing for an Agile Project - Cool breeze in Summer" »

May 30, 2013

Back to school! - Determine Optimum Number Of Sprints In Agile Engagements using Mathematics

There has been a rise in the adoption of the Agile methodology for software development due to benefits such as the ability to absorb late requirement changes and the early availability of a first version. However, the decision to adopt Agile needs to be arrived at by balancing project priorities and characteristics (such as a lack of requirement clarity upfront) against effort and cost overruns. In certain scenarios, the effort required to develop a product or application is higher in Agile than in the waterfall model, and it is easy to see that planning an incorrect number of sprints can lead to effort overruns. Several program metrics can influence both the decision to adopt Agile and the choice of the right number of sprints to keep effort in check.
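
As a naive, purely illustrative sketch of the trade-off (not the mathematical model the post goes on to describe), one can see how per-sprint overhead makes the sprint count matter; all figures below are invented:

```python
# Naive illustration only: more sprints mean more fixed per-sprint overhead
# (planning, reviews, regression runs), so the sprint count affects total effort.
import math

def sprints_needed(backlog_points, velocity_points_per_sprint):
    """Sprints required to burn down the backlog at a steady velocity."""
    return math.ceil(backlog_points / velocity_points_per_sprint)

def total_effort_days(backlog_effort_days, n_sprints, overhead_days_per_sprint):
    """Development effort plus the fixed ceremony/stabilization cost of each sprint."""
    return backlog_effort_days + n_sprints * overhead_days_per_sprint

n = sprints_needed(backlog_points=240, velocity_points_per_sprint=30)   # -> 8 sprints
print(n, total_effort_days(backlog_effort_days=400, n_sprints=n, overhead_days_per_sprint=6))
```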

 

Continue reading "Back to school! - Determine Optimum Number Of Sprints In Agile Engagements using Mathematics" »

March 7, 2013

Test Tool Selection for Functional Automation - The missing dimension

"How do you select the appropriate test tool for your functional testing?"

"As per the listed features, the automation tool we invested in seems to be right. But, the % of automation we could really achieve is very low. What did we overlook at the time of tool selection?"

"Both my applications are of the same technology. We achieved great success in automation with one of them and failed miserably with the other. What could be the reason?"

Continue reading "Test Tool Selection for Functional Automation - The missing dimension" »

January 29, 2013

What tool fits best - Standard or Tailored?

 

It is back to business after the quiet of the holiday season, and the cold weather and the flu are making it increasingly difficult to manage schedules. During the holidays, I was reminiscing about some of the interesting conversations with client organizations over the past year, and a few of them are still fresh in my mind. Here is one interesting question that has come up time and again, and something I myself have been grappling with as well.

 

"Which is better - a single tool (or toolset from a single vendor) that addresses most of your testing needs or specialized tools that completely meet each of the varied needs of testing?"

 

 

Continue reading "What tool fits best - Standard or Tailored?" »

November 30, 2012

Recommended structure for Organizational Security Assurance team

 

Security defects are sensitive by nature; they are always raised as top-priority tickets and are costlier than functional and performance defects. Apart from the business impact, there is an impact on the company's image, the cost of lost data, a loss of end-user confidence, and exposure to compliance and legal issues. With such high levels of risk associated with security defects, it is surprising to see that many organizations do not have an internal structure for security assurance.

 

Internal security assurance is needed for any organization to increase security awareness across the enterprise, to have a structure for dealing with various security compliance aspects, and to use this structure to strengthen build and test processes. Setting clear goals, a reporting structure, defined activities, and performance measurement criteria helps the security assurance team function more smoothly. To know more about a team structure that is capable of providing an enterprise-wide security assurance service for web applications, read our POV titled "3-Pillar Security Assurance Team Structure for ensuring Enterprise Wide Web Application Security" at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/security-assurance-team.pdf.

November 5, 2012

Big Data: Is the 'Developer testing' enough?

 

A lot has been said about the What, the Why and the How of Big Data. Considering the technical aspect of Big Data, isn't it enough that these implementations can be production ready with just the developers testing it? As I probe deeper into the testing requirements, it's clear that 'Independent Testers' have a greater role to play in the testing of Big Data implementations. All arguments in favor of 'Independent testing' hold equally true for the Big Data based implementations. In addition to the 'Functional Testing' aspect, the other areas, where 'Independent Testing' can be a true value add are:

· Early Validation of Requirements

· Early Validation of Design

· Preparation of Big Test Data

· Configuration Testing

· Incremental load Testing

In this blog, I will touch upon the listed additional areas and what should be the focus of 'Independent Testing'.

Continue reading "Big Data: Is the 'Developer testing' enough?" »

July 16, 2012

Testing BIG Data Implementations - How is this different from Testing DWH Implementations?

Whether it is a Data Warehouse (DWH) or a BIG Data Storage system, the basic component that's of interest to us, the testers, is the 'Data'. At the fundamental level, the data validation in both these storage systems involves validation of data against the source systems, for the defined business rules. It's easy to think that, if we know how to test a DWH we know how to test the BIG Data storage system. But, unfortunately, that is not the case! In this blog, I'll shed light on some of the differences in these storage systems and suggest an approach to BIG Data Testing.

Continue reading "Testing BIG Data Implementations - How is this different from Testing DWH Implementations?" »

June 6, 2012

Cloud Migration Testing Approach

While interacting with a stakeholder who wanted to move his production website from its existing physical infrastructure onto a private cloud, I understood that his primary focus was to leverage the cloud from an infrastructure standpoint, which would potentially involve configuration changes for capacity planning. No changes were being made to the code or the architecture of the website. In such scenarios, cloud migration testing is essential for the websites involved, to ensure that the website's performance, functional flows, data, and access control security privileges remain intact.


Continue reading "Cloud Migration Testing Approach" »

March 12, 2012

Overcoming challenges with Over-utilized systems with Service Virtualization & Cloud

The unavailability of environments for QA purposes is a very common challenge faced by most organizations. This is because a lot of delay is associated with the acquisition, installation, and setup of QA infrastructure, and with gaining access to external and dependent systems. In my recent article (http://www.infosys.com/IT-services/independent-validation-testing-services/Pages/virtualized-systems.aspx), I give a detailed review and help businesses understand how they can overcome these challenges with service virtualization along with cloud adoption. Let me know if this paper helped you, and do share your feedback.

March 9, 2012

The Right Cloud Based QA Environment for your Business

I can clearly see that most enterprises are keen on cloud adoption, based on my interactions with them. But the first thing that perplexes them is how to go about evaluating and determining the appropriate cloud deployment that fits their business needs.

In an attempt to address these concerns, my latest POV discusses the various factors that need to be gauged to make this decision, such as the QA infrastructure requirements, existing infrastructure availability, the application release calendar, and the budget appetite. To know more, please click here: http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/cloud-based-QA-environment.pdf. As always, I look forward to your views and feedback.

March 6, 2012

Overcoming challenges associated with SaaS Testing

Today's tough economic environment has put a lot of pressure on organizations to deliver business applications faster and at lower cost. The rapid growth of the cloud, coupled with current economic constraints, has led to the growing adoption of SaaS-based applications by organizations. SaaS-based applications help organizations focus on their core business rather than on non-core activities like managing hardware and building and maintaining applications. However, the adoption of SaaS demands comprehensive testing to reap all the benefits associated with it. In my earlier paper, I identified and described the challenges associated with SaaS testing (http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/saas-testing.pdf).

Continue reading "Overcoming challenges associated with SaaS Testing" »

January 20, 2012

Testing for cloud security - What is the data focus of QA teams (Part 2/3)

In my earlier blog on testing for cloud security (http://www.infosysblogs.com/testing-services/2011/12/testing_for_cloud_security-_wh.html), I discussed the security concerns of cloud adoption from an infrastructure standpoint. Now, let us take a look at what the focus of cloud security testing would be from a data perspective. Enterprises are highly concerned about the security of their data in the cloud. They are well aware that any data security breach could lead to non-compliance, resulting in expensive legal suits that could cause long-term damage to the overall credibility of the organization.

Continue reading "Testing for cloud security - What is the data focus of QA teams (Part 2/3)" »

December 14, 2011

Testing for cloud security- What is the infrastructure focus of QA teams (Part 1/3)

One of the biggest barriers to cloud adoption is security concerns. Any enterprise that wants to migrate to cloud-based environments needs to ensure comprehensive cloud testing, encompassing infrastructure, software, and the platform, in order to validate the security of the cloud, cloud-related applications, and data. I believe that cloud adoption is a radical change for any enterprise to make, and the move from physical to virtual access poses several challenges from a security standpoint. To start, let us take a look at what security testing would need to focus on at the infrastructure level, since this is the first step on the path towards successful cloud adoption.

 

Any enterprise subscribing to the cloud cannot depend entirely on the cloud service provider's contract for the security of the cloud infrastructure; the QA teams also need to validate the security of the cloud from the infrastructure layer itself. Once the desired computing power is allocated along with the software, QA teams need to scan cloud instances for existing security vulnerabilities, malware, and threats. This helps detect security flaws such as unpatched operating systems at the infrastructure layer. It is also important to check whether adequate security measures are in place, such as user access control, privilege-based access, and security policies for governing the QA infrastructure itself. Lastly, the encryption of cloud instances needs to be validated, since there are security threats involved in recovering previously deleted data from unencrypted cloud instances.

December 13, 2011

What to expect from Test Automation of Oracle Applications?

I just finished putting together a response for a client trying to understand the value of automation in the testing of Oracle Applications. Essentially, the client was trying to evaluate whether automation is a worthwhile investment and what returns to expect. I figured this topic to be a subject of much discussion, given the widespread implementation of Oracle Applications.

Here are a few points you might want to keep in mind while considering functional automation in your strategy for testing an Oracle Applications implementation.

Continue reading "What to expect from Test Automation of Oracle Applications?" »

December 6, 2011

Looking for the First Step on the Cloud Adoption Path?- Cloud Based QA Environment

Some of the prime features of the cloud, such as on-demand provisioning, elasticity, resource sharing, constant availability, and security, help address many challenges related to QA environments, such as poor utilization of environments, unavailability, a lack of environment-oriented skill sets, budget constraints for QA infrastructure setup, and multi-vendor coordination. Overcoming these challenges with the cloud helps improve the efficiency of QA teams, which eventually has a positive impact on an organization's business outcomes, with higher-quality applications delivered at lower risk.

 

I believe that the QA environment is the perfect place for an organization to begin its cloud journey before actually moving live applications to the cloud. By moving the QA environment to the cloud, organizations can see immediate advantages such as increased asset utilization, reduced proliferation, greater agility in servicing requests, and faster release cycle times.

 

To find out more about the detailed benefits associated with cloud-based QA environments, read my latest POV at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/begin-cloud-adoption.pdf. Do share your comments and feedback; I look forward to them.

November 3, 2011

Collaborative Testing Effort for Improved Quality

Collaboration among the business, development, and testing teams can reduce risk across the entire software development and testing lifecycle and considerably improve the overall quality of the end application. As a testing practitioner, I believe that testing teams need to begin collaborating at an earlier stage, as described below, rather than waiting for the conventional collaboration during the test strategy phase:

·         During the requirement analysis phase the business/product teams need to collaborate with the development teams to validate the requirements.

·         The test data needs to be available earlier and the testing teams need to collaborate with business/product teams to validate the test data for accuracy, completeness and check if it's in sync with the business requirements spelled out.

·         Collaborate with the development team and share the test data which can be used in the unit/integration testing phases.

·         Collaborate again with the business teams to formulate a combined acceptance test strategy which would help reduce time to market.

·         Collaborate with the development team to review the results of unit testing/integration testing and validate them.

·         Collaborate with business/product teams to validate the test results of the combined acceptance testing.

Testing at each lifecycle stage has its own set of challenges and risks. If potential defects are not detected early, they escalate further down the SDLC. However, an experienced and competent test practitioner can identify these defects early on, where they originate, and address them in the same stage. Below are some examples that reinforce this fact.

 

·         Good static testing of the functional, structural, compliance and non-functional aspects of an application during the requirements phase can prevent 60% of defects from cascading down to production.

·         Similarly, getting all the required test data (as specified by the business requirements) ready as early as the end of the requirements analysis phase instills the discipline of testing early in the lifecycle, which improves test predictability.

·         Planning ahead for performance, stability and scalability testing during the system design phase can help reduce the cost of potential defects incurred later. Proactive non-functional testing (as required by the business) also contributes significantly to faster time to market.

·         Test modeling during the test preparation phase helps avoid the tight coupling of the system that is being tested with the test environment. This eventually helps in achieving continuous progressive test automation.

·         Collaboration with the development teams ensures that they have used and benefited from the test data shared by the testing teams. This collaboration also helps the testing teams validate the architecture, design and code through simple, practical in-process validations: static testing of the functional, structural and non-functional aspects of the application.

·         Mechanisms that help predict when to stop testing are a key requirement during execution. One such mechanism is a stop-test framework built around the defect detection rate and an understanding of the application (see the sketch after this list).
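
To make the stop-test idea concrete, here is a minimal sketch in Python; the threshold, the number of cycles and the sample counts are purely hypothetical assumptions, not a prescribed framework:

    # Minimal stop-test sketch; the threshold and cycle counts are hypothetical.
    # defects_per_cycle: defects found in each recent test execution cycle
    # tests_per_cycle:   test cases executed in each of those cycles

    def should_stop_testing(defects_per_cycle, tests_per_cycle,
                            threshold=0.02, consecutive_cycles=3):
        """Stop when the defect detection rate (defects per executed test case)
        stays below the threshold for the last N consecutive cycles."""
        if len(defects_per_cycle) < consecutive_cycles:
            return False  # not enough history yet
        recent = list(zip(defects_per_cycle, tests_per_cycle))[-consecutive_cycles:]
        rates = [d / t for d, t in recent if t > 0]
        return len(rates) == consecutive_cycles and all(r < threshold for r in rates)

    # Example: the detection rate tapers off over the last three cycles
    print(should_stop_testing([12, 7, 3, 1, 0], [200, 220, 210, 230, 215]))   # True

The same check can be run after every execution cycle and fed into the dashboard, so the decision to stop is based on an observed trend rather than on the calendar.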

All the approaches described above let testers save time and focus more on collecting the right metrics and maintaining dashboards during test execution. They also ensure that testing is not limited to just one phase but is a fine thread that runs across the entire SDLC to improve quality, reduce costs and shorten time to market for all business applications.

The benefits of this collaborative approach are many. I have listed a few below, based on my experience with collaborative teams:

·         De-risks the entire software development process by embedding testing as an inherent part of each stage of the SDLC process.

·         Defects are found early in the lifecycle, which reduces the total cost of fixing them at a later stage. The cost ratio of finding a defect at the requirements stage versus finding the same defect in production is 1:1000.

·         Shortens the time to market, since the approach has a built-in self-corrective mechanism at each stage.

September 26, 2011

Performance Testing for Online applications - The Cloud Advantage

Organizations have finally realized that building brand loyalty online contributes significantly to the overall brand value of the organization. In order to achieve this brand loyalty in the online space, organizations need to focus on two key elements - user experience and application availability.

 

Organizations can improve their online end user experience by conducting usability testing and by taking feedback from users to uncover potential usability issues. Usability testing helps identify deviations from usability standards and provides improved design directions as part of its iterative design process.

 

Uninterrupted application availability can be achieved by focusing on the performance aspects of the business application. To do so, the prime focus needs to be on performance throughout the application lifecycle stages: right from requirements gathering, understanding the business forecasts, accounting for seasonal and peak workloads, and capacity planning for production, to ensuring the right disaster recovery strategies such as multiple backups across geographies. All of this needs to be coupled with the right performance validation approach.

 

Performance testing should not only focus on simulating the user load. It should also simulate the critical business transactions and resource-intensive operations, all under realistic patterns of usage. While certifying applications for performance, testing teams need to ensure that the user load factor takes into consideration the growth projections for at least the next five years, along with the peak seasonal user hits. This helps the organization ensure the application can scale to handle not only peak traffic for the current year, but also online customer traffic for the next five years.
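
As a back-of-the-envelope illustration of that sizing logic (all numbers below are hypothetical), the target load for the test can be derived from today's concurrency, the projected annual growth and a seasonal peak multiplier:

    # Hypothetical sizing sketch: derive the virtual-user target for a load test
    # from current concurrency, projected annual growth and a seasonal peak factor.

    def target_virtual_users(current_concurrent_users, annual_growth_rate,
                             years, peak_factor):
        projected = current_concurrent_users * (1 + annual_growth_rate) ** years
        return round(projected * peak_factor)

    # e.g. 2,000 concurrent users today, 15% growth per year, a 5-year horizon,
    # and seasonal peak traffic observed at 2.5x the normal load
    print(target_virtual_users(2000, 0.15, 5, 2.5))   # about 10,057 virtual users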

 

While all this sounds good, the common client concern with such preparation is the need to set up production-like performance environments to enable thorough performance testing of online business applications. Setting up such an environment requires a huge CAPEX investment and, worst of all, the environment remains underutilized once the performance testing exercise is complete. Leveraging the cloud can help organizations quickly and effectively set up production-like performance environments and convert this CAPEX requirement to OPEX. This pay-as-you-go model of testing, in the form of cloud-based environments and tools, is the modern way for an organization to be cost effective in the current economic scenario and still achieve thorough, end-to-end performance testing of online business applications.

 

However, organizations need to realize that moving an application to the cloud does not mean access to infinite resources. Most organizations make this assumption while moving to the cloud, and it can prove very costly. Whether an application is on the cloud or on premise, it still needs to be designed to handle application and availability failures diligently. Even in the cloud, the organization needs to sign up for specific computing power, a certain amount of storage for the anticipated peak user load, and so on. Any wrong forecasting of these factors, or of the traffic growth pattern, can and will result in application unavailability for users. Further, whether on the cloud or not, a disaster recovery backup plan is a must, and a multi-geography one at that. This would help avert business disruption in the event of an outage in a particular geography.

September 7, 2011

Enabling Effective Performance Testing for Mobile Applications

The mobile performance testing approach/strategy is broadly similar to other performance testing approaches. We just need to break the approach down into steps to ensure all facets relevant to performance testing are noted and taken care of.


Understanding technical details on how mobile applications work
    

This is the primary step. Most mobile applications use a protocol (WAP, HTTP, SOAP, REST, IMPS or custom) to communicate with the server from wireless devices. These calls get transmitted via various network devices (e.g., routers, and the gateways of the wireless service provider or ISP) to reach the mobile application server.


Performance test tool selection

Once we know the nitty-gritty of how the mobile application works, we need to select or develop performance tools which mimic the mobile application client traffic and record it from the mobile client or a simulator. There are several tools available in the marketplace to enable this - HP's LoadRunner, CloudTest, iMobiLoad, etc. Besides this, the mobile application provider will not have control over network delays; however, it is still very important to understand how network devices and bandwidths impact performance and the end-user response time for the application in question. The Shunra plugin for HP LoadRunner, or Antie SAT4(A), have features that mimic various network devices and bandwidths.
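
None of these commercial tools is mandatory for a first look at the traffic profile. As a hedged illustration, even a small script that replays the mobile client's HTTP calls with a device-like User-Agent and realistic think times can approximate mobile client load; the endpoint, User-Agent and user counts below are placeholders:

    # Minimal sketch of mobile-client load generation over HTTP.
    # The endpoint, User-Agent and user counts are illustrative placeholders only.
    import random, threading, time, urllib.request

    ENDPOINT = "http://example.com/api/catalog"      # placeholder mobile API endpoint
    MOBILE_UA = "Mozilla/5.0 (Linux; Android 4.0)"   # device-like User-Agent string

    def mobile_user(requests_per_user=10):
        for _ in range(requests_per_user):
            req = urllib.request.Request(ENDPOINT, headers={"User-Agent": MOBILE_UA})
            start = time.time()
            try:
                urllib.request.urlopen(req, timeout=10).read()
                print("response time: %.3fs" % (time.time() - start))
            except Exception as exc:
                print("request failed:", exc)
            time.sleep(random.uniform(1, 5))  # think time between user actions

    # Simulate 25 concurrent mobile users hitting the application server
    threads = [threading.Thread(target=mobile_user) for _ in range(25)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()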

Selecting the right monitoring tool

 

Once we have zeroed in on the load generating tool, we now need monitoring tools to measure client and server performance.


We can use DynaTrace, SiteScope or any other APM (Application Performance Monitoring) tool to measure server-side performance. These tools capture and display, in real time, performance metrics such as response times, bandwidth usage, error rates, etc. If monitoring is also in place on the infrastructure side, we will be able to capture and display metrics such as CPU utilization, memory consumption, heap size and process counts on the same timeline as the performance metrics. These metrics help us identify performance bottlenecks quickly, limiting the possible negative impact on the end-user experience.
   

The performance of the mobile app client is also critical, given resource limitations with respect to CPU capacity, memory, device power/battery capacity, etc. If the mobile application consumes a lot of CPU and memory, it will take more time to load on devices, which in turn significantly impacts speed and the user's ability to multitask on the same device. If the application consumes a lot of power/battery, that too reduces user acceptance of the application. For this, app plugins can be developed to measure and log mobile client performance as well. We can install these plugins on mobile devices and encourage users to use them while load is being simulated. Possible tools that can be used are WindTunnel, TestQuest, Device Anywhere, etc. The plugins capture performance data, which can be sent to a central server for analysis.
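
Such a plugin essentially samples on-device counters and ships them to a collection endpoint. Here is a server-agnostic sketch of that data flow; the collector URL and the sampled values are illustrative placeholders, and a real plugin would read the counters through the platform's own APIs:

    # Illustrative sketch of a client-side metrics logger: sample device counters
    # periodically and post them as JSON to a central collection endpoint.
    # The collector URL and counter values are placeholders; a real device plugin
    # would read CPU, memory and battery through the platform's own APIs.
    import json, time, urllib.request

    COLLECTOR_URL = "http://metrics.example.com/ingest"   # placeholder endpoint

    def read_counters():
        # Placeholder values; replace with real platform readings on the device.
        return {"timestamp": time.time(), "cpu_pct": 23.0,
                "memory_mb": 180.5, "battery_pct": 87}

    def ship_sample():
        payload = json.dumps(read_counters()).encode("utf-8")
        req = urllib.request.Request(COLLECTOR_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=5)

    # Sample every 10 seconds while the load test is running (6 samples here)
    for _ in range(6):
        ship_sample()
        time.sleep(10)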

 

In a nutshell, with the right performance test strategy and tools in place, we can ensure effective mobile application performance testing. This ensures that the organization is able to deliver high-performance, scalable apps to the business, which positively impacts top-line growth.

August 18, 2011

The Importance of Service Level Agreements (SLAs) in Software Testing

It's pretty well understood in the software industry that testing is a specialized area which helps organizations reduce risk and derive greater business value across the entire software development lifecycle. However, many organizations continue to struggle with figuring out the best way to define service-level agreements (SLAs) and the outcomes that should govern testing relationships. From my experience over the years, I believe it's extremely important for customers to define SLAs upfront in order to ensure 100% alignment of goals between service provider and customer and to accelerate trust in the relationship, especially with first-time partners.

Before we go on to define the SLAs, it's important to define the Key Result Areas (KRAs). These are broad-level areas in which the SLAs will be measured, and they could be areas like governance, process, resources/staff and transition. Once these are defined, we can define the SLAs within each KRA. It's important to choose SLAs which are relevant to the engagement (managed service, co-sourced, staff augmentation) or the type of testing (functional/automation/performance/SOA etc.). A common mistake made while defining SLAs is not defining the criticality of a particular SLA. This is important because not all SLAs need the same level of criticality; some are more relevant than others, and hence we can use a classification like critical, high, medium or low. Once the level of criticality is assigned to the SLAs, we need to decide how they will be measured. I have invariably seen that customers are unsure about the measurement of SLAs, yet deciding the tools and the methodology of calculation for measuring the SLAs is imperative. Finally, decide the frequency of SLA capture (release-wise, monthly or quarterly) and how it will be captured (spreadsheet, SharePoint, document).

In any new engagement where SLAs are defined for the first time, there will invariably be questions about the targets: how do we determine them? In such situations, it's always important to define a "Nursery Period". The purpose of this "Nursery Period" is to benchmark the targets for those SLAs where a period of demonstration is required before targets can be set. At the end of this exercise, all SLAs should be specific, quantifiable and measurable.

The commercial framework for a risk and reward model is the key component of the SLA definition process. Before delving into commercials, it's important to decide how the SLA scores will be computed. Each individual SLA should be measured and a weighted score determined based on the SLA's criticality weighting. The individual weighted scores should then be averaged, and the final average weighted score used to calculate the commercial "debits" or "credits". To make this less complicated, it may make sense to include only the critical and high SLAs in determining the score. Working out the modalities (frequency, process of payments) of the "debits" and "credits" is the last leg in defining the risk and reward model. My recommendation is to implement separate governance for the risk and reward model to facilitate a collaborative and transparent relationship on this key aspect. The governance framework should have clearly defined procedures for issue resolution and escalation to allow both parties to work efficiently through the issues inherent in a risk and reward agreement.
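
As a simple worked example of that scoring (the SLAs, scores and weights below are made up for illustration), one reasonable way to compute the average weighted score is:

    # Hypothetical worked example of SLA scoring; names, scores and weights are
    # made up. Normalizing by total weight is one reasonable formulation; the
    # exact formula and the debit/credit bands are agreed during SLA definition.

    slas = [
        {"name": "Defect leakage",     "criticality": "critical", "score": 95.0},
        {"name": "Schedule adherence", "criticality": "critical", "score": 88.0},
        {"name": "Test coverage",      "criticality": "high",     "score": 92.0},
    ]
    weights = {"critical": 0.5, "high": 0.3, "medium": 0.15, "low": 0.05}

    def average_weighted_score(slas, weights):
        weighted_sum = sum(s["score"] * weights[s["criticality"]] for s in slas)
        total_weight = sum(weights[s["criticality"]] for s in slas)
        return weighted_sum / total_weight

    score = average_weighted_score(slas, weights)
    outcome = "credit" if score >= 95 else ("neutral" if score >= 90 else "debit")
    print(round(score, 1), outcome)    # 91.6 neutral

The band that maps the final score to a debit, credit or neutral outcome is something both parties would agree on during SLA definition; the one shown here is purely illustrative.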

For SLAs to stay relevant, they must be reviewed regularly; I think every quarter. SLAs that no longer serve their purpose need to be eliminated, and the bar should be raised for SLAs which are consistently met over consecutive periods.

Lastly, it is extremely important to create awareness about the SLAs amongst internal and external stakeholders and project participants, so that everybody understands the SLAs and their objectives. Communication of the SLAs is critical and essential to the successful execution and completion of the testing assignment at hand.

July 28, 2011

Service Virtualization - Completing the Cloud Story

Organizations that have applications in production are required to maintain at least 4-5 different sets of pre-production environments - for example, system testing, performance testing, user acceptance testing and automated regression testing environments - to ensure 100% validation of the different sets of requirements associated with an application. This in all probability increases the CAPEX budget for the organization. Organizations typically consolidate, virtualize and share these infrastructures for validating applications across different lines of business (LoBs). But this exercise also carries significant OPEX, due to the costs of dedicated teams/personnel to manage the environments, rental costs and infrastructure costs. Even in these setups, testing teams are constrained by situations like waiting for access to expensive test tool licenses, legacy systems and external/dependent systems. In order to overcome these issues in traditional test environments, it is only natural to look at virtualizing external/dependent systems using techniques like Service Virtualization.

Let us consider a scenario where we have a Payments Processing Engine (PPE) which is currently undergoing changes and is hosted in a traditional QA environment. This PPE system needs to talk to two major external systems, a legacy system and a data warehouse, which are not currently available and are out of scope for testing. If the organization is to test the PPE system end to end, it will need to acquire access to the external systems. Unavailability is not the only constraint in this situation: access to the legacy system is expensive and is made available in a 2-hour time window only, and the data warehouse system is not available in the pre-production environment at all. When there are such constraints and dependencies on external systems, delays in time to market and increased CAPEX requirements are bound to bring down overall testing efficiency. The way out for organizations faced with such situations is to adopt Service Virtualization, that is, to virtualize services for all external/dependent systems, like the legacy and data warehouse systems in this example.

Today's market dynamics force businesses to be more cost effective, agile and scalable to service ever-changing market demands. The advent of cloud computing has made it possible for organizations to achieve all of this, in addition to helping them move from a CAPEX to an OPEX business model. Though the move to the cloud brings sizable benefits and cost savings, it does not answer the question of dependencies on external systems. Organizations would need to spend huge amounts on setting up cloud images for these large external systems, making the entire exercise unfeasible. So how can organizations do away with the issue of external system dependencies in a cloud environment? This is where Service Virtualization comes in. With Service Virtualization, organizations can create virtual models of external dependent systems and bring them to the cloud as virtual services (VSE), with 24/7 availability and at low cost.

Let us consider a scenario to understand the applicability of Service Virtualization in a cloud environment. We have an Order Management System (OMS), hosted in a cloud-based environment, undergoing changes. This OMS needs to talk to three major external systems - a mainframe, an ERP and databases - that are not on the cloud and are out of scope for testing. If the organization is to test the OMS along with the three external systems, it will need to spend huge amounts setting up the mainframe, ERP and database in the cloud. This results in higher CAPEX for the organization, which could very well blunt the cloud benefits of cost saving and optimized IT spending. With Service Virtualization, the organization can host the OMS application in a virtual machine in the cloud, while the external/dependent systems - the mainframe, ERP and databases - are modeled and used as virtual service (VSE) implementations in the cloud. Thus, by applying Service Virtualization, all the external/dependent systems are provisioned at a fraction of the overall external system setup costs. With Service Virtualization, organizations can achieve elastic capacity consumption and cut the significant wait times associated with infrastructure acquisition, installation and setup, and with access to external/dependent systems, from months or weeks to a few minutes.
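
For illustration only, and not as a description of any specific virtualization product: a virtual service at its simplest is a lightweight stand-in endpoint that returns realistic canned responses for the dependent system, so the application under test can exercise its integration paths without the real mainframe, ERP or database being present. The paths and payloads below are placeholders:

    # Minimal illustration of the idea behind a virtual service: a lightweight
    # HTTP stand-in that returns canned responses for an unavailable dependent
    # system. Real service virtualization tools add recording, data-driven
    # behaviour, latency simulation, etc.; paths and payloads are placeholders.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED = {
        "/erp/orders/1001": {"orderId": 1001, "status": "SHIPPED"},
        "/mainframe/account/42": {"account": 42, "balance": 1250.75},
    }

    class VirtualService(BaseHTTPRequestHandler):
        def do_GET(self):
            known = self.path in CANNED
            body = json.dumps(CANNED.get(self.path, {"error": "unknown path"})).encode()
            self.send_response(200 if known else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # The application under test is pointed at this endpoint instead of the
        # real external system.
        HTTPServer(("0.0.0.0", 8080), VirtualService).serve_forever()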

Thus, with Service Virtualization, organizations can achieve their overall goal of moving to the cloud while remaining responsive and relevant to ever-changing market dynamics and demands.

March 1, 2011

What is in the recently released HP ALM 11 version?

That HP is the market leader in the Quality Assurance and Testing tools space, with a 50-60% market share, is no secret. If version 11 of the product suite released recently is any indication, HP intends to keep it that way for a very long time to come. Below are my notes from one of their latest road shows that I attended.

 

In this post I comment on what Quality Center 11 (QC 11), and specifically the HP Sprinter feature within QC 11, has to offer us.

Continue reading "What is in the recently release HP ALM 11 version?" »

December 24, 2010

Why Do Testing Metrics Programs Fail?

Test Management Guide Series - Topic 3


1.0  Introduction

 

We all know that statistical data is collected for metric analysis to measure the success of an organization and to identify continuous improvement opportunities, but not many organizations are successful in meeting this objective.

Senior management loves to see metrics but fails to understand why teams are unable to produce meaningful metric trends that provide clear, actionable steps for continuous improvement. It was surprising to see how many client senior executives started asking me for help in establishing meaningful metrics programs for their organizations. Several questions popped into my mind: what makes a testing metrics program such a challenge? We collect and present tons of data, so what exactly is missing? Even though everyone right up to the CIO wants a quick view of various metrics to understand the progress made, why do so many testing organizations fail to produce them in a meaningful way? Metric definitions and techniques are available from multiple sources, so why do organizations still struggle to collect, analyze and report them?

After thinking about these questions, I started asking myself a few fundamental questions and looking at several metrics reports. I was not surprised to find the following:

·         Most of the testing metric reports were internally focused; they tried to show how well the testing organization was operating, with no actionable next steps

·         Metric reports had several months of data with minimal analysis findings and no actionable items for each stakeholder

·         99% of the action items identified related only to actions taken by the testing organization

2.0  Revisiting the Golden Rules of Testing Metric Management:

 

While conducting a detailed study, I felt it was worth re-establishing a few golden rules of metric management which we all know, but often fail to implement.

Rule #1: The metrics program should be driven and reviewed by executive leadership at least once a quarter. In reality, very few organizations have a CIO dashboard to understand the reasons for quality and cost issues, in spite of repeated escalations from business partners about production defects, performance issues and delayed releases.

a)         Metrics collection is an intense activity which needs the right data sources and acceptance by the entire organization (not just testing). Unless driven by senior leadership, metrics collection will yield limited actionable items

b)         Requires alignment from all IT and business organizations to collectively make improvements

Rule #2: Ensure all Participating IT and Business organizations are in alignment with the metrics program

Examples:

·         The testing organization shows that it met 100% of its schedule-adherence milestones, yet the project was delayed by 6 months

·         Testing effectiveness is reported as 98%, while several production defects resulted in a couple of million in production support activity, caused by issues that were not in the control of the testing organization

·         Business operations and UAT teams are focused on current projects, while metrics provide trending information; all teams should be aware of the improvement targets set for the current year or current release

 

Rule #3: Ensure the necessary data sources are available and data collection processes are in place that ensure uniform data collection.

Uniform standards are not followed among IT groups, which makes data collection a challenge:

·         Effort data is always erroneous

·         Schedule changes multiple times in the life cycle of the project

·         Defect resolution is always a point of conflict

·         Defect management systems are not uniformly configured

Examples:

·         There is no infrastructure or process to view and analyze production defects

·         More than 40% of defects are due to non-coding issues, but the defect management workflow does not capture this

·         Requirement defects are identified in the testing phase, making them 10 times more expensive to fix, but there is no means to identify and quantify these trends

Rule #4: Ensure metrics are trended and analyzed to identify areas of improvement. Metrics data is garbage unless you trend it and analyze it to identify action items. Trending analysis should help:

·         Identify improvement opportunities for all participating IT groups

·         Set improvement targets

·         Plot graphs which give meaningful views

·         Compare performance with industry standards

While it is not always easy to follow all four golden rules of testing metrics, an attempt to comply with them will significantly improve the success of a testing metrics program.

3.0  What should I look for in my testing metrics?

 

While I continued to look for reasons behind the failure of metrics programs, I realized that most testing organizations select metrics that are internally focused, and analysis is carried out either to identify issues within the testing organization or to defend its actions. The table below summarizes my suggestions on how you can make better use of metrics data.

Metric: Testing Effectiveness

Perception: The testing organization reports effectiveness as 98%, but many production defects are reported.

Issues:

·         Absence of a process to collect production defect data

·         System testing and UAT executed in parallel

·         Lack of effective requirement traceability

·         Lack of an end-to-end test environment to replicate production issues

·         Extensive ambiguity in business requirements, resulting in production defects due to unclear requirements

Suggested action items:

·         Publish the testing organization's effectiveness and the project's testing effectiveness separately to clearly highlight the overall project issues

·         Establish a process involving the testing team and the production support team to analyze every production defect and take the necessary corrective action

·         Associate a $ value lost with every defect in production to get senior management's attention towards infrastructure or improving requirements management

Metric: Test Case Effectiveness

Perception: There are no industry-standard recommendations on targets based on testing type, so not many action items are identified; data is simply reported regularly.

Issues:

·         Difficult to set targets

·         Lack of actionable decision points based on an increase or decrease in test case effectiveness

·         RTM is not prepared, so the quality of the test cases and their coverage is a concern

Suggested action items:

·         Based on test case preparation and execution productivity, set targets for test case effectiveness; this ensures the right effort is spent on test planning and execution to achieve the target testing effectiveness

·         If test case effectiveness is below threshold for 3 consecutive releases, optimize your test bed and eliminate test cases which are not yielding any defects

·         If test case effectiveness is below threshold, validate the applicability of risk-based testing

·         If test case effectiveness is above threshold, unit testing might be an issue; recommend corrective action

Metric: Schedule Adherence

Perception: The project is delayed by 6 months, but the testing team has met its schedule.

Issues:

·         Project managers re-baseline the schedule after delays in every lifecycle stage; testing reports schedule delays only if they are due to testing issues

·         Testing wait time increases due to schedule delays in each stage and testing cost increases, but testing does not quantify wait times

·         The testing team does not use historic wait times in estimates, resulting in testing budget overruns and an increasing testing-to-development spend ratio

Suggested action items:

·         Track schedule milestones across lifecycle stages and collect metrics for delays in any milestone

·         Create action triggers if the total delayed milestones cross 10% of overall schedule milestones

·         Calculate the additional QA spend due to missed schedule milestones and report it in the metrics

·         Establish operational level agreements between the various teams to identify schedule adherence issues

Metric: Requirement Stability

Perception: Senior management believes that achieving requirement stability is a myth and that requirements will continue to change for various reasons, since 90% of the industry has been facing this issue for decades.

Issues:

·         Requirement stability is a broad measure with several contributing parameters (ambiguity, mistakes in articulation, incompleteness, addition of new requirements, edits, deletions, etc.)

·         Each parameter is not captured and reported as a separate category, represented by lifecycle phase; a requirement can change due to design issues, testability issues, cost of development and implementation, operational challenges, changes in mandates, organizational policies, etc.

·         The effect on each testing phase is not quantified

·         Lack of forums to discuss requirement stability index issues and improvements

·         Lack of a requirement management tool to automate traceability and versioning

Suggested action items:

·         Report the # of times requirement review sign-off from the test team was missed

·         Report test case rework effort due to requirement deletions, edits and additions

·         Report additional test case execution effort due to requirement deletions, edits and additions

·         Report the # of defects due to requirement issues (missing, ambiguous, incomplete) and quantify the effort needed by development and testing to correct them

·         Quantify the SME bandwidth required to support testing activities; if requirements are elaborate, SME bandwidth needs should reduce with time

·         Report the # of CRs added and their effort, to measure scope creep

·         Report the # of times a requirement change was not communicated to the testing team

Metric: Defect Rejection Ratio and other defect metrics

Perception: Senior management has no idea what to do with this data at the end of the release.

Issues:

·         Reporting at the end of the release gives no opportunity for course correction in the ongoing project

·         Metrics like defect severity, defect ageing and defect rejection need immediate corrective action

·         Threshold points for each of these defect metrics have not been established, and automated trigger points calling for action are not defined

Suggested action items:

·         Report these metrics on a weekly basis rather than at the end of the release

·         Create automated trigger points, e.g., if the # of critical defects goes beyond 5, an action point is triggered

Metric: Test Case Preparation & Execution Productivity

Perception: Senior management has no idea what to do with this data at the end of the release.

Issues:

·         Reporting at the end of the release gives no opportunity for course correction in the ongoing project

·         Difficult to set targets due to the difficulty of establishing a testing unit of measure

Suggested action items:

·         Report these metrics on a weekly basis rather than at the end of the release

·         Create automated trigger points

·         Report reductions in test execution productivity due to:

o   environment downtime

o   wait time due to delays in builds

o   wait time due to delays in issue resolution

o   rework effort during test case preparation

o   rework due to lack of understanding and application knowledge
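
To make the "Testing Effectiveness" and "Defect Rejection Ratio" rows above concrete, here is a small sketch with made-up counts showing how these two measures are commonly computed so that they can be trended release over release; definitions vary by organization, so treat these formulations as one reasonable choice rather than a standard:

    # Illustrative calculation of two metrics from the table above, with made-up
    # counts. These are common formulations; exact definitions vary by organization.

    def testing_effectiveness(defects_found_in_testing, defects_found_in_production):
        """Share of total defects caught before production."""
        total = defects_found_in_testing + defects_found_in_production
        return 100.0 * defects_found_in_testing / total if total else 0.0

    def defect_rejection_ratio(defects_rejected, defects_raised):
        """Share of raised defects later rejected as invalid or duplicate."""
        return 100.0 * defects_rejected / defects_raised if defects_raised else 0.0

    # Hypothetical release-over-release trend
    releases = {"R1": (240, 18, 22, 262), "R2": (210, 9, 14, 224)}
    for name, (test_def, prod_def, rejected, raised) in releases.items():
        print(name,
              "effectiveness %.1f%%" % testing_effectiveness(test_def, prod_def),
              "rejection %.1f%%" % defect_rejection_ratio(rejected, raised))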

 

4.0  Conclusion

 

The above table provides insights into the complexity involved in a metrics program: collecting, analyzing and reporting metrics. There are clearly two types of metrics:

·         Metrics which have an impact on the overall project and hence are very important to the senior management of the organization

·         Metrics which are internally focused on the testing organization

Testing effectiveness, schedule adherence and requirement stability help identify issues that impact the entire project and result in project delays, defects in production and budget overruns, all of which are very important to the CIO. The CIO metrics dashboard should include these metrics and the reasons for failure. As the QUALITY GOAL KEEPER for the entire organization, the testing organization should clearly identify actionable improvements for all stakeholders for the metrics program to be successful.

In order to keep tabs on the effectiveness and efficiency of the testing organization itself, internally focused metrics like defect severity, defect ageing, # of defects, testing productivity, cost of quality, % automated and many more are important. Clear steps should be defined to identify actionable items. These actionable items are all internal to the testing organization and should be used to identify internal improvement targets and initiatives.

 

 

May 17, 2010

Black or white? Or is it grey that matters?

In the past few months, I have been having conversations with clients on the right test architecture and strategy for testing transaction processing systems, especially in the financial services domain. As some of these discussions progressed to specific problem areas, I realized a few things:

·          All these organizations have traditionally taken a black box approach to testing and are facing challenges in isolating the points of failure in their transaction flows

·          Most of their transaction processing systems, which had once been monoliths, have evolved into layers of processing wrapped with a service interface, while the approach to testing continues to treat the systems as monoliths (see the short illustration after this list)

·          A fair amount of automation has been attempted, but most of it is focused on the user interface and hence dependent on UI automation tools, primarily HP QuickTest Pro and IBM Rational Functional Tester

·          The realization that data is a critical factor in testing has come quite late and all of these organizations are trying to put in place a test data management strategy and the tools around it.
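
The grey-box alternative hinted at above is to exercise those service interfaces directly, beneath the UI, so that a failure can be isolated to a specific processing layer. A minimal illustration follows; the endpoint, payload and expected fields are placeholders and not drawn from any specific client system:

    # Minimal illustration of exercising a transaction service interface directly
    # (below the UI), so a failure can be isolated to a specific processing layer.
    # The endpoint, payload and expected fields are placeholders.
    import json, urllib.request

    PAYMENTS_API = "http://example.com/services/payments"   # placeholder endpoint

    def submit_payment(amount, currency, account):
        payload = json.dumps({"amount": amount, "currency": currency,
                              "account": account}).encode("utf-8")
        req = urllib.request.Request(PAYMENTS_API, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.loads(resp.read())

    # A service-level check asserts on the response of this layer alone,
    # independent of any UI automation tool.
    result = submit_payment(150.00, "USD", "ACC-1001")
    assert result.get("status") == "ACCEPTED", result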

 

Given this background, most of these organizations are asking the all-important question: "how do I redesign my test strategy to ensure quality in my modern day applications?" 

Continue reading "Black or white? Or is it grey that matters?" »