Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of the profession.


August 8, 2014

Is the environment spoiling your party?


I frequently come across performance testing projects entangled in cost and time overruns - the culprit usually being environment issues. Since we can't wish away the environment and the issues that come with it, the next best thing is to be better prepared by figuring out the pitfalls and addressing them proactively.

Since a stable test environment is critical for test script development, load simulation, and bottleneck analysis across the performance testing life cycle, let's take a good look at what to watch out for when we prepare for a testing cycle.

Know thy application: Any environment issue during the test execution phase is like the proverbial spanner in the works. It should come as no surprise if the performance testing team has to spend significant effort in debugging and analyzing it and, of course, following up with support teams for resolution. To be effective in test environment risk assessment and issue resolution, we must know the application's architecture, functionalities, workflows, and interconnecting components well. Stubs and virtualization techniques can be handy during test execution when one is familiar with the component-level details and how to use them. While investigating environment issues, the development and infrastructure teams often seek the testing team's input - so lending a hand with specifics will mean a faster turnaround.

Dependencies - wheels within wheels: Another party-pooper can be the dependencies that may impact testing well before we even run the performance test. Multi-tiered enterprise computing systems have these dependencies on each tier and layer, within and outside the enterprise boundaries. In addition to the functionalities, there are other factors at play that may impact the performance test results. These could include high resource consumption by another process hosted on the same infrastructure, execution of batch jobs, or parallel test runs by another team. An outage in the environment during the test run can force you to reschedule the test. That's why it is important to gather information about all possible dependencies that may impact test execution during the planning stage itself. It is always good to document these issues as one comes across them for reference in future test cycles.

Stay in touch: The performance testing team cannot operate in a vacuum. Team members must establish proper communication with the development, infrastructure, functional and integration testing, and release management teams right from the strategy phase to synchronize test preparation as well as execution activities. The test schedule should be published well in advance if you are using a shared test environment. Notifications must be sent out to the teams concerned before running a test so they can bring up the servers, mount monitors, clear logs, and keep the environment stable during test execution. A calendar that blocks shared computing resources can keep all stakeholders up to date on test execution and reduce retest efforts significantly. A small tip: keep your contact information handy and up-to-date.

Think ahead: Being well-prepared is half the battle won. For testing, this means thinking about possible environment failures and looking for workarounds well before the actual test execution. While preparing test estimates, don't forget to factor in unknown environment issues that could adversely impact the effort and the schedule. Keeping some buffer as a percentage of the overall estimate can save you a lot of grief later. It is also important to prioritize critical business transactions for the performance test so that, if some functionalities fail during the planned test window, a test run on a subset of the transactions can still provide meaningful insights into application performance. Finally, remember that time is your most precious resource. So, if the test environment becomes completely unavailable during the planned test window, utilize that time effectively in activities such as offline reporting or knowledge management, and promptly schedule the test in the next available window.

So these are a few best practices to keep in mind while preparing for a performance test cycle. You can use these to build your own set of rules specific to the challenges and constraints of your set-up. The bottom line for a successful testing cycle is to keep tabs on incidents and work through issues smartly.

August 6, 2014

Is User Experience measurable?

Many organizations look for tangible numbers to justify the effort and money spent on improving the user experience of websites or applications.

Of course, there have to be valid reasons to continue investing in user experience and to pass the periodic reviews of the value it delivers.

Today, it is not enough to get only a subjective confirmation from users of overall design acceptance and satisfaction.

What does it mean when users say a website is 'easy to use', 'it is good', 'that was easy', or 'I saved a lot of time'?

User experience can be measured, and the effort put into the design validated, by quantifying user feedback:

1) Qualitative feedback

This type captures user feedback on usability or user experience statements, typically rated on a Likert scale of 1-5 or 1-7. User experience statements could be:
a) 'I found the design simple to comprehend'
b) 'I could locate where I am on pages while doing my tasks'
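As a rough sketch, Likert ratings gathered against such statements can be averaged per statement to track qualitative feedback across test rounds. The statements and ratings below are illustrative, not real survey data:

```python
# Summarize Likert-scale feedback per user-experience statement.
from statistics import mean

# Each statement maps to the 1-5 ratings collected from participants
# (illustrative values, not real survey data).
responses = {
    "I found the design simple to comprehend": [5, 4, 5, 3, 4],
    "I could locate where I am on pages while doing my tasks": [4, 4, 3, 5, 4],
}

for statement, ratings in responses.items():
    print(f"{statement}: average {mean(ratings):.1f} / 5")
```

Averages like these make it easy to spot which statements users agree with least and to compare rounds of testing over time.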

2) Quantitative feedback

This type of feedback brings more numbers to the table, giving everyone visibility into tangible progress or issues with the design.

Here are different ways a design team can get active, measurable user feedback on their design.


2.1) SUS Score:

The one number that deserves mention is the System Usability Scale (SUS) score.
This number indicates whether the design in progress is acceptable to a set of users and whether it is heading in the right direction. The higher the score, the higher the users' acceptance of the design. For example, a score of 85 out of 100 means users are positive about the new design and will be keen to work with it.
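For reference, the standard ten-item SUS questionnaire is scored by shifting each 1-5 rating (odd-numbered items are positively worded, even-numbered negatively) and scaling the sum to a 0-100 range. A minimal sketch, with illustrative ratings from one hypothetical respondent:

```python
def sus_score(ratings):
    """Compute the System Usability Scale score (0-100) from ten 1-5 ratings.

    Odd-numbered items (positively worded) contribute (rating - 1);
    even-numbered items (negatively worded) contribute (5 - rating).
    The 0-40 total is scaled to 0-100 by multiplying by 2.5.
    """
    if len(ratings) != 10:
        raise ValueError("SUS needs exactly ten item ratings")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(ratings, start=1))
    return total * 2.5

# Illustrative ratings from one respondent (not real data):
print(sus_score([5, 2, 4, 1, 5, 2, 4, 1, 5, 2]))  # → 87.5
```

In practice, SUS scores are averaged across all respondents before being compared against a target or a previous design iteration.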

2.2) Task Completion ratio:

Another number signifies how many of the critical tasks assigned during usability test sessions a user is able to complete.

If users can complete 80% of the tasks, some tweaks are needed for the remaining 20%, but in general the design is going in the right direction.
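The ratio itself is simple arithmetic; a hypothetical helper (the function name and counts are assumptions for illustration):

```python
def completion_ratio(completed, assigned):
    """Percentage of assigned critical tasks that users completed."""
    return 100.0 * completed / assigned

# E.g. 8 of 10 critical tasks completed during a usability session:
print(completion_ratio(8, 10))  # → 80.0
```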


2.3) Time for Task completion

A user taking longer than expected to complete a specific task signifies a problem with the way information is laid out, the naming conventions used, or visual clarity.
The goal is not to know the exact time down to the millisecond, but to get an overall impression of whether the task is difficult for users to complete.
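One way to operationalize this is to compare each task's median observed time against an expected time budget; the task names, budgets, and timings below are made up for illustration:

```python
# Flag tasks whose median completion time exceeds an expected budget.
from statistics import median

# Expected time budgets per task, in seconds (illustrative assumptions):
expected_seconds = {"search for a product": 30, "update profile": 45}

# Observed completion times per task across test participants:
observed_seconds = {
    "search for a product": [25, 28, 90, 31],
    "update profile": [120, 110, 95, 130],
}

for task, times in observed_seconds.items():
    med = median(times)  # median resists one-off outliers better than the mean
    if med > expected_seconds[task]:
        print(f"'{task}' looks difficult: median {med}s vs expected {expected_seconds[task]}s")
```

Using the median rather than the mean keeps a single distracted participant (the 90-second outlier above) from flagging an otherwise healthy task.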


2.4) Number of Errors

If a user is given 5 tasks and comes across 8 errors or issues while performing them, there is a problem with the design.

A minimal number of errors can serve as a baseline for the design. A higher number of errors should send designers back to the whiteboard to analyze what went wrong.
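An errors-per-task rate (an assumed convenience metric, not a formal standard) makes the comparison across sessions and design iterations concrete:

```python
def errors_per_task(error_count, task_count):
    """Average number of errors/issues a user hits per task in a session."""
    return error_count / task_count

# 8 issues across 5 tasks, as in the example above:
print(f"{errors_per_task(8, 5):.1f} errors per task")  # → 1.6 errors per task
```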


These are some of the numbers that can provide insight into what is going well with the design and which issues still need to be worked out.

The quality of user experience can thus be measured through both qualitative and quantitative feedback.