Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of this profession.


July 21, 2015

Is your testing organization ready for the big data challenge?

Author: Vasudeva Muralidhar Naidu, Senior Delivery Manager

Big data is gaining popularity across industry segments. From being limited to lab research in niche technology companies to being widely used for commercial purposes, big data has achieved a much wider scope of application. Many mainstream organizations, including global banks and insurance companies, have already started using (open source) big data technologies to store historical data. While this is the first step toward value realization, we will soon see this platform being used for processing unstructured data as well.

Are testing organizations ready to test these implementations? Many might ask the obvious: what is the big deal? Many technologies have evolved in the last few years, and testing organizations have built robust testing strategies for them, including mobility, cloud, digital, and telematics, to name a few. So why is big data different? The three tables below offer a deeper understanding of whether big data is a big deal or not.

Figure 1: What is new in big data?

If you look closely, every single point in all three tables is new: the input data, data formats, input data types, storage mechanism, processing mechanism, and the software used to extract, process, store, and report. It is a completely new experience that calls for a brand new tester with new technology skills, testing process skills, and the ability to build new tool sets and instrumentation.

Considering the above scenario, let us look at the big data process chain and identify the type of testing to be conducted at each stage. This should help us understand realistically what is needed to get ready for big data testing programs. 

Figure: The big data process chain

Source data: The first obvious point is the input source format. Unlike an RDBMS, the sources can be anything, ranging from Twitter feeds, Facebook posts, and Google searches to data generated by RFID tags, digital sensors, mobile phones, cameras, GPS, and so on. This calls for specific knowledge of data acquisition. The tester should have the ability to validate the data acquisition methods and their accuracy.
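
As a minimal sketch of what such a validation step could look like, the Java snippet below sanity-checks an acquired feed file before ingestion. The tab-separated three-field layout (id, timestamp, payload) is purely an illustrative assumption, not a real feed format.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

/**
 * Minimal sketch: sanity-check an acquired feed file before ingestion.
 * The tab-separated three-field layout (id, timestamp, payload) is an
 * illustrative assumption, not a real feed format.
 */
public class FeedValidator {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        long malformed = lines.stream()
                .filter(line -> {
                    String[] fields = line.split("\t", -1);
                    // expect exactly three fields, none of them empty
                    return fields.length != 3
                            || fields[0].isEmpty()
                            || fields[1].isEmpty()
                            || fields[2].isEmpty();
                })
                .count();
        System.out.printf("records=%d malformed=%d%n", lines.size(), malformed);
        if (malformed > 0) System.exit(1); // fail the acquisition check
    }
}
```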

Aggregate and analyze: Once the data is acquired and aggregated, special algorithms analyze and mine the data based on pattern matching. The tester has to validate the aggregation rules, the pattern-matching algorithms, and whether the extracted patterns fulfill the reporting needs.
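
One practical way to validate pattern-matching logic is to factor the rule into a pure function and unit test it in isolation. The JUnit sketch below assumes a hypothetical hashtag-counting rule; the function and its regex are stand-ins for whatever aggregation rules a real program defines.

```java
import static org.junit.Assert.assertEquals;

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.junit.Test;

/**
 * Minimal sketch: validating a pattern-matching rule in isolation.
 * The hashtag-counting rule and its regex are hypothetical stand-ins
 * for a program's real aggregation rules.
 */
public class PatternRuleTest {

    // Hypothetical rule: count hashtag mentions in one raw feed line.
    static int countHashtags(String line) {
        Matcher m = Pattern.compile("#\\w+").matcher(line);
        int n = 0;
        while (m.find()) n++;
        return n;
    }

    @Test
    public void matchesExpectedPatterns() {
        assertEquals(2, countHashtags("loving #bigdata and #testing today"));
        assertEquals(0, countHashtags("no tags here"));
    }
}
```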

Consume: Once the data is mined and stored, the speed of report generation and the accuracy of the jobs have to be put to the test.

Based on the above, the diagram below describes the various types of testing to be carried out.


Figure 2: Big data testing needs

What do testing organizations need to execute these various types of tests?

Figure 3: Types of big data testing

Conclusion:

Testing organizations need significant preparation to face big data challenges. As the entire base is built from scratch, dedicated focus is necessary. The following guidelines can help:

  • Testers should possess a variety of new technical and process skills
  • Big data requirements should focus on end-user reports and MapReduce logic
  • Testing of extreme information management calls for extreme automation
  • Simple Excel macros might not work; validation requires scripting or dedicated tools (see the sketch after this list)
  • Test data management should also be determined based on the MapReduce logic
  • Test environment scaling should be driven by the number of data sources and the network bandwidth
  • The key to testing team success is defining how much to test and when to stop
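
As a minimal sketch of such scripted validation (replacing a spreadsheet macro), the snippet below reconciles record counts and a content checksum between a source extract and an extract pulled back from the big data store. The assumption that both sides can be exported to an identical canonical layout is for illustration only.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

/**
 * Minimal sketch of scripted validation replacing spreadsheet macros:
 * reconcile record counts and a content checksum between a source extract
 * and an extract pulled back from the big data store. Assumes both sides
 * can be exported to an identical canonical layout (an illustration only).
 */
public class ExtractReconciler {

    static String sha256(Path file) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path source = Paths.get(args[0]);
        Path target = Paths.get(args[1]);
        long sourceRows = Files.readAllLines(source).size();
        long targetRows = Files.readAllLines(target).size();
        System.out.printf("source=%d target=%d%n", sourceRows, targetRows);
        if (sourceRows != targetRows || !sha256(source).equals(sha256(target))) {
            System.err.println("MISMATCH: investigate the load job");
            System.exit(1); // non-zero exit flags the validation failure
        }
    }
}
```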

Big data programs are expected to grow, and testing will play a major role in ensuring their success. Testing will help fine-tune the pattern-matching algorithms, which in turn will increase the usefulness of unstructured data. The more prepared you are as a testing organization, the more success you will achieve in big data programs.


July 14, 2015

Automation - A new measurement of client experience

Author: Rajneesh Malviya, AVP - Delivery Head - Independent Validation Solutions

A few months ago, I was talking to one of my clients who visited us on the Pune campus. She shared how happy she was with the improved client visit process. She was given a smart card like any of our employees and was registered as a visitor, and with that she could move from one building to another without much hassle. She no longer had to go through the manual entry process at each building. Like an employee, she could use her smart card at the turnstile to enter and exit our buildings, and at the same time her entry was recorded as per compliance needs. As she had been on our campus before, she could clearly experience the great difference brought about by automation.


This made me think. Are we creating too much hype around the word "automation"? Are we overcomplicating automation problems and making them difficult to understand, resulting in very little success? Do I really need a special army to do these automations? Have we really understood the objective of automation?

These questions made me go back to the basics and pull out the fundamental objectives of automation, which are:

  • To reduce manual interventions (small or big), thereby improving productivity and reducing cost
  • To improve time-to-market

But above these two objectives, and more importantly, is a smoother, quicker experience for our clients and users. The satisfaction and excitement users feel because of these small automations result in the client returning again and again. If we can do this, then we can say we have accomplished true automation.

Automation in Testing Services

In testing services (QA services), automation is important because no one can do everything manually. Automation of test execution, especially regression testing, is seen as a necessity, and many testing tools and solutions have evolved over the years that have helped simplify the automation process. Automation experts became involved in automating and maintaining test cases. Then came the era of progressive, or shift-left, automation: automation done alongside functional testing to further improve the level of automation.
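
For readers newer to this space, an automated regression check is usually just a short script. The sketch below uses Selenium WebDriver; the URL, locators, and expected page title are placeholders, not a real application.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * Minimal sketch of an automated regression check with Selenium WebDriver.
 * The URL, locators, and expected title are placeholders, not a real app.
 */
public class LoginRegressionCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // placeholder URL
            driver.findElement(By.name("username")).sendKeys("testuser");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();
            // a manual test step becomes a repeatable scripted assertion
            if (!driver.getTitle().contains("Dashboard")) {
                throw new AssertionError("Login regression: unexpected landing page");
            }
        } finally {
            driver.quit(); // always release the browser
        }
    }
}
```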

All of these developments have definitely given us new ways of improving the automation level beyond regression testing. Automation now covers 20-30% of total test execution (with some exceptions up to 40%). Throughout this entire process, the objective has remained the same: to reduce manual interventions, resulting in reduced testing effort and improved time-to-market, especially because scripts can run 24x7. But the benefit of automation has been limited because it is applied to only a limited set of activities.

With the adoption of DevOps, continuous integration, and agile methodologies, the distance between testing and development is shrinking. Testers are now expected to work more closely with developers, and testing is done while development is in progress. This means testers need an understanding of the code, architecture, and design. Gone are the days when testing was seen as low-end work done by anybody who could follow a set of instructions. Today, testers are expected not only to understand functionality but also to have coding and scripting knowledge.

Extreme Automation

As emerging forces like digital transformation, big data, cloud, and mobility change the business model, testing services are also gearing up to adapt to current trends and move toward extreme automation to speed up the deployment timeline. Extreme automation means automating every part of the testing process that is done manually today. This includes automating requirements, design, and code reviews; automating builds followed by automated tests; and then automating deployments. In reality, can we automate 100% of what we do manually today? The answer is 'no', BUT we can always improve the current level of automation and take it to the next level.


July 6, 2015

Automated Performance Engineering Framework for CI/CD

Author: Aftab Alam, Senior Project Manager, Independent Validation and Testing Services

with contribution from Shweta Dubey

Continuous integration is an important part of an agile development process. It is getting huge attention in every phase of the Software Development Life Cycle (SDLC) as a way to deliver business features faster and with confidence.

Most of the time, it is easy to catch functional bugs using a test framework, but performance testing requires scripting knowledge along with load testing and analysis tools.

The key to success in CI/CD is to:

  • Find issues at an early stage
  • Automate anything that is done more than once
  • Automate the build, the various quality gates, and the release process
  • Automate metrics collection and correlation, and drive the quality gates from that data (a sketch of such a gate follows below)
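
As a minimal sketch of a data-driven quality gate, the snippet below reads metrics written by earlier pipeline steps and fails the build when a threshold is breached. The metrics file name, property keys, and threshold values are illustrative assumptions.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

/**
 * Minimal sketch of a data-based quality gate: read collected metrics
 * (written by earlier pipeline steps) and fail the build when a threshold
 * is breached. The metrics file name and keys are illustrative assumptions.
 */
public class QualityGate {
    public static void main(String[] args) throws Exception {
        Properties metrics = new Properties();
        metrics.load(Files.newBufferedReader(Paths.get("build-metrics.properties")));

        double p90Millis = Double.parseDouble(metrics.getProperty("response.p90.ms"));
        double errorRate = Double.parseDouble(metrics.getProperty("error.rate"));

        boolean pass = p90Millis <= 800 && errorRate <= 0.01; // pre-agreed thresholds
        System.out.printf("p90=%.0fms errors=%.2f%% -> %s%n",
                p90Millis, errorRate * 100, pass ? "PASS" : "FAIL");
        if (!pass) System.exit(1); // non-zero exit marks the CI stage failed
    }
}
```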

There are many test-driven frameworks that enable software development engineers and test engineers (SDEs/SDETs) to write automated test cases to verify the functional behavior of code, but what about the performance of the software? It is important to have automated performance tests as well to get early feedback.
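
One lightweight way to get that feedback is to let a performance assertion live alongside the functional tests. In the JUnit sketch below, CheckoutService and its 200 ms budget are hypothetical stand-ins for the real code under test.

```java
import static org.junit.Assert.assertTrue;

import java.util.List;
import org.junit.Test;

/**
 * Minimal sketch: a performance assertion living next to the functional
 * tests so SDEs/SDETs get early feedback. CheckoutService and its 200 ms
 * budget are hypothetical stand-ins for the real code under test.
 */
public class CheckoutPerfSmokeTest {

    @Test
    public void checkoutStaysUnderBudget() {
        long start = System.nanoTime();
        new CheckoutService().priceCart(List.of("sku-1", "sku-2")); // code path under test
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        assertTrue("checkout exceeded 200 ms budget: " + elapsedMs + " ms",
                elapsedMs < 200);
    }

    // Hypothetical service so the sketch compiles stand-alone.
    static class CheckoutService {
        double priceCart(List<String> skus) {
            return skus.size() * 9.99;
        }
    }
}
```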

 

The CI Performance Engineering Framework (CI-PEF) is built to provide performance validation capability to developers as part of the build phase itself.

CI-PEF enables an SDE/SDET to run a performance test plan as part of the CI pipeline at the click of a button and get performance feedback without spending time on test scripts, load infrastructure, data, and test execution.

A performance engineering framework can be built using various open source and/or licensed tools. The key features of the framework should be:

1) Ability to schedule and execute performance tests on demand or based on certain conditions, e.g., time-based triggers or dependencies on the success of functional tests

Any CI build server can be used (Jenkins, Bamboo, Puppet, Go, Ansible, etc.) to trigger the load test plan (JMeter, SOASTA, HP LoadRunner, NeoLoad, or any load test tool that supports command-line or API-based execution), as in the sketch below.
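
As a minimal sketch of such a trigger, the snippet below launches a JMeter plan from a CI step using JMeter's standard non-GUI flags (-n, -t, -l). The plan name, results file, and target-host property are assumptions for illustration.

```java
/**
 * Minimal sketch: triggering a JMeter test plan from a CI step using
 * JMeter's standard non-GUI flags (-n, -t, -l). The plan name, results
 * file, and target-host property are illustrative assumptions.
 */
public class LoadTestTrigger {
    public static void main(String[] args) throws Exception {
        Process jmeter = new ProcessBuilder(
                "jmeter", "-n",                   // non-GUI mode
                "-t", "checkout-plan.jmx",        // test plan kept under version control
                "-l", "results.jtl",              // raw results for later analysis
                "-Jtarget.host=test.example.com") // environment passed as a property
                .inheritIO()
                .start();
        System.exit(jmeter.waitFor()); // propagate the exit code to the CI server
    }
}
```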

2) Ability to execute the same test scripts in different environments

Maven/Ant can be used to manipulate the target environment in the test scripts at run time.

3) Ability to change data based on the targeted environment

Maven/Ant can be used to manipulate the data set provided by a test data management (TDM) solution based on the target environment, as sketched below.
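
A minimal sketch of that per-environment resolution; the directory layout and the "target.env" system property are illustrative assumptions.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Minimal sketch: resolving the test data set for the target environment,
 * as a Maven/Ant build step might before a run. The directory layout and
 * the "target.env" system property are illustrative assumptions.
 */
public class TestDataResolver {
    public static void main(String[] args) {
        String env = System.getProperty("target.env", "dit"); // dit | sit | perf
        Path dataSet = Paths.get("tdm", env, "users.csv");    // per-environment TDM extract
        System.out.println("Using data set: " + dataSet);
    }
}
```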

4) Ability to trend and compare performance KPIs (response time, errors), mark a performance run as PASS/FAIL against pre-defined performance thresholds, and send an alert/email, for example:
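
As a minimal sketch of such a gate, the snippet below computes the 90th-percentile response time from a JMeter results file and exits non-zero when it exceeds a threshold. It assumes JMeter's default CSV .jtl layout (header row, elapsed time in the second column), and the 500 ms threshold is an assumption.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

/**
 * Minimal sketch of threshold-based PASS/FAIL on JMeter results.
 * Assumes a CSV .jtl file with a header row and the elapsed time in the
 * second column (JMeter's default CSV layout); the 500 ms threshold is
 * an assumption to illustrate the gate.
 */
public class KpiGate {
    public static void main(String[] args) throws Exception {
        List<Long> elapsed = Files.readAllLines(Paths.get("results.jtl")).stream()
                .skip(1)                                         // header row
                .map(line -> Long.parseLong(line.split(",")[1])) // elapsed ms
                .sorted()
                .collect(Collectors.toList());
        long p90 = elapsed.get((int) (elapsed.size() * 0.9));    // 90th percentile
        System.out.println("p90 response time: " + p90 + " ms");
        System.exit(p90 <= 500 ? 0 : 1); // FAIL the build when over threshold
    }
}
```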

5) Ability to pull and store performance reports from other tools like Dynatrace, AppDynamics, and New Relic to provide deep-dive analysis in one place

6) Ability to version control scripts and test configurations

Figure: Traditional vs. automated performance testing

Traditional way of performance testing and key challenges in an agile SDLC

  1. When teams move toward an agile model (fast deployments, short sprints) to achieve business requirements, the performance test cycle becomes the elephant in the room
  2. Long waits to validate performance or get feedback on a fix (dev env -> test env -> perf env -> developer)
  3. No continuous integration or version control support for load testing
  4. Manual build status comparison and result analysis
  5. Logging into individual APM or monitoring tools to check status
  6. Developers just wait for the test report from the performance engineers
  7. Teams end up turning off features due to performance issues because there is no time to fix them in the ongoing release (increased time to market)


Performance testing using CI-PEF and key benefits

  1. Performance testing now takes a shift-left approach into the development phase, making the process agile and flexible enough to catch some performance issues as early as possible.
  2. Code can be performance tested as part of the CI build cycle in any lower environment using CI-PEF (for existing scripts), and developers get feedback compared against the previous build.
  3. If there is a small code change and no time to wait for performance engineers, the code can be deployed to production if the CI-PEF report is good.
  4. Continuous integration with Jenkins and test scripts version controlled with Git embody agile practices in a TDD environment.
  5. Jenkins provides comprehensive reports on continuous builds and presents performance trends in graphical format, so one can get instant feedback.
  6. Integrated automatic email notification to the team with customized content.
  7. Available load agents are allocated (from the pool) for the test.

Please note that an automated performance engineering framework does not completely replace the need for full-scale performance testing. The framework enables the team to get performance feedback on new features or bug fixes without a full-scale performance test, earlier (further left) in the SDLC (e.g., in a dev sandbox, DIT, or SIT). Full-scale performance tests are still required for capacity planning, infrastructure changes, or upgrades.


The framework provides a quick view of build performance health.


Response time and error comparison by build, e.g., the overall response time increases in build #5.


Jenkins response time report: transaction-level details, e.g., the checkout transaction caused the overall spike in response time.


JMeter transaction report

If the framework is integrated with an APM tool, code profiling and design/architecture validation can be done using the APM tool's reports. For example, the report below from an APM tool (Dynatrace) shows that the high response time is due to an UPDATE statement. Because of the high response time from the JDBC layer, the DB connection pool was 100% utilized and there were connection pool exceptions as well.


Dynatrace report
A win-win situation for both developers and performance engineers:

With the help of the performance engineering framework, all of these issues can be found and fixed in a dev sandbox or system environment, reducing development effort and the overall mean time to release (MTTR) a feature to production. At the same time, the performance engineering team can spend its time on framework enhancements or other high-priority tasks.