
Testing of Legacy Modernization using the Infosys Big Data framework - A case study

Posted on behalf of Surya Prakash G., Delivery Manager, Infosys Validation Solutions


Many organizations want to reduce the volume of data processed on legacy systems to minimize maintenance and support costs. Modernizing to an open source big data solution (e.g., Hadoop) is one option. Such modernization enables faster time to market and leverages big data technology to handle not just volume, but also velocity and variety.

Infosys was engaged for one such migration testing effort, where the Infosys big data service offering helped the client achieve faster time to market while also reducing costs.


The client, a leading UK-based bank with a global presence in retail, investment, and commercial banking, wanted to develop an Anti-Money Laundering (AML) solution for its correspondent banks and third-party agents, and to integrate it with the existing NORKOM solution that covers the bank's direct clients. The bank was looking for a flexible architecture that could ingest the huge volume of data coming from around 60 million customers in 80 countries. The testing objective for this complex migration was to validate the huge volume of structured data arriving in different formats from customers and third-party agents into Hadoop, and to ensure the integration of Oracle Mantas with Hadoop with enriched data quality.

Challenges for testing were manifold:
• End-to-end testing to ensure that the implementation of the modern AML architecture integrates well with the enterprise data hub, Hadoop
• Setting up a stable testing environment
• Testing massive, complex, long-term descriptive data from different sources while ensuring data quality
• Consolidating and validating data that arrives from multiple sources in different formats

To address these challenges of data volume and variety, Infosys devised an approach to validate 100% of the data by testing at each stage, covering all permutations with automated tools as well as manual checks. Since structured and unstructured data arrived from different sources and resided in Hive tables, a Hive output formatter (HOF) was used to produce pipe-delimited files. Data validation was then performed using automated utilities developed for the different stages.
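
To make the pattern concrete, here is a minimal Python sketch (not the actual Infosys utility) of the two steps just described: asking Hive to materialize a table as pipe-delimited files, then reconciling that export against a source extract. All table names, paths, and helper functions are hypothetical.

```python
# Sketch, assuming the Hive CLI is on PATH and paths/tables are placeholders.
import subprocess
import hashlib


def export_pipe_delimited(table: str, out_dir: str) -> None:
    """Materialize a Hive table as pipe-delimited text files (HOF-style output)."""
    hql = (
        f"INSERT OVERWRITE LOCAL DIRECTORY '{out_dir}' "
        "ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' "
        f"SELECT * FROM {table}"
    )
    subprocess.run(["hive", "-e", hql], check=True)


def file_fingerprint(path: str, delimiter: str) -> tuple[int, str]:
    """Return a row count plus an order-independent digest of normalized rows."""
    count, digest = 0, 0
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            # Normalize each row to a pipe-joined form so source and target
            # files with different delimiters hash identically.
            fields = [f.strip() for f in line.rstrip("\n").split(delimiter)]
            row_hash = hashlib.md5("|".join(fields).encode()).hexdigest()
            digest ^= int(row_hash, 16)  # XOR makes the digest row-order independent
            count += 1
    return count, format(digest, "032x")


# Usage: compare a comma-delimited source extract with the Hive export.
export_pipe_delimited("aml.customer_txns", "/tmp/hive_out")
src = file_fingerprint("/data/source/customer_txns.csv", ",")
tgt = file_fingerprint("/tmp/hive_out/000000_0", "|")
assert src == tgt, f"reconciliation failed: source={src} target={tgt}"
```

Hashing every normalized row rather than sampling is one simple way to get the 100% validation coverage described above, at the cost of a full pass over the data at each stage.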

Key benefits included:
• 10-15% reduction in cost of quality (COQ) through early detection of data issues at source
• 10% reduction in test execution effort for every execution cycle through automated execution and error monitoring of workflows, triggered as a sequence of UNIX scripts (see the sketch after this list)
• 100% test coverage through an automated approach for validating all scenarios
• 100% AML compliance through reconciliation of activities across direct clients (NORKOM) and correspondent banks (Oracle Mantas)
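
The automated execution and error-monitoring benefit above implies a driver that runs each workflow stage in sequence and stops on the first failure. The following Python sketch shows one plausible shape for such a driver; the stage names, script paths, and error convention are all hypothetical, not the client's actual setup.

```python
# Sketch: trigger UNIX stage scripts in order and fail fast on errors.
import subprocess
import sys

# Hypothetical workflow stages, each wrapped by an existing UNIX script.
STAGES = [
    ("ingest",    "/opt/etl/run_ingest.sh"),
    ("transform", "/opt/etl/run_transform.sh"),
    ("load",      "/opt/etl/run_load_mantas.sh"),
]


def run_stage(name: str, script: str) -> bool:
    """Run one stage script; flag a non-zero exit or ERROR in its output."""
    result = subprocess.run(["bash", script], capture_output=True, text=True)
    log = result.stdout + result.stderr
    if result.returncode != 0 or "ERROR" in log:
        print(f"[{name}] failed (exit {result.returncode}); last output:\n{log[-500:]}")
        return False
    print(f"[{name}] completed cleanly")
    return True


for name, script in STAGES:
    if not run_stage(name, script):
        sys.exit(1)  # stop the sequence so the error is caught at the failing stage
```

Running the sequence this way means a data issue surfaces at the stage that introduced it, which is what drives the early-detection and execution-effort savings listed above.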
 
I invite you to visit the Infosys booth #59 at STARWEST 2016 and meet our experts, who can help you in your big data journey. More information on our participation is here.
