Infosys’ blog on industry solutions, trends, business process transformation and global implementation in Oracle.


Summary

This blog describes the features of legacy mainframe systems and of the emerging era of Big Data, and demonstrates the process of offloading mainframe data to Hadoop.

 

Advantages of Mainframe:

Legacy mainframe systems used by many organizations are secure, scalable, and reliable machines capable of tackling huge workloads.

Mainframes handle even mission-critical applications, such as processing banking transactions, where security and reliability are equally important.

 

Drawbacks of Mainframe:

High hardware and software maintenance costs.

High processing costs.

 

Many organizations today begin by migrating part of their workload, then gradually extend the migration across all aspects of their business applications to newer, reliable platforms.

This process helps organizations reduce costs and meet the evolving needs of the business.

 

 

Advantages of Big Data technology over Mainframe:

Cost-effective.

Scalable.

Fault-tolerant.

 

The cost of maintaining and processing on the mainframe can be reduced further by adding a Hadoop layer alongside it, or by offloading the batch processing to Hadoop entirely.

 

The similarities between mainframe and Hadoop are as follows:

Distributed frameworks

Handle massive volumes of data

Batch workload processing

Strong sorting capabilities

 

Business Benefits of Adopting Big Data Technologies alongside or instead of the Mainframe

 

Batch processing can easily be performed using Hadoop ecosystem tools such as Pig, Hive, or MapReduce.
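As an illustrative sketch (the record layout and field names here are hypothetical, not from the post), the map and reduce phases of such a batch job can be expressed in Python and run under Hadoop Streaming:

```python
from itertools import groupby

def map_line(line):
    """Mapper: parse a hypothetical 'account,amount' CSV record into a key/value pair."""
    account, amount = line.strip().split(",")[:2]
    return account, float(amount)

def reduce_pairs(sorted_pairs):
    """Reducer: sum amounts per account. Hadoop delivers pairs sorted by key,
    so consecutive records with the same key can be grouped directly."""
    return {key: sum(value for _, value in group)
            for key, group in groupby(sorted_pairs, key=lambda kv: kv[0])}
```

Under Hadoop Streaming, the mapper and reducer run as separate processes reading stdin and writing tab-separated key/value pairs to stdout; Hadoop performs the sort between the two phases. The same aggregation could equally be written as a few lines of Pig Latin or HiveQL.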

 

Jobs from mainframe systems can be taken and processed on Hadoop, with the output viewed back at the mainframe end, reducing million instructions per second (MIPS) costs.

(MIPS is a way to measure the cost of computing: the more MIPS delivered for the money, the better the value.)
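The cost argument can be made concrete with a back-of-the-envelope estimate. All figures below are hypothetical, chosen purely for illustration; real MIPS pricing and Hadoop cluster costs vary widely by vendor and contract:

```python
# Hypothetical figures for illustration only; actual costs vary by contract.
COST_PER_MIPS_MONTH = 3000.0      # assumed mainframe charge, $/MIPS/month
HADOOP_NODE_COST_MONTH = 500.0    # assumed commodity node cost, $/node/month

def monthly_saving(mips_offloaded, hadoop_nodes):
    """Estimated net monthly saving from moving batch MIPS to a Hadoop cluster."""
    return (mips_offloaded * COST_PER_MIPS_MONTH
            - hadoop_nodes * HADOOP_NODE_COST_MONTH)
```

With these assumed figures, offloading 100 MIPS of batch work onto a 20-node commodity cluster would save on the order of $290,000 per month, which is why batch workloads are usually the first offloading candidates.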

 

Organizations look at return on investment at every turn during upgrades or migrations. Migrating from mainframe to Hadoop meets this condition through minimal infrastructure costs, lower batch processing costs, and flexible application upgrades.

 


Process of Offloading Data to Hadoop

 

The offloading approach is recommended in the following simple steps:

1. Create active archives: copies of selected mainframe datasets in the Hadoop Distributed File System (HDFS).

2. Migrate larger volumes of data from sources such as semi-structured data feeds and relational databases.

3. Finally, move the expensive mainframe batch workloads to the more cost-effective Hadoop platform.
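The first step above usually involves converting EBCDIC-encoded mainframe records into text before landing them in HDFS. A minimal sketch, assuming fixed-width records in EBCDIC code page 037 (the field widths here are hypothetical):

```python
def ebcdic_record_to_fields(record: bytes, field_widths):
    """Decode a fixed-width EBCDIC (code page 037) mainframe record
    into a list of stripped text fields."""
    text = record.decode("cp037")  # Python ships the cp037 EBCDIC codec
    fields, pos = [], 0
    for width in field_widths:
        fields.append(text[pos:pos + width].strip())
        pos += width
    return fields
```

Once converted, the resulting files can be copied into HDFS with `hadoop fs -put`. Real mainframe datasets often also contain packed-decimal (COMP-3) fields described by COBOL copybooks, which need dedicated unpacking logic beyond this sketch.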

 

 

 
