This blog is for Mainframe Modernization professionals to discuss and share perspectives, points of view, and best practices around key trending topics. Discover the ART and science of mainframe modernization.


September 27, 2016

Migrating workloads from Mainframe to Cloud - a perspective

For most of us, the word 'cloud' brings to mind advanced computing capabilities. Not surprisingly, people commonly think that mainframes and cloud represent two extremes of infrastructure deployment - mainframes are legacy, cloud is next-generation.

Historically, enterprises used mainframes to host applications on integrated infrastructure with compatible services. A glance at IBM's mainframe proves the point: the IBM mainframe provisions capacity and charges the customer based on usage. Sound familiar? Quite possibly, the mainframe was the first version of the pay-as-you-go model that cloud infrastructure providers popularly use today!

Today, there is a lot of buzz around cloud migration. Many enterprises want to move their workloads and applications off legacy platforms onto the more nimble cloud. But if the two aren't that different, you may wonder: what is the purpose of a mainframe-to-cloud migration? How exactly does cloud reduce cost and deliver sophisticated capabilities?

While almost any application can, in principle, run on cloud, my experience and analysis suggest that enterprises should carefully consider some important aspects before embarking on a cloud migration journey.

Considerations for Mainframe to Cloud Migration

Mainframes provide extremely high computing speed to process high volumes of data reliably and quickly, and they support workloads where downtime is unacceptable. The z/OS workload manager (WLM) keeps applications highly available and provisions data and resources instantly for quick scalability. Workloads can run during specific time periods or when an event is triggered, e.g., on the last day of the month, at 2 PM every Saturday, or every week post-IPL. Mainframe applications are also highly secure, thanks to mature and comprehensive mainframe security solutions.
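To make those calendar triggers concrete, here is a minimal, hypothetical sketch (plain Python, no particular scheduler assumed) of how the two time-based triggers above can be expressed as evaluable rules; a real migration would map them onto the target scheduler's own syntax, such as cron expressions.

```python
import calendar
from datetime import datetime, timedelta

def is_last_day_of_month(dt):
    """True when dt falls on the final calendar day of its month."""
    return dt.day == calendar.monthrange(dt.year, dt.month)[1]

def is_saturday_2pm(dt):
    """True at 14:00 on a Saturday (weekday() == 5)."""
    return dt.weekday() == 5 and dt.hour == 14

# Walk hour by hour through one month and report which triggers would
# fire - the same calendar rules a scheduler would evaluate.
start = datetime(2016, 9, 1)
for hour in range(24 * 31):
    tick = start + timedelta(hours=hour)
    if is_last_day_of_month(tick) and tick.hour == 0:
        print("month-end batch window opens:", tick)
    if is_saturday_2pm(tick):
        print("weekly Saturday job fires:", tick)
```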

Now, let us take a look at what makes the cloud different.

Cloud optimizes workload performance and memory by letting you select the compute instance type that can reliably meet your non-functional requirements. Workloads can be provisioned to run only for a specific duration - and you pay only for the time used, extending the pay-as-you-go model. To prevent application downtime, cloud load balancers and auto-scaling let you provision multiple instances that distribute and scale workloads in real time. Due consideration must be given to data and resource dependencies.
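As an illustration, here is a minimal sketch of provisioning such an auto-scaled pool, assuming an AWS account and the boto3 SDK; the AMI ID, availability zones, and sizing numbers are placeholders, not recommendations.

```python
import boto3  # AWS SDK for Python; assumes credentials are configured

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch configuration: which image and instance type each worker uses.
# 'ami-12345678' and the instance type are hypothetical placeholders.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="batch-workers",
    ImageId="ami-12345678",
    InstanceType="m4.large",
)

# The group keeps 2-10 instances alive and spreads them across zones,
# standing in for the availability a mainframe sysplex provides.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="batch-worker-group",
    LaunchConfigurationName="batch-workers",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# A simple scaling policy: add two instances when triggered
# (e.g., by a CloudWatch alarm on CPU utilization).
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-worker-group",
    PolicyName="scale-out-on-load",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
)
```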

Thus, for cloud to be a true driver of reduced cost, enterprises must identify the right workloads and provision the right capacity of compute instances to meet demands that the mainframe previously met.
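A back-of-the-envelope calculation shows why workload selection matters; the hourly rate and windows below are purely hypothetical, for illustration only.

```python
# Hypothetical numbers purely for illustration - substitute real quotes.
HOURLY_RATE = 0.50          # USD per instance-hour (hypothetical)
INSTANCES = 4

always_on_hours = 24 * 30   # provisioned 24x7 for a month
batch_window_hours = 4 * 30 # a 4-hour nightly batch window

always_on_cost = INSTANCES * always_on_hours * HOURLY_RATE
pay_per_use_cost = INSTANCES * batch_window_hours * HOURLY_RATE

print(f"Always-on:   ${always_on_cost:,.2f}/month")
print(f"Pay-per-use: ${pay_per_use_cost:,.2f}/month")
print(f"Savings:     {1 - pay_per_use_cost / always_on_cost:.0%}")
```

The arithmetic only favors cloud when the workload genuinely idles between runs; an always-busy workload sees no such saving.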

On cloud, security operates as a shared-responsibility model: enterprises can isolate workloads in a virtual private cloud (VPC) or use customizable security services that handle user permissions, restrict access to migrated applications, and meet other enterprise security needs. Additionally, most cloud providers offer connectivity from the cloud back to on-premises data centers, and various integration patterns can be used to integrate cloud applications and data with on-premises mainframe systems.
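To ground this, here is a minimal sketch of isolating a migrated application in a VPC and restricting who can reach it, again assuming AWS and boto3; the CIDR ranges are hypothetical placeholders.

```python
import boto3  # assumes AWS credentials are configured

ec2 = boto3.client("ec2", region_name="us-east-1")

# An isolated network for the migrated workload; the CIDR is a placeholder.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# A security group that only admits HTTPS traffic from a (hypothetical)
# corporate on-premises range, mimicking the kind of access restriction
# a mainframe security product would enforce.
sg = ec2.create_security_group(
    GroupName="migrated-app-sg",
    Description="Restrict access to the migrated application",
    VpcId=vpc_id,
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.1.0.0/16"}],  # hypothetical DC range
    }],
)
```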

Simply put, the due considerations and design patterns that one would apply when migrating workloads from the mainframe to on-premises distributed systems apply equally to a cloud migration.

This is not to say that cloud does not have its distinct advantages. Native cloud capabilities offer significant benefits in terms of reduced cost, platform integration and 'as-a-service' delivery. Enterprises can even choose to bring their own components and services as long as the components are compatible with the cloud compute instance.

It is worthwhile to evaluate your business case for migrating mainframe workloads to the cloud - particularly if you want to reduce cost and realize modern capabilities.

September 15, 2016

Key Success Factors for Mainframe Renewal [Part 2/6]

My earlier blog talked about approaches for non-disruptive renewal of mainframe systems. Now, let's look at the key parameters for a successful transformation.

Whenever we discuss mainframe renewal with senior business and IT leaders who own large mainframe footprints, a few specific concerns come up repeatedly. It's important to address these concerns and risks to lay a good foundation and business case for mainframe renewal. What are these concerns?

Knowledge (What, Why, How): Mainframe applications have been developed and enhanced over decades, and during this period several generations of programmers have updated the code. In most organizations there is minimal documentation, and even where it exists it is dated and point-in-time. Most of the knowledge resides with a few SMEs, largely in their heads, and because of that very expertise they are too busy with day-to-day operations to give quality time to capturing it. It's important to understand the inventory, data flows, program/job flows, critical business paths, and dependencies on other jobs/programs in order to develop a least-disruptive ring-fencing, renewal, and migration approach.

This is where the discovery and reverse-engineering tools available in the market have an important role to play. However, very few tools make the knowledge available in a machine-readable format so that it can be used to perform 'what-if' analysis, correlate code with issues/tickets/logs, determine critical paths, map code to business processes and capabilities, and drive modernization decisions (retain, retire, optimize, invest, transform, etc.). The Infosys modernization platform can accelerate this knowledge discovery process; for a large US credit card issuer, Infosys curated knowledge from millions of lines of code in a very short period of time.
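To illustrate what machine-readable knowledge enables, here is a minimal sketch that models a hypothetical batch job flow as a directed graph and derives the critical path and a what-if impact set; the job names and runtimes are invented for illustration.

```python
import networkx as nx  # pip install networkx

# Hypothetical nightly batch flow: an edge means "must finish before";
# weights are illustrative runtimes in minutes.
flow = nx.DiGraph()
flow.add_weighted_edges_from([
    ("EXTRACT", "CLEANSE", 45),
    ("CLEANSE", "POSTING", 90),
    ("POSTING", "GL_UPDATE", 30),
    ("POSTING", "STATEMENTS", 60),
    ("REFDATA", "POSTING", 10),
])

# Critical business path: the longest weighted chain through the DAG.
print("Critical path:", nx.dag_longest_path(flow))

# What-if analysis: every job downstream of a change to CLEANSE.
print("Impacted by CLEANSE:", sorted(nx.descendants(flow, "CLEANSE")))
```

Once the inventory lives in a structure like this, critical-path questions and impact analysis become queries rather than interviews with busy SMEs.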

De-Construction: Most mainframe application portfolios are like a ball of spaghetti - programs tightly linked to each other and to databases and data files. Depending on the size of the portfolio (from 1 million to 100 million lines of code), the complexity can increase significantly. For such large and complex portfolios a big-bang approach will not work, so an evolutionary approach is required. The big question in the minds of senior leaders is where to start, and how?

The mainframe portfolio should be deconstructed into a set of firelanes, and each firelane should be ring-fenced so that it can evolve independently with minimal impact on other programs/applications. To deconstruct into firelanes, the portfolio should be analyzed through multiple lenses - business (core, differentiated, and commoditized capabilities), rate of change (which differs across the tiers of applications from front office to back office), and technical complexity. A 'de-construction framework' based on these patterns makes the process data-driven and more pragmatic, as the sketch below illustrates.
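Here is a minimal, hypothetical sketch of such a framework: each application is scored on the three lenses named above and assigned a firelane. The applications, scores, and thresholds are invented for illustration, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    business_value: int   # 1 (commoditized) .. 5 (core/differentiated)
    rate_of_change: int   # 1 (static) .. 5 (changes every release)
    complexity: int       # 1 (simple) .. 5 (deeply entangled)

def firelane(app: App) -> str:
    """Assign a firelane; thresholds are illustrative only."""
    if app.business_value >= 4 and app.rate_of_change >= 4:
        return "transform first"        # high value, high churn
    if app.business_value <= 2:
        return "retire/replace"         # commoditized capability
    if app.complexity >= 4:
        return "ring-fence and optimize"
    return "retain for now"

portfolio = [
    App("card-authorization", 5, 4, 5),
    App("statement-printing", 2, 1, 2),
    App("fraud-scoring", 5, 5, 3),
]
for app in portfolio:
    print(f"{app.name}: {firelane(app)}")
```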

Next-Generation Architecture: While it's important to understand the current systems using the knowledge discovery approach highlighted earlier, when envisioning future-state business capabilities and architecture one should look through the windshield (the future-state solution), not the rearview mirror (the current systems). This ensures we architect the solution with the best technology available, for future business capabilities and business models.

The next-generation architecture has to be open, infrastructure-independent, hybrid (to let the old and new worlds co-exist during transition), reactive, DSL-based (to separate the 'what' from the 'how'), and able to leverage platforms (SaaS, PaaS, IaaS).

People + Software: Renewal of a large mainframe footprint is a herculean task, and given that much of the workforce supporting these systems is ageing or consumed by daily operations, doing all the renewal activities manually would take too long and be very cumbersome. Automation using tools and accelerators can significantly speed up knowledge discovery, optimization, renewal, migration, and testing. Automation coupled with modernization SMEs can shrink the timelines of these renewal projects by up to 30%.

Tooling is required in four key areas - reverse engineering, analysis of the current portfolio, forward engineering, and enabling functions like DevOps, testing, and operations. No single tool vendor comprehensively supports all these areas in an integrated and efficient manner, so you will have to combine multiple best-in-class tools or accelerators to meet your needs.

Business Case: This is the single most important success factor for mainframe renewal. Because most mainframe systems have evolved over a long period and are stable, there has to be a very strong business case to accelerate, renew, or transform out of the mainframe. While cost is an important driver, it's always worth looking at other drivers - improved time to market for business initiatives, new business capabilities, new customer experiences, and new business models - to build a compelling case.

To make the business case compelling, an outcome-based pricing model is a good option; it also forces everyone to look hard at the desirability, feasibility, and viability of the program. I would like to know: what concerns are giving you sleepless nights?
