This blog is for Mainframe Modernization professionals to discuss and share perspectives, points of view, and best practices around key trending topics. Discover the ART and science of mainframe modernization.

March 1, 2017

Legacy Modernization - Where do I begin?

Many enterprises across industry verticals made huge investments in legacy systems in the past; as their business grows, these systems now lead to operational inefficiencies. They inhibit organizations from embracing the next-generation technology that enables business innovation. Many firms have viewed these investments as additional expenses, but modernization is the key differentiator to capture market share and stay ahead of the competition.

How to do more with less?
As a partner, it is important for vendors to understand modern technology trends and evaluate how they can help transform the enterprise and prepare it for the future. The 5-step strategy below provides a simplified approach to modernizing applications and systems within the enterprise:

5-Step Strategy for IT Modernization:
  1. Identify the key business goals of the enterprise
  2. Identify the barriers & challenges across IT systems and its impact to the business
  3. List out the key Modernization themes based on the gap between business and IT
  4. Lay out the Strategic Solutions to modernize the platform
  5. Choose the best suited solution and define the transition road map to future state
The 5-step strategy works. A case in point
As a strategic partner, we were part of a modernization initiative for one of the leading insurance brokers. The 5-Step strategy greatly helped us arrive at the modernization solutions that transformed their business. 

Let me explain how this 5-step strategy was applied to simplify the client's landscape modernization and achieve business innovation.

1. Identify the key business goals of the enterprise
A combination of interviews & questionnaires with all the key stakeholders of the enterprise helped us understand their vision and arrive at their business goals as listed below.
  • Reduce Total Cost of Ownership
  • Better Return on Investment
  • Enhance Operational Efficiency
  • Faster Time to Market
  • Increase Global Footprint
  • Gain Agility
  • Enrich User experience
  • Scale for Future
2. Identify the barriers & challenges across IT systems and its impact to the business  
It is important to comprehend the challenges & the gaps in the existing landscape that will lead us to find the right opportunities for investment.
  • Understand the business and IT constraints
  • Perform portfolio assessment of different applications within the landscape
  • Document the key challenges across different applications 
  • Identify the impact to the business
We carried out an assessment of the insurance broking company's IT landscape, which helped identify the key challenges across different portfolios of the enterprise and the impact of those concerns on the business, as depicted in the diagram below:

[Figure: IT landscape assessment]

3. List out the key Modernization themes based on the gap between business and IT  
Bridging the gap between business and IT is essential to the success of any transformation or modernization initiative in an enterprise. A holistic view of the organization and an understanding of its business expectations and IT strategy helped us derive the key modernization themes that deliver the desired business outcomes.

4. Lay out the Strategic Solutions to modernize the platform  
We arrived at different strategic solution options based on the identified modernization themes. It is to be noted that the strategic solutions described below may only suit the challenges associated with this case study. The strategic solutions will have to be tailored for the specific needs of the enterprise.

[Figure: Strategic solutions]

5. Choose the best suited solution and define the transition roadmap to future state
The strategic solutions arrived at in step 4 have to be thought through across different aspects such as cost, ROI (return on investment), and business benefits. These solutions were compared against the cost and business benefits they offer, and a quick comparison of the options against these aspects helped the client make the right choice.

[Figure: Modernization options comparison]


In this case study, the client chose the 'Rationalize' option - prioritizing the lines of business that would undergo modernization. The client was also keen on reusing existing assets as much as possible. Accordingly, the road map was defined, wherein the modernization was planned in different phases.

The best-suited solution should be chosen for the enterprise such that it improves corporate agility and competitiveness. The future-state roadmap is then defined, and the transition architectures are detailed to transform the baseline architecture into the target architecture in phases.

Conclusion:
Having an appropriate IT modernization strategy is imperative to the success of enterprise modernization. CIOs have to make conscious investment decisions to transform their IT systems in the most efficient and cost-effective manner. It is important to ensure that the IT modernization strategy is aligned with the overall business strategy and vision of the enterprise. The 5-step modernization strategy detailed in this article will help CIOs, business and IT directors, and enterprise architects optimize their business and gain competitive advantage.

Authored by:
Nandha Venkateswaran
Senior Technology Architect, Infosys

January 18, 2017

Blockchain and IoT: The future of API economy [3/3]

[Figure: Blockchain and IoT]
In the previous two blogs in this series, I covered the genesis of the API economy and how an organization can roll out APIs faster. While writing, I was struck by the thought: what else can one do with APIs, and how will they power the future? Over time, APIs and the API economy will become pervasive as most organizations start working with their suppliers and vendors as partners, expanding their services to encompass the digital life of the consumer. There is already talk of the next revolution in financial services, where banks turn into utilities and expose capabilities that fintech companies can use to offer enriched services.

However, I feel that two technologies already unfolding will shape the world and how business is done. These technologies will shape the API economy and the mainframe world significantly. In this third post, I will elaborate on how the blockchain and IoT revolutions will enrich the API economy.

Blockchain: Simply put, it is an electronic ledger that is shared between peers, where every change is transmitted to every peer's copy of the blockchain. Hence, everyone can see what changes are happening simultaneously. Also, every record is hashed so that no peer can delete a record once it is on the blockchain. There are many examples of how business can run faster and be made more cost-effective because the information is available at each node of the blockchain instantaneously. Moreover, since each transaction is validated and authenticated, pharma and food manufacturing companies will be able to track the ingredients of the products being supplied to them, or that they in turn supply to the next business in the chain.
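To make the hashing point concrete, here is a minimal sketch of a hash-chained ledger in Python. It is purely illustrative and not any particular blockchain product; the record fields are hypothetical. It shows why a record cannot be silently altered or deleted once its hash is embedded in the next block.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form, including the previous block's hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    # Each new block carries the hash of its predecessor, chaining them together.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify(chain):
    # Recompute every hash; tampering with an earlier record breaks the chain.
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"shipment": "S-001", "ingredient": "paracetamol", "qty": 500})
append_block(chain, {"shipment": "S-002", "ingredient": "lactose", "qty": 120})
print(verify(chain))           # True
chain[0]["record"]["qty"] = 5  # a peer tries to alter history
print(verify(chain))           # False
```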

I feel that there will be a huge impact on APIs and mainframe systems. The existing processes will now need to work with blockchain networks to either pass or receive information, and this information exchange will happen through APIs on the mainframe platforms. Many mainframe-intensive organizations will start running their blockchain networks on the mainframes because of faster hashing, quick encryption, and very high availability / scalability on mainframes. Whether blockchain runs on mainframes or outside, you need a robust set of APIs to expose and consume functionality and provide a link to the existing business processes. New business streams can be uncovered, and these can be exposed to further enhance the API economy.

IoT revolution: Of course, we have heard of multiple devices that talk to each other online to do business and to capture and share information such as data on the human body, objects, apparel, automobiles, consumer goods, etc. We have also heard of connected / autonomous (self-driving) cars, refrigerators that can indicate when food needs to be replenished, and TVs that can sense the audience and roll out appropriate content. In factories, machinery parts can sense and send out signals for predictive maintenance. Oil and gas companies can remotely monitor and replenish the stock in their storage based on fuel levels and provide predictive maintenance for critical equipment. Logistics companies can track and monitor the condition of goods in transit. Insurance companies can roll out usage-based insurance. We also know that these processes typically run on mainframes.
All of this would require billions of API calls, and since 60 percent of critical data resides on mainframes, a significant percentage of those calls will end up on mainframes. Each device will transmit a small payload of data, but many times over. Hence, we need to build a robust set of APIs that can take the load when billions of calls land on the mainframe APIs to send and receive data. The resiliency of mainframes and the security they provide will be tested when IoT devices start sending data. Large analytics workloads will then start running on the mainframes to process the data, and the insights gained can be exposed through APIs.
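As a rough illustration of the "small payload, many calls" pattern, here is a hedged sketch of an ingestion endpoint using Flask. The route, field names, and downstream forwarding are assumptions for illustration; in practice the requests would be queued and routed to the mainframe-facing APIs.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/readings", methods=["POST"])
def ingest_reading():
    # Each IoT device posts a tiny JSON payload, many times per day.
    reading = request.get_json(force=True) or {}
    if not {"device_id", "metric", "value"} <= reading.keys():
        return jsonify({"error": "missing fields"}), 400
    # In a real deployment this would be queued and forwarded to the
    # mainframe-facing API (for example via MQ or a z/OS Connect endpoint).
    print(f"accepted reading from {reading['device_id']}")
    return jsonify({"status": "accepted"}), 202

if __name__ == "__main__":
    app.run(port=8080)
```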

I believe blockchain and IoT will further enrich the API economy and drive more new types of workload onto the mainframes, thereby opening up new revenue streams. As an organization, we have to be ready with our business processes, technology, and organization structure to respond to this new and varied type of business.

January 10, 2017

Tools: The accelerators to your modernization journey [Part 6/6]

As mentioned in an earlier blog in the series, our 'people+software' approach is a critical success factor for mainframe renewal. Automation, by leveraging tools and accelerators, is of utmost importance for speed and faster value realization. Our tooling approach focuses on three important aspects:
  • It should enable insights in addition to providing data
  • It should be non-intrusive
  • It should significantly reduce SME dependency
Our knowledge curation platform, a key component of knowledge discovery, embodies these three aspects of the approach. Some of our clients prefer knowledge discovery to be done within the mainframe, as they are not comfortable moving mainframe components outside the mainframe boundary. The platform has therefore been designed to work both within and outside the mainframe boundary. It is built on three components:

Extraction Engine: This component consists of multiple point tools that extract metadata from various mainframe components, covering operations, workload, interface, and code details.

Knowledge Repository: This component stores the knowledge extracted from the mainframe landscape. 

Infosys Modernization Portal: This is our window to the knowledge repository; it consumes information from the repository and presents it in a meaningful way.

[Figure: Knowledge curation platform components]

In simple terms, the extraction engine is like the sensory organs, the Knowledge Repository is the brain, and the Infosys Modernization Portal is the eyes and ears of the solution. We take an "onion-peel" approach to knowledge discovery: we start from the portfolio and gradually unearth knowledge from the subsequent layers. However, that does not stop us from starting at any layer, and it gives us the flexibility to use part of or the entire solution based on the client's context. Recently, this solution helped us create an accurate inventory while building a rehosting solution for the leading food mail-order catalog company in the United States. For an electric utility company, we used it to list all tape and disk datasets as well as all active online volumes. We discovered this knowledge without any SME dependency, which helped us estimate accurately and develop a credible business case.
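To give a flavour of the kind of point tool the extraction engine is composed of, here is a hypothetical sketch that scans exported JCL members for DSN= references and builds a dataset inventory. It is not the actual Infosys tooling; the directory name and regular expression are assumptions.

```python
import re
from pathlib import Path

DSN_PATTERN = re.compile(r"\bDSN=([A-Z0-9$#@.\-]+)", re.IGNORECASE)

def dataset_inventory(jcl_dir):
    """Scan exported JCL members and record which members reference each dataset name."""
    inventory = {}
    for member in Path(jcl_dir).glob("*.jcl"):
        for line in member.read_text(errors="ignore").splitlines():
            if line.startswith("//*"):   # skip JCL comment lines
                continue
            for dsn in DSN_PATTERN.findall(line):
                inventory.setdefault(dsn.upper(), set()).add(member.name)
    return inventory

if __name__ == "__main__":
    for dsn, members in sorted(dataset_inventory("./jcl_export").items()):
        print(f"{dsn:<44} referenced in {len(members)} member(s)")
```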

Apart from knowledge curation, we have standardized tool sets across all three levers of our mainframe modernization solution:
[Figure: Mainframe modernization tool set]

Among the most notable is our optimization tool kit, which not only identifies high CPU-consuming resources but also identifies the anti-patterns that lead to such inefficiencies and high CPU consumption. The SMF log analytics tool in the optimization kit extracts and presents meaningful insights from mainframe SMF logs. We have leveraged this tool kit for a US-based managed health care company and significantly reduced their mainframe operations cost. The Infosys DevOps platform accelerates deployment of changes, which helps optimize mainframe operations. Our robust home-grown migration and testing tools validate migrations and significantly reduce migration risk. We also have minimally invasive rehosting solutions that minimize migration costs and overall TCO for our clients.
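As an illustration of the kind of insight SMF log analytics surfaces, here is a hedged sketch that aggregates CPU time per job. It assumes the SMF type 30 records have already been flattened to a CSV extract with hypothetical column names; it is not the actual tool.

```python
import csv
from collections import defaultdict

def top_cpu_jobs(extract_csv, top_n=10):
    """Aggregate CPU seconds per job name from a flattened SMF type 30 extract."""
    cpu_by_job = defaultdict(float)
    with open(extract_csv, newline="") as f:
        for row in csv.DictReader(f):
            # Column names are assumptions about the extract layout.
            cpu_by_job[row["JOBNAME"]] += float(row["CPU_SECONDS"])
    return sorted(cpu_by_job.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

if __name__ == "__main__":
    for job, cpu in top_cpu_jobs("smf30_extract.csv"):
        print(f"{job:<8} {cpu:>12.1f} CPU seconds")
```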

Hope you enjoyed this series of blogs explaining how non-disruptive mainframe modernization is possible and the key factors for success. We touched upon the three strategies one can choose from to modernize the mainframe - accelerate, renew and transform - concluding with this last post in the series about the tools that can help accelerate the journey. We would be happy to hear your thoughts on these topics and your experiences from your own modernization initiatives. Stay tuned for more.

Blog co-authored by:
Rajib Deb and Sanjay Mohan, Infosys

January 4, 2017

A rapid API roll out with a robust funding approach [2/3]

[Figure: Robust funding]

According to Gartner, 50 percent of B2B collaborations will take place through web application programming interfaces (APIs), especially business APIs. This phenomenal growth is based on the fact that APIs aim to increase speed and reach, both essential for businesses to grow faster than their competitors. In my last blog, I highlighted the essential areas of consideration for organizations starting their API journey. In this blog, I elaborate on governance structure, funding models, and technology aspects.


SOA and API governance: Both drive re-usability 
What is the difference between service-oriented architecture (SOA) governance and API governance? This is the question asked most frequently. Both drive reusability, but the required level of control differs. Since SOA is all about business reusability, the governance structure put in place has strict key performance indicators (KPIs) on reuse. APIs, on the other hand, are governed by speed and are typically built to extend business reach. API governance still promotes reuse, but when a decision between speed and reuse has to be made, speed typically wins the debate. Also, in many cases, internal developers start using APIs before they are exposed to external developers. By that very nature of adoption, it is possible to start with lightweight governance and then move towards a more comprehensive governance model. But in all cases, speed is the prime driver behind API-fication, and the governance model will remain lightweight compared to an SOA governance model.

When you start building an API, you need to create a core team. Typically, the core team that decides and develops APIs includes:
  1. Product managers - APIs need product managers who periodically collect requirements from businesses and provide to the technical team that builds APIs. They also monitor how many calls have been made to the APIs and who are the consumers of the APIs. They also decide when to enhance or decommission the APIs
  2. Technical team / engineering team - Defines the granularity, technical design, and products, and works closely with the product managers to enhance functionality 
  3. Operations team - Primarily ensures that the APIs are up and running, alerts teams to any degradation in performance, and looks after the overall well-being of the APIs

The core team is set up by a steering committee with the mandate to fund the APIs and provide overall direction. External teams that support the core team include a legal team that can draft the legal clauses for exposing the APIs to external developers. For monetization, the integration team needs to connect the APIs with the back-end core systems, and the owners of those applications will expose the APIs. On the customer side, where there are many large legacy applications, the legacy application owner needs to work very closely with the core team to define the APIs that will be exposed. In fact, for banks and insurers with a large legacy footprint, these teams can often be part of the core team.

Staying in competition with funding models 
For funding, we need to understand two points:
  1. How APIs will be monetized to make the external consumers pay 
  2. Who will pay for building the initial platform and APIs
For the monetization of APIs, there are multiple business models: APIs offered for free, premium APIs, developer pays, developer gets paid, and, most importantly, models where there is an indirect benefit to the organization. In my experience, telcos and media companies are way ahead in monetizing APIs; news feed and video syndication APIs are common in telcos and media companies.

However, other segments, especially the financial sector, are exposing APIs for the indirect benefits they bring in terms of better brand recall, competitive advantage, or simply staying in the competition. Most banks have now exposed their usual services on mobile platforms to remain competitive. Among financial organizations, many payment providers offer their payment APIs for consumption under one of the commercial models.

Who will pay for building the APIs if they are not immediately monetized? In one bank, I was speaking to the core team, in which the legacy application owner was also present. The team wanted to know whether they should wait for the business to identify the APIs for them to build. Of course, this would delay the roll-out and frustrate the business. On the other hand, if they build a platform and keep some common APIs ready, who pays the bill? Though there is no easy answer to this question, here is the model I have seen work well:
  1. Collaborate with the CIO (Chief Information Officer) / Chief Digital Officer (CDO) to make a business case for a seed funding to build the initial platform with the core services
  2. Create an internal team that evangelizes APIs with the lines of business (LOBs) most likely to build APIs. In fact, build a few APIs for these LOBs for free and charge them once they start using the APIs
  3. If the internal evangelizing is proceeding well, LOBs generally come on board and they can share the cost of running the platform
  4. If you can monetize the APIs, you can start paying for subsequent enhancements
The advantage of this approach is that it is far easier to roll out APIs as the basic building blocks are ready in hand.

Technology considerations

Typically, APIs are identified through a top-down approach. However, in most cases, the programs on z/OS do not offer the capability that an API design demands, and the code needs to be refactored. In many cases, the programs include both business and screen logic, with no separation between them. In such cases, the code has to be refactored to separate out the business logic so that it can be encapsulated. In my experience, this work requires more effort than exposing the APIs on the z/OS platform itself.

For exposing APIs on z/OS, there are multiple technologies available. Traditionally, most organizations having large mainframe footprints also have CTG or CICS TS available with the z/OS systems. These technologies help in exposing APIs but these have the following disadvantages:
  1. They operate on the general purpose processor and are therefore more expensive when multiple calls land on them.
  2. These technologies do not implement the full set of representational state transfer (REST) methods; they support only the POST method (one of the request methods of the HTTP protocol used by the World Wide Web), which is not a good architectural principle. 

IBM has launched z/OS Connect, which operates on the z Integrated Information Processor (zIIP) and has the following benefits (a minimal invocation sketch follows the list):
  • Shields backend systems from requiring awareness of RESTful uniform resource identifiers (URIs) and JavaScript object notation (JSON) data formatting 
  • Provides API descriptions using the Swagger specification
  • Offers an integrated development environment (IDE) to map the common business-oriented language (COBOL) copybooks fields to the JSON request / response fields 
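As promised above, here is a minimal sketch of what invoking such an API could look like from the consumer side. The host, path, credentials, and field names are hypothetical; the point is that z/OS Connect maps the JSON payload to and from the underlying program's copybook fields.

```python
import requests

# Hypothetical z/OS Connect REST endpoint fronting a CICS COBOL program.
# Host, path, credentials, and field names are illustrative assumptions.
ZOSC_URL = "https://zosconnect.example.com:9443/accounts/v1/balance"

def get_balance(account_number):
    # JSON in, JSON out: the mapping to COBOL copybook fields is handled on z/OS.
    resp = requests.post(
        ZOSC_URL,
        json={"accountNumber": account_number},
        auth=("apiuser", "********"),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_balance("0012345678"))
```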

Together with a strong governance and a robust funding approach, the right technology will help in rapid API roll out.

December 15, 2016

How can organizations with large legacy footprint plan their API journey? [1/3]

The cost of building a business is decreasing year-on-year, given the new innovations in cloud and open source software. I see that the application programming interface (API) economy is also factoring in cost-effectiveness while enabling profitable businesses, by allowing organizations to adopt capabilities of other players and create new business and revenue streams much faster. According to Gartner, APIs enable organizations to be transformed into a platform. They also enable organizations and their partners to use capabilities mutually and create an ecosystem of value for customers. It is no wonder that APIs are also considered as the building blocks of the digital economy.

Some interesting examples of how APIs are reshaping organizations are as follows:
  1. Retailers are exposing their catalogs, shopping cart services, and other common services as APIs, which other organizations use to create new types of products and services, paying the retailer for every order placed through the API
  2. Banks are working with FinTech companies to provide common services for payment gateways and treasury functions, which can offer new products and services to the customers
  3. Credit card companies are consuming geolocation APIs, offered by third parties, to monitor fraud in credit card usage by comparing the location of the credit card transaction with the location of the user
  4. In the UK, the government established the Open Banking Working Group in August 2015, mandating banks to design a detailed framework for the development of an open API standard. This framework will authorize banks to open up and make available certain information, which can be used by other third-party organizations to in turn provide data and service to their customers
  5. In the auto insurance sector, the information gathered from usage-based insurance is captured through APIs and used by retailers and gas stations along the way to provide targeted marketing messages

The list goes on and the potential of APIs as building blocks of the digital ecosystem is huge. In fact, many organizations are reaping benefits from the API revolution by conducting hackathons that request thousands of developers to create innovative solutions on the core capabilities of the organization.

"APIs enable organizations to be transformed into a platform" Let's explore this further.

When large organizations want to expose their core capabilities, they often realize their core functions are actually running on legacy. It becomes critical for them to plan API-fication of legacy. Some of the aspects of the planning include:

Governance structure
  1. Who decides on creating different kinds of APIs?
  2. How are the service level agreements (SLAs) on the API monitored?
  3. Who decides when an API needs to be decommissioned?
  4. Do we need a shared service organization to build APIs or can they be built by the IT in the business?

Funding models
  1. How does the platform for exposing APIs get funded?
  2. Who will fund the API development and the common services?
  3. Will the APIs be monetized and which model is most suitable?

Technology architecture
  1. Since exposing legacy capabilities as APIs implies opening them up to millions of mobile users, how do we ensure that the legacy components can manage the increased load? (A simple caching-facade sketch follows this list.)
  2. Is there any additional layer of security to be created?
  3. How do we build a microservices architecture on legacy?
  4. What methods can be used to expose APIs on the legacy platform?
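On the first question, one common mitigation is a caching facade in front of the legacy-facing APIs, so that repeat reads never reach the back end. The sketch below is a minimal illustration with a hypothetical endpoint and TTL, not a recommendation of any specific product.

```python
import time
import requests

# Hypothetical legacy-facing endpoint; URL and TTL are illustrative assumptions.
LEGACY_URL = "https://legacy-gateway.example.com/customers/{id}/profile"
CACHE_TTL_SECONDS = 60

_cache = {}  # customer_id -> (expiry_timestamp, payload)

def get_customer_profile(customer_id):
    """Serve repeat reads from cache so every mobile call does not hit the legacy system."""
    now = time.time()
    hit = _cache.get(customer_id)
    if hit and hit[0] > now:
        return hit[1]
    resp = requests.get(LEGACY_URL.format(id=customer_id), timeout=5)
    resp.raise_for_status()
    payload = resp.json()
    _cache[customer_id] = (now + CACHE_TTL_SECONDS, payload)
    return payload
```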

The explosive growth of APIs, the demands of the digital economy, and the regulatory mandates that will require organizations to open up their capabilities have made it imperative for all organizations to start charting an API journey for their legacy applications as well. More in my next blog. Stay tuned.

December 5, 2016

Transform Legacy to retain only the core on mainframe [Part 5/6]

"Everything should be made as simple as possible, but no simpler" - Albert Einstein

There once existed a world where all software was free - not chargeable to customers - and only hardware contributed to sales and revenue. Too many customers writing custom software on their hardware caused interoperability issues, which led IBM to create the System/360. This created the era of the supercomputers called 'Mainframes'. When they came and conquered, mainframes were considered almost supernatural - modern, robust, multiprogramming, reliable, secure, and with high throughput.

Naturally, there was a lot of adoption, leading to a monopoly, and a 'King' was crowned. Most financial corporations acquired them and marketed the security and performance of mainframes to gain customer confidence. The systems lived up to the hype and delivered on the promise. With no alternatives to challenge them, the hold that mainframes had on transaction processing only became tighter and locked in a flock of customers.

Time went by, and slowly, just like antibiotic-resistant bacteria, the ordinary computers of the world grew stronger and became available cheaply. The gap keeps narrowing with scale-out, redundancy on cheaper hardware, and increasing processing and memory power. The 'closed source' strategy of the mainframe has ensured vendor lock-in and, at the same time, created complexity around sizing and capacity planning.

Today, not all is well in the King's world. The technical debt built up by monolithic spaghetti code over time has become a nightmare for CTOs to manage or even think about rewriting. The way back-office operations are now run in the financial world also demands more real-time, analytical abilities from the back-end system. The business demands customer-centric, flexible, and agile systems to compete with newcomers like FinTech companies.

With new business goals and the technical abilities of commodity hardware, a 'perfect storm' is now brewing to try and dismantle the crown. Modern distributed systems are definitely showing equivalent, if not better, alternatives. For example, Apache Spark can be an alternative for batch workload processing outside the mainframe, providing comparable and sometimes better scalability and throughput.

Even though alternatives are emerging, mainframes are still the better and default choice for complex workloads like transaction processing of structured data, query processing on an RDBMS back end, etc., especially when volumes are high and robust throughput is a must. This is true for most of the core functions that act as the bread and butter of financial organizations' business.

After weighing all this in, the Infosys approach is 'True-Core and Core-Surround'; to paraphrase Albert Einstein's quote at the top, "only the core functionality need remain on the Mainframe, nothing more". The core-surround can work in simpler environments where the mainframe is not called for, whereas the true-core is reserved for the platform that can handle its complexity.
Consider a banking organization where this approach can be extended to categorize business functions as below:

True-Core: Core functions create and manage accounts, record transactions, maintain ledger information, and provide account-related information to all systems within the bank across product lines. Calculation and posting of interest is also a basic accounting function performed on an end-of-day basis within the core system. This functionality can be retained on the mainframe.

Core-Surround: Mortgages should be managed through a product based platform to segregate the processes and channels managing the mortgage through its life cycle. This functionality can be moved out of core banking system.

This simplification of the core leads to functional consolidation, faster implementation, process optimization, complexity reduction, microservices-based architecture, etc.

Extending the strategy, our approach is to take a top-down view of related business functions and categorize them as Systems of Innovation, Systems of Differentiation and Systems of Record (as per Gartner's Pace-Layered Architecture). Normally, the Systems of Innovation and Systems of Differentiation are good candidates for core-surround, and the Systems of Record for true-core. Once that demarcation and enabling of the true-core and core-surround has happened, we can look at gradual, even phase-wise, transformation of the core. This approach follows the 'Strangler Application' pattern (Martin Fowler), which enables legacy transformation and leaves behind only what should essentially remain on the mainframe and nothing more.
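A minimal sketch of the strangler idea is shown below: a thin facade routes each business function either to the true-core on the mainframe or to a core-surround service that has already been carved out. The domains, URLs, and routing table are hypothetical; in practice this routing usually lives in an API gateway or ESB.

```python
# Sketch of the 'Strangler Application' idea: a facade routes each business
# function to the platform that currently owns it. Names and URLs are assumptions.
import requests

ROUTES = {
    "accounts":  "https://mainframe-api.example.com",    # true-core stays on the mainframe
    "mortgages": "https://mortgage-service.example.com",  # core-surround already migrated
}

def call_business_function(domain, path, payload):
    """Route the call based on which platform currently owns the business function."""
    base_url = ROUTES[domain]
    resp = requests.post(f"{base_url}/{path}", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

# As more functions are 'strangled' off the monolith, only the ROUTES table changes;
# callers keep using the same facade.
```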


[Figure: Infosys "Transform" Model for Modernization]

This pattern has resonated with various clients we have talked to, and we are helping several of them implement it.

For example, for a major European bank, we are helping transform their monolithic core accounting system - a very tightly coupled system with multiple functionalities incorporated into one humongous mainframe application. We are doing this by analyzing the various technical and business functionalities to come up with a multi-year road map for a phase-wise move to a core and core-surround model that brings better efficiencies.

For a cards client, we evaluated their loyalty platform, as they were tied down by platform limitations that led to missed business opportunities. We followed a 'meet in the middle' approach, analyzing the current business processes and the systems in place. A 1-3-5 year road map was established and the journey started. The end result is expected to be an agile system with around 30% reduction in cost and 30-40% improvement in time to market.

Is your monolithic system slowing you down? Learn more about the modernization options available to you. 

Co-authored by:
Nareshkumar Manoharan and Tiju Francis

November 16, 2016

Renew mainframes for digital readiness and new user experience [Part 4/6]

We briefly touched upon three modernization strategies from Infosys in an earlier blog. If you remember, it was stated that, in order to address the challenges that come with modernization and provide a non-disruptive way to modernize mainframe systems, we have a new ART (Accelerate, Renew and Transform) approach. Having detailed the 'Accelerate' strategy in the previous blog, today's post talks about the RENEW strategy to modernize mainframes.

The RENEW strategy focuses on using existing mainframe assets and exposing them as services or APIs - making them digital-ready for an enhanced user experience, reducing development lifecycle time through agile processes and DevOps automation, and improving agility by modularizing and rationalizing code modules and externalizing business rules from the code.

The diagram below depicts the different Renew solution levers that can be applied for API enablement, business rule extraction and externalization, implementation of CI/CD on the mainframe portfolio, and componentization and consolidation of business functions:


[Figure: Renew solution levers]


It has been observed that 43% of IT executives believe that a failure to integrate with legacy systems is the biggest barrier to implementing future initiatives, and that this barrier results in high IT costs*. There are various ways to integrate back with mainframe applications and data; I have described the most popular methods below:

1. Emulator-based integration: The mainframe black screens (CICS, IMS, etc.) can be integrated via screen-scraping technology tools like IBM HATS, Open Legacy, and LegaSuite. This provides quick and easy GUI integration and delivers a new user experience with a relatively fast time to market.

2. Process integration: This enables monolithic legacy code to be exposed as services and APIs to be built easily. The mainframe services / APIs can be exposed to distributed, digital systems. The different integration patterns utilize various compatible integrators that expose mainframe components as services / APIs, such as CICS Transaction Gateway, CICS web services (REST / SOAP), CICS Transaction Server integration, MQ-based integration, IMS Connect, and z/OS Connect. The messaging to these APIs and services can be orchestrated through any ESB or API gateway. This option provides integration with multi-platform applications and enables loosely coupled services.

3. Data integration: This enables access to data from system-of-record repositories on z/OS. The interfacing application depends on the mainframe only to fetch data; the business logic and integration layer are built outside the mainframe. This pattern is useful where the data can be fetched in one pass and the aggregation and business rules are applied in the integration layer outside the mainframe.
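As a hedged illustration of the data integration pattern, the sketch below fetches data from DB2 for z/OS using the ibm_db driver and applies the business logic outside the mainframe. The connection details, table, and columns are assumptions for illustration only.

```python
import ibm_db

# Hypothetical connection details and table; the ibm_db driver can reach DB2 for z/OS
# (typically via DB2 Connect). Aggregation and business rules live in this layer.
CONN_STR = (
    "DATABASE=PRODDB;HOSTNAME=zos.example.com;PORT=446;"
    "PROTOCOL=TCPIP;UID=appuser;PWD=********;"
)

def fetch_active_policies(region):
    conn = ibm_db.connect(CONN_STR, "", "")
    try:
        stmt = ibm_db.prepare(
            conn, "SELECT POLICY_ID, PREMIUM FROM POLICY WHERE REGION = ? AND STATUS = 'A'"
        )
        ibm_db.execute(stmt, (region,))
        rows = []
        row = ibm_db.fetch_assoc(stmt)
        while row:
            # Data is fetched in one pass; rules are applied outside the mainframe.
            rows.append({"policy_id": row["POLICY_ID"], "premium": float(row["PREMIUM"])})
            row = ibm_db.fetch_assoc(stmt)
        return rows
    finally:
        ibm_db.close(conn)
```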

To give an example: for one of our health insurance clients, as part of a large digital initiative, we had to enable a "Find a Doctor" feature for various user-provided search parameters. The book of record for doctor information existed on the mainframe in DB2. On analysis, we realized that a few mainframe COBOL modules / APIs were already available that could be reused to realize this functionality. We performed a process integration, using one of the prevalent integration patterns in the client ecosystem (CICS MQ-based integration), to expose the mainframe components as services. This design option reduced the overall development time and helped achieve the desired client goal in a timely fashion.

Most of our clients also complain about how changing a small piece of code in their legacy mainframe applications takes a long time. When we analyzed applications for a few of these clients, we discovered that most of the business rules were hardcoded and had accumulated over a long period. The knowledge of those business rules was embedded in the code, and SME knowledge had diminished over time.

As part of our renew strategy, we recommend that our clients maintain the knowledge of the business rules deployed and, where possible, externalize them so they can be maintained by business users. To reduce the time needed to introduce any change, we also recommend adopting a CI/CD methodology for developing and deploying mainframe applications. A CI/CD toolset improves productivity and quality with quick analysis of application structure and relationships, frees up MIPS for production use, eliminates delays by providing low-cost environments, and accelerates delivery with improved governance across the SDLC. We assist our clients in integrating and configuring the CI/CD tools, from IBM or Micro Focus, into their ecosystems, and in formulating a rollout strategy.


Have you considered renewing your mainframe landscape to reuse the investments already made on mainframes?


*References: https://www.mulesoft.com/webinars/esb/unlocking-legacy-systems-enterprise-mobility

November 15, 2016

Unleash the Power of Mainframe using API Economy

The trend across today's industries is to move toward open web APIs. Enterprises are transforming their business models to embrace API Economy in a connected digital world with a large community of developers.

What has mainframe got to do with the API Economy?

Existing mainframe systems have been built up over the years and encapsulate enormously valuable business logic and data. These rich, mission-critical sources of business services and information are critical for digital transformation. As enterprises pursue business transformation, they will have to make their mainframe data and applications available as APIs within their overall digital solutions.

In my recent engagements with enterprise clients, it is clear that both the healthcare and financial services sectors are becoming major drivers of digital innovation. I am frequently asked to advise on how to revitalize and incorporate enterprises' mainframe assets in digital environments. Here are some enterprise clients' business cases related to digital transformation:

A financial services company, which handles the majority of credit and debit processing, including prepaid cards, is looking to re-engineer its mainframe-based, complex, high-speed transaction and data processing. Currently, I am leading an architectural proposal to modularize the mainframe portfolio and expose core capabilities as microservices and REST APIs to expedite future development efforts and meet business demands on a timely basis. Our proposal also addresses their skills issues with emerging technologies for decades to come. The proposal lends itself to a commercial project with defined outcomes.

This year, Infosys was chosen as the preferred partner for an American health insurance company's digital transformation initiatives. While our engagement teams were working diligently to deliver the transformation, this client also actively approached Infosys for mainframe integration. Their digital initiatives will not work without incorporating their family of systems - mainframe-based, mission-critical business capabilities. The primary challenge they face is to expose their mainframe assets as REST services, with transaction tracking and problem determination capabilities across multiple components and platforms.

The other case is a large Brazilian banking and financial services company. This bank's core banking system resides on the mainframe and processes more than 2.5 billion transactions every day. With increased transaction volumes from digital banking, a recent merger, and upcoming public/hybrid clouds, the bank faces challenges in continuing to integrate with its mainframe systems at the same QoS and with reduced MIPS. The mainframe revitalization proposals have been well accepted by the bank's technical community for execution.

How do we expose mainframe assets as microservices and APIs?

Creating a successful API, however, is not easy. Creating APIs for mainframe data and applications is even harder, because it requires deep integration capabilities, especially in exchanging data between legacy and newly written applications in the API digital world.

Solving such difficult and challenging data-exchange problems requires technology like CAM (Common Application Metamodel). CAM is a standards-based technology that facilitates data transformation from one language and platform domain to another, and also simplifies the creation of new applications. CAM is highly reusable, independent of any particular tool or middleware, and has been adopted and implemented as the underpinning of heterogeneous systems for enterprise application integration. Language-based importers, which parse programming language source files and import the relevant language metamodels, have already been implemented in a variety of AD tools and can be reused in any new tools to be created.

Comprehensive development tooling can therefore readily be built by leveraging CAM and the associated language importers to expose mainframe applications, e.g. CICS and IMS transactional applications, as microservices. Such tooling reads and parses legacy applications' interface declarations, e.g. COBOL copybooks, PL/I include files, and other source files, to generate REST / JSON services.

Further consideration also needs to be given to data transformation techniques. Development tooling can be designed for interpretive marshalling by generating metadata to be used at execution time; the same tooling can instead be designed to generate stub routines for compiled marshalling. Generally speaking, compiled marshalling is faster than interpretive marshalling, though there are trade-offs between the two techniques. A hybrid approach between them should be considered to resolve those trade-offs.
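To illustrate interpretive marshalling in miniature, the sketch below uses field metadata (which, in real tooling, would be generated by parsing a COBOL copybook) to convert a fixed-width record into JSON at run time. The layout and the sample record are hypothetical.

```python
import json

# Field metadata drives the conversion at run time (interpretive marshalling).
# The layout below is a hypothetical example, not a real copybook.
CUSTOMER_LAYOUT = [
    # (json_field, offset, length, type)
    ("customerId", 0, 10, "char"),
    ("name",       10, 20, "char"),
    ("balance",    30, 9,  "display_decimal"),  # e.g. PIC 9(7)V99 held as display digits
]

def record_to_json(record, layout):
    """Slice a fixed-width record into fields and emit JSON, driven purely by metadata."""
    doc = {}
    for field, offset, length, ftype in layout:
        raw = record[offset:offset + length]
        if ftype == "char":
            doc[field] = raw.strip()
        elif ftype == "display_decimal":
            doc[field] = int(raw) / 100.0   # implied two decimal places
    return json.dumps(doc)

sample = "CUST000042JOHN SMITH          000123456"
print(record_to_json(sample, CUSTOMER_LAYOUT))
```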

Once REST / JSON services are created from legacy applications, these services can easily be mapped to standard, language-agnostic REST API interfaces according to the OpenAPI Specification. The generated, invocable APIs allow both humans and computers to discover and understand the capabilities of the mainframe services. New apps can also be developed easily with these APIs to interact with mainframe assets with minimal implementation logic.

With mainframe APIs, enterprises can now readily extend their digital reach for consumption in mobile applications, cloud applications and Internet of Things (IoT).

October 12, 2016

Accelerate performance, Operational efficiency and Reduce run cost of core systems [Part 3/6]

I had discussed the three modernization strategies from Infosys in an earlier blog. If you remember, I'd stated that in order to address the challenges that come with modernization and provide a non-disruptive way to renew mainframe systems, we have a new ART (Accelerate, Renew and Transform) approach.

The ACCELERATE strategy focuses on improving the performance of batch and online mainframe core systems by reducing MIPS (million instructions per second) consumption; improving the operational efficiency of these core systems using automation platforms; and, for those systems identified as having a very high run cost (vis-à-vis the business value delivered), migrating them to a scalable and cost-efficient cloud-based x86 platform.

The diagram below depicts the different 'Accelerate' solution levers that can be applied for MIPS optimization, automation, and migrating low-value, low-touch applications and processes off the mainframe.

[Figure: Levers that can be leveraged as part of the 'Accelerate' strategy]


We have observed that most MIPS and performance optimization initiatives on the mainframe focus only on what can be done within the constraints of the current application, platform and/or technologies; depending on the level of optimization already done, this yields minimal results. At times, it is important to take an 'outside-in' perspective and ask whether these applications or workloads are best suited to continue running on the mainframe, or whether they would run better on distributed platforms with minimal changes or through re-architecting. Obviously, this has to be supported by a solid business case and minimal risk to make the chosen course viable.

When we recently analyzed the workloads of some of our large mainframe clients, we identified a set of processes of low business value but high run cost. These jobs were of one of the following types: ETL jobs, data/file transmission jobs, reports, data archival, and ODS and data sync jobs. Most of these processes are batch processes and decoupled. Hence, they are best fits for migration to a cloud-based data platform that can perform these tasks more efficiently, enabling new capabilities that are not easily possible on the mainframe.
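To show the shape of such an offloaded job, here is a hedged PySpark sketch of a decoupled nightly ETL process: read an extract, apply set-based transformations, and write a curated output, with no call back to the mainframe while it runs. The file locations and columns are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# File names and columns are hypothetical; the point is the shape of the job:
# read an extract, transform it, and write a report/feed, fully decoupled
# from the mainframe while it runs.
spark = SparkSession.builder.appName("offloaded-nightly-etl").getOrCreate()

claims = spark.read.csv("s3a://landing/claims_extract.csv", header=True, inferSchema=True)

daily_summary = (
    claims.filter(F.col("status") == "APPROVED")
          .groupBy("region", "claim_date")
          .agg(F.sum("claim_amount").alias("total_paid"),
               F.count("*").alias("claim_count"))
)

daily_summary.write.mode("overwrite").parquet("s3a://curated/daily_claim_summary/")
spark.stop()
```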

Where are you seeing opportunities within your landscape to optimize?

September 27, 2016

Migrating workloads from Mainframe to Cloud - a perspective

For most of us, the word 'cloud' brings to mind advanced computing capabilities. Not surprisingly, people commonly think that mainframes and cloud represent two extremes on the scale of how infrastructure is deployed - while mainframes deal with legacy infrastructure, cloud is next-generation.

Historically, enterprises used mainframes to host any application on integrated infrastructure with compatible services. A glance at IBM's mainframe proves this: the IBM mainframe provisions capacity and charges the customer based on usage. Sounds familiar? Quite possibly, mainframes were the first version of the pay-as-you-go model popularly used today by cloud infrastructure providers!

Today, there is a lot of buzz around cloud migration. Many enterprises want to move their workloads and applications out of legacy onto the more nimble cloud. But, if they aren't that different, you may wonder: what is the purpose of a mainframe to cloud migration? How exactly does cloud reduce cost and deliver sophisticated capabilities?

