This blog is for Mainframe Modernization professionals to discuss and share perspectives, points of view, and best practices around key trending topics. Discover the ART and science of mainframe modernization.


January 18, 2017

Blockchain and IoT: The future of API economy [3/3]

In the previous two blogs of this series, I covered the genesis of the API economy and how an organization can roll out APIs faster. While writing, I was struck by the thought - what else can one do with APIs, and how will they power the future? Over time, APIs and the API economy will become pervasive as most organizations start working with their suppliers and vendors as partners, expanding their services to encompass the digital life of the consumer. There is talk of the next revolution in financial services - banks turning into utilities and exposing their capabilities for fintech companies to build enriched services upon.

However, I feel that two technologies already unfolding will shape the world and how business is done. These technologies will shape the API economy and the mainframe world significantly. In this third blog, I will elaborate on how the blockchain and IoT revolutions will enrich the API economy.

Blockchain: Simply put, it is an electronic ledger that is shared between peers, with every change transmitted to the copy of the blockchain record held by each peer. Hence, everyone can see what changes are happening, simultaneously. Every record is also hashed and chained to the previous one, so that no peer can alter or delete a record once it is on the blockchain. There are many examples of how business can run faster and be made more cost-effective because the information is available at each node of the blockchain instantaneously. Moreover, since each transaction is validated and authenticated, pharma and food manufacturing companies will be able to track the ingredients of the products being supplied to them, or that they in turn supply to the next business in the chain.
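The hash-chaining idea above is what makes the ledger tamper-evident. A minimal sketch in Python, using invented payloads and function names purely for illustration - real blockchain platforms add consensus, signatures, and peer-to-peer distribution on top of this:

```python
import hashlib
import json

def block_hash(index, prev_hash, payload):
    """Hash a block's contents together with the previous block's hash."""
    record = json.dumps({"index": index, "prev": prev_hash, "payload": payload},
                        sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

def append_block(chain, payload):
    prev = chain[-1]["hash"] if chain else "0" * 64
    index = len(chain)
    chain.append({"index": index, "prev": prev, "payload": payload,
                  "hash": block_hash(index, prev, payload)})

def verify(chain):
    """Any peer can re-run this check; tampering with any block breaks the links."""
    for i, blk in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if blk["prev"] != prev or blk["hash"] != block_hash(i, prev, blk["payload"]):
            return False
    return True

chain = []
append_block(chain, {"shipment": "batch-42", "ingredient": "paracetamol"})
append_block(chain, {"shipment": "batch-42", "handed_to": "distributor"})
assert verify(chain)

chain[0]["payload"]["ingredient"] = "something-else"   # a peer tries to rewrite history
assert not verify(chain)
```

Because each block's hash covers the previous block's hash, rewriting any record invalidates every block after it, which is exactly why no single peer can silently delete or alter history.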

I feel that there will be a huge impact on APIs and mainframe systems. Existing processes will now need to work with blockchain networks to either pass or receive information. This information exchange will happen through APIs on the mainframe platforms. Many mainframe-intensive organizations will start running their blockchain networks on mainframes because of faster hashing, quick encryption, and the very high availability / scalability of mainframes. Whether blockchain runs on mainframes or outside, you need a robust set of APIs to expose and consume functionality and provide a link to the existing business processes. New business streams can be uncovered, and these can be exposed to further enhance the API economy.

IoT revolution: Of course, we have heard of multiple devices that talk to each other online to do business and to capture and share information - data on the human body, objects, apparel, automobiles, consumer goods, and so on. We have also heard of connected / autonomous (self-driving) cars, refrigerators that can indicate when food needs to be replenished, and TVs that can sense the audience and roll out appropriate content. In factories, machinery parts can sense and send out signals for predictive maintenance. Oil and gas companies can remotely monitor and replenish the stock in their storage based on fuel levels, and provide predictive maintenance to critical equipment. Logistics companies can track and monitor the condition of goods in transit. Insurance companies can roll out usage-based insurance. We also know that these processes typically run on mainframes.
All of this would require hundreds of billions of API calls, and since 60 percent of critical data resides on the mainframe, a significant percentage of those calls will end up on mainframes. Each device will transmit a small payload of data, but very frequently. Hence, we need to build a robust set of APIs that can take the load when billions of calls land on the mainframe APIs for sending and receiving data. The resiliency of mainframes, and the security they provide, will be tested when IoT devices start sending data. Then large analytics workloads will start running on the mainframes to process the data, and the insights gained can be exposed through APIs.
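To make the "small payload, very frequently" point concrete, here is a hedged sketch of what one telemetry reading and its server-side validation might look like. The field names and schema are invented for illustration, not any real device or mainframe API:

```python
import json

# Hypothetical shape of one small telemetry reading an IoT device might POST
# to a mainframe-hosted API; the field names are illustrative, not a real schema.
def make_reading(device_id, metric, value):
    return json.dumps({"device": device_id, "metric": metric, "value": value})

def validate_reading(raw):
    """Server-side sanity check before the reading is handed to back-end processing."""
    msg = json.loads(raw)
    missing = {"device", "metric", "value"} - msg.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return msg

raw = make_reading("sensor-0042", "fuel_level_pct", 37.5)
assert validate_reading(raw)["metric"] == "fuel_level_pct"
assert len(raw.encode()) < 100   # each payload is tiny; the load comes from frequency
```

A single reading like this is well under 100 bytes; the engineering challenge is not the size of any one call but sustaining billions of them, which is where mainframe-grade throughput and availability come in.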

I believe blockchain and IoT will further enrich the API economy and drive new types of workload onto the mainframes, thereby opening up new revenue streams. As an organization, we have to be ready with our business processes, technology, and organization structure to respond to this new and varied type of business.

January 10, 2017

Tools: The accelerators to your modernization journey [Part 6/6]

As mentioned in an earlier blog in this series, our 'people+software' approach is a critical success factor for mainframe renewal. Automation, through tools and accelerators, is of utmost importance for speed and faster value realization. Our tooling approach focuses on 3 important aspects:
  • It should enable insights in addition to providing data
  • It should be non-intrusive
  • It should significantly reduce SME dependency
Our knowledge curation platform, a key component of knowledge discovery, has adopted these three key aspects of the approach. Some of our clients prefer knowledge discovery to be done within the mainframe, as they are not comfortable moving mainframe components outside the mainframe boundary. The platform has therefore been designed to work both within and outside the mainframe boundary. The platform is built on three components:

Extraction Engine: This component consists of multiple point tools that extract metadata from various mainframe components, comprising operations, workload, interface, and code details.

Knowledge Repository: This component stores the knowledge extracted from the mainframe landscape. 

Infosys Modernization Portal: This is our window to the knowledge repository which can consume information from the repository and can present it in a meaningful way.


In simple terms, the extraction engine is like the sensory organs, the Knowledge Repository is the brain, and the Infosys Modernization Portal is like the eyes and ears of the solution. We take an "onion-peel" approach towards knowledge discovery: we start from the portfolio and gradually unearth knowledge from the subsequent layers. However, that does not stop us from starting at any layer. It also gives us the flexibility to use a part of the solution, or the entire solution, based on the client's context. Recently, this solution helped us create an accurate inventory while building a rehosting solution for the No. 1 food mail-order catalog company in the United States. For an electric utility company, we used this solution to list all tape and disk datasets as well as all active online volumes. We were able to discover this knowledge without any SME dependency, which helped us accurately estimate and develop a credible business case.

Apart from knowledge curation, we have standardized tool sets across all the three solution levers of our mainframe modernization solution:

One of the most notable is our optimization tool kit, which can identify not only the high-CPU-consuming resources but also the anti-patterns that lead to such inefficiencies and high CPU consumption. The SMF log analytics tool from the optimization kit can extract and present meaningful insights from mainframe SMF logs. We have leveraged our optimization tool kit for a US-based managed health care company and significantly reduced its mainframe operations cost. The Infosys DevOps platform accelerates deployment of changes, which helps optimize mainframe operations. Our robust home-grown migration and testing tools validate the migration and significantly reduce its risk. We also have minimally invasive rehosting solutions that minimize migration costs and overall TCO for our clients.

Hope you enjoyed this series of blogs explaining how non-disruptive mainframe modernization is possible and the key factors of success. We touched upon the 3 strategies one can choose from to modernize the mainframe - accelerate, renew, and transform - concluding with this last post in the series about the tools that can help accelerate the journey. We would be happy to hear your thoughts around these topics and your experiences from your modernization initiatives. Stay tuned for more.

Blog co-authored by:
Rajib Deb and Sanjay Mohan, Infosys

January 4, 2017

A rapid API roll out with a robust funding approach [2/3]

According to Gartner, 50 percent of B2B collaborations will take place through Web application programming interfaces (APIs), especially business APIs. This phenomenal growth is based on the fact that APIs aim to increase speed and reach, which are essential for businesses to grow faster than their competitors. In my last blog, I highlighted the essential areas of consideration for organizations starting their API journey. In this blog, I elaborate on governance structure, funding models, and technology considerations.


SOA and API governance: Both drive re-usability 
What is the difference between service-oriented architecture (SOA) governance and API governance? This is the question asked most frequently. Both drive re-usability, but the required level of control differs. Since SOA is all about business reusability, the governance structure put in place has strict key performance indicators (KPIs) on reuse. APIs, on the other hand, are governed by speed and are typically built to extend business reach. Their governance still promotes reuse, but when a decision between speed and reuse must be made, speed typically wins the debate. Also, in many cases, internal developers start using APIs before they are exposed to external developers. By that very nature of adoption, it is possible to start with lightweight governance and then move towards a more comprehensive governance model. But in all cases, speed is the prime driver behind API-fication, and the governance model will remain lightweight compared to a SOA governance model.

When you start building an API, you need to create a core team. Typically, the core team that decides and develops APIs includes:
  1. Product managers - APIs need product managers who periodically collect requirements from businesses and provide them to the technical team that builds the APIs. They also monitor how many calls have been made to the APIs and who their consumers are, and decide when to enhance or decommission the APIs
  2. Technical team / engineering team - Defines the granularity, technical design, and products, and works closely with the product managers to enhance functionality
  3. Operations team - Primarily ensures that the APIs are up and running, alerts teams to any degradation in performance, and looks after the overall well-being of the APIs

The core team is set up by a steering committee with the mandate to fund the APIs and provide overall direction. External teams that support the core team will also include a legal team that can draw up legal clauses for exposing the APIs to external developers. For monetization, the integration team needs to incorporate the APIs with the back-end core systems, and the owners of those applications will expose the APIs. On the customer side, where there are many large legacy applications, the legacy application owner needs to work very closely with the core team to define the APIs that will be exposed. In fact, for banks and insurers with a large legacy footprint, these teams can often be part of the core team.

Staying in competition with funding models 
For funding, we need to understand two points:
  1. How APIs will be monetized to make the external consumers pay 
  2. Who will pay for building the initial platform and APIs
For the monetization of APIs, there are multiple business models. They include APIs that are offered for free, premium APIs, developer pays, developer gets paid, and, most importantly, models where there is an indirect benefit to the organization. In my experience, telcos / media companies are way ahead in terms of monetizing APIs. News feed and video syndication APIs are common in telcos and media companies.

However, other segments, especially the financial sector, are exposing APIs for the indirect benefits they bring in terms of better brand recall, competitive advantage, or simply staying competitive. Most banks have now exposed their usual services on mobile platforms to remain competitive. Among financial organizations, many payment providers offer their payment APIs for consumption under one of the commercial models.

Who will pay for building the APIs if they are not immediately monetized? At one bank, I was speaking to the core team, in which the legacy application owner was also present. The team wanted to know whether they should wait for the business to identify the APIs for them to build. Of course, this would delay the roll out and frustrate the business. On the other hand, if they build a platform and get ready with some of the common APIs, who will pay the bill? Though there is no easy answer to this question, here is the model that I have seen work well before:
  1. Collaborate with the CIO (Chief Information Officer) / Chief Digital Officer (CDO) to make a business case for a seed funding to build the initial platform with the core services
  2. Create an internal team that evangelizes APIs with the lines of business (LOBs) most likely to build APIs. In fact, build a few APIs for these LOBs for free, and charge them once they start using the APIs
  3. If the internal evangelizing is proceeding well, LOBs generally come on board and they can share the cost of running the platform
  4. If you can monetize the APIs, you can start paying for subsequent enhancements
The advantage of this approach is that it is far easier to roll out APIs as the basic building blocks are ready in hand.

Technology considerations

Typically, APIs are identified through a top-down approach. However, in most cases, the programs on z/OS do not offer the capability that an API design demands, and the code needs to be refactored. In many cases, the programs mix business logic and screen logic with no separation between them. In such cases, the code has to be refactored to separate out the business logic so that it can be encapsulated. In my experience, this work requires more effort than exposing the APIs on the z/OS platform itself.

For exposing APIs on z/OS, there are multiple technologies available. Traditionally, most organizations with large mainframe footprints also have CICS Transaction Gateway (CTG) or CICS TS available with their z/OS systems. These technologies help in exposing APIs, but they have the following disadvantages:
  1. They operate on the general processor and are therefore more expensive when multiple calls land on them.
  2. They do not implement the entire set of representational state transfer (REST) methods; they support only the POST method (one of several request methods in the HTTP protocol). Tunnelling every operation through POST is not a good architectural principle.
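To see why POST-only support is an architectural smell, compare how a fully RESTful resource API maps operations to HTTP methods with what a POST-only bridge forces you to do. A small sketch, with invented URL and function names:

```python
# A fully RESTful resource API maps each CRUD operation to a distinct HTTP
# method; a POST-only gateway has to tunnel every operation through POST,
# pushing the real intent into the body or the URL instead.
REST_METHODS = {
    "create": "POST",
    "read":   "GET",
    "update": "PUT",     # or PATCH for partial updates
    "delete": "DELETE",
}

def tunnelled_call(operation, resource):
    """What a POST-only bridge ends up doing for every operation."""
    return {"method": "POST", "url": f"/api/{resource}", "body": {"op": operation}}

assert REST_METHODS["read"] == "GET"
assert tunnelled_call("delete", "accounts/42")["method"] == "POST"
```

Tunnelling loses the semantics that HTTP intermediaries rely on - for example, caches can safely cache GETs but not POSTs - which is what the "not a good architectural principle" remark above is getting at.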

IBM has launched z/OS Connect, which operates on the z Integrated Information Processor (zIIP) and has the following benefits:
  • Shields back-end systems from requiring awareness of RESTful uniform resource identifiers (URIs) and JavaScript Object Notation (JSON) data formatting 
  • Provides API descriptions using the Swagger specification
  • Offers an integrated development environment (IDE) to map Common Business-Oriented Language (COBOL) copybook fields to the JSON request / response fields 
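To illustrate the copybook-to-JSON mapping in the last bullet, here is a deliberately simplified toy version of that translation in Python. The copybook layout, field names, and sample record below are all invented for the example; z/OS Connect's own tooling generates this mapping rather than requiring hand-written code:

```python
import json

# Toy illustration of copybook-to-JSON translation. The invented copybook:
#
#   01 CUSTOMER-REC.
#      05 CUST-ID    PIC 9(6).
#      05 CUST-NAME  PIC X(10).
#      05 BALANCE    PIC 9(5)V99.
LAYOUT = [
    ("custId",   0,  6, int),
    ("custName", 6, 16, str),
    ("balance", 16, 23, lambda s: int(s) / 100),  # V99: two implied decimals
]

def record_to_json(record):
    """Turn one fixed-width COBOL record into a JSON document."""
    doc = {name: conv(record[start:end].strip())
           for name, start, end, conv in LAYOUT}
    return json.dumps(doc)

assert record_to_json("000042JANE DOE  0012345") == \
    '{"custId": 42, "custName": "JANE DOE", "balance": 123.45}'
```

The essential idea is the same as the generated mapping: each PIC clause fixes a field's offset, width, and type (including implied decimal points), and the gateway converts between that fixed-width record and a JSON request / response on every call.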

Together with a strong governance and a robust funding approach, the right technology will help in rapid API roll out.
