This blog is for Mainframe Modernization professionals to discuss and share perspectives, points of view and best practices around key trending topics. Discover the ART and science of mainframe modernization.

October 6, 2017

Transform the IBM i-series/AS400 with DevOps: It's possible


i-series is an IBM midrange computer system used extensively by many large insurance firms, retail businesses and banks. Being a legacy platform, projects executed on it generally follow conventional waterfall development with slow releases.

Today's environment demands agility. Can you transform your i-series and be agile? Yes. Have we done it? Yes. Let me explain with an example.

Here was the problem!

A leading global bank was embarking on a transformation program and wanted to reduce system downtime during scheduled outages of its core application by routing requests to a 'stand-by' application that would continue to provide business-critical services.

The challenge was that the core platform was a 25-year-old monolithic application built on i-series, and the entire journey from concept to reality had to be achieved under very stringent timelines. Our team was up for it, and we addressed the problem by successfully implementing DevOps and adopting agile principles. This is the first time it has been attempted by any service provider.

How did we address the problem?

Infosys partnered with the client's tooling team and spearheaded the DevOps pipeline implementation. Deliveries happened in an MVP manner, with shippable components ready after every two sprints.

Tools adopted:

  • IBM Rational Developer for i (RDi) - Primary build tool based on the Eclipse platform. It comes with all the features supported by any other IDE in the market and gives a much-needed break from the 'green screen'.
  • IBM Rational Team Concert (RTC) - Played a dual role of
    - Application Lifecycle Manager: Used for creation of Product Backlog, Sprint Backlogs, Work allocation & collaboration
    - Source Control Manager: Source code was migrated to the RTC Jazz server, and check-out/check-in were performed using RTC. It also allowed concurrent development and provided auto-merge capability.
  • SmartBear Collaborator - Provided the entire review workflow for logging review comments and defects. The reviewer could give in-line comments and log code defects against each line of code, and the developer could act on them accordingly.
  • ARCAD Skipper & Deliver - The two main tools provided by ARCAD for build and deploy. They integrate seamlessly with RDi and RTC; builds and deployments can be requested at the click of a button, as opposed to the lengthy conventional release process (a minimal sketch of this one-click flow follows the diagram below).
  • ARCAD Observer - Cross-reference tool used to perform detailed impact analysis and understand program flows.
Devops Process and Tooling.png
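
To illustrate the 'build and deploy at the click of a button' idea, here is a minimal sketch in Python of a pipeline wrapper that triggers a remote build and deploy on the IBM i host. The host name, change-set identifier and remote commands are hypothetical placeholders standing in for the actual ARCAD Skipper/Deliver invocations configured by the tooling team; this illustrates the orchestration only, not the real interface.

# Minimal sketch of a one-click build-and-deploy wrapper for an IBM i pipeline.
# The remote commands below are hypothetical placeholders; in a real setup they
# would be the ARCAD Skipper/Deliver requests configured by the tooling team.
import subprocess

IBM_I_HOST = "ibmi-dev.example.com"      # hypothetical host name
BUILD_CMD = "CALL PGM(BUILDPGM)"         # placeholder for the build request
DEPLOY_CMD = "CALL PGM(DEPLOYPGM)"       # placeholder for the deploy request

def run_remote(command: str) -> None:
    """Run a command on the IBM i host over SSH and fail fast on error."""
    subprocess.run(["ssh", IBM_I_HOST, command], check=True)

def pipeline(change_set: str) -> None:
    """Build and deploy a change set delivered from RTC, step by step."""
    print(f"Building change set {change_set} ...")
    run_remote(BUILD_CMD)
    print("Build OK, deploying ...")
    run_remote(DEPLOY_CMD)
    print("Deployment complete.")

if __name__ == "__main__":
    pipeline("CS-1234")   # hypothetical change-set identifier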

How did the client realize value from the solution?

  • Increase in productivity from use of DevOps tools
  • Release cycle reduced to 2 sprints only
  • People & process moved from waterfall mode to DevOps mode enabling enterprise agility

And our teams in turn benefitted from:

  • Current staff trained in open-source CI/CD tools
  • Expertise built on ARCAD-specific tools
  • People & process moved from waterfall mode to DevOps mode

This journey, however, was not easy, and we faced multiple challenges. DevOps tools for i-series were unexplored and we didn't have any SMEs available. Additionally, the application had been built across geographies, which added complexity and made cleaning up the non-standard source code types a tedious and prolonged activity. The biggest challenge was to change the team's mindset to move out of the conventional way of doing things and step into the new world.

What made this even more difficult was that, for the stand-by application to perform its required task when the main application is down, it had to be installed on a separate iASP (Independent Auxiliary Storage Pool) from the main application.

ARCAD's consulting recommendation was to install one ARCAD instance per iASP, which would have come with a heavy licensing cost.

We did a PoC to move the ARCAD installation from the iASP into the system pool (SYSBAS), making a single instance accessible across all iASPs. This provided a saving of around 78,000 USD.

It was a challenging task (it took us almost 3 months to implement the entire pipeline) but surely not impossible - and it made us pioneers in implementing DevOps on i-series while adopting agile principles. I would love to hear your views and recommendations if you have done similar work at your clients.

Blog authored by
Vivek Narayanan, Sr. Technology Architect

August 7, 2017

Application Portfolio Analysis - The first step to Modernization

Organizations maintain a huge number of legacy systems that store a lot of hidden information and unidentified opportunities accumulated over time. Owing to stiff competition and changing customer needs, organizations have started adopting next-gen architectures such as mobility, cloud, DevOps, etc. But before organizations can embark on digital transformation, they must understand their existing IT landscape. Application Portfolio Analysis helps organizations do just that, either through a top-down approach (surveys, meetings with SMEs), a bottom-up approach (analyzing the inventory through tools), or a combination of the two.

Application portfolio analysis evaluates every application against specific parameters of business adequacy (revenue impact, total cost of ownership, end-user experience, compliance and regulations, etc.) and technical maturity (availability, stability and complexity). After analyzing the IT landscape, Infosys, as a trusted partner, provides its unique 4Q solution - Retire, Rehost, Renew or Reengineer - as the target disposition for every application.
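
As an illustration of how such an assessment can be mechanized, here is a minimal sketch that scores applications on the two axes and maps them to a disposition. The scoring scale, thresholds and the simplified Rehost/Reengineer handling are assumptions for illustration only; the actual assessment uses a much richer questionnaire.

# Illustrative sketch of the 4Q disposition mapping described above.
# The 1-5 scoring scale and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    business_adequacy: float   # e.g. revenue impact, TCO, end-user experience, compliance
    technical_maturity: float  # e.g. availability, stability, complexity

def disposition(app: Application, threshold: float = 3.0) -> str:
    """Map an application to a 4Q disposition from its two scores."""
    high_business = app.business_adequacy >= threshold
    high_technical = app.technical_maturity >= threshold
    if high_business and high_technical:
        return "Renew"
    if high_business and not high_technical:
        # The Rehost vs Reengineer call additionally weighs cost drivers and
        # how far the legacy platform constrains new requirements.
        return "Rehost / Reengineer"
    if not high_business and not high_technical:
        return "Retire"
    # Low business value on a technically sound platform: a judgment call,
    # typically maintain or consolidate rather than one of the 4Q headline moves.
    return "Maintain / Consolidate"

portfolio = [
    Application("PolicyAdmin", business_adequacy=4.5, technical_maturity=2.0),
    Application("OldReporting", business_adequacy=1.5, technical_maturity=1.0),
]
for app in portfolio:
    print(app.name, "->", disposition(app))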


Retire: If an application doesn't provide any business benefit and has poor technical capabilities, it can be considered a good candidate for elimination/retirement. Data that must be kept for compliance regulations is archived.
Benefits - Cost savings and removal of obsolete applications, reducing complexity in the IT environment.

Rehost: When an application has business value but the current platform results in huge costs, customers can migrate it to a lower-cost platform (e.g. cloud) without modifying the existing functionality.
Benefits - Cost savings and scalability for changing business needs.

Renew: Applications that have high business value and sound technical capabilities, and require no major transformation, can be enhanced by enabling web services (APIs), DevOps, etc.
Benefits - Enhanced customer experience and improved employee productivity.

Reengineer: Applications capable of generating greater business value, but restrained by legacy systems in their ability to adapt to growing needs, can be reengineered to next-gen architectures such as open source, cloud, DevOps, etc.
Benefits - Increased revenue, agility, faster time to market and reduced risk associated with an aging workforce.

In a nutshell, if an organization decides to plan for a major transformation - or any transformation - portfolio analysis helps it avoid the risk of failure and identify quick-win opportunities that can reap immediate benefits.

July 4, 2017

Migration of Mainframe Batch Workloads to ETL Platforms

"It's only after you've stepped outside your comfort zone that you begin to change, grow, and transform.
                                                                                                                                          ~ Roy T. Bennett

During the late nineties, technology pundits predicted the imminent downfall of the mainframes. These ginormous computing machines from an earlier age seemed to have outlived their usefulness. Now, two decades later, the Big Iron continues to rule the roost running inside some of the world's biggest companies including banks, retailers, insurance companies and airlines. 92 of the top 100 banks worldwide, 70% of the world's largest retailers, 10 out of 10 of the world's largest insurers and 23 of the world's 25 largest airlines run on mainframes. To paraphrase Mark Twain, news of the mainframe's demise was greatly exaggerated.

The Second Law of Thermodynamics states that the total entropy of an isolated system always increases over time. An IT system is no different. Most mainframe systems have been around for several decades, and one of the most taxing challenges these applications face today is the muddle that has developed over years of operation. Bit by bit, the complexity of the systems has increased to such an extent that maintainability has become a huge challenge. Consequently, mainframes have become expensive to maintain and run.

IT organizations have a plethora of options where mainframe modernization is concerned. Every mainframe landscape is unique and the ability to create a foolproof modernization roadmap lies in the ability to understand the context of the mainframe landscape. This includes consideration of the application complexity, current spend, integration points, technology landscape, variety of processes as well as its adaptability to the proposed target framework. Owing to the similarity in their concepts, processing and terminologies, the ETL platform has evolved as a good alternative for running existing batch workloads migrated from a mainframe environment.

Migration to ETL involves extracting business rules from the existing application and using them to create specifications or code for the new system. This is an excellent way of taking advantage of existing business logic within the mainframe system while introducing modern technologies and IT concepts. The rewritten IT systems can quickly adapt to changing markets, shifting customer needs and new business opportunities. At the same time, the migration process can be time consuming and adds project risk. The following strategies can help ensure a smoother transition:
  • A phased migration approach with a clear fallback strategy is vital to ensure there are no adverse impacts on consumers of the IT system as part of this migration.
  • Outbound extracts can be the first to be targeted for migration. While data continues to be consumed from the legacy platform, the files generated on the ETL platform can be vetted against the files on the legacy platform using a generic comparator framework (a minimal sketch of such a comparator follows this list), and flows cut over once all certification activities are complete.
  • Inbound feeds are generally trickier, as they involve maintaining the database tables, and any discrepancies can result in data corruption. To ensure the processed data is unchanged on the new platform, the load-ready files can be compared over a period of time before the flows are actually cut over.
  • Internal processes like housekeeping, purge, etc. can be the next to be rewritten on the ETL platform.
  • In many cases, the database continues to remain on mainframe. When feasible, moving the database to a distributed database can be considered as the next logical step.
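
A generic comparator of the kind mentioned above can be as simple as a keyed record-by-record diff. The sketch below assumes both platforms produce delimited extract files sharing a record key; the file names, delimiter and key position are illustrative assumptions.

# Minimal sketch of a generic extract comparator for certification runs.
import csv

def load_records(path: str, key_index: int = 0, delimiter: str = "|") -> dict:
    """Read a delimited extract file into {key: full record} form."""
    with open(path, newline="") as fh:
        return {row[key_index]: row for row in csv.reader(fh, delimiter=delimiter)}

def compare_extracts(legacy_path: str, etl_path: str) -> None:
    """Report records missing, extra or mismatched between the two platforms."""
    legacy = load_records(legacy_path)
    etl = load_records(etl_path)
    missing = legacy.keys() - etl.keys()
    extra = etl.keys() - legacy.keys()
    mismatched = [k for k in legacy.keys() & etl.keys() if legacy[k] != etl[k]]
    print(f"missing on ETL: {len(missing)}, extra on ETL: {len(extra)}, "
          f"field mismatches: {len(mismatched)}")

if __name__ == "__main__":
    compare_extracts("legacy_outbound.dat", "etl_outbound.dat")  # hypothetical file names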

Batch to ETL.png

Some of the key challenges faced during typical ETL migration projects are listed below:
  • Mainframe application complexity due to the extensions and additions done to the underlying architecture over a span of several decades. Mergers and acquisitions further add to the confusion with disparate systems, standards and practices.
  • Meeting existing SLAs is a key parameter for any successful ETL migration. While mainframe load utilities working at partition level can easily load several hundred million records on a daily basis, it may be a challenge for ETL processes.
  • Loss of subject matter expertise in the application is another common concern as mainframe experts move to other areas in the enterprise with mainframe stack being replaced by the ETL suite.

In a mission critical software life cycle, complexity is a given and quality is a requirement. ETL combines data preparation, data integration and data transformation into a single integrated platform to transform how IT and business can turn data into insight. ETL processes inherently lend themselves to automation and this brings a swarm of benefits to these processes including end to end data lineage, faster delivery times, improved productivity, reduced cost, decreased risk of error, and higher levels of data quality among others. With improved integration of ETL and Big Data technologies, it's now possible to augment data with Hadoop-based analytics. Features like data connectivity, transformation, cleansing, enhancement and data delivery can be run within a YARN-managed Hadoop platform. 

Infosys has a proven track record in collaborating with clients to migrate their mainframe workloads to ETL platforms. For example, Infosys has partnered with a leading US bank to ensure non-disruptive migration of their batch processing across several operational and analytical applications to an ETL based Data Integration platform featuring reusable frameworks such as generation of Gold Standard files, Common Maintenance frameworks, and Data Quality and Governance frameworks.

Are your mainframe systems and workflows limiting your agility? Consider migrating to an ETL platform. It might just be the right solution for you.

Blog authored by 
Ramakant Pradhan, Ramkumar Nottath and Arunshankar Arjunana

April 19, 2017

Better Managing Your Modernization Process

Much has been debated about legacy modernization, yet the topic continues to stay relevant as enterprises are increasingly challenged by a new, digital, and highly competitive business environment. The impetus for modernization is no longer born of a need to reduce costs or upgrade to new technologies, but of a more compelling need to prepare for changing business requirements and future readiness.

Tab_Experience.png

Today, technology has become a core aspect of every enterprise, from IT companies to retailers, and is directly shaping business strategy. In preparation for this paradigm shift, organizations can either build new technological capabilities in-house by hiring the required skills, or partner with an organization that has the expertise to guide them through the process. Both methods have their pros and cons, though with shrinking time-to-market cycles, the latter may seem preferable. Interestingly, the Infosys DevOps for Mainframe team leveraged tools from IBM and Micro Focus to create a PoC for a global bank. The tools enabled the bank to save approximately 15% of build effort and reduce storage by approximately 25%.

Legacy modernization to improve user experience of internal stakeholders

With technology changing at a continuous pace, enterprises need to prepare for stakeholders that are not only external, by way of customers, but internal, by way of employees, as well.

With internal stakeholders undoubtedly exposed to smarter technology and smoother, user-friendly UX through apps, legacy technology can be a cause of strain, as it involves working with manual, time-consuming systems. Thus organizations find themselves at an inflection point when it comes to delivering uniform, high-quality services and experiences across stakeholders. A case in point is how the mainframe modernization team at Infosys helped a large Australian bank decommission complex legacy systems and reclaim approximately two million AUD.

Technology can make IT a revenue centre

As technology begins to play a more pervasive and decisive role in an organization, technologists and CIOs are going to have an increasingly strong voice in determining the direction of the organization, as their opinions will have a direct impact on revenues. IT will no longer be a cost center but a crucial value-add, and will become the tool that enables organizations to work faster and smarter. The challenge for IT, of course, will be how to collaborate with business and develop roadmaps that enable seamless, non-disruptive modernization of legacy systems. This can be a significant challenge, as these vast legacy systems have been developed over many years in a fragmented manner, and most organizations do not have a comprehensive understanding of their IT landscape. Fear of their unknown IT landscape often prevents many organizations from beginning the modernization journey.

Steps towards effective legacy modernization

Thus, when an organization decides to begin modernizing its IT infrastructure, the first thing it should do is map its IT landscape. Organizations need to create a roadmap for what needs to be modernized and set realistic timeframes. They also need to be cognizant of the fact that the pace of modernization is often determined by external forces like competition and the disruption the industry is going through. For example, at a large fashion retailer the landscape is being redefined completely by looking at which functionality will reside where.

A second aspect of legacy modernization is the need for an organization to consciously undergo a mindset and culture change. CIOs need to be closely invested in bringing about this change and there needs to be a buy-in from business leaders to make this top-down process a success. Processes in organizations should support this change - from recruitment to training to performance management.

A third aspect of modernization is risk mitigation. Since modernization is a journey that can take two or three years, organizations need to engage in this process without impacting their customers or internal stakeholders. Additionally, organizations need to be willing to invest and allocate an annual budget towards optimization of critical applications where possible, and legacy modernization where necessary. Budgets are also required to acquire the new skill sets that will make this possible.

The fourth aspect is how to leverage technology advantageously. Cloud and digital are the key destinations of change for all industries. Organizations need to identify applications that can move to the cloud, and the extent to which they can leverage mobile and open source. An organization's business and technical landscape is a key determinant here, as is the level of disruption facing that particular industry.

It could start with rehosting applications on the cloud to accelerate performance. Legacy applications can then be renewed through the effective use of DevOps, APIs and extreme deconstruction, and thus transition to new user experiences.

A leading US-based bank improved its business turnover by exposing legacy assets as SOAP (Simple Object Access Protocol) services for multi-platform integration. This enabled cross-selling as a direct business benefit.

Migrating mainframe batch jobs to Hadoop helped a European CPG company reduce the time it took to reconcile financial statements by 40%.

In summary, mainframe modernization is real and now. How much and at what pace is specific to a client. We have the expertise and technology available today to rapidly move to more open business models and technology architecture.

April 5, 2017

Legacy Decommissioning (2/2)- Data Archival and Retrieval

Continuing from the earlier blog on decommissioning legacy systems, where I explained the overall decommissioning process, this blog addresses the data archival and retrieval approach in the design and execution phases of legacy decommissioning.

As discussed in Blog 1, many organizations feel forced to keep aging legacy applications running, way beyond their useful life, because they contain critical historical data that must stay accessible. This information may be needed for customer service or other operational reasons, or to comply with industry regulations. Yet keeping obsolete systems alive just to view the data puts a real strain on resources. These applications steadily consume IT budget in areas such as maintenance charges, staffing and data center costs - in many cases over 50% of the overall IT budget.

Data archival during application decommissioning is the simplest and most cost-effective solution for keeping legacy data accessible for continuation of the business. Archiving the complete legacy data at once is one of the best practices suggested during application decommissioning. The archived data can then be accessed online swiftly, can provide different views of the data for analytics, and can be exported into different formats when needed. During data archiving, the data is extracted from the legacy system and stored in a secure, centralized online archive. The data is easily accessible to end users, either from screens that mimic the original application or in a new format chosen by the business. The new infrastructure built for legacy data archival should be capable of moving all legacy data components, whether structured, unstructured, semi-structured, or even raw metadata, into a comprehensive, cloud-based, self-managed centralized repository. Infosys has a proven methodology for archiving the data in this central repository.

Application decommissioning and data archival requirements are unique to every organization. Infosys, with its proven framework and tools, can help customers with the following:

1. Building a strong understanding of the current application data model
2. Building data retention policies and retention requirements through strong domain knowledge
3. Building the innovative archival data model

The above approach is largely adopted for mainframe legacy applications that have accumulated large volumes of data over many years in the form of documents and images. Typical examples are:

1. Billing records
2. Financial transactions
3. Customer history, etc.

 

Data Archival

The generic framework for a data archival process consists of the following steps:

* Data Extraction - Collect and extract the data from the source database into an interim storage (staging) area and perform data transformation where required to map to the data format of the target state, e.g. EBCDIC to ASCII, alphanumeric to numeric, date format changes, etc. (A minimal sketch of such a transformation follows this list.)

* Validation and Cleansing - Validate the schema of the target database, ensuring that tables with all their constraints, indexes and views, and users along with their roles and privileges, are migrated as defined in the business rules. Validate the contents of the migrated data to confirm that referential integrity is maintained in the target definition. If required, data cleansing is also performed for generation of golden records, removal of duplicate records, and cleansing of special characters, spaces and emails.

* Transformation - Transform the data from the source to the target as defined in data mapping rules and lookup tables.

* Data Migration - Load the data into the target database using data loader utilities/scripts and programs generated for loading incremental data, multilingual data and recovery of failed data.
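
To make the extraction-time transformations concrete, here is a minimal sketch of an EBCDIC-to-ASCII and date-format conversion for a fixed-width record. The field positions and code page (cp037) are assumptions for illustration; real layouts would come from the application's copybooks.

# Minimal sketch of extraction-time transformations: EBCDIC decode plus a date
# reformat for a fixed-width record. Field positions are illustrative only.
from datetime import datetime

def transform_record(raw: bytes) -> dict:
    text = raw.decode("cp037")                     # EBCDIC (code page 037) -> str
    account = text[0:10].strip()
    amount = text[10:19].strip()
    date_yyyymmdd = text[19:27]
    iso_date = datetime.strptime(date_yyyymmdd, "%Y%m%d").date().isoformat()
    return {"account": account, "amount": amount, "posting_date": iso_date}

# Example: a 27-byte EBCDIC record built here only to exercise the function.
sample = ("AB12345678" + "000123.45" + "20170704").encode("cp037")
print(transform_record(sample))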

Designing the data archival solution for extracting and migrating the identified data sources is usually done collaboratively with the customer's DBAs and SMEs. A complete analysis of the application is done to identify the following:

* Understand the current data model in the legacy application

* Identify the unknown data relationships in the current model

* Create retention policies for the data identified for archival

* Define data extraction with the required transformation in the application context

* Identify the key fields for indexing, to search the required data efficiently

* Validate the target database schema and data contents of the archived data

* Create interfaces to access archived data and reports independent of the application

* Apply application-, entity- and record-level retention policies based on organizational requirements

Based on the above analysis, the target data model is generated; each target entity is identified as a business object and holds the data for that object. The business objects are defined keeping regulatory and business needs in consideration.

Based on the organization's requirements and the details collected during impact analysis, Infosys suggests the best approach to archive the data by evaluating multiple factors, such as the quantity of data to be archived and the actual data archival requirement. The two options are:

1. Complete data archival at once
2. Partial data archival over multiple releases

Data Retrieval

Data retrieval ensures the archived data is accessible anytime for audits, analytics and normal business operations, and that the data is secured and accessed according to user roles and privileges. The two most common strategies are hot retrieval and cold retrieval. In hot retrieval, data is accessed immediately based on a few keywords. In cold retrieval, data is accessed via reports and service requests when a specific view of the data is needed.

The following are a few ways to retrieve the archived data:

      • Data retrieval through keyword search and business objects to give full application context
      • Custom-generated reports using data integration capabilities
      • Data retrieval using standard interfaces to the databases, such as ODBC/JDBC and SQL (a minimal sketch follows this list)
      • Enterprise reporting tools such as Crystal Reports
      • Data archival and retrieval using third-party products such as IBM Optim, Informatica, etc.
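
As an example of the ODBC/SQL retrieval route, here is a minimal sketch of a 'cold' query against the archive, assuming the archived business objects are exposed as relational tables. The DSN, table and column names are hypothetical.

# Minimal sketch of cold retrieval from the archive over ODBC.
import pyodbc

def fetch_billing_history(account_id: str):
    conn = pyodbc.connect("DSN=ARCHIVE_DB")        # hypothetical ODBC data source
    try:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT invoice_no, invoice_date, amount "
            "FROM archive.billing_records WHERE account_id = ? "
            "ORDER BY invoice_date",
            account_id,
        )
        return cursor.fetchall()
    finally:
        conn.close()

for row in fetch_billing_history("ACC-001"):       # hypothetical account key
    print(row.invoice_no, row.invoice_date, row.amount)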

March 30, 2017

Mainframe Legacy Decommissioning(1/2)

In the new era of digital transformation, new strategies and technologies are taking over from legacy applications to run the day-to-day business. As new digital, next-generation applications are deployed to production, many mainframe legacy applications become redundant. The business value of these mainframe legacy applications decreases with every passing year, but organizations often continue to support them for the purposes of data access and compliance. The cost of doing so can consume over 50 percent of an overall IT budget. Decommissioning enables organizations to eliminate the maintenance, hardware and software costs of these outdated legacy systems.

 

Application decommissioning is all about isolating the application: migrating the database to an archival system, removing the interfaces to upstream and downstream systems and detaching the allocated hardware from active use, while the business-critical data remains available as and when required. Typical examples/drivers of decommissioning include:

  • Retiring an old application that has a more functionally and technically capable alternative system
  • Consolidation of one or more in-house applications or COTS products into a single enterprise solution such as ERP, CRM, etc.
  • Elimination of redundant applications within the enterprise landscape
  • Mergers and acquisitions of organizations

 

Benefits of Decommissioning:

By decommissioning outdated, redundant, high-cost and low-value applications, organizations can achieve tangible benefits such as:

  • Reduction in maintenance costs by removing unwanted hardware, outdated software and associated human resources
  • Financing of new business initiatives or projects by utilizing the resources freed up by decommissioning
  • Reduced expenditure on licenses and infrastructure for non-scalable technologies
  • Reduced operational risk by simplifying and aligning the enterprise to next-gen architecture

 

All data from a legacy system or application proposed for decommissioning may or may not be migrated/archived to the newer replacement system. This depends on the driving factors for choosing the application for decommissioning, the business criticality and technical adequacy of the system, and the impact of the decommissioning on the existing enterprise architecture and business processes. I will cover the data migration and archival options in a separate blog.

Application decommissioning best practices suggest archiving all of the data at once while maintaining online access to the data through a preferred reporting tool.

Infosys application decommissioning methodology typically involves the following four steps:

1. Portfolio Assessment

2. Detailed Impact Assessment

3. Solution Planning

4. Actual Execution and Realization

 

Step 1: Portfolio Assessment

In the portfolio assessment phase, the entire enterprise landscape is assessed across each application's life-cycle to identify which applications to maintain, which to invest in, which to replace and which to decommission.

Using a sophisticated questionnaire, with individual scores mapped to each influencing area, the applications are assessed for technical maturity and business criticality, along with their strategic fit with the organizational roadmap and TCO. This is a top-down, interview-based approach: by interviewing business owners, IT owners and business analysts, each application is assessed for its impact on the organization and customer service, and its dependencies on other applications, to see if it is a potential candidate for decommissioning.

 

Step 2: Detailed Impact Assessment

In this phase, we assess the impact of decommissioning on various external and internal interfaces of each application through in-depth analysis.

The impact is assessed through detailed, tool-based inventory analysis, considering the impact of the application on areas such as access, business and benefits. Infosys has the right set of tools in this area (a combination of in-house and third-party tools such as Legacy Analyzer, Micro Focus Enterprise Analyzer or AveriSource), which help detect application dependencies, the limitations and risks associated with the application, and its maintainability index.

The outcome of this step will help us arrive at the final list of applications which will be considered for decommissioning.

 

Step 3: Solution Planning

The objective of this phase is to:

  • Identify possible solutions for decommissioning the identified applications
  • Choose the best solution for data archival (on-premises vs. cloud) and the best data retrieval strategy (cold vs. hot retrieval)
  • Identify all the application's technical components - programs, JCL, databases, online entries, scheduler entries, infrastructure used by the application, and the different access levels required for different users - along with all interfaces and business processes, and raise a change request for the actual decommissioning in the execution step. The change request holds the details of all associated hardware and software to be retired after decommissioning.

A typical decommissioning solution is prepared from the application usage data and detailed parameters gathered during the portfolio assessment and detailed impact analysis phases. These parameters cover:

  • Managing application access for various users and partners
  • Managing application upstream and downstream interfaces to ensure landscape collaboration
  • Managing the data exchanges and deciding whether to archive or dispose of the data
  • Managing the infrastructure components released by the application being decommissioned

 

Step 4: Actual Execution and Realization

In the execution phase, the identified decommissioning solution is implemented to complete the project. The decommissioning of the applications is executed as per the project plan prepared during solution planning, and reports are generated for each component of the solution.

Below are some of the reports generated during the execution phase and validated after implementation:

  • Impact Analysis Report
  • Data Scan Report
  • Data Archiving/Migration/Integration Reports

More coming in my next blog.

March 1, 2017

Legacy Modernization - Where do I begin?

Many enterprises in different industry verticals have made huge investments in legacy systems in the past, and these are now leading to operational inefficiencies as their businesses grow. These legacy systems inhibit organizations from embracing the next-generation technology that will enable business innovation. Many firms have looked at these investments as additional expenses, but modernization is the key differentiator to capture market share and stay ahead of the competition.

How to do more with less?
As a partner, it is important for vendors to understand modern technology trends and evaluate how they can help transform the enterprise and prepare it for the future. The 5-step strategy below provides a simplified approach to modernizing applications and systems within the enterprise:

5- Step Strategy for IT Modernization:
  1. Identify the key business goals of the enterprise.
  2. Identify the barriers & challenges across IT systems and their impact on the business
  3. List out the key Modernization themes based on the gap between business and IT
  4. Lay out the Strategic Solutions to modernize the platform
  5. Choose the best suited solution and define the transition road map to future state
The 5-step strategy works. A case in point
As a strategic partner, we were part of a modernization initiative for one of the leading insurance brokers. The 5-step strategy greatly helped us arrive at modernization solutions that helped transform their business.

Let me explain how this 5-step strategy was applied to simplify the client's landscape modernization and achieve business innovation.

1. Identify the key business goals of the enterprise
A combination of interviews & questionnaires with all the key stakeholders of the enterprise helped us understand their vision and arrive at their business goals as listed below.
  • Reduce Total Cost of Ownership
  • Better Return on Investment
  • Enhance Operation Efficiency
  • Faster Time to Market
  • Increase Global Footprint
  • Gain Agility
  • Enrich User experience
  • Scale for Future
2. Identify the barriers & challenges across IT systems and their impact on the business
It is important to comprehend the challenges & the gaps in the existing landscape that will lead us to find the right opportunities for investment.
  • Understand the business and IT constraints
  • Perform portfolio assessment of different applications within the landscape
  • Document the key challenges across different applications 
  • Identify the impact to the business
An assessment of the IT landscape of the insurance broking company was carried out, which helped identify the key challenges across different portfolios of the enterprise and the business impact of those concerns, as depicted in the diagram below:

1. IT Landscape assesment.png

3. List out the key Modernization themes based on the gap between business and IT  
Bridging the gap between business and IT is essential to the success of a transformation or modernization initiative in an enterprise. A holistic view of the organization and an understanding of its business expectations and IT strategy helped us derive the key modernization themes that would deliver the desired business outcome.

4. Lay out the Strategic Solutions to modernize the platform  
We arrived at different strategic solution options based on the identified modernization themes. It should be noted that the strategic solutions described below may only suit the challenges associated with this case study; they will have to be tailored to the specific needs of each enterprise.

2. Strategic Solutions.png

5. Choose the best suited solution and define the transition roadmap to future state
The strategic solutions arrived at in step 4 have to be thought through across different aspects like cost, ROI (return on investment) and business benefits. These solutions were compared against the cost and business benefits they offer, and a quick comparison of the different solution options against these aspects helped the client make the right choice.

3. Mod Options.png


In this case study, the client chose the 'Rationalize' option - they wanted to prioritize the lines of business that would undergo modernization. The client was also keen on reusing their existing assets as much as possible. Accordingly, the roadmap was defined, wherein the modernization was planned in different phases.

The best-suited solution should be chosen for the enterprise such that it improves corporate agility and competitiveness. The future-state roadmap is then defined, and the transition architectures detailed, to transform the baseline architecture to the target architecture in phases.

Conclusion:
Having an appropriate strategy for IT modernization is imperative to the success of enterprise modernization. CIOs will have to make conscious decisions about investments to transform their IT systems in the most efficient and cost-effective manner. It is important to ensure that the IT modernization strategy is aligned with the overall business strategy and vision of the enterprise. The 5-step modernization strategy detailed in this article will help CIOs, business & IT directors and enterprise architects optimize their business and gain competitive advantage.

Authored by:
Nandha Venkateswaran
Senior Technology Architect, Infosys

January 18, 2017

Blockchain and IoT: The future of API economy [3/3]

3. Blockchain_IoT.png
In the previous two blogs of this series, I covered the genesis of the API economy and how an organization can roll out APIs faster. While writing, I was struck by the thought: what else can one do with APIs, and how will they power the future? Over time, APIs and the API economy will become pervasive as most organizations start working with their suppliers and vendors as partners, expanding their services to encompass the digital life of the consumer. There is talk of the next revolution in financial services, in which banks turn into utilities and expose capabilities that fintech companies can use to offer enriched services.

However, I feel that two technologies already unfolding will shape how business is done, and they will shape the API economy and the mainframe world significantly. In this third blog of the series, I will elaborate on how the blockchain and IoT revolutions will enrich the API economy.

Blockchain: Simply put, it is an electronic ledger that is shared between peers, with every change transmitted to the blockchain record held by each peer. Hence, everyone can see what changes are happening simultaneously. Also, every record is hashed so that no peer can delete a record once it is on the blockchain. There are many examples of how business can run faster and more cost-effectively because the information is available on every node of the blockchain instantaneously. Moreover, since each transaction is validated and authenticated, pharma and food manufacturing companies will be able to track the ingredients of the products being supplied to them, or that they in turn supply to the next business in the chain.
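
To make the hashing idea concrete, here is a minimal hash-chain sketch showing why a record cannot be silently altered or removed once appended. It is an illustration only, not a production blockchain implementation.

# Minimal hash-chain sketch: each block's hash covers its record and the
# previous block's hash, so any tampering breaks verification downstream.
import hashlib
import json

def block_hash(record: dict, previous_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": previous_hash,
                  "hash": block_hash(record, previous_hash)})

def verify(chain: list) -> bool:
    """Recompute every hash; any altered or removed record breaks the chain."""
    previous_hash = "0" * 64
    for block in chain:
        if block["prev"] != previous_hash or block["hash"] != block_hash(block["record"], previous_hash):
            return False
        previous_hash = block["hash"]
    return True

ledger = []
append(ledger, {"shipment": "S-1", "ingredient": "lot-42"})   # illustrative records
append(ledger, {"shipment": "S-2", "ingredient": "lot-43"})
print(verify(ledger))                      # True
ledger[0]["record"]["ingredient"] = "lot-99"
print(verify(ledger))                      # False: tampering detected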

I feel that there will be a huge impact on APIs and mainframe systems. Existing processes will need to work with blockchain networks to either pass or receive information, and this information exchange will happen through APIs on the mainframe platforms. Many mainframe-intensive organizations will start running their blockchain networks on the mainframe because of its faster hashing, quick encryption, and very high availability and scalability. Whether blockchain runs on the mainframe or outside it, you need a robust set of APIs to expose and consume functionality and provide a link to the existing business processes. New business streams can be uncovered, and these can be exposed to further enhance the API economy.

IoT revolution: We have all heard of multiple devices that talk to each other online to do business and to capture and share information, such as data on the human body, objects, apparel, automobiles, consumer goods, etc. We have also heard of connected and autonomous (self-driving) cars, refrigerators that can indicate when food needs to be replenished, and TVs that can sense the audience and roll out appropriate content. In factories, machinery parts can sense and send out signals for predictive maintenance. Oil and gas companies can remotely monitor and replenish the stock in their storage facilities based on fuel levels and provide predictive maintenance to critical equipment. Logistics companies can track and monitor the condition of goods in transit. Insurance companies can roll out usage-based insurance. We also know that these processes typically run on mainframes.
All of this will require billions of API calls, and since 60 percent of critical data resides on the mainframe, a significant percentage of them will end up on mainframes. Each device will transmit a small payload of data, but many times over. Hence, we need to build a robust set of APIs that can take the load when billions of calls land on the mainframe APIs for sending and receiving data. The resiliency of the mainframe and the security it provides will be tested when IoT devices start sending data. Large analytics workloads will then start running on the mainframes to process the data, and the insights gained can be exposed through APIs.

I believe blockchain and IoT will further enrich the API economy and drive new types of workload onto the mainframe, thereby opening up new revenue streams. As an organization, we have to be ready with our business processes, technology, and organizational structure to respond to this new and varied type of business.

January 10, 2017

Tools: The accelerators to your modernization journey [Part 6/6]

As mentioned in an earlier blog in this series, our 'people+software' approach is a critical success factor for mainframe renewal. Automation, leveraging tools and accelerators, is of utmost importance for speed and faster value realization. Our tooling approach focuses on 3 important aspects:
  • It should enable insights in addition to providing data
  • It should be non-intrusive
  • It should significantly reduce SME dependency
Our knowledge curation platform, which is a key component of knowledge discovery, has adopted these three key aspects. Some of our clients prefer knowledge discovery to be done within the mainframe, as they are not comfortable moving mainframe components outside the mainframe boundary. The platform has therefore been designed to work both within and outside the mainframe boundary. It is built on three components:

Extraction Engine: This component consists of multiple point tools that extract metadata from various mainframe components, covering operations, workload, interface, and code details.

Knowledge Repository: This component stores the knowledge extracted from the mainframe landscape. 

Infosys Modernization Portal: This is our window into the knowledge repository; it consumes information from the repository and presents it in a meaningful way.

1.1_Ki.png

In simple terms, the extraction engine is like the sensory organs, the Knowledge Repository is the brain, and the Infosys Modernization Portal is like the eyes and ears of the solution. We take an "onion-peel" approach towards knowledge discovery: we start from the portfolio and gradually unearth knowledge from the subsequent layers. However, this does not stop us from starting at any layer, and it gives us the flexibility to use part or all of the solution based on the context of the client. Recently, this solution helped us create an accurate inventory while building a rehosting solution for the No. 1 food mail-order catalog company in the United States. For an electric utility company, we used this solution to list all tape and disk datasets as well as all active online volumes. We were able to discover this knowledge without any SME dependency, which helped us accurately estimate and develop a credible business case.
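
To illustrate the extraction-engine and knowledge-repository idea, here is a minimal sketch that pulls simple metadata (programs and datasets referenced in a JCL member) and stores it in a small repository. The naive parsing and the schema are assumptions for illustration only; the actual platform uses dedicated point tools.

# Minimal sketch: extract program and dataset references from JCL text and
# store them in a small SQLite "knowledge repository".
import re
import sqlite3

DSN_PATTERN = re.compile(r"DSN=([A-Z0-9.$#@]+)")
PGM_PATTERN = re.compile(r"PGM=([A-Z0-9$#@]+)")

def extract_metadata(jcl_text: str) -> dict:
    return {
        "datasets": sorted(set(DSN_PATTERN.findall(jcl_text))),
        "programs": sorted(set(PGM_PATTERN.findall(jcl_text))),
    }

def store(repo_path: str, member: str, metadata: dict) -> None:
    with sqlite3.connect(repo_path) as db:
        db.execute("CREATE TABLE IF NOT EXISTS usage (member TEXT, kind TEXT, name TEXT)")
        for kind, names in metadata.items():
            db.executemany("INSERT INTO usage VALUES (?, ?, ?)",
                           [(member, kind, n) for n in names])

# Illustrative JCL snippet used only to exercise the functions above.
sample_jcl = """//STEP01 EXEC PGM=PAYCALC
//INFILE  DD DSN=PROD.PAYROLL.INPUT,DISP=SHR
//OUTFILE DD DSN=PROD.PAYROLL.OUTPUT,DISP=(NEW,CATLG)"""
meta = extract_metadata(sample_jcl)
store("knowledge.db", "PAYJOB", meta)
print(meta)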

Apart from knowledge curation, we have standardized tool sets across all three solution levers of our mainframe modernization solution:
                2. ToolSet.png

One of the most notable is our optimization tool kit, which can identify not only the high-CPU-consuming resources but also the anti-patterns that lead to such inefficiencies and high CPU consumption. The SMF log analytics tool from the optimization kit can extract and present meaningful insights from mainframe SMF logs. We have leveraged our optimization tool kit for a US-based managed healthcare company and significantly reduced their mainframe operations cost. The Infosys DevOps platform accelerates deployment of changes, which helps optimize mainframe operations. Our robust home-grown migration and testing tools validate the migration and significantly reduce its risk. We also have minimally invasive rehosting solutions that minimize migration costs and overall TCO for our clients.

Hope you enjoyed this series of blogs explaining how non-disruptive mainframe modernization is possible and the key factors for success. We touched upon the 3 strategies one can choose from to modernize the mainframe - accelerate, renew and transform - concluding with this last post in the series about the tools that can help accelerate the journey. We would be happy to hear your thoughts on these topics and your experiences from your modernization initiatives. Stay tuned for more.

Blog co-authored by:
Rajib Deb and Sanjay Mohan, Infosys

January 4, 2017

A rapid API roll out with a robust funding approach [2/3]

2. Robust Funding.png

According to Gartner, 50 percent of B2B collaborations will take place through web application programming interfaces (APIs), especially business APIs. This phenomenal growth is based on the fact that APIs aim to increase speed and reach, which are essential for businesses to grow faster than their competitors. In my last blog, I highlighted the essential areas of consideration for organizations starting their API journey. In this blog, I elaborate on governance structure, funding models, and technology aspects.


SOA and API governance: Both drive re-usability 
What is the difference between service-oriented architecture (SOA) governance and API governance? This is the question asked most frequently. Both drive re-usability, but the required level of control is different. Since SOA is all about business reusability, the governance structure put in place has strict key performance indicators (KPIs) on reuse. APIs, on the other hand, are governed by speed and are typically built to extend business reach. The governance here still promotes reuse, but when a decision between speed and reuse has to be made, speed typically wins the debate. Also, in many cases internal developers start using APIs before they are exposed to external developers. By that very nature of adoption, it is possible to start with lightweight governance and then move towards a more comprehensive governance model. But in all cases, speed is the prime driver behind API-fication, and the governance model will remain lightweight compared to a SOA governance model.

When you start building an API, you need to create a core team. Typically, the core team that decides and develops APIs includes:
  1. Product managers - APIs need product managers who periodically collect requirements from the business and provide them to the technical team that builds the APIs. They also monitor how many calls have been made to the APIs and who the consumers are, and decide when to enhance or decommission the APIs
  2. Technical team / engineering team - Defines the granularity, technical design and products, and works closely with the product managers to enhance functionality
  3. Operations team - Primarily ensures the APIs are up and running, alerts teams to any degradation in performance, and looks after the overall well-being of the APIs

The core team is set up by a steering committee with the mandate to fund the APIs and provide overall direction. External teams supporting the core team will also include a legal team that can draw up legal clauses for exposing the APIs to external developers. For monetization, the integration team needs to integrate the APIs with the back-end core systems, and the application owners will expose the APIs. On the customer side, where there are many large legacy applications, the legacy application owner needs to work very closely with the core team to define the APIs that will be exposed. In fact, for banks and insurers with a large legacy footprint, these teams can often be part of the core team.

Staying in competition with funding models 
For funding, we need to understand two points:
  1. How APIs will be monetized to make the external consumers pay 
  2. Who will pay for building the initial platform and APIs
For the monetization of APIs, there are multiple business models: APIs that are offered for free, premium APIs, developer-pays, developer-gets-paid, and, most importantly, models where there is an indirect benefit to the organization. In my experience, telcos and media companies are way ahead in terms of monetizing APIs; news feed and video syndication APIs are common there.

However, other segments, especially the financial sector, are exposing APIs for the indirect benefits they bring in terms of better brand recall, competitive advantage, or simply to stay in the competition. Most banks have now exposed their usual services on mobile platforms to remain competitive. Among financial organizations, many payment providers offer their payment APIs for consumption against one of the commercial models.

Who will pay for building the APIs if they are not immediately monetized? At one bank, I was speaking to the core team, in which the legacy application owner was also present. The team wanted to know whether they should wait for the business to identify the APIs for them to build. Of course, this would delay the roll-out and frustrate the business. On the other hand, if they build a platform and are ready with some of the common APIs, who will pay the bill? Though there is no easy answer to this question, here is the model that I have seen work well:
  1. Collaborate with the CIO (Chief Information Officer) / Chief Digital Officer (CDO) to make a business case for a seed funding to build the initial platform with the core services
  2. Create an internal team that evangelizes APIs with the lines of business (LOBs) most likely to build APIs. In fact, build a few APIs for these LOBs for free and charge them once they start using the APIs
  3. If the internal evangelizing is proceeding well, LOBs generally come on board and they can share the cost of running the platform
  4. If you can monetize the APIs, you can start paying for subsequent enhancements
The advantage of this approach is that it is far easier to roll out APIs as the basic building blocks are ready in hand.

Technology considerations

Typically, APIs are identified through a top-down approach. However, in most cases the programs on z/OS do not offer the capability that an API design demands, and the code needs to be refactored. In many cases, the programs include both business and screen logic with no separation between them. In such cases, the code has to be refactored to separate out the logic so that it can be encapsulated. In my experience, this work requires more effort than exposing the APIs on the z/OS platform.

For exposing APIs on z/OS, there are multiple technologies available. Traditionally, most organizations with large mainframe footprints also have CTG or CICS TS available with their z/OS systems. These technologies help in exposing APIs but have the following disadvantages:
  1. They operate on the general processor and are therefore more expensive when they handle multiple calls.
  2. They do not implement the full representational state transfer (REST) model; they support only the POST method (one of many request methods in the HTTP protocol used by the World Wide Web), which is not a good architectural practice.

IBM has launched z/OS Connect, which operates on the z Integrated Information Processor (zIIP) and has the following benefits (a minimal consumer sketch follows this list):
  • Shields backend systems from requiring awareness of RESTful uniform resource identifiers (URIs) and JavaScript Object Notation (JSON) data formatting
  • Provides API descriptions using the Swagger specification
  • Offers an integrated development environment (IDE) to map Common Business-Oriented Language (COBOL) copybook fields to the JSON request/response fields
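
To show what consuming such an API looks like from the other side, here is a minimal sketch of a client calling a REST service exposed through z/OS Connect, assuming the service maps a COBOL copybook to the JSON fields shown. The host, path and field names are hypothetical.

# Minimal sketch of a consumer calling a REST API fronting a mainframe program.
import requests

BASE_URL = "https://zosconnect.example.com:9443"   # hypothetical endpoint

def get_account_balance(account_id: str) -> dict:
    response = requests.get(
        f"{BASE_URL}/accounts/{account_id}/balance",
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    # JSON fields produced by the copybook-to-JSON mapping (illustrative names)
    return response.json()   # e.g. {"accountId": "...", "balance": 1234.56, "currency": "USD"}

if __name__ == "__main__":
    print(get_account_balance("0012345678"))       # hypothetical account number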

Together with a strong governance and a robust funding approach, the right technology will help in rapid API roll out.
