

6 factors that can make or break a mainframe re-hosting initiative

Mainframes are fast, reliable and tightly integrated machines where applications, databases and other components work seamlessly to support critical business processes.

However, despite their power, many organizations are looking to move away from these systems because of cost pressures, a shortage of mainframe skills, the difficulty of meeting the digital imperative, and other challenges.

Re-hosting is a popular approach that organizations adopt to modernize their mainframes. It allows an organization to emulate core mainframe functionality on an on-premise or cloud-based system built on a Unix/Linux or Windows platform.

Ideally, re-hosting shouldn't change the code or the functionality and should offer better performance levels at a lower cost. However, there have been many instances where re-hosting projects have failed completely or did not deliver the desired value.

Having analyzed multiple re-hosting programs - both successful and unsuccessful - I have identified the following key factors that can make or break a re-hosting initiative.

  1. Selection of applications for re-hosting - Complex applications with heavy I/O workloads, applications with chatty (unpredictable) workloads, and/or applications built using multiple different technologies generally create issues during re-hosting. The size, complexity and technology of the entire application portfolio should be assessed to identify which applications can be re-hosted easily and which would require extensive effort.
  2. Preservation of interfaces/connections - Applications that have a large number of interfaces require extra effort as part of the re-hosting process. It may also happen that the new environment does not support a particular interface technology. Again, this can be addressed by carefully analyzing the applications and taking stock of their connections before re-hosting.
  3. Capabilities of the target environment - Mainframe applications can be re-hosted on on-premise or cloud-based systems. It is important to ensure that the target environment can co-exist with other IT systems in use and offers performance levels similar to or better than those of the mainframe system. Typically, x86 or web-based systems are cheaper to operate and maintain than a mainframe, and can offer similar performance.
  4. Reliability engineering - Mainframes are the gold standard when it comes to reliability. IBM uses the term RAS to describe the Reliability, Availability and Serviceability of its mainframes. Modern x86 systems and cloud systems do not claim such legendary reliability. Hence it is important to ensure re-hosted systems are engineered correctly to provide the desired reliability levels.
  5. Performance of the container platform - The container platform should ensure that the code is not impacted by re-hosting and that the entire application operates as efficiently as it did on the mainframe system. The industry has seen a host of these platforms over the last two decades; some were written specifically for this purpose, while others have evolved to become as powerful and as efficient as the mainframe system. In either case, picking the right platform - one that integrates seamlessly into your environment and caters to your application interface needs - is extremely important.
  6. Path to future modernization - Building a modular, agile, and digital IT landscape is a multi-year, multi-step journey, and re-hosting is just a pit stop on the road. Eventually, an organization may have to move its applications to the cloud or re-architect the entire IT landscape to align with (and take advantage of) cloud-native, digital architecture. The re-hosting approach and solution should create a foundation that supports this continuing evolution of the system.
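To make the portfolio-assessment idea in factor 1 concrete, here is a minimal sketch of a suitability screen. The application attributes, weights, and thresholds are illustrative assumptions of mine, not a vendor methodology; a real assessment would also examine code quality, batch windows, and data dependencies.

```python
# Hypothetical re-hosting suitability screen.
# All weights below are illustrative assumptions, not a standard formula.
from dataclasses import dataclass


@dataclass
class App:
    name: str
    loc_thousands: int    # code size in KLOC
    io_intensity: int     # 1 (light) .. 5 (heavy batch I/O)
    tech_count: int       # distinct languages/middleware used
    interface_count: int  # inbound/outbound connections to preserve


def rehost_risk(app: App) -> int:
    """Rough effort/risk score: higher means harder to re-host.

    Penalizes size, heavy I/O, technology mix, and interface count,
    mirroring the red flags called out in factors 1 and 2.
    """
    return (app.loc_thousands // 100        # sheer size
            + app.io_intensity * 2          # heavy I/O workloads
            + (app.tech_count - 1) * 3      # multiple technologies
            + app.interface_count // 5)     # interfaces to preserve


portfolio = [
    App("claims-batch", loc_thousands=450, io_intensity=5,
        tech_count=3, interface_count=22),
    App("rate-lookup", loc_thousands=60, io_intensity=2,
        tech_count=1, interface_count=4),
]

# Rank easiest candidates first.
for app in sorted(portfolio, key=rehost_risk):
    print(f"{app.name}: risk score {rehost_risk(app)}")
```

A simple ranking like this is only a first-pass filter; its value is in forcing the inventory of size, workload profile, and interfaces before any application is committed to the re-hosting wave plan.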
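On factor 4, the standard way to engineer mainframe-class reliability out of less reliable x86 or cloud nodes is redundancy. The arithmetic below is the textbook independent-failure model; the 99.5% per-node figure is an assumed example, not a benchmark of any particular platform.

```python
# Availability of a redundant cluster, assuming independent node failures
# and that any one surviving node can serve requests.
def cluster_availability(node_availability: float, nodes: int) -> float:
    # The cluster is down only when every node is down simultaneously.
    return 1 - (1 - node_availability) ** nodes


# A single node at an assumed 99.5% availability is down ~44 hours/year.
# Three such nodes in parallel approach mainframe-class "five nines":
a = cluster_availability(0.995, 3)
print(f"{a * 100:.6f}%")  # ≈ 99.999988%
```

The model ignores correlated failures (shared network, storage, or software faults), which is precisely why factor 4 stresses deliberate reliability engineering rather than assuming redundancy alone closes the gap.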

The preceding list is not exhaustive; multiple other factors determine the success of a re-hosting effort. But I believe these are the ones that organizations most often overlook or fail to execute effectively.
