Governments are overwhelmed as they balance consumer expectations, an aging workforce, regulations, rapid technology change and fiscal deficits. This blog gathers a community of SMEs who discuss trends and outline how public sector organizations can leverage relevant best practices to drive their software-led transformation and build the future of technology – today!


October 15, 2018

6 factors that can make or break a mainframe re-hosting initiative

Mainframes are fast, reliable and tightly integrated machines where applications, databases and other components work seamlessly to support critical business processes.

However, despite being powerful performers, many organizations are looking to move away from these systems because of cost pressures, a shortage of the right skill-sets, the difficulty in addressing the digital imperative and other challenges.

Re-hosting is a popular approach that organizations adopt to modernize their mainframe. It allows them to emulate core mainframe functionality on an on-premise or cloud-based system built on a Unix/Linux or Windows platform.

Ideally, re-hosting shouldn't change the code or the functionality and should offer better performance levels at a lower cost. However, there have been many instances where re-hosting projects have failed completely or did not deliver the desired value.

Having analyzed multiple re-hosting programs - both successful and unsuccessful - I have identified the following key factors that can make or break a re-hosting initiative.


  1. Selection of applications for re-hosting - Complex applications with heavy I/O workloads, applications with chatty (unpredictable) workloads, and applications built using multiple different technologies generally create issues during re-hosting. The size, complexity and technology of the entire application portfolio should be assessed to identify which applications can be re-hosted easily and which would require extensive effort.
  2. Preservation of interfaces/connections - Applications that have a large number of interfaces require extra effort as part of the re-hosting process. It may also happen that the new environment does not support a particular interface technology. Again, this can be addressed by carefully analyzing the applications and taking stock of their connections before re-hosting.
  3. Capabilities of the target environment - Mainframe applications can be re-hosted on on-premise or cloud-based systems. It is important to ensure that the target environment can co-exist with other IT systems in use and also offers performance levels similar to or better than those of the mainframe system. Typically, x86 or web-based systems are cheaper to operate and maintain than a mainframe, and can offer similar performance.
  4. Reliability engineering - Mainframes are the gold standard when it comes to reliability. IBM uses the term RAS to describe the Reliability, Availability and Serviceability of its mainframes. Modern x86 and cloud systems do not claim such legendary reliability, so it is important to ensure re-hosted systems are engineered correctly to provide the desired reliability levels.
  5. Performance of the container platform - The container platform should ensure that the code is not impacted by re-hosting and that the entire application operates as efficiently as it did on the mainframe system. The industry has seen a host of these platforms over the last two decades. Some were written specifically for this purpose, while others have evolved to become as powerful and efficient as the mainframe system. In either case, it is extremely important to pick the right platform - one that integrates seamlessly into your environment and caters to your application interface needs.
  6. Path to future modernization - Building a modular, agile, and digital IT landscape is a multiple-year, multi-step journey and re-hosting is just a pit stop on the road. Eventually, an organization may have to move its applications to the cloud or re-architect the entire IT landscape to align with (and take advantage of) the cloud-native, digital architecture. The re-hosting approach and solution should create a foundation that supports this continuing evolution of the system.
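To make the portfolio assessment in factors 1 and 2 concrete, here is a minimal sketch of how a team might rank applications by re-hosting risk. The field names, weights, and sample data are illustrative assumptions, not from any real assessment methodology; an actual assessment would weigh many more dimensions (batch windows, data volumes, compliance constraints, and so on).

```python
from dataclasses import dataclass


@dataclass
class App:
    name: str
    io_intensity: int     # 1 (light) .. 5 (heavy or chatty I/O workloads)
    interface_count: int  # number of external interfaces/connections
    tech_count: int       # distinct technologies in the codebase


def rehosting_risk(app: App) -> int:
    """Crude, illustrative risk score: higher means more re-hosting effort.

    Weights are arbitrary assumptions; interface count is capped so one
    very connected app does not dominate the ranking.
    """
    return app.io_intensity * 2 + min(app.interface_count, 10) + app.tech_count * 3


portfolio = [
    App("claims-batch", io_intensity=5, interface_count=12, tech_count=2),
    App("customer-inquiry", io_intensity=2, interface_count=3, tech_count=1),
]

# Rank lowest-risk first: these are the candidates to re-host early.
for app in sorted(portfolio, key=rehosting_risk):
    print(f"{app.name}: risk={rehosting_risk(app)}")
```

The point is not the specific formula but the discipline: scoring every application on the same few dimensions before committing any of them to a re-hosting wave.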

The preceding list is not exhaustive; multiple other factors determine the success of a re-hosting effort. But I believe these are the ones that organizations most often overlook or fail to execute effectively.


October 11, 2018

Key takeaways from ISM 2018: Post-technology technology in the age of Digital

The annual American Public Human Services Association (APHSA) IT Solutions Management (ISM) conference was recently held in Seattle, Washington. ISM brings together leaders from Federal, State, Local government and the Industry to exchange ideas on emerging trends and imperatives impacting HHS agencies. The past few years have seen discussions around modularity, agile implementation and best-practices related to health and social program management. This year things were different. The focus had shifted.

ISM 2018 discussions focused on business operations and the role technologies play in digitalizing those operations.

Both State and Industry speakers, instead of talking about how they implemented a solution, discussed how the solutions that they implemented are impacting (and improving) business operations. Technology was featured as well, but it wasn't about .NET vs. Java, SOA, etc. The emphasis was on how to build on a platform to solve a business problem more quickly.

The inference, backed by recent RFPs in the Child Welfare space, is that States are tired of dealing with the complexity of technology. They no longer want to oversee the micro-detail of code reviews, multi-vendor COTS integrations between stack layers, licensing optimization across multiple environments and vendors, application- and persistence-layer integration issues, and all the other long-standing complexities of social program solutions. States just want a solution that solves today's problem and can either be "thrown away" in the future when the problem changes, or be agile enough to evolve into part of a truly national IT platform solution. Discussion of the best number of processors for a clustering strategy is out. Way out.

Salesforce, Amazon, and Azure are in ascendance. We are done talking about technology as fine-grained solutions built from the code up. These platforms allow States to use technology as an enabler that rapidly solves business problems and can quickly pivot in the future.

We are now in the post-technology technology world. Vendors who understand this and can help you navigate your next will be your partners. Those that insist on talking about the myriad technology decisions to be made miss the forest for the trees. Business operations enablers are the post-technology technology.

