The commoditization of technology has reached its pinnacle with the advent of the recent paradigm of Cloud Computing. The Infosys Cloud Computing blog is a platform to exchange thoughts, ideas, and opinions with Infosys experts on Cloud Computing.

March 25, 2019

Optimizing Costs in the Cloud: Analyzing the Reserved Instances

There is more than one lever when it comes to optimizing costs in AWS: right sizing, reserving, and using spot instances are some of them. While right sizing is always applicable, that is not the case with Reserved Instances, due to various constraints. Technologists, evangelists, architects, and consultants like us are faced with a dilemma: to reserve or not to reserve instances in the AWS cloud. I am sure you will have some point of view, not necessarily for or against, but rather on how, when, and why it makes or doesn't make sense to reserve instances in the cloud. AWS is used as the reference for this discussion, but I am sure it is equally applicable to other CSPs (Cloud Service Providers).

This (not so) short article tries to bring out the relevant points succinctly, so as to help you make a decision or, at the very least, to act as an ice breaker to get the debate started.

Let's begin, with some research!

Arguably, the documentation is detailed, well organized, and peppered with diagrams in hues of orange. In fact, it's so detailed that I had to open about seven tabs from various hyperlinks on one page to get started. No wonder, then, that a majority of technicians hardly skim through the documentation before making a decision. Further, it's a living document, considering the hundreds of innovations that AWS brings about each year in some form or shape. If you are bored enough and have had a chance to read through, congratulations! Hopefully, we will be on the "same page" at some point in time :)



Text to Summary (Does AWS have a service yet?)

I checked a few online options for summarizing, but they weren't up to the mark, especially when I provided the AWS documentation URL. So here is a summary of the documentation I managed to read.


EC2 instances in AWS are defined by attributes like size (e.g. 2xlarge, medium), instance type/family (e.g. m5, r4), scope (e.g. AZ, Region), platform (e.g. Linux, Windows), and network type (e.g. EC2-Classic, VPC).

The normalization factor assigns a relative weight to each instance size within an instance type. It is used when comparing instance footprints and is meaningful only within the instance family. One can't use the normalization factor to convert sizing across instance types, e.g. from t2 to r4.

The normalization factors, which are applicable within an instance type, are given below.
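As a quick illustration, the commonly documented size factors can be captured in a small Python sketch for comparing footprints within a family. The factor values below are from my reading of the AWS documentation at the time of writing; confirm them against the current docs before relying on them.

```python
# Normalization factors per instance size, as documented by AWS at the
# time of writing (verify against the current EC2 documentation).
# These are only comparable WITHIN one instance family (e.g. r4),
# never across families (e.g. t2 to r4).
NORMALIZATION_FACTOR = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32,
    "8xlarge": 64, "16xlarge": 128, "32xlarge": 256,
}

def footprint(instance_type: str, count: int = 1) -> float:
    """Total normalized units for `count` instances of e.g. 'r4.4xlarge'."""
    _family, size = instance_type.split(".")
    return NORMALIZATION_FACTOR[size] * count

# Within a family, two r4.4xlarge equal one r4.8xlarge:
assert footprint("r4.4xlarge", 2) == footprint("r4.8xlarge")
```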


Reservation offerings

There are three offerings - Standard, Convertible, and Scheduled Reserved Instances - with the following key differences between them.



* Availability of any feature or innovation changes across regions in a short span of time, hence it is better to validate at the time of actual use

Reservation Scope

Instance reservation scope is either Zonal (AZ-specific) or Regional, with the following characteristics.



Modifiable attributes

Some instance attributes are modifiable within certain constraints, most notably the platform type. As a general rule, Linux platform instances are more amenable. The following table lists the modifiable attributes.


The constraints

While there is sufficient flexibility in general for IaaS, there are a number of constraints based on feature availability in specific regions, upper or lower limits, and so on. The constraints on Reserved Instances are highlighted below.

1.       Instance size flexibility on Reserved Instances is lost if such reservations are purchased for a specific AZ or for bare metal instances. Size flexibility also does not apply to instances with dedicated tenancy, or to Windows (with or without SQL Server), SuSE, or RHEL.

2.       Software licensing usually does not align well with instance size changes, so one must give careful consideration to licensing aspects. As one example, in one of my environments I use SuSE for SAP, for which the software license pricing comes in a couple of slabs. If I use one r4.8xlarge, I pay an hourly price of 0.51/hr, but if I choose two r4.4xlarge instances, which are equivalent to one r4.8xlarge, then I pay 2 x 0.43/hr.
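To make the slab effect concrete, here is a back-of-envelope calculation using the rates quoted above. Treat the numbers as illustrative; actual SuSE for SAP pricing varies by region and over time.

```python
# Illustrative license pricing slabs, using the rates quoted in the text
# (actual prices vary by region and over time).
price_8xlarge = 0.51   # $/hr for one r4.8xlarge
price_4xlarge = 0.43   # $/hr for one r4.4xlarge

one_large = 1 * price_8xlarge      # 0.51/hr
two_smaller = 2 * price_4xlarge    # 0.86/hr for equivalent compute capacity

# Roughly 730 hours in a month, so splitting one instance into two
# equivalent smaller ones costs noticeably more in license fees alone.
extra_per_month = (two_smaller - one_large) * 730
print(f"Equivalent capacity costs {two_smaller - one_large:.2f}/hr more, "
      f"about {extra_per_month:.0f} extra per month")
```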

3.       While modifying a Reserved Instance, the footprint (the size/capacity) of the target configuration is required to match that of the existing configuration; otherwise the modification fails. Even if the reconfiguration results in a higher footprint, it doesn't work.

4.       Instances are grouped by family (based on storage or CPU capacity), type (designed for specific use cases), and size. For example, the c4 instance type is in the Compute optimized instance family and is available in multiple sizes. Even though c3 instances are in the same family as c4 instances, one can't modify c4 instances into c3 instances because of different underlying hardware specifications.

5.  Some instance types that come in only one size obviously can't be modified. Such instances include cc2.8xlarge, cr1.8xlarge, hs1.8xlarge, i3.metal, and t1.micro.

6.       Convertible RIs can only be exchanged for other Convertible Reserved Instances currently offered by AWS. Exchanging multiple Convertible RIs for one Convertible RI is possible.

7.       AWS doesn't let you change All Upfront and Partial Upfront Convertible Reserved Instances to No Upfront Convertible Reserved Instances.
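The footprint-matching rule in constraint 3 can be sketched in a few lines. The size-to-units mapping follows the normalization factors in the AWS documentation, and the function names are my own illustration, not an AWS API.

```python
# Sketch of the footprint rule for Reserved Instance modification:
# the target configuration must have exactly the same total normalized
# units as the existing one; both higher AND lower totals are rejected.
SIZE_UNITS = {"medium": 2, "large": 4, "xlarge": 8,
              "2xlarge": 16, "4xlarge": 32, "8xlarge": 64}

def units(config):
    """config: list of (instance_type, count) tuples, all in one family."""
    return sum(SIZE_UNITS[t.split(".")[1]] * n for t, n in config)

def can_modify(existing, target):
    return units(existing) == units(target)

# Splitting one r4.8xlarge into two r4.4xlarge keeps the footprint equal...
assert can_modify([("r4.8xlarge", 1)], [("r4.4xlarge", 2)])
# ...but moving to a larger footprint fails, even though it would cost more.
assert not can_modify([("r4.8xlarge", 1)], [("r4.8xlarge", 2)])
```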

Do we have a common understanding?

It's very human not to have a common understanding, even when we have the same source of information. Here is my attempt to rephrase some of the salient points as a common understanding. If you don't agree, consider it a commoner's understanding!


1.       The much-touted 70% discount is not that common. It applies to some very large instance types with partial/full upfront payment, e.g. x1.16xlarge with partial upfront on a 3-year term.


Most instance types give you a more moderate discount, approximately 20-30% less than you might have expected.

2.       The notion that Reserved Instances are the first solution for saving costs in the cloud is not entirely right. In fact, it requires a careful review of various constraints.

3.       RIs are best suited to production workloads, which are usually more stable; by the time a workload enters the production systems, a set of tests including performance, backup/recovery, and network latency would have been carried out, so sufficient information is available to make a decision.

4.       The type of applications running on the EC2 instances, the operating system, and the software/tool licenses, which are usually bound to instance type/size, must be understood before going ahead with a reservation.

5.       Even when the decision has been made to go with RIs, a good starting point could be Convertible rather than Standard RIs, due to the flexibility they provide at only a marginal cost disadvantage compared to Standard RIs. Scheduled RIs, due to their inflexibility, need even more careful evaluation, unless you use them only for smaller instances and non-production systems.
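A quick way to sanity-check point 1 is to compute the effective discount and break-even utilization yourself. The hourly rates below are hypothetical placeholders; substitute real on-demand and amortized RI rates from the AWS price list for your region.

```python
# Back-of-envelope effective discount for a Reserved Instance.
# Rates are hypothetical placeholders -- use real prices for your region.
on_demand_rate = 0.40        # $/hr, on demand
ri_effective_rate = 0.26     # $/hr, amortized 1-yr RI (upfront + hourly)

# Effective discount versus on demand -- often well short of the
# headline 70% figure.
discount = 1 - ri_effective_rate / on_demand_rate
print(f"Effective discount: {discount:.0%}")

# Break-even utilization: if the instance runs fewer hours than this
# fraction of the term, on demand would have been cheaper.
break_even = ri_effective_rate / on_demand_rate
print(f"RI pays off only above {break_even:.0%} utilization of the term")
```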



As the number of innovations increases and solution architects are faced with numerous "equally good" options, it makes sense to pause, evaluate the options carefully, gather stakeholder inputs, and make a recommendation. That is certainly true when deciding whether or not to use Reserved Instances.


March 24, 2019

Own Less - Serverless


We are in the Age of Software: every business today, regardless of its domain, is driven by software. Software has become the enabler and the differentiator for business. To get a perspective on what that actually means, think of Amazon and its impact on every business whose path it crossed; in fact, I am not sure if anybody has stated this earlier, but Amazon has pioneered the art of Retail as a Service (RaaS) :) . Software has become so ubiquitous today that every industry appears to be a software industry, and the line differentiating them has become rather thin: Netflix (entertainment/media or software?), Uber (transportation or software?), Airbnb (hospitality or software?), Amazon (retail or software?), and the list goes on. This reality converges to the fact that every business demands a software division.

When you deal with software, you have a lot to worry about, right from designing and developing your software to owning and maintaining the infrastructure that runs it. A typical business executive would like to focus more on the business rather than the software nuances that drive it. Unlike the real estate needed for business operations, software infrastructure is a totally different ball game. Software and its hosting infrastructure tend to become irrelevant rather soon and demand flexibility based on changing business growth and conditions. Infrastructure also needs to keep pace to support ideas that crop up, get realized, get tested, and are sometimes sustained with good returns but most of the time fizzle out. This engine that drives change and innovation should not be economically taxing in terms of capital expenses that hang on as liabilities in case of a failed endeavor. These changing dynamics also have ripple effects on the talent that keeps the show running.

In an ideal world, wouldn't it be simply great if you could just buy or rent the software you need instead of building and maintaining it? Well, in cases where your need is well defined and mainstream with a fair amount of variation, you have SaaS offerings to take care of it; but in other cases, where your software is your differentiator or specific to your unique requirements, you have the choice of building your solution on a Serverless framework, in the process making your solution infrastructure agnostic (to a certain degree). So if SaaS gives you almost total independence from maintaining software, Serverless gives you a model to go halfway down that route by taking care of the infrastructure needed to run your software.

With Serverless, the whole paradigm of building software solutions changes: instead of the traditional way of developing software components and hosting them on your managed infrastructure, you build solutions using other services that come with "batteries included", or should I say "servers included". With Serverless you assemble solutions akin to Lego, where every service is a Lego block that you integrate with others to build your solution.

When we deal with the concept of Serverless, it's important to understand that Serverless is an abstraction that frees the developer of a solution from the underlying infrastructure needed to run it. This abstraction could come from the organization itself or from public cloud infrastructure companies like AWS and Azure. So the term Serverless is relative, from the standpoint of the end software developer; in reality, someone else's solution is abstracting the nuances of the hardware for you. Effectively, the Serverless paradigm adds another layer of abstraction, letting developers and organizations focus more on the software aspect of the solution. This abstraction is evolving and is turning out to be the future of software development. Some are skeptical of this model and believe that you can't focus on the solution while forgoing control of your infrastructure; that might be true for certain edge cases where you need more control over the underlying hardware, but for most mainstream solutions this is more a matter of a mental shift towards building solutions from other Serverless services. A good analogy is the evolution from the days when we coded in assembly language and programmers had complete control over the registers, program counters, and memory, and decided what moved in and out of those registers. Transitioning to higher-level languages like Java and its kind, we let go of this control to abstraction layers that took care of it. We are in a similar transition, with Serverless simply extending this abstraction to the higher-level resources that run our code.

With Serverless, your solution stack is quite simplified: from authentication and middleware to the database, you assemble your solution from managed services rather than building each layer of the stack from the ground up. This approach helps you focus on your actual business solution, decreases time to market, and translates your CapEx to OpEx, with significant cost savings in most cases.

Hence with Serverless you end up owning the integration and logic that matter to you the most, and forgo ownership of the underlying infrastructure and its intricacies.

Having said this, the Serverless model is continuously evolving and will gradually make sense for a variety of use cases. The success of Serverless offerings depends highly on the economic model in which they operate. Typically, cloud offerings are priced based on the amount of resources used (compute/storage/networking) and the duration of consumption; depending on the use case, the price needs to be calibrated such that it makes economic sense to stop managing one's own infrastructure. As this model gains popularity and becomes well tested, you will see innovative pricing mechanisms that make Serverless the obvious choice. In fact, the whole exodus towards the cloud that we are witnessing today will end up in either SaaS or Serverless solutions.
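As a concrete sketch of that pay-per-use model, here is a rough monthly cost estimate in the style of AWS Lambda pricing: you pay per request plus per GB-second of compute. The rates below were the published Lambda prices at the time of writing; check the current price list before relying on them.

```python
# Rough pay-per-use estimate in the AWS Lambda pricing style.
# Rates were the published Lambda prices at the time of writing;
# verify against the current price list.
PER_MILLION_REQUESTS = 0.20      # USD per 1M requests
PER_GB_SECOND = 0.0000166667     # USD per GB-second of compute

def monthly_cost(requests, avg_ms, memory_gb):
    """Estimate monthly cost for a function invoked `requests` times,
    running avg_ms milliseconds with memory_gb of memory allocated."""
    compute_gb_s = requests * (avg_ms / 1000) * memory_gb
    return (requests / 1e6) * PER_MILLION_REQUESTS + compute_gb_s * PER_GB_SECOND

# 5M requests/month, 120 ms average duration, 512 MB of memory:
print(f"${monthly_cost(5_000_000, 120, 0.5):.2f} per month")
```

Running the numbers for modest workloads like this shows why the pay-for-what-you-use model can undercut keeping a server running all month.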

We are in the age of less. From Serverless to Driverless (think Uber without the driver), the journey is towards getting and paying for what we need instead of owning and maintaining what serves our need. It's here and it's evolving...

March 1, 2019

Can your legacy applications be moved to cloud?

Migrating to cloud is a smooth journey until enterprises come across a hiccup called legacy applications. Moving legacy applications to cloud requires pre-planning, and several aspects need to be defined clearly for the teams involved in the migration process. Evaluating the importance of a legacy application to business growth is integral: if the application doesn't further the enterprise's goals, then a call needs to be made on whether it is necessary to retain it. Several legacy applications are of great importance to enterprises and bring them business continuously; for such applications, pre-planning, resources, and budgeting become mandatory. Here is how enterprises can go about moving their legacy applications to cloud:

1)      Application Rationalization: Based on the complexity of the environment and the legacy nature of the applications, it is recommended to do application rationalization to remove or retire duplicate applications, or to consolidate applications into smaller footprints. This reduces the legacy footprint and optimizes the migration.

2)      Cloud Suitability Assessment: Conducting an application assessment before migrating it to cloud is an essential step. All the involved stakeholders need to understand the technical risks of moving a legacy application, along with the impact it will have on business. In addition, security and compliance, performance, availability, and scalability assessments also need to be performed to avoid any lag or disruption during or after the migration.

3)      Robust Migration Plan: IT teams and vendors involved in the migration of legacy applications need to create a robust and failproof migration plan. Everything from assessment to maintenance of the application after migration needs to be listed and elaborated in this plan. Furthermore, the plan must include key responsibilities and the role each team member is required to fulfill. Preparing a checklist and ticking off tasks once they are done simplifies the process.

4)      Declutter: Legacy applications bring with them several technical challenges such as OS compatibility, old VMs, outdated infrastructure tools, and inter-application dependencies. When a legacy application is migrated to cloud as-is, the log files, malware, and legacy files move with it. When the application is decluttered, the OS, VMs, and legacy logs are not moved to cloud; instead, only the app is migrated and installed on a cloud-hosted OS. Hence it is important to understand the application's composition, the core parts of the application and its peripheral dependencies, so that the cloud migration can be optimized.

5)      Trick your applications: Legacy applications can be tricked into running in a stateless environment. This can be achieved by freezing the application's configuration. Freezing the state and configuration allows deployment of the application on one or more servers, as per your choice.

6)      Disaster Recovery: Legacy applications are often of high importance to enterprises and have a drastic impact on business. Disaster Recovery needs to be in place for these legacy applications to avoid data loss and ensure there is no lag in business continuity.

Legacy applications are critical to the smooth functioning of enterprises, so a robust strategy to migrate these applications to cloud becomes crucial. By doing this, the enterprise not only gains an advantage but can also reap a number of benefits such as faster innovation, greater agility, and faster time to market.

Are you ready to move your legacy applications to Cloud? Infosys' vast experience in this domain will be of great help to you. Get in touch with us:


January 17, 2019

Accelerate Digital Transformation with Service Management 'Beyond IT'

Service management is an organizational capability to provide valuable services to customers in a structured manner. It has long played a critical role in traditional areas such as IT business management and IT service management. The newer trend, however, is to extend this automation and seamless experience to areas such as HR, Finance, Project Portfolio Management (PPM), Security Operations, Customer Service Management, and others. Organizations are capitalizing on service management to deliver an impactful user experience, similar to retail. This move towards an intuitive, personalized, and device-agnostic experience boosts user satisfaction. It also optimizes manpower, delivers greater efficiencies, and enables organizations to access deeper insights from data.

Here's how some organizations have adopted service management beyond IT:

A global fast-food chain enhanced business experience by leveraging mobility. To ensure their top 10 business critical apps were continuously monitored, they utilized a user-friendly mobile app. This enabled the management to view P1 and P2 alerts, act quickly, and stay on top of critical issues.

With a powerful service management solution, organizations can also automate their business applications. For instance, a leading provider of helicopter services extended their service management platform to fleet tracking, gaining better visibility of fleets that led to significant improvement in aircraft utilization, and to managing fleet operations by integrating with other operational systems to initiate event-based alerts for the support team, thereby reducing downtime and revenue loss.

An automotive parts manufacturing major used orchestration to automate application and workstation deployments for end-user computing. This led to a 30% reduction in manual effort and close to 100% tracking and accuracy of orchestration workflows.

Delivering seamless customer service is becoming increasingly critical. To address this requirement, a telecom giant built a guided assurance portal. This reduced their systems from 9 to 1, improved agent experience and usability, and dropped average handling time by 20-25%.

In another instance, a large automotive manufacturer built a single platform as an enterprise software library to standardize software deployments across corporate offices and plants. This introduced uniformity, reduced license costs, enabled tracking on software use, and simplified management. 

Power your Enterprise Service Management beyond IT, with AI

As digital transformation progresses beyond automation, organizations have the opportunity to harness technologies such as Machine Learning (ML), Artificial Intelligence (AI), IoT, and even Augmented Reality (AR) to drive service management and user experience. No longer do organizations have to deploy personnel to review thousands of tickets and direct them to the right department for resolution. With ML, self-learning algorithms can read, auto-categorize, and manage tickets with no time lag. This reduces human intervention, errors, and optimizes manpower utilization.

With AI, the algorithm can also alert the next level of decision-makers to any uncommon increase in ticket volume, so they can find the cause and take the necessary corrective action. Chat bots ensure 24/7 availability of service with no human intervention.

Best Practices to Deploy Service Management Beyond IT

1.      Go for the small wins - It is a great way to ensure success with a small investment while making a strong case for bigger budgetary allocations and adoption of comprehensive tools.

2.      Prioritize - Prioritize based on the business value delivered; usually customers prefer to prioritize the items which directly impact the user experience.

3.      Ensure you have employees' support - This enables rapid adoption of automation and ensures openness to new roles post automation.

4.      Adopt an integrated self-service portal - This ensures 24/7 access to information and improved user experience.

5.      Find the right solutions partner - Since service management adoption is a gradual process and may span many months, having the right partner willing to journey with you through the process can be a critical success factor.

IT Service Management in a New Power-packed Plug and Play Avatar

Until recently, organizations seeking to adopt a service management solution beyond IT had to build it from the ground up at considerable cost. Today, feature-rich solutions are available in ready-to-deploy modules and as power-packed plug and play apps. A case in point is the Infosys Enterprise Service Management (ESM) Café, an accelerator built on top of ServiceNow. This solution enables organizations to access a host of pre-configured process templates and out-of-the-box service management modules that can be quickly deployed. In our experience with clients, enterprises deploying ESM beyond IT have seen as much as a 40-50% reduction in implementation timelines. Users have also found a 50% reduction in upgrade timelines with out-of-the-box configuration and automated testing, and a 30-40% increase in user satisfaction with next-generation UI and UX interfaces for service portals, guided trainings, and automation.

With digital becoming the norm, it is imperative for organizations to adopt an integrated approach to digital that is pervasive. To know more about how service management can make a difference to your organization, visit

December 11, 2018

Navigate your Digital Transformation with a Robust HR Service Delivery Solution

Today, employees are adept at technology, ultra-social, opinionated, and continuously connected. They demand a high-quality service experience and prefer self-service instead of having to reach out to support via phone or email. The consumerization of employee experience is leading HR departments to capitalize on HR service delivery (HRSD) solutions to realign and automate functions such as recruitment, compensation, performance evaluation, compliance, legal, and more. They are also going beyond smart-looking portals, consolidating functions to enable employees to access a modern, smart, and omnichannel experience across desktop, mobile, and a virtual assistant. Organizations deploying a robust HR solution have discovered that they were able to reduce administrative costs by up to 30%.

Why an HR service delivery solution offers more than just cost savings

Usually, the first few days at work for a new employee can be a flurry of paperwork and processes. An HRSD solution that is accessible across devices could mean shorter, smoother joining formalities. Employees, whether joining remotely or at an office, can submit soft copies of their documents; this can reduce workflows from 70 steps to 10 and thus save thousands of man-hours annually.

With an HRSD solution, organizations can do away with geography specific portals, SharePoint, and the intranet for different sets of information, and offer a single, comprehensive, and user-friendly knowledge platform that is device agnostic.  With a type-ahead feature, the platform can suggest terms so that users execute their search quickly. 

Another advantage of an HRSD solution is that employees can access context-sensitive content, tasks, and services through a Single Sign-on (SSO). A prompt feature can suggest related documents so that employees have access to all the information available. For instance, if an employee is searching for the vacation policy of the organization, information related to paid holidays, guest house facilities, leave travel allowance, etc. could pop up for the employee to review.

The traditional way of addressing HR problems is to raise a ticket. At the backend, case routing is manual, time-consuming, and person-dependent. Studies indicate that human resource personnel spend 57% of their time on repetitive tasks. Instead, information can be made available real-time via call, chatbot, or chat with a virtual agent. Larger organizations can also invest in an interactive voice response (IVR) facility which is accessible 24/7. When tickets are raised, an HRSD solution can be used to assign cases automatically depending on the skills and workload of HR personnel. This can positively impact employee experience.

Determine the success of an HRSD solution through leading and lagging indicators

Adopting an HRSD solution can be a major investment, and organizations can measure ROI through leading and lagging indicators. Two instances of leading indicators are, a self-service portal and a feedback mechanism. Studies show that 70% of issues can be resolved through a self-service knowledge portal. Accessible 24/7, it gives users greater control over information and does away with costs associated with deploying HR staff to answer calls. A feedback mechanism can be deployed by enabling users to comment and rate a document. This allows the organization to engage in continuous improvement of the information on the knowledge platform.  

Lagging indicators provide quantifiable data that proves the automation the organization invested in is delivering ROI. For instance, increase in the use of the chat tool versus reduction in case volume demonstrates that employees effectively use the chat option to solve issues instead of raising tickets, which take longer to address. As a result, HR personnel spend less time in backend administration and more time responding to actual employee concerns.

Increase in the use of IVR versus reduction in the number of cases logged indicates that employees are able to quickly address queries over the phone instead of raising tickets. Thus, fewer personnel are needed to service a call center.

Measuring ROI on an HR service delivery solution

  • Organizations that implemented a knowledge portal or mobile app with personalized content found they could solve Tier 0 inquiries over 60% of the time and reduce HR administrative costs by up to 30%
  • Increased resolution of first calls reduces Tier 2 escalations. This can save up to 300k (for a client with a case volume of 25,000) as only around 8% of queries escalated to Tier 2
  • With a well-managed HRSD solution, less than 5% of employee queries escalate to Tier 3, at which, specialized professionals review and respond to cases. This allows organizations to optimize HR resources to do more value-added work
  • Increased self-service and peer-networks help case deflection. Over time, more than 60% of employee inquiries are resolved before reaching an HR personnel

  • With employee self-reliance, HR can be up to 30% more productive. Freed HR personnel can focus on higher-value strategic issues such as employee retention and workforce planning


So, if your organization is looking to give employees a seamless experience similar to retail, an HRSD solution is the answer. While the market abounds with HRSD vendors, choosing the right one requires a deeper understanding of one's requirements and the strengths of the vendor. Begin a conversation with Infosys to learn how your organization can navigate its digital journey with an effective HR service delivery solution.


September 30, 2018

Public Cloud Security- is it still a concern for enterprises?

Author: Jitendra Jain, Senior Technology Architect (Architecture & Design Group, Infosys)


Cloud computing has become an integral part of IT modernization in enterprises from large to small, and is considered a major milestone in the transformation journey. Cloud computing changes the way enterprises store, share, and access data for services, products, and applications. Public cloud is the most widely adopted model of cloud computing. Public cloud, as the name suggests, is available to the public over the internet and easily accessible via web channels in a free or pay-as-you-go mode. Gmail, O365, and Dropbox are some popular examples of public cloud.

Public cloud services eliminate extra investment in infrastructure, as all the required hardware, platform architecture, and core operating software services are entirely owned, managed, and efficiently maintained by the cloud hosting vendor.

As per McAfee research, almost 76% of enterprises have adopted at least one public cloud service provider, across any kind of cloud offering (SaaS, IaaS, or PaaS). This shows the popularity of public cloud.


September 20, 2018

Multi-Cloud strategy - Considerations for Cloud Transformation Partners

While "Cloud" has become the "New Normal", recent analyst surveys indicate that more and more enterprises are adopting Multi-Cloud, wherein more than one public cloud provider is utilized to deliver the solution for an enterprise; for example, a solution that employs both AWS and Azure. There are various reasons for enterprises to take this route: cloud reliability, data sovereignty, technical features, and vendor lock-in are a few among them.
Though most of the deliberations are revolving around Multi-Cloud for enterprises, here is an attempt to bring out the considerations that a Cloud Transformation Partner needs to watch out for.

There are four core areas a Cloud Transformation Partner must focus on to ensure successful and seamless Transformation & Operation of a Multi-Cloud environment:

1. Architecture
2. Engineering
3. Operations
4. Resources

Architecture: Success of a multi-cloud strategy depends largely on defining the right architecture that can help reap the benefits of having a multi-cloud environment. Architecture decisions should be reviewed against the business demands that triggered a multi-cloud strategy and ensure they are fulfilled.

Application and deployment architecture has to address all aspects of why an enterprise is looking to adopt a multi-cloud strategy. For example, if data sovereignty was the key consideration, the application deployment architecture should make sure that data resides in the appropriate cloud that suits the need. If reliability is the driver, a suitable failover mechanism needs to be in place, making use of the multiple cloud platforms available.

Interoperability across platforms is among the critical elements to emphasize, along with portability across Cloud Service Providers (CSPs). Achieving this takes a multi-layered approach, and containers are emerging as a solution in the cloud native space. More details in another blog post here.

Though Cloud as a platform is stable, there is a possibility of failure with a cloud provider (and we have witnessed it in the past). A Disaster Recovery (DR) solution built on multiple clouds can be more effective than DR with a single cloud provider across multiple regions.
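The failover mechanism behind such a multi-cloud DR setup can be sketched minimally as follows. The provider names, health-check results and priority order here are illustrative assumptions, not tied to any real cloud API:

```python
# Sketch: choose the active deployment target across clouds for DR failover.
# Provider names and health results are illustrative stand-ins.

def select_active_provider(health, priority):
    """Return the highest-priority provider that is currently healthy.

    health   -- dict mapping provider name -> bool (latest health-check result)
    priority -- list of provider names, most preferred first
    """
    for provider in priority:
        if health.get(provider, False):
            return provider
    raise RuntimeError("no healthy provider available - invoke DR runbook")

# Example: primary on AWS, fail over to Azure when AWS checks fail.
priority = ["aws", "azure"]
print(select_active_provider({"aws": True, "azure": True}, priority))   # aws
print(select_active_provider({"aws": False, "azure": True}, priority))  # azure
```

In a real deployment the `health` dictionary would be fed by continuous health checks, and the switch would also trigger DNS or traffic-manager updates.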

Establishing network connectivity between competitor CSPs can have its own challenges and bottlenecks. The network solution should facilitate provisioning new connections when needed, with the desired performance, across multiple clouds.

Security solutions and controls need to run natively on all clouds and work across all boundaries, so Cloud Security Architecture should be at the top of the list of considerations in multi-cloud. More importantly, solutions for threats, breaches and fixes need to cater to multiple CSPs and have to be centrally coordinated to respond effectively.

Engineering: The current set of application development and engineering processes followed for a single cloud environment will change. Application deployment needs careful planning in a multi-cloud environment, with specific focus on developer productivity, process compliance and security implementations.

DevOps should be an integral part of agile development for cloud native & traditional applications. Careful planning is needed for the DevOps processes and tools to work seamlessly across multiple cloud platforms.

Application lifecycle management should have platform-specific testing built into the process to ensure reliable operations on each of the target platforms.

Operations: Cloud operations are more complex in a multi-cloud scenario due to the overheads that each cloud platform will bring in.

The Cloud Management Platform (CMP) must support the multiple Public Clouds that are part of the solution. The CMP should be capable of abstracting the complexity of the different Cloud stacks and models, and provide operators a single-window view to monitor, administer and manage the multi-cloud ecosystem.
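The abstraction a CMP provides is essentially an adapter layer over each cloud's own stack. A minimal sketch, with toy provider classes standing in for the real cloud SDKs (the class names and return values are made up for illustration):

```python
# Sketch: a thin adapter layer giving operators one "single window" over
# multiple cloud stacks. The provider classes are stand-ins, not real SDKs.

class AwsStack:
    def list_instances(self):
        return ["i-0abc"]          # placeholder for an AWS SDK call

class AzureStack:
    def list_instances(self):
        return ["vm-east-1"]       # placeholder for an Azure SDK call

class CloudManagementPlatform:
    """Aggregate inventory across all registered clouds behind one interface."""
    def __init__(self):
        self.clouds = {}

    def register(self, name, stack):
        self.clouds[name] = stack

    def inventory(self):
        # One call for the operator, fanned out to every cloud stack.
        return {name: stack.list_instances()
                for name, stack in self.clouds.items()}

cmp_view = CloudManagementPlatform()
cmp_view.register("aws", AwsStack())
cmp_view.register("azure", AzureStack())
print(cmp_view.inventory())  # {'aws': ['i-0abc'], 'azure': ['vm-east-1']}
```

Commercial CMPs follow the same pattern at scale, adding monitoring, policy and billing behind the unified interface.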

Oversubscription of Cloud resources needs to be watched for in a multi-cloud environment. It is hard to foresee the usage patterns on each of the cloud platforms, and it is very likely that one or all of them can get oversubscribed. Optimization of cloud resources can be a challenge and can result in increased costs. A Multi-Cloud strategy may also not attract the best volume discounts from a CSP, which can impact the cost.
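A simple guardrail for this is a per-cloud budget check that flags platforms approaching their limit. The spend figures, budgets and the 90% threshold below are illustrative assumptions:

```python
# Sketch: flag clouds whose month-to-date spend crosses a budget threshold.
# All figures are made up for illustration.

def over_budget(spend_by_cloud, budget_by_cloud, threshold=0.9):
    """Return (sorted) names of clouds at or above `threshold` of budget."""
    return sorted(
        cloud for cloud, spend in spend_by_cloud.items()
        if spend >= threshold * budget_by_cloud.get(cloud, float("inf"))
    )

spend  = {"aws": 9500, "azure": 4000, "gcp": 2000}
budget = {"aws": 10000, "azure": 8000, "gcp": 2100}
print(over_budget(spend, budget))  # ['aws', 'gcp']
```

In practice the spend inputs would come from each CSP's billing exports, pulled into the CMP on a schedule.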

SLAs can vary across CSPs; this should be taken into consideration while defining the service levels.
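The arithmetic of combining SLAs is worth making explicit. When a business process chains services across clouds, availabilities multiply (serial); when a workload can fail over between independent clouds, only simultaneous failure causes an outage (parallel). The 99.9% figures below are illustrative, not any CSP's actual SLA:

```python
# Sketch: composing availability SLAs across CSPs.

def serial_availability(slas):
    """All components must be up (a request flows through each in turn)."""
    result = 1.0
    for a in slas:
        result *= a
    return result

def parallel_availability(slas):
    """Service is up if at least one redundant deployment is up."""
    downtime = 1.0
    for a in slas:
        downtime *= (1.0 - a)
    return 1.0 - downtime

# Two chained 99.9% services give less than 99.9%; two redundant ones give more.
print(round(serial_availability([0.999, 0.999]), 6))    # 0.998001
print(round(parallel_availability([0.999, 0.999]), 6))  # 0.999999
```

This is why the service level committed to the business should be derived from the composed figure, not from any single CSP's headline SLA.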

Overheads for managing and tracking multiple CSP contracts, billing etc. take effort and time and need to be planned for. A well-defined change control mechanism and a roles & responsibilities matrix are essential in a multi-cloud environment.

Resources: Staffing needs to be planned considering the multiple cloud platforms and the varied skills required. Teams need an appropriate mix of core cloud horizontal skills and CSP-specific vertical skills. A multi-cloud environment will demand resources in:

Cloud Horizontal Skills - Engineering skills like cloud native development with 12-factor principles and cloud orchestration are relatively cloud-provider independent. These resources are specialists in their technical areas and are not tied to a particular Cloud platform.

Cloud Vertical Skills - Specialists in each cloud platform will be required to extract the best out of each of the platforms in use. These resources will be required in various roles, ranging from architects to developers.

Agile/DevOps - Cloud development needs to be agile and should accommodate changes with minimal turnaround time. This requires adoption of Agile/DevOps and resources with the appropriate skills to run large-scale agile projects.
Cloud-led transformation is a journey/continuum for any large enterprise, hence enterprises should choose a cloud transformation partner with deep expertise across architecture, engineering and operations, and with the right resources. Infosys, as a leading cloud transformation partner, has been working with Global 2000 enterprises on their transformations. You can find more details on the same here.


September 3, 2018

Choosing the right Cloud Service Provider(s) and managing portability and interoperability across them

Global Enterprises are leveraging cloud as a platform to enable transformation, drive business growth, improve business agility and enhance customer experience, while delivering resilient IT systems at an optimal cost. AWS and Azure are the leading hyperscale cloud service players in the market, while others like Google Cloud and Oracle Cloud are emerging strong as well, with compelling product and service offerings for enterprise customers.

Choosing the right Cloud Service Provider

A cloud service provider choice is not made by enterprises solely based on cost, nor will they move from one cloud service provider to another just to achieve a direct cost advantage on CSP charges. The choice of cloud service provider is made considering the suitability of the CSP for the workload, the unique feature set offered by the CSP, visibility into the product roadmap, security & compliance adherence, flexibility in commercial agreements, pricing models and overall business strategy alignment. With the heterogeneity in the current enterprise IT landscape, and globally distributed businesses setting IT strategy at the line-of-business or country/regional level, enterprises often end up adopting more than one cloud service provider.

With more than one cloud service provider and an existing infrastructure landscape, enterprises end up with a multi cloud environment and applications deployed across them. With business processes flowing across applications in different deployment zones, it is essential that enterprises manage the hybrid environment with due consideration of interoperability and portability.


The foundation for interoperability should factor in all four layers of the IT landscape, namely infrastructure, platform, application and business processes, while catering to the needs of all involved stakeholders, which primarily include developers, IT operations, security, application and business owners. Considerations in the interoperability design include:

  1. Abstract the complexity of the cloud platform and provide a unified interface to IT developers to enable large scale adoption.
  2. Provide a unified cloud orchestration & management layer for provisioning, self-service catalog, policy-based orchestration, monitoring and billing & chargeback.
  3. Create an integration platform at the data and process levels across the deployment zones in a secure manner, to ensure business processes can be executed seamlessly across applications deployed in various zones.


Though interoperability ensures operations across multiple cloud service providers, there is a need to consider portability at various levels, including:

  •  Applications - Technology stack (programming) and application packaging that enable application development irrespective of the deployment target. For example, an application could be developed with technologies like Spring, Python, NodeJS, MySQL, MongoDB, Hadoop and Spark, and packaged as containers to ease deployment.
  •  Middleware platform - An application management runtime that brings in uniformity across cloud service providers and simplifies operations and management. Containers like Docker and container management platforms like Kubernetes help deploy applications across multiple clouds and manage them in a scalable manner.
  •   Development and Management Tools - While cloud native applications bring in the required agility, they need the right set of development and management tools to manage them.
    1.  Unified service discovery, routing, security and management to monitor and troubleshoot microservices and applications deployed in the hybrid cloud. The cloud control plane is expected to provide service discovery & routing, security policy enforcement, identity & authorization services, tracing, logging and monitoring to run large scale hybrid cloud environments. ServiceMesh technology is in its nascent stage and focused on addressing these needs.
    2. A DevOps platform to build, test, package and deploy applications in a uniform manner across cloud service providers. Tools like GitHub, Jenkins, Packer, Terraform, CloudForms and Chef/ Puppet help realize a DevOps platform that works across public and private clouds.
  •   Security - Consistent implementation/ enforcement of security irrespective of the application deployment zone in the hybrid cloud. Unlike the traditional data center model of deploying applications into a defined network architecture, cloud native workloads are dynamically placed across deployment zones in multiple clouds in a portable manner. This necessitates technologies that reconfigure the infrastructure to enforce the security policies in a software defined manner. ServiceMesh attempts to address the security needs of the hybrid cloud as well and continues to evolve.

Implementation of portability should consider factors like the cost of implementing portability, the impact of avoiding CSP-native capabilities, time to market, and the engineering skills required to build the platform. The enterprise may also choose to implement limited portability, weighing factors like the unique advantages of a specific CSP service, the cost of porting out in the future, etc.

Summarizing, while the choice of cloud service providers is made based on feature set, workload affinity and commercial agreement, it is essential to establish interoperability across the infrastructure, platform and application layers to ensure service resiliency and unhindered operations. Also, critically evaluate portability needs while defining the cloud solution blueprint, to retain a continuous evolution path for the organization.

Infosys, as a leading cloud transformation service provider, has helped several clients successfully navigate their multi cloud adoption journey. We would be happy to share our experiences with you and help you in yours.

August 17, 2018

Capabilities required to build and manage a Dynamic Cloud Environment

Cloud transformation is changing the way infrastructure and platforms are built and operated in an IT landscape. It is moving from the conventional implementation of ITSM processes, wherein every infrastructure request from an end user (application developers, application owners) goes through an approval & budgeting process and a long provisioning cycle, which might involve procurement as well, to a simple self-service provisioning model driven by an enterprise-specific product/ service catalog.

This self-service model requires minimal or no support from the infrastructure and platform teams during provisioning but the responsibilities of platform resiliency, security, cost control and compliance remain with the platform teams. So, the platform engineering approach changes from being people and process centric to "self-service" methods with automation, controls and governance embedded in a non-intrusive way.

The platform for cloud includes four distinct layers:

[Image: Platform Services Layers]

  1. Cloud Platform Management - Manages the catalog, and handles the request process and business approval in the provisioning phase. Addresses service management, billing and cost allocation in the operational phase.
  2. Enterprise Orchestration - Unified provisioning across multiple deployment zones, and configuring the environment for application usage, which includes application-specific middleware deployment and integration with operational & management tools.
  3. Cloud Control Plane - New capabilities required to address the dynamic characteristics of the cloud, including workload placement, routing, tracing and security implementations.
  4. Deployment Zone - The infrastructure layer, which can be traditional on-premise data centers or private/ public clouds, augmented with a container management platform.
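A self-service request touching these four layers can be sketched as a simple pipeline. The catalog entry, zone names and trace labels below are toy assumptions; real platforms (a CMP workflow engine, an orchestrator such as Terraform, a control plane, the target cloud) replace each step:

```python
# Sketch: how a self-service provisioning request might flow through the
# four layers described above. All names and behaviors are illustrative.

def provision(request, catalog):
    trace = []
    # 1. Cloud Platform Management: validate the request against the catalog.
    if request["item"] not in catalog:
        raise ValueError("requested item is not in the service catalog")
    trace.append("catalog-approved")
    # 2. Enterprise Orchestration: pick the deployment zone and configure it.
    zone = catalog[request["item"]]["zone"]
    trace.append("orchestrated-to-" + zone)
    # 3. Cloud Control Plane: register the workload for routing/monitoring.
    trace.append("registered-in-control-plane")
    # 4. Deployment Zone: the resource is created in the target infrastructure.
    trace.append("provisioned")
    return zone, trace

catalog = {"small-linux-vm": {"zone": "aws"}}
zone, trace = provision({"item": "small-linux-vm"}, catalog)
print(zone, trace)
```

The point of the sketch is that governance (step 1) and controls (step 3) stay embedded in the flow even though no human from the platform team is involved.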

Technology products are maturing in the cloud platform management and orchestration space to enable organizations to work effectively in a multi-cloud environment. For example, ServiceNow for cloud platform management (workflows in the provisioning part), Terraform for multi cloud provisioning, and Chef/ Puppet/ Ansible for configuration management. Along with the maturing technologies, there is also an increasing number of skilled resources to work on them, which enables enterprises to transform with cloud, benefitting from cloud scalability, agility and cost.

While the challenges in provisioning in a multi-cloud environment are being addressed, effective solutions for multi-cloud operations that go beyond IaaS are in the early stages of evolution. For example, orchestration and operations tools for containerized platforms, PaaS or server-less architectures are not yet mature. The cloud control plane is an evolving concept that focuses on the concerns around service location, routing, security and monitoring; however, the supporting technologies are in nascent stages with limited standards support.

Enterprises taking the journey to multi-cloud should:

  • Look at a comprehensive cloud management and orchestration platform, preferably an integrated one, to make consumption of resources from multiple deployment zones as simple as possible for consumers, while ensuring organizational controls in a policy-driven manner.
  • Explore the technology stack to implement a cloud control plane that would bring in operational control over the hybrid IT landscape.

The second part of this post will lay out the schematic of the Cloud control plane and analyze the standards & technologies that are evolving to meet these needs. Stay tuned!

March 19, 2018

Do I stop at Enterprise Agility?

In today's world, with the increase in competition and customer demands, CRM transformation is no longer a single heroic application or module. The increasing need for a better customer experience through connected architecture has added layers of complexity, with the solution spreading across systems, technologies and modules, making the enterprise architecture cumbersome to manage and maintain. Very recently, one of our customers asked: while you have the best solution for my CX woes, do you have anything which can help manage my delivery process? Do you have a packaged offering which solves my implementation as well as execution requirements?
