The commoditization of technology has reached its pinnacle with the advent of the recent paradigm of Cloud Computing. The Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions with Infosys experts on Cloud Computing.


June 25, 2019

RDS - Scaling and Load Balancing


Solution architects often encounter the question: as with EC2 instances, can a load balancer with an Auto Scaling group be used to scale and load balance RDS instances and the databases hosted on them?

While the answer to this question is "no", there are other ways to scale RDS and to load balance RDS read queries.

The point to consider here is that RDS is a managed service from AWS and thus takes care of scaling the relational database to keep up with the growing requirements of the application without manual intervention. The first part of this article explores vertical as well as horizontal scaling of RDS instances, and the latter part looks at load balancing options.

Amazon RDS was first released on 22 October 2009, supporting MySQL databases. This was followed by support for Oracle Database in June 2011, Microsoft SQL Server in May 2012, PostgreSQL in November 2013 and MariaDB in October 2015. Today it is one of the core PaaS services offered by AWS.


Scaling RDS Instances

RDS instances can be scaled vertically as well as horizontally.

Vertical Scaling

To handle higher load, database instances can be vertically scaled up with a single click. At present there are fifty instance types and sizes to choose from for an RDS MySQL, PostgreSQL, MariaDB, Oracle or Microsoft SQL Server instance. For Aurora, there are twenty different instance sizes.

Follow the steps below to vertically scale an RDS instance.

 

[Image: console steps for modifying the DB instance class]

Remember, only the instance type has been scaled so far; the storage is separate and remains unchanged when the instance is scaled up or down, so the volume must be modified separately. We can increase the allocated storage or improve performance by changing the storage type (for example, from General Purpose SSD to Provisioned IOPS SSD).


[Image: console steps for modifying the allocated storage and storage type]
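
The same modification can also be scripted against the RDS API instead of clicking through the console. Here is a minimal sketch using the AWS SDK for Python (boto3); the instance identifier, target instance class, storage size and IOPS values below are hypothetical and must be adapted to your environment.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Scale the instance class (compute/memory) and the storage in a single call.
    # All identifiers and sizes below are placeholders for illustration only.
    response = rds.modify_db_instance(
        DBInstanceIdentifier="mydb-instance",   # hypothetical instance name
        DBInstanceClass="db.m5.xlarge",         # new, larger instance class
        AllocatedStorage=200,                   # new storage size in GiB
        StorageType="io1",                      # e.g. move from General Purpose to Provisioned IOPS
        Iops=3000,                              # required when StorageType is io1
        ApplyImmediately=True,                  # otherwise the change waits for the maintenance window
    )
    print(response["DBInstance"]["DBInstanceStatus"])

Setting ApplyImmediately=True applies the change right away instead of during the next maintenance window, which ties into the availability considerations discussed below.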

One important point to remember while scaling vertically is to ensure the correct license is in place for commercial engines like Oracle and SQL Server, especially in the BYOL model, because licenses are usually tied to CPU sockets or cores.

Another important consideration is that a Single-AZ instance will be down or unavailable during this change. However, if the database instance is Multi-AZ, the impact will be minimal, as the standby database is updated first; a failover then occurs to the newly updated standby before the changes are applied to the original primary (which then becomes the new standby).


Horizontal Scaling

To scale read-intensive traffic, read replicas can be used. Presently, Amazon RDS for MySQL, MariaDB and PostgreSQL allows the creation of up to five read replicas for a given source database instance. Amazon Aurora permits the creation of up to fifteen read replicas for a given database cluster.

Read replicas are asynchronously replicated copies of the main database.

A read replica can:

  • Be in the same or a different AZ, or even a different region, so it can be placed close to users.
  • Be promoted to master for disaster recovery.
  • Use the same or a different database instance type/class.
  • Be configured as Multi-AZ - Amazon RDS for MySQL, MariaDB and PostgreSQL allow Multi-AZ to be enabled on read replicas to support disaster recovery and minimize downtime from engine upgrades.
  • Use a different storage class.

Each of these read replicas has its own endpoint, which can be used to share the read load. We connect to a read replica just as we would connect to a standard DB instance.
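
As a sketch of how this looks in code, a read replica can also be created with the AWS SDK for Python (boto3); the instance identifiers, instance class and Availability Zone below are hypothetical.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Create a read replica of an existing source instance.
    # Names, instance class and AZ are placeholders for illustration.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mydb-replica-1",        # name of the new replica
        SourceDBInstanceIdentifier="mydb-instance",   # the source (primary) instance
        DBInstanceClass="db.r5.large",                # can differ from the source class
        AvailabilityZone="us-east-1b",                # place it in another AZ if desired
    )

    # Each replica gets its own DNS endpoint once it becomes available.
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="mydb-replica-1")
    replica = rds.describe_db_instances(DBInstanceIdentifier="mydb-replica-1")["DBInstances"][0]
    print(replica["Endpoint"]["Address"])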


Load Balancing/Distribution of Read/Write Traffic on AWS RDS

AWS Elastic Load Balancing does not support routing traffic to RDS: CLB, ALB and NLB cannot route traffic to RDS instances. So how can read traffic be distributed or balanced across RDS read replicas?

There are two options -


1)  Using an open-source software-based load balancer, like HAProxy.

Each replica has a unique Domain Name System (DNS) endpoint, and these endpoints can be used by an application to implement load balancing.

This can be done programmatically at the application level or by using open-source solutions such as MaxScale, ProxySQL and MySQL Proxy.

These solutions can split read and write queries, and a proxy like HAProxy can then be placed between the application and the database servers. HAProxy can listen for reads and writes on different ports and route them accordingly.
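
As a minimal illustration of the application-level split mentioned above, the sketch below sends writes to the primary endpoint and rotates reads across replica endpoints; the endpoint names, credentials and the choice of the PyMySQL driver are assumptions made for the example.

    import itertools
    import pymysql

    # Hypothetical endpoints: the primary instance and two read replicas.
    PRIMARY = "mydb-instance.abc123.us-east-1.rds.amazonaws.com"
    REPLICAS = itertools.cycle([
        "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com",
        "mydb-replica-2.abc123.us-east-1.rds.amazonaws.com",
    ])

    def get_connection(read_only: bool):
        """Connect to the primary for writes, or to the next replica (round robin) for reads."""
        host = next(REPLICAS) if read_only else PRIMARY
        return pymysql.connect(host=host, user="app", password="secret", database="appdb")

    # Read query goes to a replica.
    conn = get_connection(read_only=True)
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")
        print(cur.fetchone())
    conn.close()

    # Write query goes to the primary.
    conn = get_connection(read_only=False)
    with conn.cursor() as cur:
        cur.execute("INSERT INTO orders (status) VALUES ('NEW')")
    conn.commit()
    conn.close()

Proxies such as HAProxy, ProxySQL or MaxScale move this routing decision out of the application; the code above only shows what the split looks like at its simplest.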


[Image: HAProxy routing read and write traffic between the application and the RDS instances]

This approach gives the application a single database endpoint instead of several independent DNS endpoints, one per read replica.

It also allows a more dynamic environment, as read replicas can be transparently added or removed behind the load balancer without any need to update the application's database connection string.


2)  Using Amazon Route 53 weighted record sets to distribute requests across read replicas.

Though there is no built-in way, Route 53 weighted records can be used as a workaround to share read requests across multiple read replicas and achieve the same result.

Within a Route 53 hosted zone, a record set can be created for each read replica endpoint, all with equal weight; Route 53 then shares read traffic among these record sets.


[Image: Route 53 weighted routing across read replica endpoints]

The application can use the Route 53 record name to send read requests to the database; these are distributed among all the read replicas, achieving behavior similar to a load balancer.
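
As an illustration of this workaround, the boto3 sketch below creates two equally weighted CNAME records under a single name; the hosted zone ID, record name and replica endpoints are hypothetical.

    import boto3

    route53 = boto3.client("route53")

    HOSTED_ZONE_ID = "Z0123456789EXAMPLE"   # hypothetical hosted zone
    READ_NAME = "read.db.example.com"       # the single name the application will use

    replicas = {
        "replica-1": "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com",
        "replica-2": "mydb-replica-2.abc123.us-east-1.rds.amazonaws.com",
    }

    # One weighted CNAME per replica, all under the same record name and with equal weight.
    changes = [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": READ_NAME,
            "Type": "CNAME",
            "SetIdentifier": set_id,                  # distinguishes the weighted records
            "Weight": 50,                             # equal weights share traffic evenly
            "TTL": 60,                                # low TTL so replica changes propagate quickly
            "ResourceRecords": [{"Value": endpoint}],
        },
    } for set_id, endpoint in replicas.items()]

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Comment": "Weighted records for RDS read replicas", "Changes": changes},
    )

Because clients and resolvers cache DNS answers, the resulting distribution is approximate rather than the even spread a true load balancer would give.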








March 25, 2019

Optimizing Costs in the Cloud: Analyzing the Reserved Instances

There is more than one lever when it comes to optimizing costs in AWS; right sizing, reserving and using spot instances are some of them. While right sizing is always applicable, that is not the case with Reserved Instances, due to various constraints. Technologists, evangelists, architects and consultants like us face a dilemma: to reserve or not to reserve instances in the AWS cloud. I am sure you will have some point of view, not necessarily for or against, but rather on how, when and why it makes or doesn't make sense to reserve instances in the cloud. AWS is used as the reference for this discussion, but I am sure it is equally applicable to other CSPs (Cloud Service Providers).

This (not so) short article tries to bring out the relevant points succinctly to help you make a decision, or at the very least act as an ice breaker to get the debate started.


Let's begin with some research!


Arguably, the documentation is detailed, well organized, and peppered with diagrams in hues of orange. In fact, it's so detailed that I had to open about seven tabs from various hyperlinks on one page to get started. No wonder, then, that a majority of practitioners hardly skim through the documentation before making a decision. Further, it's a living document, considering the hundreds of innovations that AWS brings about each year in some form or shape. If you are bored enough and have had a chance to read through it, congratulations! Hopefully, we will be on the "same page" at some point in time.


[Screenshot: AWS Reserved Instances documentation]

  

Text to Summary (Does AWS have a service yet?)

I checked a few online options for summarizing, but they weren't up to the mark, especially when I provided the AWS documentation URL. So here is a summary of the documentation I managed to read.

Basics

An EC2 instance in AWS is defined by attributes like size (e.g. 2xlarge, medium), instance type/family (e.g. m5, r4), scope (AZ or Region), platform (e.g. Linux, Windows) and network type (EC2-Classic or VPC).

The normalization factor assigns a relative weight to each instance size within an instance type. It is used when comparing instance footprints and is meaningful only within the instance family; one can't use the normalization factor to convert sizing across instance types, e.g. from t2 to r4.

The normalization factors applicable within an instance type are given below.

[Table: normalization factors by instance size]
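
As a quick worked example of how the factor is used, the sketch below compares normalized footprints within one family; the factor values are the commonly published ones (small = 1, large = 4, xlarge = 8, doubling with each step up), and the instance choices are hypothetical.

    # Normalization factors for common sizes within a single instance family.
    FACTORS = {
        "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
        "large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32,
        "8xlarge": 64, "16xlarge": 128,
    }

    def footprint(instance_type: str, count: int = 1) -> float:
        """Normalized units for `count` instances, e.g. footprint("r4.4xlarge", 2) -> 64."""
        _, size = instance_type.split(".")
        return FACTORS[size] * count

    # A Reserved Instance modification only goes through when the footprints match.
    print(footprint("r4.8xlarge", 1))   # 64 units
    print(footprint("r4.4xlarge", 2))   # 2 x 32 = 64 units, so two r4.4xlarge match one r4.8xlarge

    # Comparing across families (e.g. t2 vs r4) is NOT meaningful with these factors.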


Reservation offerings


There are three offerings - Standard, Convertible and Scheduled Reserved Instances - with the following key differences between them:

[Table: comparison of Standard, Convertible and Scheduled Reserved Instances]

Notes:

* Availability of any feature or innovation changes across regions in a short span of time, so it is better to validate at the time of actual use.


Reservation Scope

Instance reservation scope is either Zonal (AZ-specific) or Regional, with the following characteristics.

(Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/apply_ri.html )

[Table: characteristics of zonal and regional Reserved Instances]

Modifiable attributes

Some instance attributes are modifiable within certain constraints, most notably the platform type. As a general rule, Linux platform instances are more amenable to modification. The following table lists the modifiable attributes (Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html ).

[Table: modifiable Reserved Instance attributes]

The constraints

While there is sufficient flexibility in general for IaaS, there are a number of constraints based on feature availability in specific regions, upper or lower limits, and so on. The constraints on Reserved Instances are highlighted below.

1. Instance size flexibility on Reserved Instances is lost if the reservation is purchased for a specific AZ or for bare-metal instances. Sizing flexibility also does not apply to instances with dedicated tenancy, or to Windows (with or without SQL Server), SuSE or RHEL.

2. Software licenses usually do not align well with instance size changes, so one must give careful consideration to licensing aspects. As an example, in one of my environments I use SuSE for SAP, for which the software license pricing comes in a couple of slabs. If I use one r4.8xlarge, I pay an hourly license price of 0.51/hr, but if I choose two r4.4xlarge instances, which are the equivalent of one r4.8xlarge, I pay 2 x 0.43/hr.

3. While modifying a Reserved Instance, the footprint (the normalized size/capacity) of the target configuration is required to match that of the existing configuration. Even if the reconfiguration results in a higher footprint, it doesn't work.

4. Instances are grouped by family (based on storage or CPU capacity), type (designed for specific use cases) and size. For example, the c4 instance type is in the compute-optimized instance family and is available in multiple sizes. Even though c3 instances are in the same family as c4, one can't modify c4 instances into c3 instances because of different underlying hardware specifications.

5. Some instance types that come in only one size obviously can't be modified to a different size. Such instances include cc2.8xlarge, cr1.8xlarge, hs1.8xlarge, i3.metal and t1.micro.

6. Convertible RIs can only be exchanged for other Convertible Reserved Instances currently offered by AWS. Exchanging multiple Convertible RIs for one Convertible RI is possible.

7. AWS doesn't allow you to change All Upfront and Partial Upfront Convertible Reserved Instances to No Upfront Convertible Reserved Instances.

Do we have a common understanding?

It's very human not to have a common understanding, even if we have the same source of information. Here is my attempt to rephrase some of the salient points as a common understanding - if you don't agree, consider it a commoner's understanding!

 

1. The much-touted 70% discount is not that common. It applies to some very large instance types with partial/full upfront payment, e.g. x1.16xlarge with partial upfront and a 3-year term

( ref: https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/ )

Most instance types give you a more moderate discount, approximately 20-30% less than you might have expected.

2. The notion that Reserved Instances are the first solution for saving costs in the cloud is not entirely right. In fact, it requires a careful review of various constraints.

3. RIs are a better fit for production load, which is usually more stable; by the time a workload enters the production systems, a set of tests including performance, backup/recovery and network latency would have been carried out, so sufficient information is available to make a decision.

4. The type of applications running on the EC2 instances, the operating system, and the software/tool licenses (which are usually bound to instance type/size) must be understood before going with the reservation.

5. Even when the decision has been made to go with RIs, a good starting point could be Convertible rather than Standard RIs, due to the flexibility they provide at only a marginal cost disadvantage compared to Standard RIs. Scheduled RIs, due to their inflexibility, need even more careful evaluation unless you use them only for smaller instances and non-production systems.

 

Conclusion

As the number of innovations increases and solution architects are faced with numerous "equally good" options, it makes sense to pause, evaluate the options carefully, gather stakeholder input and make a recommendation. That is certainly true for deciding whether or not to use Reserved Instances.



Continue reading "Optimizing Costs in the Cloud: Analyzing the Reserved Instances " »

March 1, 2019

Can your legacy applications be moved to cloud?

Migrating to the cloud is a smooth journey until enterprises come across a hiccup called legacy applications. Moving legacy applications to the cloud requires pre-planning, and several other aspects need to be defined clearly for the teams involved in the migration process. Evaluating the importance of a legacy application to business growth becomes integral: if the application doesn't further the enterprise's goals, then a call needs to be made on whether it is necessary to retain it. Several legacy applications are of great importance to enterprises and bring them business continuously, so pre-planning, resourcing and budgeting become mandatory for such applications. Here is how enterprises can go about moving their legacy applications to the cloud:


1)      Application Rationalization: Based on the complexity of the environment and the legacy nature of the applications, it is recommended to do application rationalization to remove or retire duplicate applications, or to consolidate applications into smaller footprints. This reduces the legacy footprint and optimizes the migration.


2)      Cloud Suitability Assessment: Conducting an application assessment before migrating to the cloud is an essential step. All involved stakeholders need to understand the technical risks of moving a legacy application, along with the impact it will have on the business. In addition, security and compliance, performance, availability and scalability assessments also need to be performed to avoid any lag or disruption during or after the migration.


3)      Robust Migration Plan: IT teams and vendors involved in the migration of legacy applications need to create a robust and fail-proof migration plan. Everything from assessment to post-migration maintenance of the application needs to be listed and elaborated in this plan. Furthermore, the plan must include key responsibilities and the role each team member is required to fulfill. Preparing a checklist and ticking off tasks once they are done simplifies the process.


4)      Declutter: Legacy applications bring with them several technical challenges such as OS compatibility, old VMs, outdated infrastructure tools and inter-application dependencies. When a legacy application is migrated to the cloud as-is, its log files, malware and file legacy move with it. When the application is decluttered, the OS, VMs and legacy logs are not moved to the cloud; instead, only the application is migrated and installed on a cloud-hosted OS. Hence it is important to understand the application's composition, its core parts and its peripheral dependencies, so that the cloud migration can be optimized.


5)      Trick Your Applications: Legacy applications can be tricked into running in a stateless environment. This can be achieved by freezing the application's configuration. Freezing the state and configuration allows the application to be deployed on one or more servers of your choice.


6)      Disaster Recovery: Legacy applications are often of high importance to enterprises, and an outage can have a drastic impact on the business. Disaster recovery needs to be in place for these applications to avoid data loss and ensure there is no break in business continuity.


Legacy applications are critical to the smooth functioning of enterprises, so a robust strategy to migrate these applications to the cloud becomes crucial. By doing this, the enterprise not only gains an advantage but can also reap a number of benefits such as faster innovation, greater agility and faster time to market.



Are you ready to move your legacy applications to Cloud? Infosys' vast experience in this domain will be of great help to you. Get in touch with us: enterprisecloud@infosys.com


Continue reading "Can your legacy applications be moved to cloud?" »

September 30, 2018

Public Cloud Security- is it still a concern for enterprises?

Author: Jitendra Jain, Senior Technology Architect (Architecture & Design Group, Infosys)

Introduction

Cloud computing has become an integral part of IT modernization in enterprises of all sizes and is considered a major milestone in the transformation journey. Cloud computing changes the way enterprises store, share and access data for services, products and applications. Public cloud is the most widely adopted model of cloud computing. As the name suggests, a public cloud is available to the public over the internet and is easily accessible via the web, either free or in a pay-as-you-go mode. Gmail, O365 and Dropbox are some popular examples of public cloud services.

Public cloud services eliminate extra investment in infrastructure, as all the required hardware, platform architecture and core operating software services are entirely owned, managed and efficiently maintained by the cloud hosting vendor.

As per McAfee research, almost 76% of enterprises have adopted at least one public cloud service provider, across any kind of cloud offering (SaaS, IaaS or PaaS). This shows the popularity of the public cloud.


Continue reading "Public Cloud Security- is it still a concern for enterprises?" »

September 20, 2018

Multi-Cloud strategy - Considerations for Cloud Transformation Partners

While "Cloud" has become the "New Normal", recent analyst surveys indicate that more and more enterprises are adopting Multi-Cloud, wherein more than one Public Cloud provider is utilized to deliver the solution for an enterprise, for example; a solution that employs both AWS and Azure. There are various reasons for enterprises to take this route, Cloud Reliability, Data Sovereignty, Technical Features, Vendor Lock-in to being a few amongst the several reasons.
Though most of the deliberations are revolving around Multi-Cloud for enterprises, here is an attempt to bring out the considerations that a Cloud Transformation Partner needs to watch out for.


There are four core areas a Cloud Transformation Partner must focus on to ensure successful and seamless Transformation & Operation of a Multi-Cloud environment:

1. Architecture
2. Engineering
3. Operations
4. Resources

Architecture: The success of a multi-cloud strategy depends largely on defining the right architecture to reap the benefits of a multi-cloud environment. Architecture decisions should be reviewed against the business demands that triggered the multi-cloud strategy to ensure they are fulfilled.

The application and deployment architecture has to address all aspects of why an enterprise is looking to adopt a multi-cloud strategy. For example, if data sovereignty is the key consideration, the application deployment architecture should make sure that data resides in the appropriate cloud; if reliability is the driver, a suitable failover mechanism needs to be in place, making use of the multiple cloud platforms available.

Interoperability across platforms is among the critical elements to emphasize, along with portability across Cloud Service Providers (CSPs). Achieving this takes a multi-layered approach, and containers are emerging as a solution in the cloud-native space. More details in another blog post here.

Though cloud as a platform is stable, there is a possibility of failure with a cloud provider (and we have witnessed it in the past). A Disaster Recovery (DR) solution built on multiple clouds can be more effective than DR with a single cloud provider across multiple regions.

Establishing network connectivity between competitor CSPs can have its own challenges and bottlenecks. The network solution should facilitate provisioning new connections when needed, with the desired performance across multiple clouds.

Security solutions and controls need to run natively on all clouds and work across all boundaries. Hence cloud security architecture should be at the top of the list of considerations in multi-cloud. More importantly, solutions for threats, breaches and fixes need to cater to multiple CSPs and have to be centrally coordinated to respond effectively.


Engineering: There will be changes to the current set of application development and engineering processes followed for a single cloud environment. Application Deployment would need careful planning in a multi-cloud environment with specific focus on developer productivity, process compliance and security implementations.

DevOps should be an integral part of agile development for cloud native & traditional applications. Attention and careful planning needs to be given to the DevOps process and tools to work seamlessly across multiple cloud platforms.

Application lifecycle management should have Platform specific testing built into the process and ensure reliable operations on each of the target platforms.


Operations: Cloud operations are more complex in a multi-cloud scenario due to the overheads that each cloud platform will bring in.

The Cloud Management Platform (CMP) must support the multiple public clouds that are part of the solution. The CMP should be capable of abstracting the complexity of the different cloud stacks and models, and provide a single-window view for operators to monitor, administer and manage the multi-cloud ecosystem.

Oversubscription of cloud resources needs to be watched for in a multi-cloud environment. It is hard to foresee the cloud usage patterns in each of the cloud platforms, and it is very likely that one or all of the cloud platforms can get oversubscribed. Optimization of cloud resources can be a challenge and can result in increased costs. A multi-cloud strategy may not attract the best volume discounts from a CSP, which can also impact the cost.

SLAs can vary across CSPs; this should be taken into consideration while defining the service levels.

The overhead of managing and tracking multiple CSP contracts, billing etc. takes effort and time and needs to be planned for. A well-defined change control mechanism and a roles & responsibilities matrix are essential in a multi-cloud environment.


Resources: Staffing needs to be planned considering the multiple cloud platforms and the varied skills that would be required. Teams need an appropriate mix of core cloud horizontal skills and CSP-specific vertical skills. A multi-cloud environment will demand resources in:


Cloud Horizontal Skills - Engineering skills like cloud-native development with 12-factor principles and cloud orchestration are relatively cloud-provider independent. These resources are specialists in their technical areas and are not tied to a particular cloud platform.

Cloud Vertical Skills - Specialists in each cloud platform will be required to extract the best out of each platform that is used. These resources will be required in various roles, ranging from architects to developers.

Agile/DevOps - Cloud development needs to be agile and should accommodate changes with minimal turnaround time. This requires adoption of Agile/DevOps and resources with the appropriate skills to run large-scale agile projects.

Cloud-led transformation is a journey/continuum for any large enterprise, hence they should choose a cloud transformation partner who has deep expertise across architecture, engineering and operations with the right resources. Infosys, as a leading cloud transformation partner, has been working with Global 2000 enterprises on their transformations. You can find more details on the same here.

Continue reading "Multi-Cloud strategy - Considerations for Cloud Transformation Partners" »

September 3, 2018

Choosing the right Cloud Service Provider(s) and managing portability and interoperability across them

Global enterprises are leveraging cloud as a platform to enable transformation, drive business growth, improve business agility and enhance customer experience while delivering resilient IT systems at an optimal cost. AWS and Azure are the leading hyperscale cloud service players in the market, while others like Google Cloud and Oracle Cloud are also emerging strongly with compelling product and service offerings for enterprise customers.

Choosing the right Cloud Service Provider

Enterprises do not choose a cloud service provider solely based on cost, nor will they move from one cloud service provider to another just to achieve a direct cost advantage on CSP charges. The choice of cloud service provider is made considering the suitability of the CSP for the workload, the unique feature set offered, visibility into the product roadmap, security and compliance adherence, flexibility in commercial agreements, pricing models and overall business strategy alignment. The heterogeneity of the current enterprise IT landscape, and globally distributed businesses with IT strategy set at the line-of-business or country/regional level, lead enterprises to adopt more than one cloud service provider.

With more than one cloud service provider and an existing infrastructure landscape, enterprises end up with a multi-cloud environment and applications deployed across it. With business processes flowing across applications in different deployment zones, it is essential that enterprises manage the hybrid environment with due consideration for interoperability and portability.

Interoperability

The foundation for interoperability should factor in all four layers of the IT landscape, namely infrastructure, platform, application and business process, while catering to the needs of all involved stakeholders, which primarily include developers, IT operations, security, and application and business owners. Considerations in the interoperability design include:

  1. Abstract the complexity of the cloud platform and provide a unified interface to IT developers to enable large-scale adoption.
  2. Provide a unified cloud orchestration & management layer for provisioning, self-service catalog, policy based orchestration, monitoring and billing & chargeback.
  3.  Create an integration platform at data and process levels across the deployment zones in a secure manner. This is to ensure business processes can be executed seamlessly across applications deployed in various zones.

Portability

Though interoperability ensures operations across multiple cloud service providers, there is a need to consider portability at various levels, including:

  • Applications - The technology stack (programming) and application packaging should enable application development irrespective of the deployment target. For example, an application would be developed with technologies like Spring, Python, NodeJS, MySQL, MongoDB, Hadoop or Spark and packaged as containers to ease deployment.
  • Middleware platform - An application management runtime that brings uniformity across cloud service providers and simplifies operations and management. Containers like Docker and container management platforms like Kubernetes help deploy applications across multiple clouds and manage them in a scalable manner.
  • Development and Management Tools - While cloud-native applications bring the required agility, they need the right set of development and management tools:
    1. Unified service discovery, routing, security and management to monitor and troubleshoot microservices and applications deployed in the hybrid cloud. The cloud control plane is expected to provide service discovery and routing, security policy enforcement, identity and authorization services, tracing, logging and monitoring to run large-scale hybrid cloud environments. Service mesh technology is at a nascent stage and is focused on addressing these needs.
    2. A DevOps platform to build, test, package and deploy applications in a uniform manner across cloud service providers. Tools like GitHub, Jenkins, Packer, Terraform, CloudForms and Chef/Puppet help realize a DevOps platform which works across public and private clouds.
  • Security - Consistent implementation and enforcement of security irrespective of the application deployment zone in the hybrid cloud. Unlike the traditional data center model of deploying applications into a defined network architecture, cloud-native workloads are dynamically placed across deployment zones in multiple clouds in a portable manner. This necessitates technologies that reconfigure the infrastructure to enforce security policies in a software-defined manner. Service mesh attempts to address the security needs of the hybrid cloud as well and continues to evolve.

Implementation of portability should consider factors like the cost of implementing portability, the impact of avoiding CSP-native capabilities, time to market, and the engineering skills required to build the platform. The enterprise may also choose to implement limited portability, based on factors like the unique advantages of a specific CSP service, the cost of porting out in the future, etc.

Summarizing: while the choice of cloud service providers is made based on feature set, workload affinity and commercial agreement, it is essential to establish interoperability across the infrastructure, platform and application layers to ensure service resiliency and unhindered operations. Also, critically evaluate portability needs while defining the cloud solution blueprint, to retain a continuous evolution path for the organization.

Infosys, as a leading cloud transformation service provider, has helped several clients successfully navigate their multi-cloud adoption journey. We would be happy to share our experiences with you and help you in your journey.

August 17, 2018

Capabilities required to build and manage a Dynamic Cloud Environment

Cloud transformation is changing the way infrastructure and platforms are built and operated in an IT landscape. It is moving from the conventional implementation of ITSM processes, wherein every infrastructure request from an end user (application developers, application owners) goes through an approval and budgeting process and a long provisioning cycle that might also involve procurement, to a simple self-service provisioning model driven by an enterprise-specific product/service catalog.

This self-service model requires minimal or no support from the infrastructure and platform teams during provisioning but the responsibilities of platform resiliency, security, cost control and compliance remain with the platform teams. So, the platform engineering approach changes from being people and process centric to "self-service" methods with automation, controls and governance embedded in a non-intrusive way.

The platform for cloud includes 4 distinct layers:

[Diagram: the four platform layers for cloud]


  1. Cloud Platform Management - Manages the catalog, handles the request process and business approval in the provisioning phase. Addresses service management, billing and cost allocation in the operational phase. 
  2. Enterprise Orchestration - Unified provisioning across multiple deployment zones and configuration of the environment for application usage, which includes application-specific middleware deployment and integration with operational and management tools.
  3. Cloud Control Plane - These are new capabilities required to address the dynamic characteristics of the cloud including workload placement, routing, tracing and security implementations.
  4. Deployment Zone - The infrastructure layer, which could be traditional on-premise data centers or private/public cloud, augmented with a container management platform.

Technology products are maturing in the cloud platform management and orchestration space to enable organizations to work effectively with a multi-cloud environment - for example, ServiceNow for cloud platform management (workflows in the provisioning part), Terraform for multi-cloud provisioning, and Chef/Puppet/Ansible for configuration management. Along with the maturity of these technologies, there is also an increasing number of skilled resources able to work with them, which enables enterprises to transform with cloud and benefit from cloud scalability, agility and cost.

While the challenges of provisioning in a multi-cloud environment are being addressed, effective solutions for multi-cloud operations that go beyond IaaS are in the early stages of evolution. For example, orchestration and operations tools for containerized platforms, PaaS or serverless architectures are not mature. The cloud control plane is an evolving concept that focuses on concerns around service location, routing, security and monitoring; however, the supporting technologies are in nascent stages with limited standards support.

Enterprises taking the journey to multi-cloud should:

  • Look at a comprehensive cloud management and orchestration platform, preferably an integrated platform to make consumption of resources for multiple deployment zones as simple as possible for the consumers while ensuring organization controls in a policy driven manner.
  • Explore the technology stack to implement a cloud control plane which would bring in operational control over the hybrid IT landscape.

The second part of this post would lay out the schematic of the Cloud control plane and analyze standards & technologies that are evolving to meet the needs. Stay tuned!

August 24, 2017

Managing vendor product licenses during large scale migration to cloud

Public cloud services are mature, and enterprises are adopting cloud to achieve cost optimization, introduce agility and modernize the IT landscape. But public cloud adoption presents a significant challenge in handling existing vendor licensing arrangements. The commercial impact varies based on the cloud delivery model, from IaaS to SaaS, and on licensing flexibility. The business case for cloud transformation needs careful consideration of existing software licenses.

Based on our experience, software licensing and support by software vendors are at varying stages of maturity. At times, the software licensing model can become expensive when moving to the cloud. Typically, on-premise licenses are contracted for a number of cores, processor points or users, whereas the definition of a core in the virtualized/cloud world is different.

While enterprises assess the licenses when undertaking the cloud journey, they should carry out a high-level assessment of risks associated with licenses while formulating the business case.

Before formulating a business case, it's important to factor the following aspects into the enterprise's license transition strategy:

• Conduct due diligence on major software vendors to identify any absolute 'show stoppers' for the use of their products, such as:
  o Support level in new platform services, license portability and the license unit derivation mechanism in the cloud.
  o Commercial impact of re-use on a multi-tenant cloud platform.
  o Flexibility to reassign licenses as often as needed.
  o A mechanism to check and report compliance on public cloud on an ongoing basis across product vendor licenses.

• Inventory management of licenses and the commercials around these licenses.

• 'Future state' services and application stacks should balance license cost and performance requirements:
  o Negotiate unfriendly product licensing bound to the socket or physical hardware level.
  o Evaluate existing licensing terms and conditions for increases in licensing costs.
  o Evaluate/check for mitigation controls and options on public cloud.
  o Plan ahead for the cost implications of reusing converged-stack or appliance-based licenses on public cloud.
  o Translate the on-premise licenses into public cloud terms (virtual cores).
  o Where the cloud service provider includes operating system licenses, examine the option of removing these from existing vendor agreements.
  o Leverage the continuous availability capability of public cloud platforms to eliminate disaster recovery licenses and the costs associated with them.

Approaches to overcome public cloud licensing challenges:

To overcome the associated licensing challenges, IT teams can optimize the target state architecture, solution blueprints and frameworks with consideration of license/cost models. A few approaches:

• Re-architect existing solutions leveraging event-driven services, Function as a Service, PaaS, containers and microservices to achieve agility and significant license cost reduction.
• Consider dedicated hosts, dedicated instances or bare-metal options when socket-level visibility is required to comply with license usage on public cloud, but also weigh the cost impact of these machine types.
• Adopt open source for platforms like databases, application servers and web servers.
• If a traditional platform deployment must be moved to the cloud, consider creating a pool of platform services (such as common database services) rather than services for individual application requirements. For example, lines of business can consume business applications through centralised platform services shared across business units to achieve the greatest cost and agility benefits.
• Consider solutions with bundled licenses under usage-based pricing models like SaaS, PaaS, marketplace offerings and public cloud native services.

In reusing "on-premise" licenses, all major software vendors are changing license policies to allow flexibility to port the licenses to cloud but it is not uniform nor all-inclusive yet. Options like vendor allows certain product licenses on cloud but not all, another vendors may allow all on public cloud; and while some vendors allows porting onto authorized cloud environments only.

In summary, migrating like for like will have an impact on licensing costs on public cloud. Understanding the current licensing agreements and models, optimizing application architectures for cloud, negotiating a cloud-suitable position with vendors, and building compliance processes into the target state model should hold the organization in good stead. As cloud-native services and open source innovation continue to grow rapidly, enterprises can mitigate traditional licensing constraints by leveraging these technology innovations.

August 27, 2013

You can't compete with the clouds, but you can embrace and succeed!

 

I take the inspiration for my blog title from Forrester's James Staten, who recently wrote "You can learn from the clouds but you can't compete". James Staten talks about how data center operations can achieve levels of excellence and success, and prescribes that standardization, simplification, workload consistency, automation and a maniacal focus on power and cooling can help one set up and run the best data center operations.

 

However, I think there is more to these large cloud providers than just learning some best practices. I was talking earlier to an important client of Infosys, for whom we are currently enabling a cloud-enabled IT transformation, and she mentioned something that clarified for me what the real value of these cloud providers is. She said her aspiration is to set up and run a trusted cloud ecosystem for her enterprise with a single point of accountability. In spite of the sheer scale and magnitude of their investments, the likes of Amazon Web Services and Microsoft Windows Azure, these giant behemoths of industrial-scale infrastructure with their near-limitless compute power, derive respect from the sheer agility and speed with which they are able to respond to their customers' needs.

 

Of course, this happens because of the phenomenal level of simplification, standardization, automation and orchestration they run their operations with. Now imagine how it would be if these principles of IT governance and operations management were extended to an enterprise. Vishnu Bhat, VP and Global Head of Cloud at Infosys, keeps saying "It is not about the Cloud. It is about the enterprise". Towards this, imagine an enterprise that simply focuses on learning from these cloud leaders and works towards establishing an ecosystem of cloud providers, a hybrid setup where its current IT is conveniently and easily connected to its private cloud and public cloud setups. When this hybrid cloud environment is managed with the same level of agility and speed as an AWS, that is when the possibility of true success and value from cloud starts to emerge.

 

Imagine a hybrid cloud within the realms of your enterprise that functions with the same speed, agility and alacrity as an AWS. Imagine exceptionally efficient, continuous cost optimization achieved by bringing in automated provisioning of enterprise workloads, integrated federation and brokerage with on-premise core IT systems, extensibility with public clouds available for spikes, constant optimization through contestability, control and governance through a single enterprise view, metering, billing and charge-backs to the business, clear points of accountability with easy governance of SLAs and liabilities, secure management of the cloud, and compliance de-risking in keeping with the laws of the land. And all this from one ecosystem integrator with one point of responsibility and accountability. That's cloud nirvana at work! I am eager to keep telling clients how to get to this state, how to learn from the cloud providers and contextualize these lessons to an enterprise.

 

In my next blog, I will talk about an important aspect of cloud success - contestability, but before that, I would urge you to read my colleague Soma Pamidi's blog "Getting cloud management and sustenance right - Part 1". Till then, may the clouds keep you productive!

August 21, 2013

Getting cloud management and sustenance right -- Part-1

I have been in Australia in recent weeks, working with a major client of Infosys to help devise their cloud transformation. It is exciting, and all this effort in planning their transformation enablement with cloud only reaffirms a long-time belief I have had: to get value from cloud, one needs to get cloud management and sustenance right. In this three-part blog series, I will try to cover that. My secret hope is that it turns into a model we can help clients with.

 

The promise of cloud is that it does not simply reduce costs but also delivers business agility and flexibility. While enterprises are ramping up for the journey to cloud, achieving proper integration and delivery is of prime importance. It marks the steps to transition from 'asset-centric IT' to 'service-centric IT' in a systematic and progressive manner. It also calls for a robust cloud management framework that, with the right implementation partner, can help the enterprise reap a rich harvest of business benefits.

 

Organizations are in various stages of consolidating their information technology (IT) infrastructure to a single or a few centralized datacenters, along with vendors and suppliers. While such datacenter consolidation has yielded performance efficiencies and cost optimization, there has also been exponential growth in demand owing to the evolution of cloud solutions. Compared to traditional datacenters and hosting, cloud services offer higher elasticity, rapid scalability, on-demand pay-per-use models, and other innovative out-of-the-box, ready-to-use services. Today's enterprises are entering a new world where businesses face increasing pressure to offer efficient services, and where ownership and decisions are restricted to the services they consume.

 

Today's enterprises recognize that cloud technologies are transforming IT and business models. With cloud, organizations realize that it calls for a new operating model, which comes with its own set of challenges.

 

The increasing IT complexity makes it impossible to achieve the desired benefits from a single cloud provider. The demand for superior services is forcing businesses to opt for multiple CSPs, transforming the existing IT landscape into a complex, hybrid cloud enterprise. This poses several challenges, such as higher governance overheads, the absence of a single point of accountability, complexity in managing SLAs, and a lack of standardized processes, tools and reports. In my next blog, I will speak about cloud governance and compliance, and integration and orchestration management.

 

 

July 12, 2012

Big Data and the God particle (Higgs Boson)

The July 4th 2012 announcement from CERN on possible evidence of the existence of the God particle, or the Higgs Boson, has sent ripples through the physics community. This is fundamental not only to explaining how elementary particles acquire mass, but is also a validation of the Standard Model of particle physics. It holds the possibility of opening up new frontiers in physics and a new understanding of the world we live in.

While we marvel at these discoveries, our physicist brethren grapple with trying to understand whether this discovery is truly the Higgs Boson or an imposter. It is, however, very interesting to look at the magnitude of the data analysis and the distributed computing framework required to wade through the massive amounts of data produced by the Large Hadron Collider (LHC).

The Big Data problem that the scientists at CERN and across the world had to contend with was sifting through over 800 trillion (you heard that right...) proton-proton collisions looking for this elusive Higgs Boson. Additionally, this particle has a large mass but is extremely unstable and lasts for less than a billionth of a second. It cannot be detected directly and is identified through its footprint, or the particles that it splits into. This results in over 200 petabytes of data that need to be analyzed.

Right from the beginning, CERN had set this up as a distributed, cloud-based, tiered data processing solution. Three tiers were identified: T0 is the tier that collects the data directly from the LHC; 11 T1 nodes across the world get the data from CERN; and a number of T2 nodes (e.g. there are 8 in the US) are organized around the areas of the data that particular groups of physicists are interested in analyzing. From the T2 nodes, people can download the data to their personal T3 nodes for analysis. This results in a massive, highly distributed data processing framework that collects data spewed out by the LHC detectors at a phenomenal rate of 1.25 GB/sec. The overall network can rely on a computation capability of over 100,000 processors spread over 130 organizations in 34 countries.

From a technology perspective, it is interesting that the teams have used some of the open-source technologies we use for big data processing in enterprises. For example, HDFS (Hadoop Distributed File System), the file system from the Hadoop ecosystem, was a candidate for storing these massive amounts of data, and ROOT, another open-source tool that is also used by financial institutions, is used to analyze the data.

It is amazing that the analysis tools used to find the God particle are commonly available for enterprises to use to solve smaller Big Data problems.

To paraphrase Galileo: "All truths are easy to understand once they are discovered; the point is to discover them" - and Big Data can help.

June 27, 2012

Microsoft can extend its Windows run on the Cloud.

While preparing for my trip to the Microsoft Worldwide Partner Conference (WWPC) 2012, happening in Toronto during the second week of July, I was referring to the notes I captured during previous events. What really caught my eye were some key messages delivered during the 2010 WWPC in Washington DC. Steve Ballmer, in his keynote, mentioned that Microsoft was transforming itself into a cloud services company and that 70 percent of Microsoft developers were working on cloud services. This was followed by the message to partners to "Lead with cloud: Build, Tell, Sell and Support". Microsoft's big bets on cloud were clearly evident throughout the 2010 partner conference, where 70% or more of the sessions were around Microsoft cloud products, be it public or private cloud, SaaS or PaaS offerings. It was the first wave of Microsoft's message to partners to gear up to support its Windows run on the cloud.

If you look back, this strategy from Microsoft didn't happen overnight. The preparations started way back in 2005, when Ray Ozzie released an exciting internal memo to Microsoft employees outlining the competitive landscape, the opportunities and the future steps for Microsoft's strategy to deliver on a portfolio of seamless software + services. I recollect that the majority of us during that timeframe ignored it and continued implementing Windows XP, Windows Server 2003 and Office 2003, and kept exploring the power of .NET applications and XML web services on premises.

Later, during 2008, Ray further advanced his software + services strategy and highlighted the power of choice for business while embracing cloud, which set the stage for the enhanced set of products and services from Microsoft extending its powerful on-premise story to the cloud. This was followed by the announcement of Windows Live and Microsoft Business Productivity Online Services. This was the beginning of cloud becoming one of the mainstream focus areas.

The 2011 WWPC, held in Los Angeles, highlighted Microsoft's huge investments in building data centers across the world and the product line enhancements to entice adoption. Today I believe Microsoft has become one of the largest and most influential players in the cloud across the enterprise, SMB and consumer spaces. Windows Azure today is mature and extensive and, unlike the competitors, provides both IaaS and PaaS capability. Microsoft has turned its most influential Windows-based business productivity application, Microsoft Office, into a cloud app. Along with this, Microsoft also offers its most popular collaboration and messaging suite, comprising Microsoft Exchange, SharePoint and Lync, as a SaaS solution branded O365. Added to this is its recently unveiled SaaS offering, Microsoft Dynamics CRM Online. They also have the right synergy across public and private cloud for identity, virtualization, management and application development. Earlier this week Microsoft announced its acquisition of Yammer, a social networking company, to extend its cloud services with best-in-class enterprise social networking. What more do we need to believe that Microsoft will extend its run on the cloud?

I am sure the 2012 WWPC will have more exciting announcements reiterating their strategy as a cloud services company.

April 4, 2012

Will the inclusion of cloud computing in industries decrease job opportunities in the future?

We know that cloud computing, apart from providing benefits like reliability, availability, scalability, etc., also shifts some of the responsibility (from the infrastructure point of view) to the cloud computing providers.

Once an application/service is deployed to the cloud computing infrastructure:

• Network administrators need not worry about load balancing, bandwidth balancing, etc.

• System administrators need not worry about updating the machines/servers with the latest security and other patches.

So from the application owner's perspective, he/she need not put much effort and money into these types of administrative work and can instead devote more to application feature enhancement.

Does that mean personnel involved in such administrative work are going to lose their job opportunities?

Continue reading "Will the inclusion of cloud-computing in Industries may decrease the job opportunities in future?" »

January 9, 2012

Enterprise Cloud Adoption Strategic Roadmap

Adoption of cloud in an enterprise is more of a strategic decision than an operational or tactical one. Cloud adoption needs to be seen from an enterprise architecture strategy perspective rather than as an isolated, application-specific architecture strategy, for the simple reason that it has several short- and long-term implications for enterprise strategy which may go beyond the specific application's business or technology footprint.

Continue reading "Enterprise Cloud Adoption Strategic Roadmap" »

November 7, 2011

Enterprise Cloud trends - "Cloud First" strategy

While we were working on the cloud strategy for Infosys a year back, we had lengthy debates on what an enterprise of the future looks like and its cloud vision for the coming years. Most of our forecasts are coming true. My recent interactions with clients and partners clearly reveal that cloud adoption by enterprises is faster than what was perceived by the larger community.

 

"Cloud first" strategy is being adopted by some of our leading clients and few of them have a very clear approach for their Infrastructure and application stack. Hybrid is the most common trend and Private Cloud plans are in place for the new and old gears. IaaS consumption from Public Cloud seems to be a short term strategy and PaaS is becoming more prominent for application development even though there is still some fear of vendor lock-in. This to me is the right strategy as more innovations are to happen in the PaaS space and applications can leverage the power of Cloud in terms of scalability, global availability, design for failures etc. more with platform as a service. Application portability gaps across platforms and on-premise setup will gradually diminish with parity amongst on-premise server operating systems and Cloud platforms being addressed with every new version release. Those who consider that an application developed for windows server is a platform "lock-in" may not agree with me on this view.

 

One of our clients who has adopted O365 has outlined a future strategy for portals with SharePoint Online as the first choice; anything on-premise will be an exception (feature parity, data privacy etc.). This shows that a "cloud first" strategy is becoming the norm within enterprises, with clear directions for non-standard applications and short-lived workloads. This works well across organizations and industries, especially for self-contained application workloads which have the least dependency on data residing on premise. Additionally, these organizations could have security and compliance concerns about their data being exposed to the public cloud.

 

The next wave is around mobility and analytics. I will discuss this in my next post.

 

August 23, 2011

Practicing Agile Software Development on the Windows® Azure™ Platform

Over the years, several software development methodologies have evolved to help the IT industry cope with rapidly evolving business requirements. One such methodology is Agile, an iterative approach to software development. Similarly, rapid strides on the technology front are resulting in paradigm shifts in software development and in how IT delivers its services to business. Technologies such as virtualization and cloud offer low entry barriers by making software and hardware infrastructure easily accessible, thus reducing time to market. These are encouraging signs that help reduce the gap between business and IT.

Continue reading "Practicing Agile Software Development on the Windows® Azure™ Platform" »

July 29, 2011

Big Data and Cloud Computing

It is well known that leveraging the cloud for high-compute workloads over a short span of time is a good business case.

Getting business insights from Big Data is becoming mainstream. Cloud is becoming an ideal choice for that.

Continue reading "Big Data and Cloud Computing" »

March 31, 2011

Is cloud computing same as putting things Online?

All those just boarding the cloud train may have posed this question to themselves or to others who have know-how on cloud. Being a cloud SME myself, I have faced this question several times. This post is an attempt to clear some of the confusion that exists around this specific topic.

Continue reading "Is cloud computing same as putting things Online? " »

June 27, 2009

The Cloud ROI Framework

Last week, I had an opportunity to discuss the cloud computing ROI model with a large banking major. We did some illustrations using a simplistic ROI framework and figured out savings to the scale of 90% on CAPEX and 50% on OPEX costs year-on-year.

The numbers were attractive enough to get the attention of "C"-level profiles, and he was a bit positively surprised. The first question he asked was: what's the trap here? He understands the concerns of security and outsourcing in the banking business better than I do, so I did not attempt to raise technical obstacles. I simplified the answer: it is a paradigm shift; IT needs to manage this change! To my surprise, our wavelengths matched right there, and we had a constructive meeting.

We discussed how the future of IT is in the cloud; it is a big wave, like moving from DOS to Windows, or from desktops to the internet, and now from on-premise IT to cloud, through iterative lean IT transformation. We were on the same page throughout, and we concluded the meeting with the note that enterprises will adopt cloud tomorrow if not today; and if they fail to do it, they will be forced to adopt.

Continue reading "The Cloud ROI Framework" »

May 26, 2009

How enterprises have benefited from the cloud ?

It was an interesting question from one of our manufacturing clients when we were discussing an opportunity to leverage the Microsoft Azure .NET Service Bus for their innovative SaaS product offering. The intent of the question was loud and clear from the tone, backed up by data points from the recently published, highly debated McKinsey report - Clearing the Air on the Cloud - where the cost effectiveness of cloud for large enterprises was questioned.

We presented a bunch of case studies to our client - some executed by Infosys and some publicly available on the internet. As a trusted transformation partner and technology advisor, we shared our viewpoints on each one of them.

We also presented the counter-argument from Booz Allen Hamilton on the referenced McKinsey report. It helped turn the tables - we were able to shape the discussion into the comfort zone of every stakeholder on the call - you know how difficult this is in short meetings!

Continue reading "How enterprises have benefited from the cloud ?" »

May 15, 2009

Hybrid Approach for Cloud Computing Adoption

The concepts of private and public clouds came into existence when cloud computing was an emerging trend trying to lay its roots in large-scale enterprises' IT environments. The multi-tenant model of public clouds succeeded in attracting numerous small-scale organizations, which found the CapEx savings and reduced development time alluring.

But large-scale enterprises see the public nature of the cloud as a potential issue for adoption. Moreover, the fact that IT is outside the limits of their control further heightens the barrier to public cloud adoption.

With an aim to tackle the challenges of the public cloud, the concept of in-house private clouds gained pace. However, though private clouds are supposed to emulate public cloud functionality, they introduce their own shortcomings of scalability and up-front costs.

A hybrid model combining the best of both worlds can be a solution.

Continue reading "Hybrid Approach for Cloud Computing Adoption" »

May 14, 2009

Google App Engine - Where to start?

In my first blog, Downpour or Drizzle to Enterprise?, I wrote about how large enterprises can take a drizzle in the cloud.

In this one, let us look at one of the cloud providers - Google App Engine (GAE).

Is Google App Engine ready for the Enterprise?

Where can enterprises start the adoption?

Continue reading "Google App Engine - Where to start?" »