The commoditization of technology has reached its pinnacle with the advent of the recent paradigm of Cloud Computing. Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions with Infosys experts on Cloud Computing


March 25, 2019

Optimizing Costs in the Cloud: Analyzing the Reserved Instances

There is more than one lever when it comes to optimizing costs in AWS. Right sizing, reserving and using spot instances are some of them. While right sizing is always applicable, the use of Reserved Instances is not, due to various constraints. Technologists, evangelists, architects and consultants like us are faced with a dilemma - to reserve or not to reserve instances in the AWS cloud. I am sure you will have some point of view, not necessarily for or against, but rather on how, when and why it makes or doesn't make sense to reserve instances in the Cloud. AWS is used as the reference for this discussion, but I am sure it would be equally applicable to other CSPs (Cloud Service Providers).

This (not so) short article tries to bring out the relevant points succinctly so as to help you make a decision or, at the very least, act as an ice breaker to get the debate started.


Let's begin with some research!


Arguably, the documentation is detailed, well organized and peppered with diagrams in hues of orange. In fact, it's so detailed that I had to open about seven tabs from various hyperlinks on one page to get started. No wonder, then, that a majority of technicians hardly skim through the documentation before making a decision. Further, it's a living document, considering the hundreds of innovations that AWS brings about each year in some form or shape. If you have been bored enough to have had a chance to read through it, congratulations! Hopefully, we will be on the "same page" at some point in time :)


AWS_Documentation_RI.png

  

Text to Summary (Does AWS have a service yet?)

I checked a few online options for summarizing, but they weren't up to the mark, especially when I provided the AWS documentation URL. So here is a summary of the documentation I managed to read.

Basics

An EC2 instance in AWS is defined by attributes like size (e.g. 2xlarge, medium), instance type/family (e.g. m5, r4), scope (e.g. AZ, Region), platform (e.g. Linux, Windows) and network type (e.g. EC2-Classic, VPC).

The normalization factor expresses instance size within an instance type. It is used when comparing instance footprints and is meaningful only within an instance family. One can't use the normalization factor to convert sizing across instance types, e.g. from t2 to r4.

The normalization factors, which are applicable within an instance type, are given below.

NormalizationFactors.PNG
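To make the arithmetic concrete, here is a minimal Python sketch of how footprints can be compared within a family. The factor values follow the published AWS table (small = 1, xlarge = 8, and so on); the helper function and its name are mine, not an AWS API.

```python
# Normalization factors by instance size (per the AWS table above).
NORMALIZATION_FACTORS = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32,
    "8xlarge": 64, "10xlarge": 80, "16xlarge": 128, "32xlarge": 256,
}

def footprint(instance_type: str, count: int = 1) -> float:
    """Normalized footprint of `count` instances of e.g. 'r4.4xlarge'.
    Only meaningful for comparisons within the same instance family."""
    _family, size = instance_type.split(".")
    return NORMALIZATION_FACTORS[size] * count

# Two r4.4xlarge (2 x 32) match the footprint of one r4.8xlarge (64)...
assert footprint("r4.4xlarge", 2) == footprint("r4.8xlarge")
# ...but the same comparison across families (t2 vs r4) is not meaningful.
```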


Reservation offerings


There are three offerings - Standard, Convertible and Scheduled Reserved Instances - with the following key differences between them.

RI_offerings.PNG

Notes:

* The availability of any feature or innovation changes across regions in a short span of time, hence it is better to validate at the time of actual use.


Reservation Scope

The scope of an instance reservation is either Zonal (AZ specific) or Regional, with the following characteristics.

(Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/apply_ri.html )

RI_Scope.PNG

Modifiable attributes

Some instance attributes are modifiable within certain constraints, most notably the platform type. As a general rule, Linux platform instances are more amenable. The following table lists the modifiable attributes (Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html )

Modifyable_attr.PNG

The constraints

While there is sufficient flexibility in general for IaaS, there are a number of constraints, based on feature availability in a specific region, upper or lower limits, and so on. The constraints on Reserved Instances are highlighted below.

1.       Instance size flexibility on Reserved Instances is lost if such reservations are purchased for a specific AZ or for bare metal instances. Size flexibility also does not apply to instances with dedicated tenancy, nor to Windows with/without SQL Server, SuSE or RHEL.

2.       Software licensing usually does not align well with instance size changes, so licensing aspects need careful consideration. As an example, in one of my environments I use SuSE for SAP, for which the software license is priced in a couple of slabs. If I use one r4.8xlarge, I pay an hourly price of 0.51/hr, but if I choose two r4.4xlarge instances, which are equivalent to one r4.8xlarge, I pay 2 x 0.43/hr (see the sketch after this list).

3.       While modifying a Reserved Instance, the footprint (size/capacity) of the target configuration is required to match that of the existing configuration; even if the reconfiguration results in a higher footprint, it doesn't work.

4.       Instances are grouped by family (based on storage or CPU capacity), type (designed for specific use cases) and size. For example, the c4 instance type is in the Compute optimized instance family and is available in multiple sizes. Even though c3 instances are in the same family as c4, one can't modify c4 instances into c3 instances because of the different underlying hardware specifications.

5.       Some instance types that come in only one size obviously can't be modified. Such instances include cc2.8xlarge, cr1.8xlarge, hs1.8xlarge, i3.metal and t1.micro.

6.       Convertible RIs can only be exchanged for other Convertible Reserved Instances currently offered by AWS. Exchanging multiple Convertible RIs for one Convertible RI is possible.

7.       AWS doesn't allow you to change All Upfront and Partial Upfront Convertible Reserved Instances to No Upfront Convertible Reserved Instances.
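To illustrate the licensing point in item 2, here is a small Python sketch of that comparison. The 0.51/hr and 0.43/hr slab prices are the ones quoted in the example above; the helper function and the 730-hour month are my own assumptions for illustration.

```python
# Rough monthly software-license cost for two equivalent compute footprints.
HOURS_PER_MONTH = 730  # assumed average month

def monthly_license_cost(hourly_rate: float, instance_count: int) -> float:
    """License cost per month for `instance_count` instances at `hourly_rate` each."""
    return hourly_rate * instance_count * HOURS_PER_MONTH

one_r4_8xlarge = monthly_license_cost(0.51, 1)   # ~372 per month
two_r4_4xlarge = monthly_license_cost(0.43, 2)   # ~628 per month

# Same compute footprint, but splitting the size nearly doubles the license bill.
print(f"1 x r4.8xlarge: {one_r4_8xlarge:.0f}, 2 x r4.4xlarge: {two_r4_4xlarge:.0f}")
```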

Do we have a common understanding?

It's very human not to have a common understanding, even when we have the same source of information. Here is my attempt to rephrase some of the salient points as a common understanding - if you don't agree, consider it a commoner's understanding!

 

1.       The much touted 70% discount is not that common. It applies to some very large instance types with partial/full upfront payment, e.g. x1.16xlarge with partial upfront on a 3-year term.

( ref: https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/ )

Most instance types give you a more moderate discount, approximately 20-30% less than you would have expected (see the sketch after this list for how the effective discount can be worked out).

2.       The notion that Reserved Instances are the first solution for saving costs on the Cloud is not entirely right. In fact, it requires a careful review of the various constraints.

3.       RIs are best suited to production load, which is usually more stable; by the time a workload enters the production systems, a set of tests including performance tests, backup/recovery, network latency etc. would have been carried out, so sufficient information is available to make a decision.

4.       The type of applications running on the EC2 instances, the operating system and the software/tool licenses, which are usually bound to instance type/size, must be understood before going with a reservation.

5.       Even when the decision has been made to go with RIs, a good starting point could be Convertible as opposed to Standard RIs, due to the flexibility they provide at only a marginal cost disadvantage compared to Standard RIs. Scheduled RIs, due to their inflexibility, need even more careful evaluation unless you use them only for smaller instances and non-production systems.
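As a back-of-the-envelope check on point 1, here is a minimal Python sketch for working out the effective discount of an RI quote against the on-demand price. The rates in the example call are placeholders, not actual AWS prices; plug in the numbers from the pricing page referenced above.

```python
HOURS_PER_YEAR = 8760

def effective_discount(on_demand_hourly: float, upfront: float,
                       ri_hourly: float, term_years: int) -> float:
    """Effective % discount of an RI (upfront + hourly) vs. on-demand over the term."""
    on_demand_total = on_demand_hourly * HOURS_PER_YEAR * term_years
    ri_total = upfront + ri_hourly * HOURS_PER_YEAR * term_years
    return (1 - ri_total / on_demand_total) * 100

# Placeholder numbers, purely illustrative: $2.00/hr on-demand, $15,000 upfront,
# $0.60/hr effective RI rate over a 3-year term -> roughly a 41% effective discount.
print(f"{effective_discount(2.00, 15000, 0.60, 3):.1f}% effective discount")
```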

 

Conclusion

As the number of innovations increases and solution architects are faced with numerous "equally good" options, it makes sense to pause, evaluate the options carefully, gather stakeholder inputs and make a recommendation. That is certainly true when deciding whether or not to use Reserved Instances.




March 24, 2019

Own Less - Serverless

 

We are in the Age of Software: every business today, regardless of its domain, is driven by software. Software has become both the enabler and the differentiator for business. To get a perspective of what that actually means, think of Amazon and its impact on every business whose path it crossed; in fact, I am not sure if anybody has stated this earlier, but Amazon has pioneered the art of Retail as a Service (RaaS) :) . Software has become so ubiquitous today that every industry appears to be a software industry, and the line differentiating them has become rather thin - for example Netflix (Entertainment/Media or Software?), Uber (Transportation or Software?), Airbnb (Hospitality or Software?), Amazon (Retail or Software?), and the list goes on. This reality converges to the fact that every business demands a software division.

When you deal with software, you have a lot to worry about, right from designing and developing your software to owning and maintaining the infrastructure that runs it. A typical business executive would like to focus more on the business rather than on the software nuances that drive it. Unlike the real estate needed for business operations, software infrastructure is a totally different ball game. Software and its hosting infrastructure tend to become irrelevant rather soon and demand flexibility based on changing business growth and conditions. Infrastructure also needs to keep pace to support ideas that crop up, get realized, tested and sometimes sustained with good returns, but most of the time fizzle out; this engine that drives change and innovation should not be economically taxing in terms of capital expenses that hang on as a liability in case of a failed endeavor. These changing dynamics also have ripple effects on the talent that keeps the show running.

In an ideal world, wouldn't it be simply great if you could just buy or rent the software you need instead of building and maintaining it? Well, in some cases where your need is well defined and mainstream with a fair amount of variation, you have SaaS offerings to take care of it; but in other cases, where your software is your differentiator or specific to your unique requirements, you have the choice of building your solution on a Serverless framework and, in the process, making it infrastructure agnostic (to a certain degree). So if SaaS gives you almost total independence from maintaining software, Serverless gives you a model to go halfway down that route by taking care of the infrastructure needed to run your software.

With Serverless, the whole paradigm of building software solutions changes: instead of the traditional way of developing software components and hosting them on your managed infrastructure, you build solutions using other services that come with "batteries included" - oh, I mean "servers included". With Serverless you assemble solutions akin to Lego, where every service is a Lego block that you integrate to build your solution.

When we deal with the concept of Serverless, it's important to understand that Serverless is an abstraction that shields the developer of the solution from the underlying infrastructure needed to run it; this abstraction could come from the organization itself or from public cloud infrastructure companies like AWS and Azure. So the term Serverless is relative from the standpoint of the end software developer; in reality, someone else's solution is abstracting the nuances of the hardware for you. Effectively, the Serverless paradigm adds another layer of abstraction, letting developers and organizations focus more on the software aspect of the solution. This abstraction is evolving and looks like the future of software development. For those skeptical of this model who believe you can't focus on the solution while forgoing control of your infrastructure: that may be true for certain edge cases where you need more control over the underlying hardware, but for most mainstream solutions it is more a mental shift towards building solutions from other Serverless services. A good analogy is the evolution from the days when we coded in Assembly language and programmers had complete control over the registers, program counters and memory, and decided what moved in and out of those registers. Transitioning to higher level languages like Java and its likes, we let go of that control to abstraction layers that took care of it. We are in a similar transition, with Serverless simply extending this abstraction to the higher level resources that run our code.

With Serverless your solution stack is quite simplified: from authentication and middleware to the database, you assemble your solution from managed services rather than building each layer of the stack from the ground up. This approach helps you focus on your actual business solution, decreases time to market and translates your CapEx to OpEx, with significant cost savings in most cases.
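As a minimal illustration of this "assemble from managed services" idea, here is a sketch of what a single building block might look like as an AWS Lambda handler written in Python. The function, the table name and the event shape are hypothetical, not a prescribed design; the point is that compute, the API front door and the database are all managed for you.

```python
import json

import boto3  # AWS SDK; the only infrastructure concern left is configuration

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("orders")  # hypothetical managed-database table

def handler(event, context):
    """Hypothetical Lambda entry point: persist an order posted via API Gateway.
    No servers to provision or patch - you own only this integration logic."""
    order = json.loads(event.get("body", "{}"))
    orders.put_item(Item=order)
    return {"statusCode": 201, "body": json.dumps({"stored": order.get("id")})}
```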

Hence with Serverless you end up owning the integration and logic that matter to you the most, and forgo ownership of the underlying infrastructure and its intricacies.

Having said this, the Serverless model is continuously evolving and will gradually make sense for a variety of use cases. The success of Serverless offerings depends heavily on the economic model in which they operate. Typically, cloud offerings are priced based on the amount of resources used (compute/storage/networking) and the duration of consumption; depending on the use case, the price needs to be calibrated such that it makes economic sense to stop managing one's own infrastructure. As this model gains popularity and is well tested, you will see innovative pricing mechanisms that make Serverless the obvious choice. In fact, the whole exodus towards the cloud that we are witnessing today will end up in either SaaS or Serverless solutions.

We are in the age of less. From Serverless to Driverless (think Uber without the driver), the journey is towards getting and paying for what we need instead of owning and maintaining what serves our need. It's here and it's evolving...

March 1, 2019

Can your legacy applications be moved to cloud?

Migrating to cloud is a smooth journey until enterprises come across a hiccup called legacy applications. Moving legacy applications to cloud is a task that requires pre-planning, and several aspects need to be defined clearly for the teams involved in the migration process. Evaluating the importance of a legacy application to business growth becomes integral: if the application doesn't further the enterprise's goals, then a call on whether it is necessary to retain the application needs to be made. Several legacy applications are of great importance to enterprises and bring them business continuously; thus, pre-planning, resources and budgeting become mandatory for such applications. Here is how enterprises can go about moving their legacy applications to cloud:


1)      Application Rationalization: Based on the complexity of the environment and the legacy nature of the applications, it is recommended to perform application rationalization to remove or retire duplicate applications or consolidate applications into smaller footprints. This reduces the legacy footprint and optimizes the migration.


2)      Cloud Suitability Assessment: Conducting an application assessment before migrating to cloud is an essential step. All the involved stakeholders need to understand the technical risks of moving a legacy application, along with the impact it will have on business. In addition, security and compliance, performance, availability and scalability assessments also need to be performed to avoid any lag or disruption during or after the migration.


3)      Robust Migration Plan: IT teams and vendors involved in the migration of legacy applications need to create a robust and failproof migration plan. Everything from assessment to maintenance of the application after migration needs to be listed and elaborated in this plan. Furthermore, the plan must include key responsibilities and the role each team member is required to fulfill. Preparing a checklist and ticking off tasks once they are done simplifies the process.


4)      Declutter: Legacy applications bring with them several technical challenges such as OS compatibility, old VMs, outdated infrastructure tools and inter-application dependencies. When a legacy application is migrated to cloud as-is, its log files, malware and legacy file clutter move with it. When the application is decluttered, the old OS, VMs and legacy logs are not moved to cloud; instead, only the application is migrated and installed on a cloud-hosted OS. Hence it is important to understand the application composition - the core parts of the application and its peripheral dependencies - so that the cloud migration can be optimized.


5)      Trick Your Applications: Legacy applications can be tricked into running in a stateless environment. This can be achieved by freezing the application's configuration. Freezing the state and configuration allows deployment of the application on one or more servers as per your choice.


6)      Disaster Recovery: Legacy applications are often of high importance to enterprises and have a drastic impact on business. Disaster Recovery needs to be in place for these legacy applications to avoid data loss and ensure there is no lag in business continuity.


Legacy applications are critical to the smooth functioning of enterprises, so a robust strategy to migrate these applications to Cloud becomes crucial. By doing this, the enterprise is not only at an advantage but can also reap a number of benefits such as faster innovation, greater agility and faster time to market.



Are you ready to move your legacy applications to Cloud? Infosys' vast experience in this domain will be of great help to you. Get in touch with us: enterprisecloud@infosys.com