The commoditization of technology has reached its pinnacle with the advent of cloud computing. The Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions with Infosys experts on cloud computing.


June 28, 2019

Amazon Aurora Serverless, the future of database consumption

Amazon has recently launched the Amazon Aurora Serverless database (MySQL-compatible edition). This is going to set a new trend in the way databases are consumed by organizations. Traditionally, database setup, administration, scaling and maintenance are tedious, time-consuming and expensive. Thanks to cloud computing, RDS takes the setup, scaling and maintenance of databases off customers' hands. Amazon Aurora Serverless takes RDS to the next level, where users pay only for what they use, when they use it.
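For teams already describing infrastructure in CloudFormation, a serverless Aurora cluster can be declared in a few lines. This is a hedged sketch rather than a complete template: the password reference and the capacity figures are illustrative assumptions.

```yaml
Resources:
  ServerlessCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora              # MySQL-compatible edition
      EngineMode: serverless
      MasterUsername: admin
      MasterUserPassword: '{{resolve:ssm-secure:/app/db/password:1}}'  # assumed SSM parameter
      ScalingConfiguration:
        AutoPause: true            # pause when idle, so no compute is billed
        SecondsUntilAutoPause: 300
        MinCapacity: 1             # capacity measured in Aurora Capacity Units (ACUs)
        MaxCapacity: 4
```

The ScalingConfiguration block is what delivers the pay-for-what-you-use behavior: the cluster scales between the capacity bounds and pauses entirely after five idle minutes.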

Continue reading "Amazon Aurora Serverless, the future of database consumption" »

June 26, 2019

AWS Cloudformation: An underrated service with a vast potential

As businesses experience a surge in provisioning and managing infrastructure and services through cloud offerings, a collateral challenge has emerged: remaining accurate and quick while provisioning, configuring and managing medium to large-scale setups with predictability, efficiency and security.
Infrastructure as Code (IaC) is a way to manage resource provisioning, configuration and updates/changes using the tested and proven software development practices that are used for application development.

For example:
  • Version Control
  • Testing
  • CI/CD

Key Benefits:

1)  Cost Reduction - Reduced time and effort in provisioning and management through IaC.
2)  Speed - Faster execution through automation.
3)  Risk Reduction - Fewer chances of error due to misconfiguration or human mistakes.
4)  Predictability - Assess the impact of changes via change sets and decide accordingly.

There are several tools that can be used to deploy Infrastructure as Code.
  • Terraform
  • CloudFormation 
  • Heat
  • Ansible
  • Salt
  • Chef, Puppet

Ansible, Chef and Puppet are configuration management tools, primarily designed to install and manage software on existing servers. They can support a certain degree of infrastructure provisioning; however, there are purpose-built tools that are a better fit.

Orchestration tools like Terraform and CloudFormation are designed specifically for infrastructure provisioning and management.

CloudFormation is AWS's native Infrastructure as Code offering. It was one of the most underrated services in the Amazon cloud environment for many years. However, with increasing awareness, the service is gaining traction and many clients are willing to look at its advantages.

It allows codification of infrastructure, which helps leverage software development best practices and version control. Templates can be authored in any code editor, such as Visual Studio Code or Atom, checked into a version control system like Git, and reviewed with team members before deployment to Dev/Test/Prod.

CloudFormation takes care of all the provisioning and configuration of resources, so developers can focus on development rather than spending time and effort creating and managing resources individually.

Resources are defined as code (JSON or YAML) in a template, which the CloudFormation (CFN) service processes to produce a stack: a collection of AWS resources that can be managed as a single unit. In other words, we can create, update, or delete a collection of resources by creating, updating, or deleting stacks.

CloudFormation can be used for anything from simple scenarios, like spinning up a single EC2 instance, to complex multi-tier, multi-region application deployments.

For example, all the resources required to deploy a web application, such as the web server, database server and networking components, can be defined in a template. When this template is submitted to the CloudFormation service, it deploys the desired web application. There is no need to manage the dependencies of the resources on each other, as that is all taken care of by CloudFormation.
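As a rough sketch of such a template (the AMI ID and the password reference are placeholders, not real values), a web server, its security group and a database can be declared together. CloudFormation infers from the !Ref that the security group must be created before the instance, so no explicit ordering is needed:

```yaml
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP to the web tier
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0abcdef1234567890      # placeholder AMI ID
      SecurityGroups:
        - !Ref WebSecurityGroup           # implicit dependency: group is created first
  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t2.micro
      AllocatedStorage: '20'
      MasterUsername: admin
      MasterUserPassword: '{{resolve:ssm-secure:/app/db/password:1}}'  # assumed SSM parameter
```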

CloudFormation treats all stack resources as a single unit, which means that for stack creation to succeed, all the underlying resources must be created successfully. If the creation of any resource fails, by default CloudFormation rolls back the stack creation and deletes any resources created up to that point.

However, note that any resource created before the rollback will still be charged for.

The example below creates a t2.micro instance named "EC2Instance" using an Amazon Linux AMI in the N. Virginia (us-east-1) region.

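A minimal template along these lines might look as follows; the AMI ID is a placeholder to be replaced with a current Amazon Linux AMI ID for us-east-1:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Launch a single t2.micro Amazon Linux instance in us-east-1
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0abcdef1234567890   # placeholder Amazon Linux AMI ID
      Tags:
        - Key: Name
          Value: EC2Instance
```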
 
Just as it makes creation easy, CloudFormation also allows easy deletion of a stack and cleanup of all its underlying resources in a single go.

Change Sets - Updating or changing any resource carries a risk associated with the impact of that change. For example, updating a security group description without a VPC defined in the template, or in a non-VPC environment, will recreate the security group as well as the EC2 instance associated with it. Another example is updating an RDS database name, which will recreate the database instance and can have a severe impact.

CloudFormation allows you to preview and assess the impact of a change through change sets, to ensure it doesn't implement unintentional changes.


The change set example below shows that this change will:

 
1)  Replace the security group.
2)  Possibly replace the EC2 instance, depending on several factors external to this CloudFormation template that can't be assessed with certainty. For such cases, the impact can be assessed with the help of the AWS Resource and Property Types Reference (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html).
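To illustrate the kind of edit a change set flags, consider a template fragment where only the security group description is changed. This is an illustrative sketch, with hypothetical names and rules; the point is that GroupDescription is a replacement-only property, so the change set reports the group (and potentially resources that reference it) as requiring replacement before anything is actually modified:

```yaml
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      # Changing GroupDescription requires replacement of the security group;
      # running a change set surfaces this before the update is executed.
      GroupDescription: Allow HTTP traffic   # was: "Web tier security group"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
```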

Conclusion: CloudFormation, the Infrastructure as Code service from AWS, unleashes the real power and flexibility of the cloud environment and has changed the way we deploy and manage infrastructure. It is worth investing time and effort in exploring it.





June 25, 2019

RDS - Scaling and Load Balancing


Solution architects often encounter this question: as with EC2 instances, can a load balancer with an auto scaling group be used for scaling and load balancing RDS instances and the databases hosted on them?

While the answer to this question is "no", there are other ways to scale RDS and load balance RDS read queries.

A point to consider here is that RDS is a managed service from AWS, and thus takes care of scaling the relational database to keep up with the growing requirements of the application without manual intervention. The former part of this article will explore vertical as well as horizontal scaling of RDS instances, and the latter will look at load balancing options.

Amazon RDS was first released on 22 October 2009, supporting MySQL databases. This was followed by support for Oracle Database in June 2011, Microsoft SQL Server in May 2012, PostgreSQL in November 2013 and MariaDB in October 2015. Today it is one of the core PaaS services offered by AWS.


Scaling- RDS Instances 

RDS instances can be scaled vertically as well as horizontally.

Vertical Scaling

To handle higher load, database instances can be vertically scaled up with a single click. At present there are fifty instance types and sizes to choose from for an RDS MySQL, PostgreSQL, MariaDB, Oracle or Microsoft SQL Server instance. For Aurora, there are twenty different instance sizes.

An RDS instance can be vertically scaled by modifying its DB instance class from the RDS console, applying the change either immediately or during the next maintenance window.

Remember, so far only the instance type has been scaled; the storage is separate and remains unchanged when the instance is scaled up or down. Hence the volume must also be modified separately. We can increase the allocated storage space or improve performance by changing the storage type (such as from General Purpose SSD to Provisioned IOPS SSD).
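Both knobs appear as separate properties when the instance is managed through CloudFormation. In this illustrative fragment (credentials, sizes and the password reference are placeholder assumptions), vertical scaling is a change to DBInstanceClass, while storage is tuned independently via AllocatedStorage, StorageType and Iops:

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.m5.large   # scale vertically by changing the instance class
      AllocatedStorage: '200'        # storage is modified separately from the instance class
      StorageType: io1               # e.g. switch from gp2 to io1 (Provisioned IOPS SSD)
      Iops: 2000
      MasterUsername: admin
      MasterUserPassword: '{{resolve:ssm-secure:/app/db/password:1}}'  # assumed SSM parameter
```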



One important point to remember while scaling vertically is to ensure the correct license is in place for commercial engines like Oracle and SQL Server, especially in the BYOL model, because licenses are usually tied to CPU sockets or cores.

Another important consideration is that a Single-AZ instance will be down or unavailable during this change. However, if the database instance is Multi-AZ, the impact will be minimal, as the standby database will be updated first. A failover will occur to the newly updated standby before the changes are applied to the main database engine (which will then become the standby).


Horizontal Scaling

To scale read-intensive traffic, read replicas can be used. Presently, Amazon RDS for MySQL, MariaDB and PostgreSQL allows creating up to five read replicas for a given source database instance. Amazon Aurora permits up to fifteen read replicas for a given database cluster.

Read replicas are asynchronously replicated copies of the main database.

A read replica can:

  • Be in the same or a different AZ, or even a different region, so it can be placed close to users.
  • Be promoted to master for disaster recovery.
  • Have the same or a different database instance type/class.
  • Be configured as Multi-AZ: Amazon RDS for MySQL, MariaDB and PostgreSQL allows enabling the Multi-AZ configuration on read replicas to support disaster recovery and minimize downtime from engine upgrades.
  • Have a different storage class.

Each of these read replicas has its own endpoint to share the read load. We can connect to these read replicas just as we connect to a standard DB instance.
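Declared through CloudFormation, a read replica is just another AWS::RDS::DBInstance pointing at its source. The identifiers and classes in this sketch are hypothetical; note that the replica's instance class and storage type may differ from the source's, as listed above:

```yaml
Resources:
  ReadReplica:
    Type: AWS::RDS::DBInstance
    Properties:
      SourceDBInstanceIdentifier: mydb-primary   # hypothetical source instance identifier
      DBInstanceClass: db.r5.large               # may differ from the source's class
      StorageType: gp2                           # storage class may differ too
```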


Load Balancing/Distribution of Read/Write Traffic on AWS RDS

AWS load balancers don't support routing traffic to RDS; the CLB, ALB and NLB cannot route traffic to RDS instances. So how can read traffic be distributed or balanced across AWS RDS read replicas?

There are two options:


1)  Using an open-source software-based load balancer, like HAProxy.

As we know, each replica has a unique Domain Name System (DNS) endpoint. These endpoints can be used by an application to implement load balancing.

This can be done programmatically at application level or by using several open-source solutions such as MaxScale, ProxySQL and MySQL Proxy.

These solutions can split read/write queries, and then a proxy like HAProxy can be placed between the application and the database servers. HAProxy can listen for reads and writes on different ports and route them accordingly.



This approach provides a single database endpoint instead of several independent DNS endpoints, one per read replica.

It also allows a more dynamic environment, as read replicas can be transparently added or removed behind the load balancer without any need to update the application's database connection string.


2) Use Amazon Route 53 weighted record sets to distribute requests across read replicas.

Though there is no built-in way, this workaround uses Route 53 weighted records to share requests across multiple read replicas and achieve the same result.

Within a Route 53 hosted zone, different record sets can be created, one per read replica endpoint, each with equal weight; Route 53 then shares read traffic among the record sets.



The application can use the Route 53 endpoint to send read requests to the database; these will be distributed among all read replicas, achieving behavior similar to a load balancer.
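As an illustrative sketch (the zone name and replica endpoints are hypothetical), two equally weighted CNAME records sharing the same name could be declared as follows; the shared Name is what the application connects to, and the Weight values control the split:

```yaml
Resources:
  ReadEndpointReplica1:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: db-read.example.com.       # single read endpoint used by the application
      Type: CNAME
      TTL: '60'
      SetIdentifier: replica-1
      Weight: 50                       # equal weights give an even read split
      ResourceRecords:
        - replica1.abc123.us-east-1.rds.amazonaws.com
  ReadEndpointReplica2:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: db-read.example.com.
      Type: CNAME
      TTL: '60'
      SetIdentifier: replica-2
      Weight: 50
      ResourceRecords:
        - replica2.abc123.us-east-1.rds.amazonaws.com
```

A low TTL keeps the distribution responsive when replicas are added or removed, though DNS-based sharing is approximate rather than connection-aware.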








March 1, 2019

Can your legacy applications be moved to cloud?

Migrating to the cloud is a smooth journey until enterprises come across a hiccup called legacy applications. Moving legacy applications to the cloud requires pre-planning, and several aspects need to be defined clearly for the teams involved in the migration process. Evaluating the importance of a legacy application to business growth becomes integral: if the application doesn't advance the enterprise's goals, then a call needs to be made on whether it is necessary to retain it. Several legacy applications are of great importance to enterprises and bring them business continuously; thus, pre-planning, resources and budgeting become mandatory for such applications. Here is how enterprises can go about moving their legacy applications to the cloud:


1)      Application Rationalization: Based on the complexity of the environment and the legacy nature of the applications, it is recommended to perform application rationalization to remove or retire duplicate applications, or to consolidate applications into smaller footprints. This reduces the legacy footprint and optimizes the migration.


2)      Cloud Suitability Assessment: Conducting an application assessment before migrating to the cloud is an essential step. All the involved stakeholders need to understand the technical risks of moving a legacy application, along with the impact it will have on the business. In addition, security and compliance, performance, availability and scalability assessments also need to be performed to avoid any lag or disruption during or after the migration.


3)      Robust Migration Plan: IT teams and vendors involved in the migration of legacy applications need to create a robust and failproof migration plan. Everything from assessment to post-migration maintenance of the application needs to be listed and elaborated in this plan. Furthermore, the plan must include key responsibilities and the role each team member is required to fulfill. Preparing a checklist and ticking off tasks as they are done simplifies the process.


4)      Declutter: Legacy applications bring with them several technical challenges, such as OS compatibility, old VMs, outdated infrastructure tools and inter-application dependencies. When a legacy application is migrated to the cloud as-is, its log files, malware and file legacy move with it. When the application is decluttered, the OS, VMs and legacy logs are not moved to the cloud; only the app is migrated and installed on a cloud-hosted OS. Hence it is important to understand the application's composition, its core parts and its peripheral dependencies, so that the cloud migration can be optimized.


5)      Trick Your Applications: Legacy applications can be tricked into running in a stateless environment. This can be achieved by freezing the application's configuration. Freezing the state and configuration allows deployment of the application on one or more servers as you choose.


6)      Disaster Recovery: Legacy applications are often of high importance to enterprises and have a drastic impact on business. Disaster recovery needs to be in place for these applications to avoid data loss and ensure there is no lag in business continuity.


Legacy applications are critical to the smooth functioning of enterprises, so a robust strategy to migrate these applications to the cloud becomes crucial. By doing this, not only is the enterprise at an advantage, but it can also reap a number of benefits such as faster innovation, greater agility and faster time to market.



Are you ready to move your legacy applications to Cloud? Infosys' vast experience in this domain will be of great help to you. Get in touch with us: enterprisecloud@infosys.com


Continue reading "Can your legacy applications be moved to cloud?" »

September 30, 2018

Public Cloud Security- is it still a concern for enterprises?

Author: Jitendra Jain, Senior Technology Architect (Architecture & Design Group, Infosys)

Introduction

Cloud computing has become an integral part of IT modernization in enterprises large and small, and is considered a major milestone in the transformation journey. Cloud computing changes the way enterprises store, share and access data for services, products and applications. Public cloud is the most widely adopted model of cloud computing. As the name suggests, a public cloud is available to the public over the internet and easily accessible via the web, in a free or pay-as-you-go mode. Gmail, O365 and Dropbox are some popular examples of public cloud services.

Public cloud services eliminate extra investment in infrastructure, as all the required hardware, platform architecture and core operating software services are entirely owned, managed and efficiently maintained by the cloud hosting vendor.

As per McAfee research, almost 76% of enterprises have adopted at least one public cloud service provider, across any kind of cloud offering (SaaS, IaaS or PaaS). This shows the popularity of the public cloud.


Continue reading "Public Cloud Security- is it still a concern for enterprises?" »

September 20, 2018

Multi-Cloud strategy - Considerations for Cloud Transformation Partners

While "Cloud" has become the "New Normal", recent analyst surveys indicate that more and more enterprises are adopting Multi-Cloud, wherein more than one public cloud provider is utilized to deliver the solution for an enterprise; for example, a solution that employs both AWS and Azure. There are various reasons for enterprises to take this route: cloud reliability, data sovereignty, technical features and vendor lock-in are a few among the several reasons.
Though most of the deliberations are revolving around Multi-Cloud for enterprises, here is an attempt to bring out the considerations that a Cloud Transformation Partner needs to watch out for.


There are four core areas a Cloud Transformation Partner must focus on to ensure successful and seamless Transformation & Operation of a Multi-Cloud environment:

1. Architecture
2. Engineering
3. Operations
4. Resources

Architecture: Success of a multi-cloud strategy depends largely on defining the right architecture that can help reap the benefits of having a multi-cloud environment. Architecture decisions should be reviewed against the business demands that triggered a multi-cloud strategy and ensure they are fulfilled.

Application and deployment architecture has to address all aspects of why an enterprise is looking to adopt a multi-cloud strategy. For example, if data sovereignty was the key consideration, the application deployment architecture should make sure that data resides in the appropriate cloud that suits the need. If reliability is the driver, a suitable failover mechanism needs to be in place, making use of the multiple cloud platforms available.

Interoperability across platforms is among the critical elements to emphasize, along with portability across Cloud Service Providers (CSPs). Achieving this takes a multi-layered approach, and containers are emerging as a solution in the cloud-native space. More details in another blog post here.

Though the cloud as a platform is stable, there is a possibility of failure with a cloud provider (and we have witnessed it in the past). A Disaster Recovery (DR) solution built on multiple clouds can be more effective than DR with a single cloud provider across multiple regions.

Establishing network connectivity between competitor CSPs can have its own challenges and bottlenecks. The network solution should facilitate provisioning new connections when needed, with the desired performance, across multiple clouds.

Security solutions and controls need to run natively on all clouds and work across all boundaries. Hence cloud security architecture should be at the top of the list of multi-cloud considerations. More importantly, solutions for threats, breaches and fixes need to cater to multiple CSPs and have to be centrally coordinated to respond effectively.


Engineering: There will be changes to the current set of application development and engineering processes followed for a single cloud environment. Application Deployment would need careful planning in a multi-cloud environment with specific focus on developer productivity, process compliance and security implementations.

DevOps should be an integral part of agile development for cloud-native and traditional applications. Attention and careful planning need to be given to the DevOps processes and tools so they work seamlessly across multiple cloud platforms.

Application lifecycle management should have platform-specific testing built into the process to ensure reliable operations on each of the target platforms.


Operations: Cloud operations are more complex in a multi-cloud scenario due to the overheads that each cloud platform will bring in.

The Cloud Management Platform (CMP) must support the multiple public clouds that are part of the solution. The CMP should be capable of abstracting the complexity of the different cloud stacks and models, providing operators a single-window view to monitor, administer and manage the multi-cloud ecosystem.

Oversubscription of cloud resources needs to be watched in a multi-cloud environment. It is hard to foresee the usage patterns on each cloud platform, and it is very likely that one or all of them can get oversubscribed. Optimization of cloud resources can be a challenge and can result in increased costs. A multi-cloud strategy may also not attract the best volume discounts from a CSP, which can impact cost.

SLAs can vary across CSPs; this should be taken into consideration while defining service levels.

The overheads of managing and tracking multiple CSP contracts, billing etc. take effort and time and need to be planned for. A well-defined change control mechanism and a roles and responsibilities matrix are essential in a multi-cloud environment.


Resources: Staffing needs to be planned considering the multiple cloud platforms and the varied skills that would be required. Teams need to have an appropriate mix of core cloud Horizontal skills and CSP specific vertical skills. Multi-cloud environment will demand resources in:


Cloud Horizontal Skills - Engineering skills like cloud-native development with 12-factor principles and cloud orchestration are relatively cloud-provider independent. These resources will be specialists in their technical areas and will not be dependent on particular cloud platforms.

Cloud Vertical Skills - Specialists in each cloud platform will be required to extract the best out of each of the platforms that are used. These resources will be required in various roles, ranging from architects to developers.

Agile/DevOps - Cloud development needs to be agile and should accommodate changes with minimal turnaround time. This requires adoption of Agile/DevOps and resources with the appropriate skills to run large-scale agile projects.

Cloud-led transformation is a journey, a continuum, for any large enterprise, and hence enterprises should choose a cloud transformation partner with deep expertise across architecture, engineering and operations, and with the right resources. Infosys, as a leading cloud transformation partner, has been working with Global 2000 enterprises on their transformations. You can find more details on the same here.

Continue reading "Multi-Cloud strategy - Considerations for Cloud Transformation Partners" »

August 27, 2013

You can't compete with the clouds, but you can embrace and succeed!

 

I take the inspiration for my blog title from Forrester's James Staten, who recently wrote "You can learn from the clouds but you can't compete". James Staten talks about how data center operations can achieve levels of excellence and success, and prescribes that standardization, simplification, workload consistency, automation and a maniacal focus on power and cooling can help one set up and run the best data center operations.

 

However, I think there is more to these large cloud providers than just learning some best practices. I was talking to an important client of Infosys, with whom we are currently enabling a cloud-enabled IT transformation, and she mentioned something that clarified for me what the real value of these cloud providers is. She said her aspiration is to set up and run a trusted cloud ecosystem for her enterprise with a single point of accountability. Beyond the sheer scale and magnitude of their investments, the likes of Amazon Web Services and Microsoft Windows Azure, these behemoths of industrial-scale infrastructure with their seemingly limitless compute power, derive respect from the sheer agility and speed with which they respond to their customers' needs.

 

Of course, this happens because of the phenomenal level of simplification, standardization, automation and orchestration with which they run their operations. Now imagine how it would be if these principles of IT governance and operations management were extended to an enterprise. Vishnu Bhat, VP and Global Head of Cloud at Infosys, keeps saying, "It is not about the Cloud. It is about the enterprise." Towards this, an enterprise could focus on learning from these cloud leaders and work towards establishing an ecosystem of cloud providers: a hybrid setup where its current IT is conveniently and easily connected to its private cloud and public cloud environments. When that hybrid cloud environment is managed with the same agility and speed as an AWS, that is when the possibility of true success and value from the cloud starts to emerge.

 

Imagine a hybrid cloud within the realms of your enterprise that functions with the speed, agility and alacrity of an AWS. Imagine exceptionally efficient, continuous cost optimization through automated provisioning of enterprise workloads; integrated federation and brokerage with on-premise core IT systems; extensibility to public clouds for spikes; constant optimization through contestability; control and governance through a single enterprise view; metering, billing and charge-backs to business; clear points of accountability with easy governance of SLAs and liabilities; secure management of the cloud; and compliance de-risking in keeping with the laws of the land. And all this from one ecosystem integrator with a single point of responsibility and accountability. That's cloud nirvana at work! I am eager to keep telling clients how to get to this state, how to learn from the cloud providers, and how to contextualize these lessons for an enterprise.

 

In my next blog, I will talk about an important aspect of cloud success: contestability. But before that, I would urge you to read my colleague Soma Pamidi's blog "Getting cloud management and sustenance right - Part 1". Till then, may the clouds keep you productive!

March 31, 2011

Is cloud computing same as putting things Online?

All those just boarding the cloud train may have posed this question to themselves, or to others who have know-how on cloud. Being a cloud SME myself, I have faced this question several times. This post is an attempt to clear some of the confusion that exists around this specific topic.

Continue reading "Is cloud computing same as putting things Online? " »