The commoditization of technology has reached its pinnacle with the advent of cloud computing. The Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions with Infosys experts on cloud computing.

June 28, 2019

Amazon Aurora Serverless, the future of database consumption

Amazon has recently launched the Amazon Aurora Serverless database (MySQL-compatible edition). This is going to set a new trend in the way databases are consumed by organizations. Traditionally, database setup, administration, scaling and maintenance are tedious, time consuming and expensive. Thanks to cloud computing, RDS takes the setup, scaling and maintenance of databases off customers' hands. Amazon Aurora Serverless takes RDS to the next level, where users pay only for what they use, when they use it.


June 26, 2019

AWS CloudFormation: An underrated service with vast potential

As businesses experience a surge in provisioning and managing infrastructure and services through cloud offerings, a collateral challenge has emerged: remaining accurate and quick while provisioning, configuring and managing medium to large-scale setups with predictability, efficiency and security.
Infrastructure as Code (IaC) is a way to manage resource provisioning, configurations and updates/changes using the same tested and proven practices that are used for application software development, for example:
  • Version Control
  • Testing
  • CI/CD

Key Benefits:

1)  Cost Reduction - Less time and effort spent on provisioning and management through IaC.
2)  Speed - Faster execution through automation.
3)  Risk Reduction - Fewer errors caused by misconfiguration or manual mistakes.
4)  Predictability - Assess the impact of changes via change sets and decide accordingly.

There are several tools that can be used for deploying Infrastructure as Code:
  • Terraform
  • CloudFormation 
  • Heat
  • Ansible
  • Salt
  • Chef, Puppet

Ansible, Chef and Puppet are configuration management tools, primarily designed to install and manage software on existing servers. They can support a certain degree of infrastructure provisioning; however, purpose-built tools are a better fit for that job.

Orchestration tools like Terraform and CloudFormation are specifically designed for infrastructure provisioning and management.

CloudFormation is the AWS-native Infrastructure as Code offering, and for many years it has been one of the most underrated services in the AWS environment. With increasing awareness, however, the service is gaining traction and many clients are willing to look at its advantages.

It allows codification of infrastructure, which makes it possible to leverage software development best practices and version control. Templates can be authored in any code editor, such as Visual Studio Code or Atom, checked into a version control system like Git, and reviewed with team members before deployment into Dev/Test/Prod.

CloudFormation takes care of all the provisioning and configuration of resources, so developers can focus on development rather than spending time and effort creating and managing resources individually.

Resources are defined as code (JSON or YAML) in a template, which the CloudFormation (CFN) service uses to produce a stack: a collection of AWS resources that can be managed as a single unit. In other words, we can create, update or delete a collection of resources by creating, updating or deleting stacks.

CloudFormation can be used for scenarios ranging from spinning up a single EC2 instance to deploying a complex multi-tier, multi-region application.

For example, all the resources required for a web application (web server, database server and networking components) can be defined in a template. When this template is handed to the CloudFormation service, it deploys the desired web application. There is no need to manage the dependencies of the resources on each other, as that is all taken care of by CloudFormation.

CloudFormation treats all stack resources as a single unit, which means that for stack creation to succeed, all the underlying resources must be created successfully. If the creation of any resource fails, CloudFormation by default rolls back the stack creation, deleting any resources created up to that point.

Note, however, that any resource created before the rollback is charged for the time it existed.

The example below creates a t2.micro instance named "EC2Instance" from an Amazon Linux AMI in the N. Virginia (us-east-1) region.

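The original post showed the template as a screenshot. Here is a minimal sketch of what such a template could look like, embedded in a boto3 call; the stack name and the AMI ID are illustrative placeholders, not values from the original post.

```python
import boto3

# Minimal CloudFormation template: one t2.micro EC2 instance tagged "EC2Instance".
# The AMI ID is a placeholder; look up a current Amazon Linux AMI for us-east-1.
TEMPLATE_BODY = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Single t2.micro instance from an Amazon Linux AMI
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0123456789abcdef0   # placeholder Amazon Linux AMI ID
      Tags:
        - Key: Name
          Value: EC2Instance
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")  # N. Virginia

# Create the stack; CloudFormation provisions (and on failure rolls back) the resources.
cfn.create_stack(StackName="single-ec2-demo", TemplateBody=TEMPLATE_BODY)

# Block until the stack reaches CREATE_COMPLETE (or raise if creation fails).
cfn.get_waiter("stack_create_complete").wait(StackName="single-ec2-demo")
```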
 
Just as it simplifies creation, CloudFormation also allows easy deletion of a stack, cleaning up all underlying resources in a single go.

Change Sets - There is always a risk associated with the impact of updating or changing a resource. For example, updating a security group's description without defining a VPC in the template, or in a non-VPC environment, will recreate the security group as well as the EC2 instance associated with it. Another example is updating an RDS database name, which recreates the database instance and can have a severe impact.

CloudFormation lets you preview and assess the impact of a change through change sets, to ensure it doesn't introduce unintentional changes.


A change set example for such a security group update shows that this change will:

1)  Replace the security group.
2)  Possibly replace the EC2 instance, depending on several factors external to this CloudFormation template that can't be assessed with certainty. For such cases, the impact can be assessed with the help of the AWS Resource and Property Types Reference (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html).
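As a sketch of how such a preview can be produced programmatically (assuming the hypothetical stack from the earlier sketch and a modified template saved to disk), boto3 can create and inspect a change set before anything is executed:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Template with the changed security group description (assumed to exist on disk).
MODIFIED_TEMPLATE_BODY = open("template-v2.yaml").read()

# Create a change set against the existing stack instead of updating it directly.
cfn.create_change_set(
    StackName="single-ec2-demo",          # hypothetical stack from the earlier sketch
    ChangeSetName="sg-description-update",
    TemplateBody=MODIFIED_TEMPLATE_BODY,
    ChangeSetType="UPDATE",
)
cfn.get_waiter("change_set_create_complete").wait(
    ChangeSetName="sg-description-update", StackName="single-ec2-demo"
)

# Preview the impact: each change reports whether the resource needs replacement
# ("True", "False" or "Conditional").
for change in cfn.describe_change_set(
    ChangeSetName="sg-description-update", StackName="single-ec2-demo"
)["Changes"]:
    detail = change["ResourceChange"]
    print(detail["LogicalResourceId"], detail["Action"], detail.get("Replacement"))

# Apply the change set only if the preview looks safe:
# cfn.execute_change_set(ChangeSetName="sg-description-update",
#                        StackName="single-ec2-demo")
```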

Conclusion: CloudFormation, the Infrastructure as Code service from AWS, unleashes the real power and flexibility of the cloud environment and has revolutionized the way we deploy and manage infrastructure. It is worth investing time and effort in exploring it.





June 25, 2019

S3- Managing Object Versions


S3 has been one of the most appreciated services in the AWS environment. Launched in 2006, it provides 99.999999999% (eleven nines) of durability. It now handles over a million requests per second and stores trillions of documents, images, backups and other data.

Versioning is one of the S3 features that make it even more useful. Once versioning is enabled, successive uploads or PUTs of a particular object create distinct, individually addressable versions of it. This is a great feature, as it provides safety against accidental deletion due to human or programmatic error. With versioning enabled, any version of an object stored in S3 can be preserved, retrieved or restored.

However, this comes at an additional cost: each time a new version is uploaded, it adds to chargeable S3 usage. This cost can multiply very quickly if versions that are no longer in use are managed improperly. So how can current as well as old versions be suitably managed?

This is easy; there are two options:
1)  Use of S3 Lifecycle Rules
2)  S3 Versions - Manual Delete


Use of S3 Lifecycle Rules

When versioning is enabled, a bucket will hold multiple versions of the same object, i.e. the current and non-current ones.
Lifecycle rules can be applied to ensure object versions are stored efficiently by defining what action should be taken on non-current versions. Lifecycle rules can define transition and expiration actions.
 
The example below creates a lifecycle policy for the bucket that says all non-current versions should be transitioned to Glacier after one day and permanently deleted after thirty days.

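The original post showed the console steps as a screenshot; here is a sketch of the same policy applied with boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Non-current versions: move to Glacier after 1 day, permanently delete after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-versioned-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "manage-noncurrent-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 1, "StorageClass": "GLACIER"}
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)
```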



S3 Versions - Manual Delete
Versions can be deleted manually from the console quite simply. Because all versions are visible and accessible in the console, a specific version of an object can be selected and deleted.

 

However, when using the command line interface, a simple delete-object command will not permanently delete the named object; instead, S3 inserts a delete marker in the bucket. That delete marker becomes the current version of the object, with a new ID, and all subsequent GET requests for the object will return a 404 error.
So even though the object is not erased, it is no longer accessible, which can be confused with deletion. In reality, the object, with all its versions plus the delete marker, still exists in the bucket and keeps consuming storage, which results in additional charges.

So what is the delete marker? When a delete command is executed for a versioned object, a delete marker is inserted in the bucket as a placeholder for that object. Because of this delete marker, S3 behaves as if the object were erased. Like any object, a delete marker has a key name and an ID; however, it differs from an object in that it has no data, which is why GET requests return a 404 error.
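A quick boto3 sketch of this behavior (the bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# A simple delete on a versioned bucket only inserts a delete marker.
resp = s3.delete_object(Bucket="my-versioned-bucket", Key="report.pdf")
print(resp["DeleteMarker"], resp["VersionId"])  # True, ID of the new delete marker

# The object now returns 404, but every version (and the marker) still consumes
# billable storage.
versions = s3.list_object_versions(Bucket="my-versioned-bucket", Prefix="report.pdf")
print(len(versions.get("Versions", [])), "versions,",
      len(versions.get("DeleteMarkers", [])), "delete markers")
```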

The storage size of a delete marker is equal to the size of its key name, which adds one to four bytes of bucket storage for each character in the key name. That is not huge, so why be concerned about it? Because the objects a marker blocks or hides can be huge, and they can pile up enormous bills.

Note that delete markers are also inserted in version-suspended buckets. So if versioning is enabled and then suspended (remember, versioning can never be disabled once enabled, only suspended), every simple delete command will still insert a delete marker.

Removing delete markers is tricky. If a simple delete request is executed to erase a delete marker without specifying its version ID, the marker is not erased; instead, another delete marker is inserted with a new unique version ID. Every subsequent delete request inserts an additional delete marker, so it is possible to have several delete markers for the same object in a bucket.

To permanently remove a delete marker, simply include its version ID in the delete request.

Once this delete marker is removed, a simple GET request will now retrieve the current version (e.g. 20002) of the object. 

This solves the problem of unintended storage consumption. But how should the object be dealt with in the first place, so that we don't have to go through this complication?
To get rid of an object version permanently, use the specific command "DELETE Object versionId". This command permanently deletes that version.
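In boto3 both operations are the same versioned delete call; a sketch (the version IDs are placeholders, with "20002" borrowed from the example above):

```python
import boto3

s3 = boto3.client("s3")

# Removing a delete marker: a versioned delete naming the marker's version ID.
# The previous version becomes current again and GET requests succeed.
s3.delete_object(Bucket="my-versioned-bucket", Key="report.pdf",
                 VersionId="marker-version-id")  # placeholder marker ID

# Permanently deleting an object version: the same call with a data version's ID.
# This frees the storage for that version; it cannot be recovered.
s3.delete_object(Bucket="my-versioned-bucket", Key="report.pdf",
                 VersionId="20002")  # placeholder version ID
```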


Conclusion: S3 provides virtually unlimited storage in the cloud, and versioning makes it even more secure by protecting objects from accidental deletion. However, versioning comes at a cost and should be managed cautiously. The above is a rational explanation for the common scenario where a user has deleted an S3 object but still struggles with its charges on the AWS bill.



RDS - Scaling and Load Balancing


Solution architects often encounter the question: as with EC2 instances, can a load balancer with an Auto Scaling group be used for scaling and load balancing RDS instances and the databases hosted on them?

While the answer to this question is "no", there are other ways to scale RDS and to load balance RDS read queries.

The point to consider here is that RDS is a managed service from AWS and thus takes care of scaling the relational database to keep up with the growing requirements of an application without manual intervention. The first part of this article explores vertical as well as horizontal scaling of RDS instances; the latter part looks at load-balancing options.

Amazon RDS was first released on 22 October 2009, supporting MySQL databases. This was followed by support for Oracle Database in June 2011, Microsoft SQL Server in May 2012, PostgreSQL in November 2013 and MariaDB in October 2015. Today it is one of the core PaaS services offered by AWS.


Scaling - RDS Instances

RDS instances can be scaled vertically as well as horizontally.

Vertical Scaling

To handle higher load, database instances can be vertically scaled up with a single click. At present there are fifty instance types and sizes to choose from for an RDS MySQL, PostgreSQL, MariaDB, Oracle or Microsoft SQL Server instance. For Aurora, there are twenty different instance sizes.

Vertically scaling an RDS instance is a simple modification of the instance class (via the console's Modify action or the ModifyDBInstance API).

 


Remember, so far only the instance type has been scaled; storage is separate and remains unchanged when the instance is scaled up or down. Hence the volume must also be modified separately. We can increase the allocated storage space or improve performance by changing the storage type (such as from General Purpose SSD to Provisioned IOPS SSD).
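A sketch of both modifications in a single boto3 call (the instance identifier and target values are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Scale the instance class and, separately, the storage in one modify request.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",        # placeholder instance identifier
    DBInstanceClass="db.r4.2xlarge",    # new, larger instance class
    AllocatedStorage=500,               # grow the volume (GiB)
    StorageType="io1",                  # e.g. General Purpose SSD -> Provisioned IOPS
    Iops=5000,
    ApplyImmediately=True,              # otherwise applied in the maintenance window
)
```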



One important point to remember while scaling vertically is to ensure the correct license is in place for commercial engines like Oracle and SQL Server, especially in the BYOL model, because licenses are usually tied to CPU sockets or cores.

Another important consideration is that a Single-AZ instance will be down or unavailable during this change. If the database instance is Multi-AZ, however, the impact will be minimal, as the standby database is updated first. A failover then occurs to the newly updated standby before the changes are applied to the old primary (which becomes the new standby).


Horizontal Scaling

To scale read-intensive traffic, read replicas can be used. Presently, Amazon RDS for MySQL, MariaDB and PostgreSQL allows the creation of up to five read replicas for a given source database instance. Amazon Aurora permits up to fifteen read replicas for a given database cluster.

Read replicas are asynchronously replicated copies of the main database.

A read replica can:

  • Be in the same or a different AZ, or even a different region, so it can be placed close to users.
  • Be promoted to master for disaster recovery.
  • Have the same or a different database instance type/class.
  • Be configured as Multi-AZ: Amazon RDS for MySQL, MariaDB and PostgreSQL allow Multi-AZ configuration on read replicas to support disaster recovery and minimize downtime from engine upgrades.
  • Have a different storage class.

Each of these read replicas has its own endpoint to share the read load. We can connect to these read replicas just as we connect to a standard DB instance.
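Creating a replica is a single API call; a boto3 sketch (the identifiers and AZ are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Create an asynchronous read replica of the source instance; it gets its own endpoint.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",   # placeholder replica name
    SourceDBInstanceIdentifier="mydb",       # placeholder source instance
    DBInstanceClass="db.r4.large",           # may differ from the source's class
    AvailabilityZone="us-east-1b",           # same or different AZ
)

# The replica's endpoint (once available) is what the application uses for reads.
print(replica["DBInstance"]["DBInstanceIdentifier"])
```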


Load Balancing/Distribution of Read/Write Traffic on AWS RDS

AWS load balancers don't support routing traffic to RDS; the CLB, ALB and NLB cannot route traffic to RDS instances. So how can read traffic be distributed or balanced across AWS RDS read replicas?

There are two options -


1)  Using an open-source software-based load balancer, like HAProxy.

Each replica has a unique Domain Name System (DNS) endpoint. These endpoints can be used by an application to implement load balancing.

This can be done programmatically at the application level or by using one of several open-source solutions such as MaxScale, ProxySQL and MySQL Proxy.

These solutions can split read/write queries, and a proxy like HAProxy can then be placed between the application and the database servers. HAProxy can listen for reads and writes on different ports and route them accordingly.



This approach provides a single database endpoint instead of several independent DNS endpoints, one per read replica.

It also allows a more dynamic environment, as read replicas can be transparently added or removed behind the load balancer without any need to update the application's database connection string.


2)  Using Amazon Route 53 weighted record sets to distribute requests across read replicas.

Though there is no built-in way, Route 53 weighted records can be used as a workaround to share read requests across multiple read replicas and achieve the same result.

Within a Route 53 hosted zone, record sets can be created for each read replica endpoint with equal weight; Route 53 then shares read traffic among the record sets.



The application can use the Route 53 endpoint to send read requests to the database; the requests are distributed among all read replicas, achieving the same behavior as a load balancer.
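A sketch of two equal-weight records pointing at two replica endpoints (the zone ID, record name and endpoints are placeholders):

```python
import boto3

route53 = boto3.client("route53")

REPLICAS = {
    "replica-1": "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com",  # placeholders
    "replica-2": "mydb-replica-2.abc123.us-east-1.rds.amazonaws.com",
}

# One weighted CNAME per replica under a single read endpoint name.
changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "read.example.internal",
            "Type": "CNAME",
            "SetIdentifier": set_id,   # distinguishes the weighted records
            "Weight": 1,               # equal weights -> roughly even distribution
            "TTL": 60,
            "ResourceRecords": [{"Value": endpoint}],
        },
    }
    for set_id, endpoint in REPLICAS.items()
]

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder hosted zone ID
    ChangeBatch={"Changes": changes},
)
```

A low TTL keeps the distribution responsive when replicas are added or removed, at the cost of more DNS lookups.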








March 25, 2019

Optimizing Costs in the Cloud: Analyzing the Reserved Instances

There is more than one lever when it comes to optimizing costs in AWS: right-sizing, reserving and using Spot Instances are some of them. While right-sizing is always applicable, the same is not true of Reserved Instances, due to various constraints. Technologists, evangelists, architects and consultants like us are faced with a dilemma: to reserve or not to reserve instances in the AWS cloud. I am sure you will have some point of view, not necessarily for or against, but rather on how, when and why it makes (or doesn't make) sense to reserve instances in the cloud. AWS is used as the reference for this discussion, but I am sure it is equally applicable to other CSPs (Cloud Service Providers).

This (not so) short article tries to bring out the relevant points succinctly, to help you make a decision or, at the very least, to act as an icebreaker to get the debate started.


Let's begin, with some research!


Arguably, the documentation is detailed, well organized and peppered with diagrams in hues of orange. In fact, it's so detailed that I had to open about seven tabs from various hyperlinks on one page to get started. No wonder, then, that a majority of technicians hardly skim through the documentation before making a decision. Further, it's a living document, considering the hundreds of innovations that AWS brings about each year in some form or shape. If you are bored enough to have had a chance to read it through, congratulations! Hopefully, we will be on the "same page" at some point in time :)



  

Text to Summary (Does AWS have a service yet?)

I checked a few online options for summarizing, but they weren't up to the mark, especially when I provided the AWS documentation URL. So here is a summary of the documentation I managed to read.

Basics

An EC2 instance in AWS is defined by attributes like size (e.g. 2xlarge, medium), instance type/family (e.g. m5, r4), scope (AZ or Region), platform (e.g. Linux, Windows) and network type (EC2-Classic or VPC).

The normalization factor expresses an instance size relative to other sizes within the same instance type. It is used when comparing instance footprints and is meaningful only within an instance family; one can't use the normalization factor to convert sizing across instance types, e.g. from t2 to r4.

The normalization factors, which are applicable within an instance type, are given below.

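The original post showed the factors as a screenshot; the values below are the ones AWS documents for instance size flexibility, used here in a small sketch to compare footprints:

```python
# Normalization factors per the AWS documentation: each size's footprint
# relative to "small" within a single instance family.
NORMALIZATION_FACTOR = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32,
    "8xlarge": 64, "16xlarge": 128, "32xlarge": 256,
}

def footprint(count: int, size: str) -> float:
    """Total normalized footprint of `count` instances of a given size."""
    return count * NORMALIZATION_FACTOR[size]

# Within a family, one 8xlarge equals two 4xlarge (64 == 2 * 32),
# but this comparison is meaningless across families (e.g. t2 vs r4).
assert footprint(1, "8xlarge") == footprint(2, "4xlarge")
```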


Reservation offerings


There are three offerings - Standard, Convertible and Scheduled Reserved Instances - with the following key differences between them:

(Table: key differences between Standard, Convertible and Scheduled Reserved Instance offerings)

Notes:

* Availability of any feature or innovation changes across regions in a short span of time, hence it is better to validate at the time of actual use.


Reservation Scope

Instance reservation scope is either zonal (AZ-specific) or regional, with the following characteristics.

(Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/apply_ri.html )

(Table: characteristics of zonal vs. regional Reserved Instances)

Modifiable attributes

Some instance attributes are modifiable within certain constraints, most notably the platform type. As a general rule, Linux platform instances are more amenable. The following table lists the modifiable attributes (Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html ).

(Table: modifiable Reserved Instance attributes)

The constraints

While there is sufficient flexibility for IaaS in general, there are a number of constraints, based on feature availability in specific regions, upper or lower limits, and so on. The constraints on Reserved Instances are highlighted below.

1.  Instance size flexibility on Reserved Instances is lost if such reservations are purchased for a specific AZ or for bare-metal instances. Sizing flexibility also does not apply to instances with dedicated tenancy, or to Windows with/without SQL Server, SuSE or RHEL.

2.  Software licensing usually does not align well with instance size changes, so one must give careful consideration to licensing aspects. As one example, in one of my environments I use SuSE for SAP, for which the software license pricing comes in a couple of slabs. If I use one r4.8xlarge, I pay an hourly license price of 0.51/hr, but if I choose two r4.4xlarge instances, which are equivalent to one r4.8xlarge, I pay 2 x 0.43/hr = 0.86/hr.

3.  While modifying a Reserved Instance, the footprint (size/capacity) of the target configuration is required to match that of the existing configuration. Even a reconfiguration that results in a higher footprint doesn't work.

4.  Instances are grouped by family (based on storage or CPU capacity), type (designed for specific use cases) and size. For example, the c4 instance type is in the compute-optimized instance family and is available in multiple sizes. Even though c3 instances are in the same family as c4 instances, one can't modify c4 instances into c3 instances because of different underlying hardware specifications.

5.  Some instance types that come in only one size obviously can't be modified. Such instances include cc2.8xlarge, cr1.8xlarge, hs1.8xlarge, i3.metal and t1.micro.

6.  Convertible RIs can only be exchanged for other Convertible Reserved Instances currently offered by AWS. Exchanging multiple Convertible RIs for one is possible.

7.  AWS doesn't allow you to change All Upfront and Partial Upfront Convertible Reserved Instances to No Upfront Convertible Reserved Instances.

Do we have a common understanding?

It's very human not to have a common understanding, even when we have the same source of information. Here is my attempt to rephrase some of the salient points as a common understanding. If you don't agree, consider it a commoner's understanding!

 

1.  The much-touted 70% discount is not that common. It applies to some very large instance types with partial/full upfront payment, e.g. x1.16xlarge with partial upfront on a 3-year term

( ref: https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/ )

Most instance types give you a more moderate discount, approximately 20-30% less than you might have expected.

2.  The notion that Reserved Instances are the first solution for saving costs in the cloud is not entirely right. In fact, it requires a careful review of the various constraints.

3.  RIs are best suited to production load, which is usually more stable; by the time a workload enters the production systems, a set of tests (performance, backup/recovery, network latency, etc.) will have been carried out, so sufficient information is available to make a decision.

4.  The type of applications running on the EC2 instances, the operating system, and the software/tool licenses (which are usually bound to instance type/size) must be understood before going with a reservation.

5.  Even when the decision has been made to go with RIs, a good starting point could be Convertible as opposed to Standard RIs, due to the flexibility they provide at only a marginal cost disadvantage compared to Standard RIs. Scheduled RIs, due to their inflexibility, need even more careful evaluation, unless you use them only for smaller instances and non-production systems.

 

Conclusion

As the number of innovations increases and solution architects are faced with numerous "equally good" options, it makes sense to pause, evaluate the options carefully, gather stakeholder inputs and make a recommendation. That is certainly true when deciding whether or not to use Reserved Instances.




March 24, 2019

Own Less - Serverless

 

We are in the Age of Software: every business today, regardless of its domain, is driven by software. Software has become the enabler and the differentiator for business. To get a perspective on what that actually means, think of Amazon and its impact on every business whose path it crossed; in fact, I am not sure if anybody has stated this earlier, but Amazon has pioneered the art of Retail as a Service (RaaS) :) . Software has become so ubiquitous today that every industry appears to be a software industry; the line differentiating them has become very thin. For example: Netflix (entertainment/media or software?), Uber (transportation or software?), Airbnb (hospitality or software?), Amazon (retail or software?); the list goes on. This reality converges on the fact that every business demands a software division.

When you deal with software, you have a lot to worry about, right from designing and developing your software to owning and maintaining the infrastructure that runs it. A typical business executive would like to focus more on the business rather than on the software nuances that drive it. Unlike the real estate needed for business operations, software infrastructure is a totally different ball game. Software and its hosting infrastructure tend to become irrelevant rather soon and demand flexibility based on changing business growth and conditions. Infrastructure also needs to keep pace to support ideas that crop up, get realized, tested, and sometimes sustained with good returns, but most of the time fizzle out; this engine that drives change and innovation should not be economically taxing in terms of capital expenses that hang on as liabilities in case of a failed endeavor. These changing dynamics also have ripple effects on the talent that keeps the show running.

In an ideal world, wouldn't it be simply great if you could just buy or rent the software you need instead of building and maintaining it? Well, in cases where your need is well defined and mainstream with a fair amount of variation, you have SaaS offerings to take care of it. But in other cases, where your software is your differentiator or specific to your unique requirements, you have the choice of building your solution on a Serverless framework, in the process making your solution infrastructure-agnostic (to a certain degree). So if SaaS gives you almost total independence from maintaining software, Serverless gives you a model to go halfway down that route by taking care of the infrastructure needed to run your software.

With Serverless, the whole paradigm of building software solutions changes: instead of the traditional way of developing software components and hosting them on infrastructure you manage, you build solutions using services that come with "batteries included", or rather "servers included". With Serverless you assemble solutions akin to Lego, where every service is a Lego block that you integrate with others to build your solution.

When we deal with the concept of Serverless, it's important to understand that Serverless is an abstraction that frees the developer of the solution from the underlying infrastructure needed to run it; this abstraction could come from the organization itself or from public cloud infrastructure companies like AWS and Azure. So the term Serverless is relative to the standpoint of the end software developer; in reality, someone else's solution is abstracting the nuances of the hardware for you. Effectively, the Serverless paradigm adds another layer of abstraction, letting developers and organizations focus more on the software aspect of the solution. This abstraction is evolving and looks like the future of software development. For those skeptical of this model, who believe you can't focus on the solution while forgoing control of your infrastructure: that may be true for certain edge cases where you need more control over the underlying hardware, but for most mainstream solutions it mostly requires a mental shift towards building solutions with other Serverless services. A good analogy is the evolution from the days when we coded in assembly language and programmers had complete control over the registers, program counters and memory, and decided what moved in and out of the registers. Transitioning to higher-level languages like Java and its likes, we let go of this control to abstraction layers that took care of it. We are in a similar transition now, with Serverless simply extending this abstraction to the higher-level resources that run our code.

With Serverless, your solution stack is quite simplified: from authentication and middleware to the database, you assemble your solution from managed services rather than building each part of the stack from the ground up. This approach helps you focus on your actual business solution, decreases time to market, and translates your CapEx into OpEx, with significant cost savings in most cases.

Hence, with Serverless you end up owning the integration and logic that matter to you the most, and you forgo ownership of the underlying infrastructure and its intricacies.

Having said this, the Serverless model is continuously evolving and will gradually make sense for a wider variety of use cases. The success of Serverless offerings depends highly on the economic model in which they operate. Typically, cloud offerings are priced based on the amount of resources used (compute/storage/networking) and the duration of consumption; depending on the use case, the price needs to be calibrated so that it makes economic sense to stop managing one's own infrastructure. As this model gains popularity and becomes well tested, you will see innovative pricing mechanisms that make Serverless the obvious choice. In fact, the whole exodus towards the cloud that we are witnessing today will end up in either SaaS or Serverless solutions.

We are in the age of less. From Serverless to Driverless (think Uber without a driver), the journey is towards getting and paying for what we need instead of owning and maintaining what serves our need. It's here and it's evolving...

March 1, 2019

Can your legacy applications be moved to cloud?

Migrating to the cloud is a smooth journey until enterprises come across a hiccup called legacy applications. Moving legacy applications to the cloud requires pre-planning, and several other aspects need to be defined clearly for the teams involved in the migration process. Evaluating the importance of a legacy application to business growth is integral: if the application doesn't further the enterprise's goals, a call needs to be made on whether it is necessary to retain it. Several legacy applications are of great importance to enterprises and bring them business continuously; for such applications, pre-planning, resources and budgeting become mandatory. Here is how enterprises can go about moving their legacy applications to the cloud:


1)      Application Rationalization: Based on the complexity of the environment and the legacy nature of the applications, it is recommended to perform application rationalization to remove or retire duplicate applications, or to consolidate applications into smaller footprints. This reduces the legacy footprint and optimizes the migration.


2)      Cloud Suitability Assessment: Conducting an application assessment before migrating to the cloud is an essential step. All the involved stakeholders need to understand the technical risks of moving a legacy application, along with the impact it will have on the business. In addition, security and compliance, performance, availability and scalability assessments need to be performed to avoid any lag or disruption during or after the migration.


3)      Robust Migration Plan: The IT teams and vendors involved in the migration of legacy applications need to create a robust and fail-proof migration plan. Everything from assessment to post-migration maintenance of the application needs to be listed and elaborated in this plan. Furthermore, the plan must include key responsibilities and the role each team member is required to fulfill. Preparing a checklist and ticking off tasks once they are done simplifies the process.


4)      Declutter: Legacy applications bring with them several technical challenges, such as OS compatibility, old VMs, outdated infrastructure tools and inter-application dependencies. When a legacy application is migrated to the cloud as-is, its log files, malware and file legacy move with it. When the application is decluttered, the OS, VMs and legacy logs are not moved to the cloud; only the app is migrated and installed on a cloud-hosted OS. Hence it is important to understand the application's composition, the core application and its peripheral dependencies, so that the cloud migration can be optimized.


5)      Trick Your Applications: Legacy applications can be tricked into running in a stateless environment. This can be achieved by freezing the application's configuration. Freezing the state and configuration allows deployment of the application on as many servers as you choose.


6)      Disaster Recovery: Legacy applications are often of high importance to enterprises and have a drastic impact on business. Disaster recovery needs to be in place for these applications to avoid data loss and ensure there is no lag in business continuity.


Legacy applications are critical to the smooth functioning of enterprises, so a robust strategy to migrate these applications to the cloud is crucial. By doing this, the enterprise not only gains an advantage but can also reap a number of benefits, such as faster innovation, greater agility and faster time to market.



Are you ready to move your legacy applications to Cloud? Infosys' vast experience in this domain will be of great help to you. Get in touch with us: enterprisecloud@infosys.com



January 17, 2019

Accelerate Digital Transformation with Service Management 'Beyond IT'

Service management is an organizational capability to provide valuable services to customers in a structured manner. It has long played a critical role in traditional areas such as IT business management and IT service management. The newer trend, however, is to extend this automation and seamless experience to areas such as HR, Finance, Project Portfolio Management (PPM), Security Operations, Customer Service Management, and others. Organizations are capitalizing on service management to deliver an impactful, retail-like user experience. This move towards an intuitive, personalized, and device-agnostic experience boosts user satisfaction. It also optimizes manpower, delivers greater efficiencies, and enables organizations to access deeper insights from data.


Here's how some organizations have adopted service management beyond IT:

A global fast-food chain enhanced its business experience by leveraging mobility. To ensure its top 10 business-critical apps were continuously monitored, it utilized a user-friendly mobile app. This enabled management to view P1 and P2 alerts, act quickly, and stay on top of critical issues.

With a powerful service management solution, organizations can also automate their business applications. For instance, a leading provider of helicopter services extended their service management platform to fleet tracking: better visibility of fleets led to a significant improvement in aircraft utilization, and integration with other operational systems to initiate event-based alerts for the support team helped manage fleet operations, thereby reducing downtime and revenue loss.

An automotive parts manufacturing major used orchestration to automate application and workstation deployments for end-user computing. This led to a 30% reduction in manual effort and close to 100% tracking and accuracy of orchestration workflows.

Delivering seamless customer service is becoming increasingly critical. To address this requirement, a telecom giant built a guided assurance portal. This reduced their systems from nine to one, improved agent experience and usability, and cut average handling time by 20-25%.

In another instance, a large automotive manufacturer built a single platform as an enterprise software library to standardize software deployments across corporate offices and plants. This introduced uniformity, reduced license costs, enabled tracking of software use, and simplified management.


Power your Enterprise Service Management beyond IT, with AI

As digital transformation progresses beyond automation, organizations have the opportunity to harness technologies such as Machine Learning (ML), Artificial Intelligence (AI), IoT, and even Augmented Reality (AR) to drive service management and user experience. No longer do organizations have to deploy personnel to review thousands of tickets and direct them to the right department for resolution. With ML, self-learning algorithms can read, auto-categorize, and manage tickets with no time lag. This reduces human intervention, errors, and optimizes manpower utilization.

With AI, the algorithm can also alert the next level of decision-makers to any uncommon increase in ticket volume, so they can find the cause and take corrective action. Chatbots ensure 24/7 availability of service with no human intervention.


Best Practices to Deploy Service Management Beyond IT

1.      Go for the small wins - This is a great way to ensure success with a small investment while making a strong case for bigger budgetary allocations and the adoption of comprehensive tools.

2.      Prioritize - Prioritize based on the business value delivered; customers usually prefer to prioritize the items that directly impact the user experience.

3.      Ensure you have employees' support - This enables rapid adoption of automation and ensures openness to new roles post-automation.

4.      Adopt an integrated self-service portal - This ensures 24/7 access to information and an improved user experience.

5.      Find the right solutions partner - Since service management adoption is a gradual process and may span many months, having the right partner willing to journey through the process with you can be a critical success factor.


IT Service Management in a New Power-packed Plug and Play Avatar

Until recently, organizations seeking to adopt a service management solution beyond IT had to build it from the ground up at considerable cost. Today, feature-rich solutions are available as ready-to-deploy modules and power-packed plug-and-play apps. A case in point is the Infosys Enterprise Service Management (ESM) Café, an accelerator built on top of ServiceNow. This solution enables organizations to access a host of pre-configured process templates and out-of-the-box service management modules that can be quickly deployed. In our experience with clients, enterprises deploying ESM beyond IT have seen as much as a 40-50% reduction in implementation timelines, a 50% reduction in upgrade timelines with out-of-the-box configuration and automated testing, and a 30-40% increase in user satisfaction with next-generation UI and UX for service portals, guided trainings, and automation.

With digital becoming the norm, it is imperative for organizations to adopt an integrated, pervasive approach to digital. To know more about how service management can make a difference to your organization, visit Infosys.com

December 11, 2018

Navigate your Digital Transformation with a Robust HR Service Delivery Solution

Today, employees are adept at technology, ultra-social, opinionated, and continuously connected. They demand high-quality service and experience, and prefer self-service over reaching out to support via phone or email. The consumerization of employee experience is leading HR departments to capitalize on HR service delivery (HRSD) solutions to realign and automate functions such as recruitment, compensation, performance evaluation, compliance, legal, and more. They are also going beyond smart-looking portals, consolidating functions to give employees a modern, smart, omnichannel experience across desktop, mobile, and virtual assistants. Organizations deploying a robust HR solution have found they can reduce administrative costs by up to 30%.

Why an HR service delivery solution offers more than just cost savings

Usually, the first few days at work for a new employee are a flurry of paperwork and processes. An HRSD solution that is accessible across devices can mean shorter, smoother joining formalities. Employees, whether joining remotely or at an office, can submit soft copies of their documents; this can reduce workflows from 70 steps to 10 and thus save thousands of man-hours annually.

With an HRSD solution, organizations can do away with geography-specific portals, SharePoint, and the intranet for different sets of information, and instead offer a single, comprehensive, user-friendly knowledge platform that is device-agnostic. With a type-ahead feature, the platform can suggest terms so that users complete their searches quickly.

Another advantage of an HRSD solution is that employees can access context-sensitive content, tasks, and services through Single Sign-On (SSO). A prompt feature can suggest related documents so that employees have access to all the available information. For instance, if an employee searches for the organization's vacation policy, information related to paid holidays, guest house facilities, leave travel allowance, etc. could pop up for review.

The traditional way of addressing HR problems is to raise a ticket. At the backend, case routing is manual, time-consuming, and person-dependent. Studies indicate that human resource personnel spend 57% of their time on repetitive tasks. Instead, information can be made available in real time via call, chatbot, or chat with a virtual agent. Larger organizations can also invest in an interactive voice response (IVR) facility accessible 24/7. When tickets are raised, an HRSD solution can assign cases automatically depending on the skills and workload of HR personnel. This can positively impact employee experience.

Determine the success of an HRSD solution through leading and lagging indicators

Adopting an HRSD solution can be a major investment, and organizations can measure ROI through leading and lagging indicators. Two instances of leading indicators are a self-service portal and a feedback mechanism. Studies show that 70% of issues can be resolved through a self-service knowledge portal. Accessible 24/7, it gives users greater control over information and does away with the costs of deploying HR staff to answer calls. A feedback mechanism can be deployed by enabling users to comment on and rate documents, allowing the organization to continuously improve the information on the knowledge platform.

Lagging indicators provide quantifiable data proving that the automation the organization invested in is delivering ROI. For instance, an increase in the use of the chat tool versus a reduction in case volume demonstrates that employees effectively use the chat option to solve issues instead of raising tickets, which take longer to address. As a result, HR personnel spend less time on backend administration and more time responding to actual employee concerns.

An increase in the use of IVR versus a reduction in the number of cases logged indicates that employees are able to quickly address queries over the phone instead of raising tickets. Thus, fewer personnel are needed to service a call center.

Measuring ROI on an HR service delivery solution

  • Organizations that implemented a knowledge portal or mobile app with personalized content found they could solve Tier 0 inquiries over 60% of the time and reduce HR administrative costs by up to 30%
  • Increased resolution of first calls reduces Tier 2 escalations. This can save up to 300k (for a client with a case volume of 25,000) as only around 8% of queries escalated to Tier 2
  • With a well-managed HRSD solution, less than 5% of employee queries escalate to Tier 3, at which, specialized professionals review and respond to cases. This allows organizations to optimize HR resources to do more value-added work
  • Increased self-service and peer-networks help case deflection. Over time, more than 60% of employee inquiries are resolved before reaching an HR personnel

  • With employee self-reliance, HR can be up to 30% more productive. Freed HR personnel can focus on higher-value strategic issues such as employee retention and workforce planning

 

So, if your organization is looking to give employees a seamless, retail-like experience, an HRSD solution is the answer. While the market abounds with HRSD vendors, choosing the right one requires a deeper understanding of one's requirements and the vendor's strengths. Begin a conversation with Infosys to learn how your organization can navigate its digital journey with an effective HR service delivery solution.

 

September 30, 2018

Public Cloud Security- is it still a concern for enterprises?

Author: Jitendra Jain, Senior Technology Architect (Architecture & Design Group, Infosys)

Introduction

Cloud computing has become an integral part of IT modernization in enterprises of every size and is considered a major milestone in the transformation journey. It changes the way enterprises store, share and access data for services, products and applications. Public cloud is the most widely adopted model of cloud computing. As the name suggests, a public cloud is available to the public over the internet and easily accessible via the web, in a free or pay-as-you-go mode. Gmail, O365 and Dropbox are popular examples of public cloud services.

Public cloud services eliminate extra investment in infrastructure, as all the required hardware, platform architecture and core operating software are entirely owned, managed and efficiently maintained by the cloud hosting vendor.

As per McAfee research, almost 76% of enterprises have adopted at least one public cloud service provider, across any kind of cloud offering (SaaS, IaaS or PaaS). This shows the popularity of the public cloud.

