The commoditization of technology has reached its pinnacle with the advent of the recent paradigm of Cloud Computing. The Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions with Infosys experts on Cloud Computing.


June 28, 2019

Amazon Aurora Serverless, the future of database consumption

Amazon has recently launched the Amazon Aurora Serverless database (MySQL-compatible edition). This is going to set a new trend in the way databases are consumed by organizations. Traditionally, database setup, administration, scaling and maintenance are tedious, time-consuming and expensive. Thanks to cloud computing, RDS takes the setup, scaling and maintenance of databases off customers' hands. Amazon Aurora Serverless takes RDS to the next level, where users pay only for what they use, when they use it.

Continue reading "Amazon Aurora Serverless, the future of database consumption" »

June 26, 2019

AWS CloudFormation: An underrated service with vast potential

As businesses experience a surge in provisioning and managing infrastructure and services through cloud offerings, a collateral challenge has emerged: remaining accurate and quick while provisioning, configuring and managing medium to large scale setups with predictability, efficiency and security.
Infrastructure as Code (IaC) is a way to manage resource provisioning, configuration and updates/changes using the tested and proven software development practices that are used for application development.

For example:
  • Version Control
  • Testing
  • CI/CD

Key Benefits:

1)  Cost Reduction - Less time and effort spent on provisioning and management through IaC.
2)  Speed - Faster execution through automation.
3)  Risk Reduction - Lower chance of misconfiguration or human error.
4)  Predictability - Assess the impact of changes via change sets and decide accordingly.

There are several tools that can be used for deploying Infrastructure as Code:
  • Terraform
  • CloudFormation 
  • Heat
  • Ansible
  • Salt
  • Chef, Puppet

Ansible, Chef and Puppet are configuration management tools, primarily designed to install and manage software on existing servers. They can support a certain degree of infrastructure provisioning; however, purpose-built tools are a better fit for that job.

Orchestration tools like Terraform and CloudFormation are specially designed for infrastructure provisioning and management.  

CloudFormation is the AWS-native Infrastructure as Code offering. It has been one of the most underrated services in the AWS environment for many years; however, with increasing awareness, the service is gaining traction and many clients are willing to look at its advantages.

It allows codification of infrastructure, which helps in leveraging software development best practices and version control. Templates can be authored with any code editor like Visual Studio Code or Atom, checked into a version control system like Git and reviewed with team members before deployment into Dev/Test/Prod.

CloudFormation takes care of all the provisioning and configuration of resources, so developers can focus on development rather than spending time and effort creating and managing resources individually.

[Diagram: template, CloudFormation service and resulting stack]
Resources are defined as code (JSON or YAML) in a template, which the CloudFormation service uses to produce a stack - a collection of AWS resources that can be managed as a single unit. In other words, we can create, update, or delete a collection of resources by creating, updating, or deleting stacks.

CloudFormation can handle anything from simple scenarios like spinning up a single EC2 instance to complex multi-tier, multi-region application deployments.

For example, all the resources required to deploy a web application - web server, database server and networking components - can be defined in a template. When this template is submitted to the CloudFormation service, it deploys the desired web application. There is no need to manage the dependencies of the resources on each other, as that is all taken care of by CloudFormation.

CloudFormation treats all stack resources as a single unit, which means that for a stack creation to be successful, all the underlying resources must be created successfully. If resource creation fails, CloudFormation will by default roll back the stack creation, and any resources created up to that point will be deleted.

However, note that any resource created before the rollback will still be charged for.

The example below creates a t2.micro instance named "EC2Instance" using an Amazon Linux AMI in the N. Virginia (us-east-1) region.

[Image: CloudFormation template defining the t2.micro EC2 instance]
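For readers who prefer text over screenshots, here is a minimal sketch of how such a template could be deployed with boto3. It is illustrative only: the stack name, AMI ID and template body are assumptions, not values taken from the screenshot above.

```python
import boto3

# Assumed minimal template equivalent to the example described above.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example - a single t2.micro EC2 instance
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0abcdef1234567890   # placeholder Amazon Linux AMI ID
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")  # N. Virginia

# CloudFormation provisions the resources and, on failure, rolls the stack back.
cfn.create_stack(StackName="single-ec2-demo", TemplateBody=TEMPLATE)

# Wait until the stack reaches CREATE_COMPLETE (raises if creation fails).
cfn.get_waiter("stack_create_complete").wait(StackName="single-ec2-demo")
```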
 
Just as it makes creation easy, CloudFormation also allows easy deletion of a stack and cleanup of all its underlying resources in a single go.

Change Sets - While updating or changing any resource, there is always a risk associated with the impact of that change. For example, updating a security group description without defining a VPC in the template, or in a non-VPC environment, will recreate the security group as well as the EC2 instance associated with it. Another example is updating an RDS database name, which will recreate the database instance and can have a severe impact.

CloudFormation allows you to preview and assess the impact of a change through change sets, to ensure it doesn't implement unintentional changes.

[Image: creating a change set for the stack]
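As a rough illustration, a change set can also be created and inspected through boto3 before deciding whether to execute it. The stack name, change set name and template file below are assumptions for this sketch, not values from the original post.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Assumed file holding the modified template (e.g. the edited security group).
updated_template = open("updated-template.yaml").read()

cfn.create_change_set(
    StackName="single-ec2-demo",
    ChangeSetName="sg-description-update",
    TemplateBody=updated_template,
    ChangeSetType="UPDATE",
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="single-ec2-demo", ChangeSetName="sg-description-update"
)

# Review the proposed actions (Add/Modify/Remove) and whether replacement occurs.
for change in cfn.describe_change_set(
    StackName="single-ec2-demo", ChangeSetName="sg-description-update"
)["Changes"]:
    rc = change["ResourceChange"]
    print(rc["LogicalResourceId"], rc["Action"], rc.get("Replacement"))

# Apply only if the impact is acceptable:
# cfn.execute_change_set(StackName="single-ec2-demo",
#                        ChangeSetName="sg-description-update")
```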

The change set example below shows that this change will:

[Screenshots: change set details shown in the AWS console]
 
1)  Replace the security group.
2)  Possibly replace the EC2 instance, depending on several factors that are external to this CloudFormation template and can't be assessed with certainty. For such cases, the impact can be assessed with the help of the AWS Resource and Property Types Reference (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) documentation.

Conclusion: CloudFormation, the Infrastructure as Code service from AWS, unleashes the real power and flexibility of the cloud environment and has revolutionized the way we deploy and manage infrastructure. It is worth investing time and effort in exploring it.





June 25, 2019

S3- Managing Object Versions


S3 has been one of the most appreciated services in the AWS environment. Launched in 2006, it is designed for 99.999999999% (eleven nines) of durability. It now handles over a million requests per second and stores trillions of documents, images, backups and other objects.

Versioning is one of the S3 features that makes it even more useful. Once versioning is enabled, successive uploads or PUTs of a particular object create distinct, individually addressable versions of it. This is a great feature, as it provides safety against accidental deletion due to human or programmatic error. Therefore, if versioning is enabled, any version of an object stored in S3 can be preserved, retrieved or restored.
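A minimal boto3 sketch of this behaviour is shown below; the bucket and key names are placeholders, not values from the post.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-versioned-bucket"   # placeholder bucket name

# Turn versioning on; every subsequent PUT of the same key creates a new version.
s3.put_bucket_versioning(
    Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
)

s3.put_object(Bucket=bucket, Key="report.txt", Body=b"first revision")
s3.put_object(Bucket=bucket, Key="report.txt", Body=b"second revision")

# Both uploads are preserved as distinct, individually addressable versions.
for version in s3.list_object_versions(Bucket=bucket, Prefix="report.txt")["Versions"]:
    print(version["VersionId"], version["IsLatest"])
```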

However, this comes with an additional cost: each time a new version is uploaded, it adds to the S3 usage, which is chargeable. This cost can multiply very quickly if versions that are no longer in use are managed improperly. So how can current as well as old versions be managed suitably?

This is easy; there are two options:
1)  Use of S3 Lifecycle Rules
2)  S3 Versions-Manual Delete 


Use of S3 Lifecycle Rules

When versioning is enabled, a bucket will have multiple versions of the same object, i.e. a current version and non-current ones.
Lifecycle rules can be applied to ensure object versions are stored efficiently by defining what action should be taken for non-current versions. Lifecycle rules can define transition and expiration actions.
 
[Screenshot: steps to create a lifecycle policy in the S3 console]
The example below creates a lifecycle policy for the bucket stating that all non-current versions should be transitioned to Glacier after one day and permanently deleted after thirty days.

[Screenshot: lifecycle rule review in the S3 console]
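The same lifecycle policy could also be expressed through the API. Here is a hedged boto3 sketch with a placeholder bucket name and rule ID.

```python
import boto3

s3 = boto3.client("s3")

# Non-current versions move to Glacier after 1 day and are purged after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-versioned-bucket",   # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-noncurrent-versions",   # placeholder rule ID
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 1, "StorageClass": "GLACIER"}
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)
```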



S3 Versions-Manual Delete
Deleting versions manually can be done directly from the console. Because all versions are visible and accessible there, a specific version of the object can be selected and deleted.

 
[Screenshot: deleting a specific object version from the S3 console]

However, when using the command line interface, a simple delete object command will not permanently delete the object named in the command; instead, S3 will insert a delete marker into the bucket. That delete marker becomes the current version of the object, with a new Id, and all subsequent GET object requests will return that delete marker, resulting in a 404 error.
So even though the object is not erased, it is not accessible and this can be confused with deletion. However, the object, with all its versions plus the delete marker, still exists in the bucket and keeps consuming storage, which results in additional charges.
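The following boto3 sketch illustrates the behaviour just described; the bucket and key are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "example-versioned-bucket", "report.txt"   # placeholders

# A delete with no VersionId does not remove data; it only adds a delete marker.
response = s3.delete_object(Bucket=bucket, Key=key)
print(response.get("DeleteMarker"), response.get("VersionId"))  # True, marker's Id

# A plain GET now fails with a 404 even though every version still exists.
try:
    s3.get_object(Bucket=bucket, Key=key)
except ClientError as err:
    print(err.response["Error"]["Code"])   # NoSuchKey
```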

[Diagram: a simple DELETE adds a delete marker on top of the existing versions]
So what is a delete marker? When a delete command is executed for a versioned object, a delete marker is inserted into the bucket, acting as a placeholder for that versioned object. Because of this delete marker, S3 behaves as if the object were erased. Like any object, a delete marker has a key name and an Id; however, it differs from an object in that it does not have any data, which is why it returns a 404 error.

The storage size of a delete marker is equal to the size of its key name, which adds one to four bytes of bucket storage for each character in the key name. That is not much, so why should we be concerned about it? Because the size of the objects it blocks or hides can be huge and can pile up enormous bills.

Note that a delete marker is also inserted in version-suspended buckets. So if versioning is enabled and then suspended (remember that versioning can never be disabled once enabled), all simple delete commands will still insert delete markers.

Removing delete markers is tricky. If a simple delete request is executed to erase a delete marker without specifying its version Id, it won't get erased; instead, another delete marker is inserted with a new unique version Id. All subsequent delete requests will insert additional delete markers, so it is possible to have several delete markers for the same object in a bucket.

[Diagram: repeated simple DELETEs stack up additional delete markers]
To permanently remove a delete marker, simply include its version Id in the DELETE Object versionId request.

[Diagram: deleting the delete marker by specifying its version Id]
Once this delete marker is removed, a simple GET request will again retrieve the current version (e.g. 20002) of the object.

This solves the problem of unintended storage consumption. But how do we deal with the object in the first place, so that we don't have to go through this complication?
To get rid of an object version permanently, we need to use the specific command "DELETE Object versionId". This command will permanently delete that version.

[Diagram: permanently deleting a specific version with DELETE Object versionId]
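In boto3 terms, both cleanups boil down to passing the right VersionId; the bucket, key and version Ids below are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example-versioned-bucket", "report.txt"   # placeholders

# Remove a delete marker by addressing it with its own version Id.
s3.delete_object(Bucket=bucket, Key=key, VersionId="marker-version-id")

# Permanently delete a specific data version; no new delete marker is created.
s3.delete_object(Bucket=bucket, Key=key, VersionId="20002")
```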

Conclusion: S3 provides virtually unlimited storage in the cloud, and versioning makes it even more secure by protecting objects from accidental deletion. However, this comes at a cost and should be managed cautiously. The above is a rational explanation for a scenario where a user deleted an S3 object but still struggled with its charges in the AWS bill.



RDS - Scaling and Load Balancing


Solution architects often encounter the question: as with EC2 instances, can a load balancer with an Auto Scaling group be used for scaling and load balancing RDS instances and the databases hosted on them?

While the answer to this question is "no", there are other ways to scale RDS and load balance RDS read queries.

A point to consider here is that RDS is a managed service from AWS and therefore simplifies scaling of the relational database to keep up with the growing requirements of the application, with minimal manual intervention. The former part of this article explores vertical as well as horizontal scaling of RDS instances, and the latter looks at load balancing options.

Amazon RDS was first released on 22 October 2009, supporting MySQL databases. This was followed by support for Oracle Database in June 2011, Microsoft SQL Server in May 2012, PostgreSQL in November 2013 and MariaDB in October 2015. Today it is one of the core PaaS services offered by AWS.


Scaling- RDS Instances 

RDS instances can be scaled vertically as well as horizontally.

Vertical Scaling

To handle higher load, database instances can be vertically scaled up with a single click. At present there are fifty instance types and sizes to choose from for an RDS MySQL, PostgreSQL, MariaDB, Oracle or Microsoft SQL Server instance; for Aurora, there are twenty different instance sizes.

Follow the steps below to vertically scale an RDS instance.

 

[Screenshot: modifying the DB instance class in the RDS console]
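For those scripting the change, the same modification can be requested via the API; the instance identifier and target class in this sketch are assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Scale up the instance class; set ApplyImmediately=False to defer the change
# to the next maintenance window instead.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",      # placeholder instance identifier
    DBInstanceClass="db.m5.xlarge",   # assumed larger target class
    ApplyImmediately=True,
)
```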

Remember, so far only the instance type has been scaled; the storage is separate and remains unchanged when the instance is scaled up or down. Hence the volume must also be modified separately. We can increase the allocated storage space or improve performance by changing the storage type (such as from General Purpose SSD to Provisioned IOPS SSD).


[Screenshot: modifying storage size and type in the RDS console]
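A sketch of that separate storage modification, again with assumed values rather than anything prescribed in the post:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Grow the volume and switch to Provisioned IOPS SSD in a separate modification.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",   # placeholder instance identifier
    AllocatedStorage=500,          # assumed new size in GiB
    StorageType="io1",
    Iops=5000,
    ApplyImmediately=True,
)
```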

One important point to remember while scaling vertically is to ensure the correct license is in place for commercial engines like Oracle and SQL Server, especially in the BYOL model, because licenses are usually tied to CPU sockets or cores.

Another important consideration is that a Single-AZ instance will be down or unavailable during this change. However, if the database instance is Multi-AZ, the impact will be minimal, as the standby database is updated first; a failover then occurs to the newly updated standby before the changes are applied to the primary database engine (which then becomes the new standby).


Horizontal Scaling

To scale read-intensive traffic, read replicas can be used. Presently, Amazon RDS for MySQL, MariaDB and PostgreSQL allows you to create up to five read replicas for a given source database instance, while Amazon Aurora permits up to fifteen read replicas for a given database cluster.

Read replicas are asynchronously replicated copies of the main database.

A read replica can:

  • Be in the same or a different AZ, or even a different region, so it can be placed close to users.
  • Be promoted to master for disaster recovery.
  • Have the same or a different database instance type/class.
  • Be configured as Multi-AZ - Amazon RDS for MySQL, MariaDB and PostgreSQL allow Multi-AZ configuration on read replicas to support disaster recovery and minimize downtime from engine upgrades.
  • Have a different storage class.

Each of these read replicas has its own endpoint to share the read load. We can connect to these read replicas just as we connect to a standard DB instance.
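Creating a replica is a single API call; here is a hedged boto3 sketch with placeholder identifiers and an assumed instance class and AZ.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the source instance, optionally in another AZ or class.
reply = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",     # placeholder replica name
    SourceDBInstanceIdentifier="mydb",         # placeholder source instance
    DBInstanceClass="db.m5.large",
    AvailabilityZone="us-east-1b",
)
print(reply["DBInstance"]["DBInstanceStatus"])   # "creating" until it is available
```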


Load Balancing/Distribution of Read/Write Traffic on AWS RDS

AWS load balancers don't support routing traffic to RDS: the CLB, ALB and NLB cannot route traffic to RDS instances. So how can read traffic be distributed or balanced across AWS RDS read replicas?

There are two options -


1)  Using an open-source software-based load balancer, like HAProxy.

As we know, each replica has a unique Domain Name System (DNS) endpoint. These endpoints can be used by an application to implement load balancing.

This can be done programmatically at the application level (see the sketch below) or by using one of several open-source solutions such as MaxScale, ProxySQL and MySQL Proxy.
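Here is a minimal application-level sketch of such read/write splitting. The endpoints, credentials and query are placeholders, and pymysql is just one possible driver chosen for illustration, not something prescribed in the original post.

```python
import itertools

import pymysql   # assumed MySQL driver; any DB-API driver would work

WRITER = "mydb.xxxxxxxx.us-east-1.rds.amazonaws.com"   # placeholder endpoints
READERS = itertools.cycle([
    "mydb-replica-1.xxxxxxxx.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.xxxxxxxx.us-east-1.rds.amazonaws.com",
])

def get_connection(readonly: bool):
    """Send writes to the source instance and rotate reads across replicas."""
    host = next(READERS) if readonly else WRITER
    return pymysql.connect(host=host, user="app", password="***", database="appdb")

# Reads round-robin over the replica endpoints; writes always hit the writer.
conn = get_connection(readonly=True)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")   # placeholder query
        print(cur.fetchone())
finally:
    conn.close()
```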

These solutions can split read/write queries, and a proxy like HAProxy can then be placed between the application and the database servers. HAProxy can listen for reads and writes on different ports and route them accordingly.


[Diagram: HAProxy routing read and write traffic between the application and RDS instances]

This approach allows the application to use a single database endpoint instead of several independent DNS endpoints, one per read replica.

It also allows a more dynamic environment, as read replicas can be transparently added or removed behind the load balancer without any need to update the application's database connection string.


2) Using Amazon Route 53 weighted record sets to distribute requests across read replicas.

Though there is no built-in way to do this, Route 53 weighted records can be used as a workaround to share requests across multiple read replicas and achieve the same result.

Within a Route 53 hosted zone, a record set can be created for each read replica endpoint, all with equal weight; Route 53 then shares read traffic among these record sets.
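A hedged boto3 sketch of such weighted records follows; the hosted zone Id, record name and replica endpoints are placeholders.

```python
import boto3

route53 = boto3.client("route53")

REPLICAS = {   # placeholder replica endpoints
    "replica-1": "mydb-replica-1.xxxxxxxx.us-east-1.rds.amazonaws.com",
    "replica-2": "mydb-replica-2.xxxxxxxx.us-east-1.rds.amazonaws.com",
}

# One weighted CNAME per replica under the same record name, with equal weights,
# so Route 53 spreads DNS answers (and hence read traffic) across the replicas.
changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "read.db.example.internal.",   # placeholder record name
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": 50,
            "TTL": 60,
            "ResourceRecords": [{"Value": endpoint}],
        },
    }
    for set_id, endpoint in REPLICAS.items()
]

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",   # placeholder hosted zone Id
    ChangeBatch={"Changes": changes},
)
```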


[Diagram: Route 53 weighted record sets distributing read traffic across read replicas]

The application can use the Route 53 endpoint to send read requests to the database; these are then distributed among all read replicas, achieving behavior similar to a load balancer.








March 25, 2019

Optimizing Costs in the Cloud: Analyzing the Reserved Instances

There is more than one lever when it comes to optimizing costs in AWS; right sizing, reserving and using Spot Instances are some of them. While right sizing is always applicable, the same is not true of Reserved Instances, due to various constraints. Technologists, evangelists, architects and consultants like us are faced with a dilemma - to reserve or not to reserve instances in the AWS cloud. I am sure you have some point of view, not necessarily for or against, but rather on how, when and why it makes or doesn't make sense to reserve instances in the cloud. AWS is used as the reference for this discussion, but I am sure it would be equally applicable to other CSPs (Cloud Service Providers).

This (not so) short article tries to bring out the relevant points succinctly, to help you make a decision or, at the very least, act as an ice breaker to get the debate started.


Let's begin, with some research!


Arguably, the documentation is detailed, well organized, and peppered with diagrams in hues of orange. In fact, it's so detailed that I had to open about seven tabs from various hyperlinks on one page to get started. No wonder, then, that a majority of technicians hardly skim through the documentation before making a decision. Further, it's a living document, considering the hundreds of innovations that AWS brings about each year in some form or shape. If you are bored enough and have had a chance to read through it, congratulations! Hopefully, we will be on the "same page" at some point in time.


[Screenshot: AWS Reserved Instances documentation]

  

Text to Summary (Does AWS have a service yet?)

I checked a few online options for summarizing, but they weren't up to the mark, especially when I provided the AWS documentation URL. So here is a summary of the documentation I managed to read.

Basics

An EC2 instance in AWS is defined by attributes like size (e.g. 2xlarge, medium), instance type/family (e.g. m5, r4), scope (AZ or Region), platform (e.g. Linux, Windows) and network type (EC2-Classic or VPC).

The normalization factor expresses instance size within an instance type. It is used when comparing instance footprints and is meaningful only within the instance family: one can't use the normalization factor to convert sizing across instance types, e.g. from t2 to r4.

The normalization factors, which apply within an instance type, are given below.

[Table: normalization factors by instance size]
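To make the footprint arithmetic concrete, here is a small illustrative sketch. The factor values follow AWS's published size-normalization scale (small = 1), and the helper function is purely hypothetical, not part of any AWS API.

```python
# Normalization factors per the AWS size-normalization scale (small = 1);
# meaningful only for comparisons within one instance family.
FACTORS = {"micro": 0.5, "small": 1.0, "medium": 2.0, "large": 4.0,
           "xlarge": 8.0, "2xlarge": 16.0, "4xlarge": 32.0, "8xlarge": 64.0}

def footprint(size: str, count: int) -> float:
    """Total normalized units for `count` instances of a given size."""
    return FACTORS[size] * count

# One r4.8xlarge has the same footprint as two r4.4xlarge instances,
# so a reservation can be modified between the two configurations.
print(footprint("8xlarge", 1))   # 64.0
print(footprint("4xlarge", 2))   # 64.0

# Converting across families (e.g. t2 -> r4) with these factors is not valid.
```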


Reservation offerings


There are three offerings - Standard, Convertible and Scheduled Reserved Instances - with the following key differences between them:

[Table: key differences between Standard, Convertible and Scheduled Reserved Instances]

Notes:

* The availability of any feature or innovation changes across regions in a short span of time, hence it is better to validate at the time of actual use.


Reservation Scope

Instance reservation scope is either Zonal (AZ-specific) or Regional, with the following characteristics:

(Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/apply_ri.html )

[Table: characteristics of Zonal vs. Regional Reserved Instances]

Modifiable attributes

Some instance attributes are modifiable within certain constraints, most notably the platform type. As a general rule, Linux platform instances are more amenable to modification. The following table lists the modifiable attributes (Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html ).

[Table: modifiable Reserved Instance attributes]

The constraints

While there is sufficient flexibility in general for IaaS, there are a number of constraints based on feature availability in specific regions, upper or lower limits, and so on. The constraints on Reserved Instances are highlighted below.

1.       Instance size flexibility on Reserved Instances is lost if such reservations are purchased for a specific AZ or for bare metal instances. Sizing flexibility also does not apply to instances with dedicated tenancy, or to Windows with/without SQL Server, SuSE or RHEL.

2.       Software licenses usually do not align well with instance size changes, so careful consideration must be given to licensing aspects. As one example, in one of my environments I use SuSE for SAP, for which the software license pricing comes in a couple of slabs. If I use one r4.8xlarge, I pay an hourly license price of 0.51/hr, but if I choose two r4.4xlarge instances, which are the equivalent of one r4.8xlarge, I pay 2 x 0.43/hr.

3.       While modifying a Reserved Instance, the footprint (size/capacity) of the target configuration is required to match that of the existing configuration; otherwise the modification fails. Even if the reconfiguration results in a higher footprint, it doesn't work.

4.       Instances are grouped by family (based on storage or CPU capacity), type (designed for specific use cases) and size. For example, the c4 instance type is in the Compute optimized instance family and is available in multiple sizes. Even though c3 instances are in the same family as c4 instances, one can't modify c4 instances into c3 instances because of different underlying hardware specifications.

5.  Some instance types that come in only one size obviously can't be modified. Such instances include cc2.8xlarge, cr1.8xlarge, hs1.8xlarge, i3.metal and t1.micro.

6.       Convertible RIs can only be exchanged for other Convertible Reserved Instances currently offered by AWS. Exchanging multiple Convertible RIs for one Convertible RI is possible.

7.       AWS doesn't allow you to change All Upfront and Partial Upfront Convertible Reserved Instances to No Upfront Convertible Reserved Instances.

Do we have a common understanding?

It's very human not to have a common understanding, even when we have the same source of information. Here is my attempt to rephrase some of the salient points as a common understanding - if you don't agree, consider it a commoner's understanding!

 

1.       The much-touted 70% discount is not that common. It applies to some very large instance types with partial/full upfront payment, e.g. x1.16xlarge with partial upfront on a 3-year term

( ref: https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/ )

Most instance types give you a more moderate discount, approximately 20-30% less than you might have expected.

2.       The notion that Reserved Instances are the first solution for saving costs in the cloud is not entirely right. In fact, it requires a careful review of the various constraints.

3.       RIs could be used for production loads, which are usually more on the stable side; by the time a workload enters the production systems, a set of tests including performance, backup/recovery and network latency tests would have been carried out, so sufficient information is available to make a decision.

4.       The type of applications running on the EC2 instances, the operating system and the software/tool licenses, which are usually bound to instance type/size, must be understood before going for a reservation.

5.       Even when the decision has been made to go with RIs, a good starting point could be Convertible as opposed to Standard RIs, due to the flexibility they provide and their marginal cost disadvantage compared to Standard RIs. Scheduled RIs, due to their inflexibility, need even more careful evaluation, unless you use them only for smaller instances and non-production systems.

 

Conclusion

As the number of innovations increases and solution architects are faced with numerous "equally good" options, it makes sense to pause, evaluate the options carefully, gather stakeholder inputs and make a recommendation. That is certainly true for deciding whether or not to use Reserved Instances.



Continue reading "Optimizing Costs in the Cloud: Analyzing the Reserved Instances " »

September 30, 2018

Public Cloud Security- is it still a concern for enterprises?

Author: Jitendra Jain, Senior Technology Architect (Architecture & Design Group, Infosys)

Introduction

Cloud computing has become an integral part of IT modernization in enterprises of all sizes and is considered a major milestone in the transformation journey. Cloud computing changes the way enterprises store, share and access data for services, products and applications. Public cloud is the most widely adopted model of cloud computing. As the name suggests, a public cloud is available to the public over the internet and is easily accessible via web channels in a free or pay-as-you-go mode. Gmail, O365 and Dropbox are some popular examples of public cloud services.

Public cloud services eliminate extra investment in infrastructure, as all the required hardware, platform architecture and core operating software services are entirely owned, managed and efficiently maintained by the cloud hosting vendor.

As per McAfee research, almost 76% of enterprises have adopted at least one public cloud service provider, whether for SaaS, IaaS or PaaS offerings. This shows the popularity of public cloud.


Continue reading "Public Cloud Security- is it still a concern for enterprises?" »

September 20, 2018

Multi-Cloud strategy - Considerations for Cloud Transformation Partners

While "Cloud" has become the "New Normal", recent analyst surveys indicate that more and more enterprises are adopting Multi-Cloud, wherein more than one Public Cloud provider is utilized to deliver the solution for an enterprise, for example; a solution that employs both AWS and Azure. There are various reasons for enterprises to take this route, Cloud Reliability, Data Sovereignty, Technical Features, Vendor Lock-in to being a few amongst the several reasons.
Though most of the deliberations revolve around Multi-Cloud for enterprises, here is an attempt to bring out the considerations that a Cloud Transformation Partner needs to watch out for.


There are four core areas a Cloud Transformation Partner must focus on to ensure successful and seamless Transformation & Operation of a Multi-Cloud environment:

1. Architecture
2. Engineering
3. Operations
4. Resources

Architecture: The success of a multi-cloud strategy depends largely on defining the right architecture, one that can help reap the benefits of having a multi-cloud environment. Architecture decisions should be reviewed against the business demands that triggered the multi-cloud strategy to ensure they are fulfilled.

Application and deployment architecture has to address all aspects of why an enterprise is looking to adopt a multi-cloud strategy. For example, if data sovereignty is the key consideration, the application deployment architecture should make sure that data resides in the cloud that suits the need. If reliability is the driver, a suitable failover mechanism needs to be in place, making use of the multiple cloud platforms available.

Interoperability across platforms is among the critical elements to emphasize, along with portability across Cloud Service Providers (CSPs). Achieving this takes a multi-layered approach, and containers are emerging as a solution in the cloud native space. More details are in another blog post here.

Though cloud as a platform is stable, there is a possibility of failure with a cloud provider (and we have witnessed it in the past). A Disaster Recovery (DR) solution built on multiple clouds can be more effective than DR with a single cloud provider across multiple regions.

Establishing network connectivity between competitor CSPs can have its own challenges and bottlenecks. The network solution should facilitate provisioning new connections when needed, with the desired performance, across multiple clouds.

Security solutions and controls need to run natively on all clouds and work across all boundaries. Hence, Cloud Security Architecture should be at the top of the list of considerations in multi-cloud. More importantly, solutions for threats, breaches and fixes need to cater to multiple CSPs and have to be centrally coordinated to respond effectively.


Engineering: There will be changes to the current set of application development and engineering processes followed for a single cloud environment. Application deployment needs careful planning in a multi-cloud environment, with specific focus on developer productivity, process compliance and security implementations.

DevOps should be an integral part of agile development for cloud native and traditional applications. Attention and careful planning need to be given to the DevOps processes and tools so that they work seamlessly across multiple cloud platforms.

Application lifecycle management should have platform-specific testing built into the process to ensure reliable operations on each of the target platforms.


Operations: Cloud operations are more complex in a multi-cloud scenario due to the overheads that each cloud platform will bring in.

The Cloud Management Platform (CMP) must support the multiple public clouds that are part of the solution. The CMP should be capable of abstracting the complexity of the different cloud stacks and models and provide operators a single-window view to monitor, administer and manage the multi-cloud ecosystem.

Oversubscription of cloud resources needs to be watched for in a multi-cloud environment. It is hard to foresee the cloud usage patterns on each of the cloud platforms, and it is very likely that one or all of them can get oversubscribed. Optimization of cloud resources can be a challenge and can result in increased costs. A multi-cloud strategy may also not attract the best volume discounts from a CSP, which can impact cost.

SLAs can vary across CSPs; this should be taken into consideration while defining service levels.

The overheads of managing and tracking multiple CSP contracts, billing, etc. take effort and time and need to be planned for. A well-defined change control mechanism and a roles & responsibilities matrix are essential in a multi-cloud environment.


Resources: Staffing needs to be planned considering the multiple cloud platforms and the varied skills that will be required. Teams need an appropriate mix of core cloud horizontal skills and CSP-specific vertical skills. A multi-cloud environment will demand resources in:


Cloud Horizontal Skills - Engineering skills like cloud native development with 12-factor principles and cloud orchestration are relatively cloud provider independent. These resources will be specialists in their technical areas and will not be dependent on the cloud platforms.

Cloud Vertical Skills - Specialists in each cloud platform will be required to extract the best out of each of the cloud platforms that are used. These resources will be required in various roles, ranging from architects to developers.

Agile/DevOps - Cloud development needs to be agile and should accommodate changes with minimal turnaround time. This needs adoption of Agile/DevOps and resources with the appropriate skills to run large scale agile projects.

Cloud-led transformation is a journey/continuum for any large enterprise, and hence they should choose a cloud transformation partner who has deep expertise across architecture, engineering and operations with the right resources. Infosys, as a leading cloud transformation partner, has been working with Global 2000 enterprises on their transformations. You can find more details on the same here.

Continue reading "Multi-Cloud strategy - Considerations for Cloud Transformation Partners" »

September 3, 2018

Choosing the right Cloud Service Provider(s) and managing portability and interoperability across them

Global enterprises are leveraging cloud as a platform to enable transformation, drive business growth, improve business agility and enhance customer experience while delivering resilient IT systems at an optimal cost. AWS and Azure are the leading hyperscale cloud service players in the market, while others like Google Cloud and Oracle Cloud are emerging strongly as well, with compelling product and service offerings for enterprise customers.

Choosing the right Cloud Service Provider

A cloud service provider choice is not made by enterprises solely based on cost, nor will they move from one cloud service provider to another just to achieve a direct cost advantage on CSP charges. The choice of cloud service provider is made considering the suitability of the CSP for the workload, the unique feature set offered by the CSP, visibility into the product roadmap, security and compliance adherence, flexibility in commercial agreements, pricing models and overall business strategy alignment. Given the heterogeneity of the current enterprise IT landscape, globally distributed businesses with IT strategy at the line-of-business or country/regional level often end up adopting more than one cloud service provider.

With more than one cloud service provider and an existing infrastructure landscape, enterprises end up with a multi-cloud environment and applications deployed across it. With business processes flowing across applications in different deployment zones, it is essential that enterprises manage the hybrid environment with due consideration of interoperability and portability.

Interoperability

The foundation for interoperability should factor in all four layers of the IT landscape - infrastructure, platform, application and business processes - while catering to the needs of all involved stakeholders, primarily developers, IT operations, security, application and business owners. Considerations in the interoperability design include:

  1. Abstract the complexity of the cloud platform and provide a unified interface to IT developers to enable large scale adoption.
  2. Provide a unified cloud orchestration & management layer for provisioning, self-service catalog, policy-based orchestration, monitoring and billing & chargeback.
  3. Create an integration platform at the data and process levels across the deployment zones in a secure manner, to ensure business processes can be executed seamlessly across applications deployed in various zones.

Portability

Though interoperability ensures operations across multiple cloud service providers, there is a need to consider portability at various levels, including:

  •  Applications - Technology stack (programming) and application packaging that enable application development irrespective of the deployment target. For example, an application could be developed with technologies like Spring, Python, NodeJS, MySQL, MongoDB, Hadoop and Spark and packaged as containers to ease deployment.
  •  Middleware platform - An application management runtime that brings uniformity across cloud service providers and simplifies operations and management. Containers like Docker and container management platforms like Kubernetes help deploy applications on multiple cloud platforms and manage them in a scalable manner.
  •   Development and Management Tools - While cloud native applications bring the required agility, they need the right set of development and management tools:
    1.  Unified service discovery, routing, security and management to monitor and troubleshoot microservices and applications deployed in the hybrid cloud. The cloud control plane is expected to provide service discovery & routing, security policy enforcement, identity & authorization services, tracing, logging and monitoring to run large scale hybrid cloud environments. ServiceMesh technology is in its nascent stage and is focused on addressing these needs.
    2. A DevOps platform to build, test, package and deploy applications in a uniform manner across cloud service providers. Tools like GitHub, Jenkins, Packer, Terraform, CloudForms and Chef/Puppet help realize a DevOps platform that works across public and private clouds.
  •   Security - Consistent implementation and enforcement of security irrespective of the application deployment zone in the hybrid cloud. Unlike the traditional data center model of deploying applications into a defined network architecture, cloud native workloads are dynamically placed across deployment zones in multiple clouds in a portable manner. This necessitates technologies that reconfigure the infrastructure to enforce security policies in a software-defined manner. ServiceMesh attempts to address the security needs of the hybrid cloud as well and continues to evolve.

Implementation of portability should consider factors like the cost of implementing portability, the impact of avoiding CSP-native capabilities, time to market, and the engineering skills required to build the platform. The enterprise may also choose to implement limited portability based on factors like the unique advantages of a specific CSP service, the cost of porting out in the future, etc.

Summarizing: while the choice of cloud service providers is made based on feature set, workload affinity and commercial agreement, it is essential to establish interoperability across the infrastructure, platform and application layers to ensure service resiliency and unhindered operations. Also, critically evaluate portability needs while defining the cloud solution blueprint, to retain a continuous evolution path for the organization.

Infosys, as a leading cloud transformation service provider, has successfully helped several clients navigate their multi-cloud adoption journey. We would be happy to share our experiences with you and help you in your journey.

August 24, 2017

Managing vendor product licenses during large scale migration to cloud

Public cloud services are mature, and enterprises are adopting cloud to achieve cost optimization, introduce agility and modernize the IT landscape. But public cloud adoption presents a significant challenge in handling existing vendor licensing arrangements. The commercial impact varies based on the cloud delivery model, from IaaS to SaaS, and on the licensing flexibility. The business case for cloud transformation needs careful consideration of existing software licenses.

Based on our experience, software licensing and support by software vendors are at varying stages of maturity. At times, the software licensing model can become expensive when moving to cloud. Typically, on-premise licenses are contracted for a number of cores, processor points or users, whereas the definition of a core in the virtualized/cloud world is different.

While enterprises assess licenses when undertaking the cloud journey, they should also carry out a high-level assessment of the risks associated with licenses while formulating the business case.

Before formulating a business case, it's important to factor the following aspects into the enterprise's license transition strategy:

·         Conduct due-diligence of major software vendors to identify any absolute 'show stoppers' for the use of their products such as:

o   Support level in new platform services, license portability and license unit derivation mechanism in cloud.

o   Commercial impact for re-use on multi-tenant cloud platform.

o   Flexibility to reassign licenses as often as needed.

o   Mechanism to check and report compliance on public cloud in an ongoing basis across product vendor licenses.

·         Inventory management of licences and the commercials around these licences.

·         'Future state' services and application stacks should balance between license cost and performance requirements.

o   Negotiate unfriendly product licensing bound to socket or physical hardware level.

o   Evaluate existing licensing terms and conditions for increase in licensing costs.

o   Evaluate / check for mitigation controls and options on public cloud.

o   Plan ahead for the cost implications for reusing converged stack or appliance based licenses on public cloud.

o   Translate the on-premise licenses to public cloud (virtual core).

o   Cloud service providers include operating system licenses - examine the option to remove these from existing vendor agreements.

o   Leverage the continuous availability capability of public cloud platforms to eliminate disaster recovery licenses and costs associated with it.

Approaches to overcome public cloud licensing challenges:

To overcome the associated licensing challenges, IT teams can optimize the target state architecture, solution blueprints and frameworks with consideration of license/cost models. A few approaches:

·         Re-architect existing solutions leveraging event-driven services, Function as a Service, PaaS, containers and microservices to achieve agility and significant license cost reduction.

·         Enterprises should consider dedicated hosts or dedicated instances/bare-metal options when socket-level visibility is required for complying with license usage on public cloud, but should also weigh the cost impact of these machine types.

·         Embark on open source for platforms like databases, application servers and web servers.

·         If a traditional deployment of a platform must be moved to cloud, consider creating a pool of platform services rather than services for individual application requirements - for example, common database services. Lines of business can consume business applications through centralized platform services across business units in order to achieve the greatest cost and agility benefits.

·         Consider solutions with bundled licenses under usage-based pricing models like SaaS, PaaS, Marketplace and public cloud native services.

For reusing "on-premise" licenses, all major software vendors are changing license policies to allow flexibility to port licenses to the cloud, but this is neither uniform nor all-inclusive yet. One vendor may allow certain product licenses on the cloud but not all, another may allow all of them on public cloud, while some vendors allow porting onto authorized cloud environments only.

In summary, migrating like for like will have an impact on licensing costs on public cloud. Understanding the current licensing agreements and models, optimizing application architectures for cloud, and negotiating a position with vendors that will be suitable for cloud, along with compliance processes in the target state model, should hold the organization in good stead. As cloud native services and open source innovation continue to grow rapidly, enterprises can mitigate traditional licensing constraints by leveraging these innovations.

August 27, 2015

Software is the New Hardware

The "Humanics", "Mechanics" and "Economics" of the new enterprise world

 

The enterprise world seems to be poised at an interesting inflection point today. There no longer seems to be anything called a "known competitor" or an "industry adjacency" in enterprise business anymore.

 

A Google can come from nowhere and reimagine, redefine and rewrite the rules of the entire advertisement industry. An Apple can come from nowhere and reimagine, redefine and rewrite the rules of the entire entertainment industry. A Facebook and Twitter can create absolutely new spaces that did not exist a few years ago. An Amazon and/or Alibaba can come from nowhere and reimagine, redefine and rewrite the rules of the way commerce is done around the world. And then there are Uber, Tesla and others.

 

In each of these examples, three elements seem to combine to perfection: 

  • Humanics: This is about using the power of imagination to discover new possibilities and create new experiences. All the companies mentioned above have done this par excellence in their respective contexts.
  • Mechanics: The new possibilities powered by imagination have to be converted into reality and, more often than not, in today's world, all of this is being driven by software. All the examples mentioned above, have leveraged the power of software in reimagining, redefining and rewriting the rules of their respective games. 
  • Economics: And finally, of course, there is the economics - the right business model for the right context. Businesses and business plans need to find the right balance between "Humanics", "Mechanics" and "Economics" to scale new horizons and convert possibilities into realities - leveraging the power of software!

GAFTA vs G2K

At a biomedicine conference last year, venture capitalist Vinod Khosla famously declared that healthcare would be better off with fewer doctors. And then he delivered the same advice to IT at a tech conference the following month. Needless provocation? Far-fetched fantasy? Datacenter utopia, actually. Because that's exactly what most of the traditional and large G2K companies would dearly love to achieve.

Not too long ago, the Director of Data Center Operations at Facebook said each of their administrators managed at least 20,000 servers. Contrast that with the 1:500 or 1:1,000 (admin to server) ratio that a typical G2K company manages. At best. A couple of years earlier - as if to prove a point - Facebook had launched the Open Compute project to make their highly efficient hardware design "open source" for everyone's benefit.

The reason for this lopsided infrastructural evolution is mainly historical. Most G2K companies have been around long enough to accumulate a legacy of disparate, non-interoperating, generations of technologies that seem to be operating in silos. These enterprises are forced to dedicate the technology budget, not to mention large human resources, to simply keep the lights on. On the other hand, the GAFTA (Google-Apple-Facebook-Twitter-Amazon) group - with a scant 97 years between them - found a way to abstract and codify this complexity using the power of software to build highly scalable and highly automated solutions to the same problem.

The stark difference in productivity means that many G2K enterprises struggle with most of their resources being stuck with "keeping the lights on." This also means that very limited resources are allocated to reimagining, redefining and rewriting possibilities and converting these into newer realities for business.

Now, what if, somehow magically, this could be completely turned upside down? The possibilities would be immense. The probability of converting these possibilities into realities would be immense.

The key question is, how can G2K organizations do a GAFTA? Especially in the world of infrastructure management.

Software is the new hardware

The basis of the hypothesis that G2K can do a GAFTA, especially in the field of infrastructure management, seems to be encapsulated in a mere 5 words: "software is the new hardware".

G2K companies must find a way to emulate their GAFTA counterparts to leverage the power of software to reimagine, redefine and rewrite the way the current infrastructure is managed and convert possibilities into realities.

They must find a way to run their operations noiselessly leveraging the power of software. To achieve this, they must find a way to abstract the complexities and heterogeneity of their environments through the power of software and drive extreme standardization and extreme automation to achieve extreme productivity - by an order of magnitude, not incrementally. This will help them take costs out - and large chunks of it.

They must find a way to: 

  • Drive extreme visibility and control across not only the "horizontal elements" spanning various businesses, geographies, applications, partners, and functions but also "vertical elements" across all infrastructural elements to applications to business processes. And all of this in a "single pane".
  • Modernize their infrastructure by possibilities that software offers - hyper-converged infrastructure, software defined everything, Open Compute, and a good mix of public, private and hybrid clouds so that agility increases by leaps and bounds and costs decrease by an order of magnitude.
  • Modernize and move their existing workloads to take advantage of the new software-powered underlying infrastructure.
  • Reimagine their processes to make DevOps an integral part of the new ways of working.
  • Reimagine their security for "hazy perimeters" and collaborative work models, to counter ever-increasing vulnerabilities and risks - all this through the power of software.
  • Reskill and reorganize talent. In a world where software is the new hardware, there will be a need for a massive change in skills and structure.
  • Change the organizational culture.

While the existing and mature businesses within the enterprise will demand relentless excellence in efficiency, control, certainty, and variance reduction, the foundational cultural constructs of the "newer" lines of business of the enterprise will be based on exploration, discovery, autonomy, and innovation. Building an ambidextrous organization and driving a culture of purpose, creativity and learning would be paramount.

All said and done, this journey is best undertaken with partners who are able and aligned - not alone. G2K companies must find a way to leverage partners who have firmly based their strategies and their businesses on the fact that "software is the new hardware". Not just by talking about it but actually making it a way of life of using software to help their clients "run" operations, "build" next-gen infrastructure, "modernize/migrate" workloads, and "secure" them against the new threats. 

The last word

The approach to technology infrastructure at G2K and GAFTA companies belong to different eras. There exists a clear blueprint for G2K enterprises to leverage the benefits of the GAFTA world in terms of agility, and freed-up man and money resources that can be promptly plowed back into re-imagination, innovation and new business models. 

GAFTA has shown the way on how new business models can be "Powered by imagination. Driven by software".  

Software is indeed the new hardware!

Continue reading "Software is the New Hardware" »

October 11, 2013

Take your business where your digital consumers are - on the cloud! Part2

In my previous blog, I spoke about how digitization is taking place these days in all realms with cloud. Enterprises have started embracing the new phenomenon of "consumerization of enterprises" for business. In this blog, I will share some thoughts on how cloud and big data can be two pillars of an organizational strategy.

 

My essential tenet is that cloud and Big Data are interdependent - as more and more information resides on the cloud, it becomes easier to access and analyse information, providing valuable business intelligence for companies. In fact, Gartner predicts that 35% of customer content will reside on the Cloud by 2016, up from 11% in 2011 (1).

 

Customers are leveraging this easy and instant access to rich data to make smarter decisions. Many in-store shoppers tend to use their mobile devices to compare product prices on online channels such as Amazon or eBay. Retailers who have the capacity to track this action can immediately offer customers a better package/ deal, thereby delighting the customer and closing the sale instantly.

 

To achieve such pervasive intelligence and instant actionable insights, one should be able to sift through large amounts of data pertaining to each customer in quick time. Businesses will need to verify whether the information they gather about their customers is accurate. Coupled with all this, there are large scale technology related changes, and costs, that need to be considered. And this elasticity of compute at an affordable cost is quite possible when you leverage the cloud effectively.

 

Information that resides across multiple locations can be collated, accessed, analysed, and verified for accuracy at much lower costs on the Cloud. Further, through Cloud-based media, brands can track consumer opinion as well as follow critical consumer behaviour actions/ changes. Take for example the manufacturing industry. Cloud can drive shorter product lifecycles and faster time-to-market as well as enhance their product design, development and marketing campaigns (3).

 

In my next blog, I will talk about how consumer responsiveness can be accelerated by cloud.

October 4, 2013

Building Tomorrow's Digital Enterprise on Cloud - Part 2

In my last blog on Building Tomorrow's Digital Enterprise on Cloud we looked at enterprise cloud adoption trends and how the Digital transformation is influencing Cloud adoption models like PaaS. In this blog, we will look at how enterprises can leverage Cloud to reinvent themselves into a Digital enterprise.

We see that enterprises wanting to take on this digital transformation challenge are evaluating and on-boarding new technology and business solutions leveraging cloud, mobility and social media. However, enterprises should be careful to avoid merely repackaging old capabilities in new technology solutions. Merely moving applications and not services (lack of service orientation); merely moving applications and not business capabilities that can be offered as a service; merely moving applications and not exposing APIs that allow third parties and partners to build innovative services to enhance the consumer experience - all of this is like offering "old wine in a new bottle" that no longer appeals to tomorrow's Gen-Y digital consumers.

Continue reading "Building Tomorrow's Digital Enterprise on Cloud - Part 2" »

Take your business where your digital consumers are - on the cloud! Part 1

The CTO of a leading automaker recently asked me how he could access personal information such as birthdays, anniversary dates, or other significant events in their customers' lives to be able to serve them better. More particularly, he was interested in getting this information in real-time on occasions such as when they approach a POS terminal or an associate in a dealership or merchandise store. He was looking for ways to drive a more personalized customer engagement: mechanisms that could extract relevant and useful insights from multiple sources of customer data to empower the sales team and make highly customized and compelling offers to the customer.

 

Essentially, his question represents an ongoing paradigm shift in the mindset of CXOs in today's enterprises - from traditional ways of doing business to leveraging the power of digital channels. The main reason for this is the rise of the digital consumer, fuelled by the big bang of cloud. In this three-part blog series, I would like to discuss how cloud is driving the "consumerization of enterprises".

 

The high-speed connectivity and information transparency of online and mobile channels have spawned a new breed of Gen Y customers that are well-connected through multiple devices, expressive and ready to engage with an attitude. This changing customer demographic means that enterprises have to find new ways of getting their attention - and winning their trust.

 

Today's customers are sharing and accessing tremendous amounts of data, a bulk of which resides on the cloud, through social media sites and various other interactive channels. In 2012 alone, consumer-generated content accounted for 68% of the information available through various devices (TV, mobiles, tablets, etc.) and channels (social media, videos, texts, etc.). Much of this information is extremely valuable if we can access and analyze it. In fact, by 2020, 33% of the information residing in this digital universe will have critical analytical value, compared to today, where about one-fourth of it does. That is an increase of 7% in 8 years, on a total data volume that is growing at 40% year on year (2).

 

In my next blog, I will talk about how cloud and Big Data are driving actionable business insights for enterprises' digital transformation.

September 27, 2013

Building Tomorrow's Digital Enterprise on Cloud - Part 1

IDC, in its 2013 predictions "Competing on the 3rd Platform", predicts that the next generation of platform opportunities will be at the intersection of Cloud, Mobile, Social and Big Data.

My discussions with client IT executives across industries over the last few months have clearly shown a growing interest in building such a platform to address their digital transformation needs. They realize the importance of looking at this strategically from a digital consumer engagement perspective, and the need to align their internal projects and initiatives focusing on mobility, cloud and social integration to maximize business value. Consumers today demand seamless access across devices and channels and the ability to integrate into the context of their digital experience. The next generation of platforms needs to address this transforming consumer engagement model.

The foundation of such a platform, I believe, would leverage Cloud-based applications with API Management at the core: integrating the enterprise with its digital ecosystem and with consumers on mobile apps, devices and social media; leveraging Cloud services for elastic scaling; and applying analytics on API usage trends to generate business insights.

In this blog series, we will talk about the Cloud adoption trends we are seeing in the marketplace, how this digital convergence is influencing cloud adoption models, and the evolving need for Enterprise API Management on Cloud for value realization.

Continue reading "Building Tomorrow's Digital Enterprise on Cloud - Part 1" »

August 27, 2013

You can't compete with the clouds, but you can embrace and succeed!

 

I take the inspiration for my blog title from Forrester's James Staten, who recently wrote "You can learn from the clouds but you can't compete". Staten talks about how data center operations can achieve levels of excellence and success, and prescribes standardization, simplification, workload consistency, automation and a maniacal focus on power and cooling as the way to set up and run the best data center operations.

 

However, I think there is more to these large cloud providers than just a few best practices to learn. I was recently talking to an important client of Infosys, for whom we are enabling a cloud-enabled IT transformation, and she mentioned something that clarified for me where the real value of these cloud providers lies. She said her aspiration is to set up and run a trusted cloud ecosystem for her enterprise with a single point of accountability. Beyond the sheer scale and magnitude of their investments, the likes of Amazon Web Services and Microsoft Windows Azure, behemoths of industrial-scale infrastructure with seemingly limitless compute power, earn respect through the sheer agility and speed with which they respond to their customers' needs.

 

Of course, this happens because of the phenomenal level of simplification, standardization, automation and orchestration with which they run their operations. Now imagine if these principles of IT governance and operations management were extended to an enterprise. Vishnu Bhat, VP and Global Head of Cloud at Infosys, keeps saying, "It is not about the Cloud. It is about the enterprise." If an enterprise were to focus on learning from these cloud leaders and work towards establishing an ecosystem of cloud providers, a hybrid setup where its current IT is conveniently and easily connected to its private and public cloud environments, and if that hybrid cloud environment were managed with the same agility and speed as an AWS, that is when the possibility of true success and value from cloud starts to emerge.

 

Imagine a hybrid cloud within the realms of your enterprise that functions with the speed, agility and alacrity of an AWS. Imagine costs being optimized continuously and exceptionally efficiently: automated provisioning of enterprise workloads; integrated federation and brokerage with on-premise core IT systems; extensibility to public clouds for demand spikes; constant optimization through contestability; control and governance through a single enterprise view; metering, billing and charge-backs to business; clear points of accountability with easy governance of SLAs and liabilities; secure management of the cloud; and compliance de-risking in keeping with the laws of the land. And all this from one ecosystem integrator with one point of responsibility and accountability. That's cloud nirvana at work! I am eager to keep telling clients how to get to this state, how to learn from the cloud providers and how to contextualize those lessons for the enterprise.

 

In my next blog, I will talk about an important aspect of cloud success - contestability, but before that, I would urge you to read my colleague Soma Pamidi's blog "Getting cloud management and sustenance right - Part 1". Till then, may the clouds keep you productive!

November 26, 2012

Harnessing the Hybrid Cloud

A recent IDC study claims that "private cloud is [the] current flavor but hybrid cloud is fast becoming a reality." This makes sense, because it is the hybrid model that exploits cloud potential to the fullest. It's the golden mean between private, public and on-premise, enabling enterprises to fulfill all their ambitions--from plain-vanilla aspirations like cost efficiency, scalability, productivity and "on demand" to strategic business priorities like innovation, market expansion and business model reinvention. Of course, there's a catch. Read More

July 12, 2012

Big Data and the God particle (Higgs Boson)

The July 4th, 2012 announcement from CERN of possible evidence for the existence of the God particle, or the Higgs Boson, has sent ripples through the physics community. The particle is not just fundamental to explaining how elementary particles acquire mass, it is also a validation of the Standard Model of particle physics. It holds the possibility of opening up new frontiers in physics and a new understanding of the world we live in.

While we marvel at these discoveries, our physicist brethren grapple with trying to understand whether this discovery is truly the Higgs Boson or an imposter. It is, however, very interesting to look at the magnitude of the data analysis and the distributed computing framework required to wade through the massive amounts of data produced by the Large Hadron Collider (LHC).

The Big Data problem that the scientists at CERN and across the world had to contend with was sifting through over 800 trillion (you heard that right ...) proton-proton collisions looking for this elusive Higgs Boson. The particle has a large mass but is extremely unstable and lasts for less than a billionth of a second. It cannot be detected directly and is identified through its footprint, that is, the particles it decays into. All of this results in over 200 petabytes of data that need to be analyzed.

Right from the beginning, CERN set this up as a distributed, cloud-based, tiered data processing solution. Three tiers were identified: T0 collects the data directly from the LHC; 11 T1 sites across the world receive the data from CERN; and a number of T2 sites (for example, there are 8 in the US) hold the areas of the data that particular groups of physicists are interested in analyzing. From the T2 sites, physicists can download data to their personal T3 nodes for analysis. The result is a massive, highly distributed data processing framework that collects data spewed out by the LHC detectors at a phenomenal rate of 1.25 GB/sec. The overall network can rely on a computation capability of over 100,000 processors spread across 130 organizations in 34 countries.

From a technology perspective, it is interesting that some of the same open source technologies we use for big data processing in enterprises were applied here: HDFS (Hadoop Distributed File System), the file system from the Hadoop ecosystem, was a candidate for storing these massive amounts of data, and ROOT, another open source tool that is also used by financial institutions, is used to analyze it.
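To make the parallel with enterprise big data tooling concrete, here is a minimal sketch, in PySpark, of the same general pattern: scanning a large HDFS-resident dataset for rare candidate events. The file path, record layout and selection cut are purely hypothetical illustrations; the real LHC analysis runs on ROOT and the grid's own frameworks, not on this code.

# Illustrative only: a hypothetical scan of collision summaries stored in HDFS.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("collision-scan").getOrCreate()

# Read collision summaries stored as CSV lines in HDFS
# (hypothetical layout: event_id,reconstructed_mass_gev,channel).
events = spark.sparkContext.textFile("hdfs:///data/collisions/*.csv")

def parse(line):
    event_id, mass_gev, channel = line.split(",")
    return (event_id, float(mass_gev), channel)

# Keep only diphoton candidates near the mass window of interest (~125 GeV).
candidates = (events.map(parse)
                    .filter(lambda e: e[2] == "diphoton" and 120.0 <= e[1] <= 130.0))

print("candidate events:", candidates.count())
spark.stop()

The point is not the physics, but that the storage and processing primitives involved are the same ones available to any enterprise.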

It is amazing that the analysis tools used to find the God particle are commonly available for enterprises to use on their own, smaller Big Data problems.

To paraphrase Galileo: "All truths are easy to understand once they are discovered; the point is to discover them." Big Data can help with the discovering.

July 4, 2012

Can IT now fully enable business? With Cloud?

In my last twenty years in this industry, I have seen quite a few faces of IT operations in many organizations, from being considered a back-room operation to leading change and business integration. But the truth is, most enterprise IT organizations are struggling to stay on the front line and are constantly battling to find ways of quantifying the "value of IT". This has not been easy, to say the least.

 

In my view, two big issues are contributing to the woes of most IT organizations. They are:

1. Time to Market

2. Innovation

 

There is no doubt that IT has become the backbone of every enterprise. Most organizations' IT capabilities have evolved, and we have done a 'good job' of bringing engineering discipline, process, methodologies and, more importantly, predictability to this industry. So where has that left IT in the organization? More often than not, it is considered a bottleneck to bringing new ideas to market. Let's look at it from a business leader's standpoint. How many times have we heard "we do not have an environment to start the work, develop and test", "this cannot go into this release cycle", "it is too time-consuming to try things out", "we need to have all requirements and design ready"? The truth of the matter is that it takes too long to get a business idea from concept to production, and more often than not, IT is the bottleneck.

 

It is true that some 85 percent of IT budgets the world over go towards keeping the lights on. So where is the room for innovation? Most of the investment is engaged in keeping the lights on, and the remainder is used up delivering "strategic programs". There is generally no room to try new ideas or engage in any sort of R&D activity. This has resulted in the formation of shadow IT groups outside the IT departments, with limited success and most certainly resulting in a "mess" to be cleaned up.

 

I am very excited about the opportunity the Cloud offers. I am also very confident that this can be as big a disruption, and as big an opportunity, for IT as the Internet was, or, a phenomenon closer to my heart, the Global Delivery Model. The Cloud offers a bright future for the industry, one in which IT can ensure greater agility, simplicity, scalability, efficiency, innovation and cost-effectiveness for businesses. Boy, does it provide room, and lots of it! It has the power to address the issue of time to market as well as bring in significant opportunities and constructs for innovation.

 

As I sign off, I would like to leave you all with one thought: the sky is the limit on the Cloud. So what would limit CIOs from leveraging it to its full potential? More on that in my next blog.

 

 

June 27, 2012

Microsoft can extend its Windows run on the Cloud.

While preparing for my trip to the Microsoft Worldwide Partner Conference (WWPC) 2012, happening in Toronto during the second week of July, I was going through the notes I captured at previous events. What really caught my eye were some key messages delivered during the 2010 WWPC in Washington, DC. In his keynote, Steve Ballmer mentioned that Microsoft was transforming itself into a cloud services company and that 70 percent of Microsoft developers were working on cloud services. This was followed by the message to partners to "Lead with cloud, Build, Tell, Sell and Support". Microsoft's big bets on cloud were clearly evident throughout the 2010 partner conference, where 70 percent or more of the sessions were around Microsoft cloud products, be it Public or Private Cloud, SaaS or PaaS offerings. It was the first wave of Microsoft's message to partners to gear up and support its Windows run on the cloud.

If you look back, this strategy from Microsoft didn't happen overnight. The preparations started way back in 2005, when Ray Ozzie released an exciting internal memo to Microsoft employees outlining the competitive landscape, the opportunities and the future steps for Microsoft's strategy to deliver a portfolio of seamless software + services. I recollect that the majority of us ignored it at the time and continued implementing Windows XP, Windows Server 2003 and Office 2003, while exploring the power of .NET applications and XML web services on premises.

Later, in 2008, Ray further advanced his software + services strategy and highlighted the power of choice for businesses embracing cloud, which set the stage for an enhanced set of products and services from Microsoft extending its powerful on-premise story to the cloud. This was followed by the announcement of Windows Live and Microsoft Business Productivity Online Services. It was the beginning of cloud becoming a mainstream focus.

The 2011 WWPC in Los Angeles highlighted Microsoft's huge investments in building data centers across the world and the product line enhancements meant to entice adoption. Today, I believe Microsoft has become one of the largest and most influential players in the cloud across the enterprise, SMB and consumer spaces. Windows Azure is now mature and extensive and, unlike its competitors, provides both IaaS and PaaS capabilities. Microsoft has turned its most influential Windows-based business productivity application, Microsoft Office, into a cloud app. Along with this, Microsoft also offers its most popular collaboration and messaging suite, comprising Microsoft Exchange, SharePoint and Lync, as a SaaS solution branded O365. Added to this is its recently unveiled SaaS offering, Microsoft Dynamics CRM Online. Microsoft also has the right synergy across public and private cloud for identity, virtualization, management and application development. Earlier this week, Microsoft announced its acquisition of Yammer, a social networking company, to extend its cloud services with best-in-class enterprise social networking. What more do we need to believe that Microsoft will extend its run on the Cloud?

I am sure the 2012 WWPC will have more exciting announcements reiterating its strategy as a cloud services company.

June 26, 2012

My road trip for the Windows Azure Spring Release

I am just back from a road trip to Chennai, the capital of the state of Tamil Nadu in South India. Chennai is one of India's IT hubs, home to thousands of IT professionals. Initially, I was reluctant to leave the cool and comfortable Mediterranean-like climate of Bangalore for a day in a city famous for its sweltering heat and humidity. I went for two primary reasons. First, I got to speak to over 1,000 Microsoft technology enthusiasts, something I always enjoy. The second, and more important, reason was that it was the occasion of the Windows Azure Camp. The event, held on June 21st, was Microsoft's official announcement of the much-awaited enhancements to Windows Azure (the Spring Release). Microsoft has been unveiling this Spring Release the world over, and it is an important event for developers, IT professionals and technology decision makers to discover and familiarize themselves with the new services and capabilities of the Windows Azure platform. Beyond this, it gave them a platform to interact with cloud experts who work with and enable these technologies across multiple use cases to address real-world scenarios.

 

I was invited because Infosys is one of the top System Integrators in the world for Windows Azure. That I was the only cloud technology expert from any SI invited to be part of the launch keynote made it all the more special. I decided to share my experiences and client success stories on Windows Azure, and our views on how the enhanced platform can address broader client scenarios. With the Spring Release, Windows Azure has become equally exciting for developers and infrastructure experts. Windows Azure is no longer just a platform as a service (PaaS), which was a developer's domain; it now has a new Infrastructure as a Service (IaaS) capability that can run Windows and Linux instances and their related applications. There have been major enhancements to existing services as well as new service additions. With this, Windows Azure can be a preferred platform for dev/test environments as well as for lifting and shifting Windows and Linux VMs.

 

The earlier PaaS capability is now known as Azure Cloud Services, the new IaaS capability is called Azure Virtual Machines, and there is one more capability for hosting websites, known as Azure Websites. Beyond these, there are new networking capabilities like Azure Connect and Azure Virtual Network to enable cross-premise connectivity scenarios. An excited and packed crowd of around 1,000 turned up at the Chennai Trade Centre for the event, dominated by the developer community along with infrastructure experts. The folks I interacted with were keen to know how enterprise clients are adopting the public platform and how it addresses their security and other compliance requirements. A few of them said that application mobility in a hybrid scenario, with VMs moving between on-premise and cloud, would be really exciting for most of the scenarios they come across with their clients. A few were keen to know about e-commerce possibilities. Dev and test scenarios, especially for SharePoint and SQL Server, were also highlighted.

 

As I left the city in the evening, the heat and humidity of Chennai were forgotten. What remained in my memory was the promise of this new cloud offering from Microsoft, the enriching experience of interacting with over 1,000 enthusiastic cloud technologists, and how excited everybody is about what the Cloud can do for enterprises worldwide.

 

The Cloud is real, it is underway, and it is not going away. Let's embrace it with both hands and enjoy the benefits.

March 20, 2012

Along with data in on-premise database, can we also expose stored procedure using Azure building blocks to internet?

While working on projects, we might have come across the following requirement:

·         Migrate an existing web application from on-premise to cloud for some of the obvious reasons (and I believe by now we know the different driving factors for migrating an application or service to the cloud).

·         But keep the back-end database on-premise. There are quite a few reasons for this; for instance, the data may be of a very "high business impact" type that can't be put outside the corporate network.

We must have explored quite a few options. Staying within the Windows Azure domain, the options are:

·         Make use of Azure Connect to create a kind of local area network comprising the database server and the virtual machines hosting the Azure roles (i.e. the application).

·         Make use of the Azure AppFabric Service Bus (my favorite option) to expose the database over HTTP as an OData interface that also supports CRUD operations.

But how do we then expose the SQL stored procedures and functions defined in the back-end database?
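Before getting into the details, a minimal client-side sketch in Python may help set the context. It assumes the on-premise database has already been surfaced over HTTP as an OData service relayed through the AppFabric Service Bus, and that the stored procedure in question has been wrapped as a service operation on that service; the relay URL, entity set and operation name below are hypothetical placeholders rather than part of any real deployment.

import requests

# Hypothetical Service Bus relay address fronting the on-premise OData service.
RELAY_URL = "https://contoso-namespace.servicebus.windows.net/CustomerData.svc"

# A regular OData entity set: a plain GET returns the data feed, and CRUD maps
# onto GET/POST/PUT/DELETE against the same URLs.
orders = requests.get(f"{RELAY_URL}/Orders", params={"$top": 10, "$format": "json"})
orders.raise_for_status()

# A hypothetical service operation wrapping a SQL stored procedure; OData service
# operations are invoked as simple HTTP calls with query-string parameters.
top_customers = requests.get(
    f"{RELAY_URL}/GetTopCustomersByRegion",
    params={"region": "'EMEA'", "$format": "json"},
)
top_customers.raise_for_status()
print(top_customers.json())

The interesting part, of course, is the server side: getting the stored procedure mapped onto such an operation in the first place, which is what the rest of this post explores.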

Continue reading "Along with data in on-premise database, can we also expose stored procedure using Azure building blocks to internet?" »

January 9, 2012

Enterprise Cloud Adoption Strategic Roadmap

Adoption of Cloud in an enterprise is more of a strategic decision than an operational or tactical one. Cloud adoption needs to be seen from an enterprise architecture strategy perspective rather than as an isolated, application-specific architecture decision, for the simple reason that it has several short- and long-term implications for enterprise strategy that may go beyond the specific application's business or technology footprint.

Continue reading "Enterprise Cloud Adoption Strategic Roadmap" »

November 7, 2011

Enterprise Cloud trends - "Cloud First" strategy

While working on the Cloud strategy for Infosys a year back, we had lengthy debates on what the enterprise of the future would look like and how its cloud vision would evolve in the coming years. Most of our forecasts are coming true. My recent interactions with clients and partners clearly reveal that Cloud adoption by enterprises is faster than the larger community had perceived.

 

"Cloud first" strategy is being adopted by some of our leading clients and few of them have a very clear approach for their Infrastructure and application stack. Hybrid is the most common trend and Private Cloud plans are in place for the new and old gears. IaaS consumption from Public Cloud seems to be a short term strategy and PaaS is becoming more prominent for application development even though there is still some fear of vendor lock-in. This to me is the right strategy as more innovations are to happen in the PaaS space and applications can leverage the power of Cloud in terms of scalability, global availability, design for failures etc. more with platform as a service. Application portability gaps across platforms and on-premise setup will gradually diminish with parity amongst on-premise server operating systems and Cloud platforms being addressed with every new version release. Those who consider that an application developed for windows server is a platform "lock-in" may not agree with me on this view.

 

One of our clients who has adopted O365 has outlined a future strategy for portals with SharePoint Online as the first choice; anything on-premise will be an exception (for reasons such as feature parity or data privacy). This shows that a "Cloud first" strategy is becoming the norm within enterprises, with clear directions for non-standard applications and short-lived workloads. It works well across organizations and industries, especially for self-contained application workloads that have the least dependency on data residing on-premise. That said, these organizations may still have security and compliance concerns about their data being exposed to the Public Cloud.

 

The next wave is around mobility and analytics. I will discuss this in my next post.

 

August 23, 2011

Practicing Agile Software Development on the Windows® Azure™ Platform

Over the years, several software development methodologies have evolved to help the IT industry cope with rapidly evolving business requirements. One such methodology is Agile, an iterative approach to software development. Similarly, rapid strides on the technology front are resulting in paradigm shifts in software development and in how IT delivers its services to business. Technologies such as virtualization and cloud offer low entry barriers by making software and hardware infrastructure easily accessible, thus reducing time to market. These are encouraging signs that help reduce the gap between business and IT.

Continue reading "Practicing Agile Software Development on the Windows® Azure™ Platform" »

July 29, 2011

Step by step approach to expose on-premise database using Azure infrastructure - Part 2

In the last blog, we looked at using Azure Connect to expose an on-premise SQL database, along with the benefits and the points of concern in doing so. In this blog, we will look at another approach using the Azure AppFabric Service Bus.

Continue reading "Step by step approach to expose on-premise database using Azure infrastructure - Part 2" »

Big Data and Cloud Computing

It is well known that leveraging the Cloud for high-compute workloads over a short span of time is a good business case.

Getting business insights from Big Data is becoming mainstream, and the Cloud is becoming an ideal choice for it.

Continue reading "Big Data and Cloud Computing" »

March 31, 2011

Is cloud computing same as putting things Online?

All those just boarding the cloud train may have posed this question to themselves or to others with cloud know-how. Being a Cloud SME myself, I have faced this question several times. This post is an attempt to clear some of the confusion that exists around this specific topic.

Continue reading "Is cloud computing same as putting things Online? " »

January 17, 2011

Banking 3.0 : Organizational Drivers for Cloud Computing- Financial Institutions

To talk about the organizations that manage our money and assets, let us first classify them by the kind of work they undertake. A careful study of the trends, combined with a clear understanding of what cloud computing can offer, will then give us the drivers for this sector to adopt cloud computing.

Bank services.jpg

For categorization, we can use a freshman's criterion of the nature of work. Hence we have:

·         The Banks

    o   Consumer Banks

    o   Govt Banks

    o   Industrial/Corporate/Specialized Banks

·         The Insurance Companies

·         The Non-Banking Financial Corporations

Continue reading "Banking 3.0 : Organizational Drivers for Cloud Computing- Financial Institutions" »