The commoditization of technology has reached its pinnacle with the advent of cloud computing. The Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions on cloud computing with Infosys experts.


June 28, 2019

Amazon Aurora Serverless, the future of database consumption

Amazon has recently launched the Amazon Aurora Serverless database (MySQL-compatible edition). This is going to set a new trend in the way organizations consume databases. Traditionally, database setup, administration, scaling and maintenance are tedious, time-consuming and expensive. Thanks to cloud computing, RDS takes the setup, scaling and maintenance of databases off customers' hands. Amazon Aurora Serverless takes RDS to the next level, where users pay only for what they use, when they use it.


June 26, 2019

AWS CloudFormation: An underrated service with vast potential

As businesses experience a surge in provisioning and managing infrastructure and services through cloud offerings, a collateral challenge has emerged: staying accurate and quick while provisioning, configuring and managing medium to large-scale setups with predictability, efficiency and security.

Infrastructure as Code (IaC) is a way to manage resource provisioning, configuration and updates/changes using the tested and proven software development practices already used for application development, for example:
  • Version Control
  • Testing
  • CI/CD

Key Benefits:

1)  Cost reduction - Time and effort reduction in provisioning and management through IaC.
2)  Speed - Faster execution through automation.
3)  Risk reduction - Fewer chances of misconfiguration or human error.
4)  Predictability - Assess the impact of changes via change sets and take decisions accordingly.

There are several tools that can be used for deploying Infrastructure as Code.
  • Terraform
  • CloudFormation 
  • Heat
  • Ansible
  • Salt
  • Chef, Puppet

Ansible, Chef and Puppet are configuration management tools, primarily designed to install and manage software on existing servers. They can support a certain degree of infrastructure provisioning; however, there are purpose-built tools that are a better fit.

Orchestration tools like Terraform and CloudFormation are specially designed for infrastructure provisioning and management.  

CloudFormation is AWS's native Infrastructure as Code offering, and it has been one of the most underrated services in the Amazon cloud environment for many years. However, with increasing awareness, the service is gaining traction and many clients are willing to look at its advantages.

It allows codification of infrastructure, which helps in leveraging best software development practices and version control. Templates can be authored with any code editor such as Visual Studio Code or Atom, checked into a version control system like Git, and reviewed with team members before deployment to Dev/Test/Prod.

CloudFormation takes care of all the provisioning and configuration of resources, so developers can focus on development rather than spending time and effort creating and managing resources individually.

Resources are defined as code (JSON or YAML) in a template, which the CloudFormation service turns into a stack: a collection of AWS resources that can be managed as a single unit. In other words, we can create, update, or delete a collection of resources by creating, updating, or deleting stacks.
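
As a quick sketch of that workflow with the AWS CLI (the stack and file names below are illustrative placeholders), a single call turns a template into a stack:

    aws cloudformation create-stack \
        --stack-name demo-stack \
        --template-body file://template.yaml

    # Follow the provisioning events for the whole stack as a single unit
    aws cloudformation describe-stack-events --stack-name demo-stack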

CloudFormation can be used for deployments ranging from simple scenarios, like spinning up a single EC2 instance, to complex multi-tier, multi-region applications.

For example, all the resources required to deploy a web application, such as the web server, database server and networking components, can be defined in a template. When this template is handed to the CloudFormation service, it deploys the desired web application. There is no need to manage the resources' dependencies on each other, as that is all taken care of by CloudFormation.

CloudFormation treats all stack resources as a single unit, which means that for a stack creation to be successful, all the underlying resources must be created successfully. If resource creation fails, by default CloudFormation will roll back the stack creation, and any resources created up to that point will be deleted.

However, note that any resource created before the rollback is charged for the time it existed.
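
If the partially created resources are needed to troubleshoot a failed creation, the default rollback can be switched off with a standard create-stack flag; this is just a sketch, and the default rollback behaviour is usually the safer choice:

    # Keep partially created resources for inspection instead of rolling back
    aws cloudformation create-stack \
        --stack-name demo-stack \
        --template-body file://template.yaml \
        --disable-rollback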

The example below creates a t2.micro instance named "EC2Instance" using the Amazon Linux AMI in the N. Virginia (us-east-1) region.

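A minimal YAML sketch of such a template (the AMI ID is a placeholder; substitute the current Amazon Linux AMI ID for us-east-1):

    AWSTemplateFormatVersion: '2010-09-09'
    Description: Single t2.micro instance using the Amazon Linux AMI (illustrative sketch)
    Resources:
      EC2Instance:
        Type: AWS::EC2::Instance
        Properties:
          InstanceType: t2.micro
          ImageId: ami-0123456789abcdef0   # placeholder - use the current Amazon Linux AMI for us-east-1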
 
Just as it makes creation easy, CloudFormation also allows easy deletion of a stack and cleanup of all its underlying resources in a single go.
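
For instance, one CLI call (stack name again illustrative) removes the stack and every resource it created:

    aws cloudformation delete-stack --stack-name demo-stack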

Change sets - Whenever a resource is updated or changed, there is a risk associated with the impact of that change. For example, updating a security group description without defining a VPC in the template, or in a non-VPC environment, will recreate the security group as well as the EC2 instance associated with it. Another example is updating an RDS database name, which recreates the database instance and can have a severe impact.

CloudFormation allows you to preview and assess the impact of a change through change sets, ensuring it doesn't implement unintentional changes.
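
A typical change-set flow with the AWS CLI looks roughly like this (all names are placeholders): create the change set from the edited template, review what it intends to do, and only then execute it.

    aws cloudformation create-change-set \
        --stack-name demo-stack \
        --change-set-name review-sg-update \
        --template-body file://template.yaml

    # Inspect the planned actions (Add / Modify / Remove, and whether replacement is required)
    aws cloudformation describe-change-set \
        --stack-name demo-stack \
        --change-set-name review-sg-update

    # Apply the change only once the review looks safe
    aws cloudformation execute-change-set \
        --stack-name demo-stack \
        --change-set-name review-sg-update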


The change set example below shows that this change will:

1)  Replace the security group.
2)  Possibly replace the EC2 instance, depending on several factors that are external to this CloudFormation template and can't be assessed with certainty. For such cases, the impact can be assessed with the help of the AWS Resource and Property Types Reference (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html).

Conclusion: CloudFormation, the Infrastructure as Code service from AWS, unleashes the real power and flexibility of the cloud environment and has changed the way we deploy and manage infrastructure. It is worth investing time and effort in exploring it.





June 25, 2019

S3- Managing Object Versions


S3 has been one of the most appreciated services in the AWS environment. Launched in 2006, it provides 99.999999999% (eleven nines) of durability. It now handles over a million requests per second and stores trillions of documents, images, backups and other objects.

Versioning is one of the S3 features that makes it even more useful. Once versioning is enabled, successive uploads or PUTs of a particular object create distinct, individually addressable versions of it. This is a great feature, as it provides safety against accidental deletion due to human or programmatic error: with versioning enabled, any version of an object stored in S3 can be preserved, retrieved or restored.
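
Versioning is enabled per bucket; a sketch with the AWS CLI (bucket and key names are placeholders):

    aws s3api put-bucket-versioning \
        --bucket my-bucket \
        --versioning-configuration Status=Enabled

    # Each successive upload of the same key now gets its own version ID
    aws s3api put-object --bucket my-bucket --key report.docx --body report.docx
    aws s3api list-object-versions --bucket my-bucket --prefix report.docx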

However, this comes at an additional cost: each new version uploaded adds to S3 usage, which is chargeable. The cost can multiply very quickly if versions that are no longer in use are managed improperly. So how do we suitably manage current as well as old versions?

This is easy; there are two options:
1)  Use of S3 Lifecycle Rules
2)  S3 Versions - Manual Delete


Use of S3 Lifecycle Rules

When versioning is enabled, a bucket will hold multiple versions of the same object, i.e. the current and non-current ones.
Lifecycle rules can be applied to ensure object versions are stored efficiently by defining what action should be taken for non-current versions; a lifecycle rule can define transition and expiration actions.
 
The example below creates a lifecycle policy for the bucket stating that all non-current versions should be transitioned to Glacier after one day and permanently deleted after thirty days.

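One way to express the same policy is as a lifecycle configuration applied with the AWS CLI (the bucket name and rule ID are placeholders). The configuration, saved as lifecycle.json:

    {
      "Rules": [
        {
          "ID": "ManageNoncurrentVersions",
          "Status": "Enabled",
          "Filter": {},
          "NoncurrentVersionTransitions": [
            { "NoncurrentDays": 1, "StorageClass": "GLACIER" }
          ],
          "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
        }
      ]
    }

is then attached to the bucket:

    aws s3api put-bucket-lifecycle-configuration \
        --bucket my-bucket \
        --lifecycle-configuration file://lifecycle.json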



S3 Versions-Manual Delete
Deleting versions manually can be done simply from the console. Because all versions are visible and accessible there, a specific version of the object can be selected and deleted.

 

However, when using the command line interface, a simple delete-object command will not permanently delete the object named in the command; instead, S3 inserts a delete marker in the bucket. That delete marker becomes the current version of the object, with a new ID, and all subsequent GET requests for the object return the delete marker, resulting in a 404 error.
So even though the object is not erased, it is no longer accessible and this can be mistaken for deletion. The object, with all its versions plus the delete marker, still exists in the bucket and keeps consuming storage, which results in additional charges.
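
For example (bucket and key names are placeholders), a plain delete on a versioned object reports a new delete marker rather than removing any data:

    aws s3api delete-object --bucket my-bucket --key report.docx
    # The response indicates "DeleteMarker": true along with the marker's new version ID

    # All the old versions, plus the delete marker, are still in the bucket
    aws s3api list-object-versions --bucket my-bucket --prefix report.docx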

So what is a delete marker? When a delete command is executed for a versioned object, a delete marker is inserted in the bucket as a placeholder for that versioned object. Because of this delete marker, S3 behaves as if the object were erased. Like any object, a delete marker has a key name and an ID; however, it differs from an object in that it has no data, which is why requests for it return a 404 error.

The storage size of a delete marker is equal to the size of its key name, which adds one to four bytes of bucket storage for each character in the key name. That is not huge, so why be concerned about it? Because the size of the objects it blocks or hides can be huge and can pile up enormous bills.

Note that a delete marker is also inserted in versioning-suspended buckets: if versioning is enabled and then suspended (versioning can never be fully disabled once enabled, only suspended), simple delete commands will still insert delete markers.

Removing delete markers is tricky. If a simple delete request is executed to erase a delete marker without specifying its version ID, the marker is not erased; instead, another delete marker is inserted with a new unique version ID. Every subsequent delete request inserts an additional delete marker, so it is possible to have several delete markers for the same object in a bucket.

To permanently remove a delete marker, simply include its version ID in the delete object request.

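A sketch with the AWS CLI (names and IDs are placeholders): pass the delete marker's own version ID to delete-object.

    # Remove the delete marker itself, restoring access to the latest real version
    aws s3api delete-object \
        --bucket my-bucket \
        --key report.docx \
        --version-id <delete-marker-version-id>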
Once this delete marker is removed, a simple GET request will retrieve the current version (e.g. 20002) of the object.

This solves the problem of unintended storage consumption. But how do we deal with the object in the first place, so that we don't have to go through this complication?
To get rid of an object version permanently, we need to use the specific request "DELETE Object versionId", which permanently deletes that version.

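With the AWS CLI this is the same delete-object call, now aimed at the data version itself (the version ID 20002 follows the article's example and is a placeholder):

    # Permanently delete a specific object version - this cannot be undone
    aws s3api delete-object \
        --bucket my-bucket \
        --key report.docx \
        --version-id 20002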

Conclusion: S3 provides virtually unlimited storage in the cloud, and versioning makes it even more secure by protecting objects from accidental deletion. However, this comes at a cost and should be managed cautiously. The above explains a scenario in which a user has deleted an S3 object yet still struggles with its charges in the AWS bill.



December 11, 2018

Navigate your Digital Transformation with a Robust HR Service Delivery Solution

Today, employees are adept at technology, ultra-social, opinionated, and continuously connected. They demand high-quality service and experience, and prefer self-service instead of having to reach out to support via phone or email. The consumerization of employee experience is leading HR departments to capitalize on HR service delivery (HRSD) solutions to realign and automate functions such as recruitment, compensation, performance evaluation, compliance, legal, and more. They are also going beyond smart-looking portals and consolidating functions to enable employees to access a modern, smart, and omnichannel experience across desktop, mobile, and virtual assistants. Organizations deploying a robust HR solution have discovered that they were able to reduce administrative costs by up to 30%.

Why an HR service delivery solution offers more than just cost savings

Usually, the first few days at work for a new employee can be a flurry of paperwork and processes. An HRSD solution that is accessible across devices could mean shorter, smoother joining formalities. Employees, whether joining remotely or at an office, can submit soft copies of their documents; this can reduce workflows from 70 steps to 10 and save thousands of man-hours annually.

With an HRSD solution, organizations can do away with geography-specific portals, SharePoint, and the intranet for different sets of information, and offer a single, comprehensive, and user-friendly knowledge platform that is device agnostic. With a type-ahead feature, the platform can suggest terms so that users complete their searches quickly.

Another advantage of an HRSD solution is that employees can access context-sensitive content, tasks, and services through a Single Sign-on (SSO). A prompt feature can suggest related documents so that employees have access to all the information available. For instance, if an employee is searching for the vacation policy of the organization, information related to paid holidays, guest house facilities, leave travel allowance, etc. could pop up for the employee to review.

The traditional way of addressing HR problems is to raise a ticket. At the backend, case routing is manual, time-consuming, and person-dependent. Studies indicate that human resource personnel spend 57% of their time on repetitive tasks. Instead, information can be made available in real time via call, chatbot, or chat with a virtual agent. Larger organizations can also invest in an interactive voice response (IVR) facility that is accessible 24/7. When tickets are raised, an HRSD solution can be used to assign cases automatically depending on the skills and workload of HR personnel. This can positively impact employee experience.

Determine the success of an HRSD solution through leading and lagging indicators

Adopting an HRSD solution can be a major investment, and organizations can measure ROI through leading and lagging indicators. Two instances of leading indicators are a self-service portal and a feedback mechanism. Studies show that 70% of issues can be resolved through a self-service knowledge portal. Accessible 24/7, it gives users greater control over information and does away with costs associated with deploying HR staff to answer calls. A feedback mechanism can be deployed by enabling users to comment on and rate a document. This allows the organization to engage in continuous improvement of the information on the knowledge platform.

Lagging indicators provide quantifiable data that proves the automation invested in by the organization is delivering ROI. For instance, an increase in the use of the chat tool versus a reduction in case volume demonstrates that employees effectively use the chat option to solve issues instead of raising tickets, which take longer to address. As a result, HR personnel spend less time on backend administration and more time responding to actual employee concerns.

An increase in the use of IVR versus a reduction in the number of cases logged indicates that employees are able to quickly address queries over the phone instead of raising tickets. Thus, fewer personnel are needed to service a call center.

Measuring ROI on an HR service delivery solution

  • Organizations that implemented a knowledge portal or mobile app with personalized content found they could solve Tier 0 inquiries over 60% of the time and reduce HR administrative costs by up to 30%
  • Increased first-call resolution reduces Tier 2 escalations. This can save up to 300k (for a client with a case volume of 25,000), as only around 8% of queries escalate to Tier 2
  • With a well-managed HRSD solution, less than 5% of employee queries escalate to Tier 3, at which specialized professionals review and respond to cases. This allows organizations to optimize HR resources for more value-added work
  • Increased self-service and peer networks help case deflection. Over time, more than 60% of employee inquiries are resolved before reaching HR personnel

  • With employee self-reliance, HR can be up to 30% more productive. Freed HR personnel can focus on higher-value strategic issues such as employee retention and workforce planning

 

So, if your organization is looking to give employees a seamless, retail-like experience, an HRSD solution is the answer. While the market abounds with HRSD vendors, choosing the right one requires a deeper understanding of one's requirements and of the vendor's strengths. Begin a conversation with Infosys to learn how your organization can navigate its digital journey with an effective HR service delivery solution.

 

September 30, 2018

Public Cloud Security- is it still a concern for enterprises?

Author: Jitendra Jain, Senior Technology Architect (Architecture & Design Group, Infosys)

Introduction

Cloud computing has become an integral part of IT modernization in enterprises of all sizes, and it is considered a major milestone in the transformation journey. Cloud computing changes the way enterprises store, share and access data for services, products and applications. Public cloud is the most widely adopted model of cloud computing. As the name suggests, a public cloud is available to the public over the internet and is easily accessible via the web, either free or on a pay-as-you-go basis. Gmail, O365 and Dropbox are some popular examples of public cloud services.

Public cloud services eliminate extra investment in infrastructure, as all the required hardware, platform architecture and core operating software services are entirely owned, managed and efficiently maintained by the cloud hosting vendor.

As per McAfee research, almost 76% of enterprises have adopted at least one public cloud service provider, across any kind of cloud offering (SaaS, IaaS, or PaaS). This shows the popularity of the public cloud.



September 20, 2018

Multi-Cloud strategy - Considerations for Cloud Transformation Partners

While "Cloud" has become the "New Normal", recent analyst surveys indicate that more and more enterprises are adopting Multi-Cloud, wherein more than one Public Cloud provider is utilized to deliver the solution for an enterprise, for example; a solution that employs both AWS and Azure. There are various reasons for enterprises to take this route, Cloud Reliability, Data Sovereignty, Technical Features, Vendor Lock-in to being a few amongst the several reasons.
Though most of the deliberations are revolving around Multi-Cloud for enterprises, here is an attempt to bring out the considerations that a Cloud Transformation Partner needs to watch out for.


There are four core areas a Cloud Transformation Partner must focus on to ensure successful and seamless Transformation & Operation of a Multi-Cloud environment:

1. Architecture
2. Engineering
3. Operations
4. Resources

Architecture: Success of a multi-cloud strategy depends largely on defining the right architecture that can help reap the benefits of having a multi-cloud environment. Architecture decisions should be reviewed against the business demands that triggered the multi-cloud strategy to ensure they are fulfilled.

Application and deployment architecture has to address all aspects of why an enterprise is looking to adopt a multi-cloud strategy. For example, if Data Sovereignty was the key consideration, the application deployment architecture should make sure that data will reside in the appropriate Cloud that suits the need. If reliability is the driver, a suitable failover mechanism needs to be in place, making use of the multiple cloud platforms available.

Interoperability across platforms is among the critical elements to emphasize, along with portability across Cloud Service Providers (CSPs). Achieving this takes a multi-layered approach, and containers are emerging as a solution in the cloud-native space. More details are in another blog post here.

Though Cloud as a platform is stable, there is a possibility of failure with a cloud provider (and we have witnessed it in the past). A Disaster Recovery (DR) solution built on multiple clouds can be more effective than DR with a single cloud provider across multiple regions.

Establishing network connectivity between competitor CSPs can have its own challenges and bottlenecks. The network solution should facilitate provisioning new connections when needed, with the desired performance across multiple clouds.

Security solutions and controls need to run natively on all clouds and work across all boundaries. Hence Cloud Security Architecture should be on top of the list for considerations in multi-cloud. More importantly, solutions for threats, breaches and fixes need to cater to multiple CSPs and have to be centrally coordinated to respond effectively.


Engineering: There will be changes to the current set of application development and engineering processes followed for a single cloud environment. Application Deployment would need careful planning in a multi-cloud environment with specific focus on developer productivity, process compliance and security implementations.

DevOps should be an integral part of agile development for cloud-native and traditional applications. Attention and careful planning need to be given to the DevOps processes and tools so that they work seamlessly across multiple cloud platforms.

Application lifecycle management should have Platform specific testing built into the process and ensure reliable operations on each of the target platforms.


Operations: Cloud operations are more complex in a multi-cloud scenario due to the overheads that each cloud platform will bring in.

The Cloud Management Platform (CMP) must support the multiple Public Clouds that are part of the solution. The CMP should be capable of abstracting the complexity of the different Cloud stacks and models and providing a single-window view for operators to monitor, administer and manage the multi-cloud ecosystem.

Oversubscription of Cloud resources needs to be watched in a multi-cloud environment. It is hard to foresee the cloud usage patterns on each of the cloud platforms, and it is very likely that one or all of them can get oversubscribed. Optimization of cloud resources can be a challenge and can result in increased costs. A Multi-Cloud strategy may also not attract the best volume discounts from a CSP, which can impact cost.

SLAs can vary across CSPs; this should be taken into consideration while defining service levels.

The overheads of managing and tracking multiple CSP contracts, billing, etc. take effort and time and need to be planned for. A well-defined change control mechanism and a roles & responsibilities matrix are essential in a multi-cloud environment.


Resources: Staffing needs to be planned considering the multiple cloud platforms and the varied skills that will be required. Teams need an appropriate mix of core cloud horizontal skills and CSP-specific vertical skills. A multi-cloud environment will demand resources in the following areas:


Cloud Horizontal Skills - Engineering skills like cloud-native development with 12-factor principles and cloud orchestration are relatively cloud-provider independent. These resources will be specialists in their technical areas and will not be dependent on a particular Cloud platform.

Cloud Vertical Skills - Specialists in each cloud platform will be required to extract the best out of each of the multiple cloud platforms that are used. These resources will be required in various roles, ranging from architects to developers.

Agile/DevOps - Cloud development needs to be agile and should accommodate changes with minimal turnaround time. This requires adoption of Agile/DevOps and resources with the appropriate skills to run large-scale agile projects.

Cloud-led transformation is a journey/continuum for any large enterprise, and hence enterprises should choose a cloud transformation partner with deep expertise across architecture, engineering and operations, and with the right resources. Infosys, as a leading cloud transformation partner, has been working with Global 2000 enterprises on their transformations. You can find more details on the same here.


March 19, 2018

Do I stop at Enterprise Agility?

In today's world, with the increase in competition and customer demands, CRM transformation is no longer a single heroic application or module. The increasing need for better customer experience through connected architecture has added layers of complexity, with the solution spreading across systems, technologies or modules, making it cumbersome to manage and maintain the enterprise architecture. Very recently, one of our customers asked: while you have the best solution for my CX woes, do you have anything that can help manage my delivery process? Do you have a packaged offering that solves my implementation as well as execution requirements?


February 12, 2014

Is your service ready to be 'API'-fied? (Part 1)

A question that I come across pretty often, especially with clients who are early in their API journey, is "Can I expose my internal service as an API?" The answer, unfortunately, is not a simple yes or no. Even though APIs are supposed to build on SOA, something the industry has been doing for quite a while now and many have mastered, there are several considerations that should be looked into before an 'internal' service can be 'API'-fied (a word I just coined, meaning "exposed as an API"). In this two-part series, we take a brief look at the aspects that are key to answering that question.

To begin with, examine the data being exposed through the service. Since internal services are meant to be consumed within the organization, data security and governance in most cases are relaxed. However, when it comes to exposing the service to external entities, the equations change.  It is therefore important to carefully review the service and ascertain the type and sensitivity of the data being exposed and make sure that you are ready to expose it to the external world.

Security is the next key aspect that must be delved into. Internal services mostly have no security, or not enough, built into them for external consumption. Even those that do might have a proprietary security mechanism dating back to their early SOA days. All of these are dampers for APIs. The API economy is meant to be open. Hence it is important to have a robust security architecture, and one that conforms to commonly accepted industry standards (e.g. OAuth). It is also important to abstract security out of the service: security should be managed by experts through policies, which frees the service developers to focus only on the business logic.

In the next part, we will look at how scalability and service design play a key role in answering the question.


October 24, 2013

Riding the Band Wagon for Enterprise APIs - the Technology Leadership Dilemma (Part 3)

This is the final blog of this three-part series discussing the challenges facing the technology leadership of traditional businesses in their API adoption journey. In the first part we talked about the importance of APIs and the API economy. In the second part, we explored the unique challenges of enterprise APIs and the importance of an enterprise SOA strategy. In this blog, we will see how API Management solutions come to the rescue, but, more importantly, we will also talk about where such solutions might fall short.

API Management solutions go a long way towards addressing many of these concerns. They abstract out a lot of operational aspects like security, API publishing, traffic management, user onboarding and access management, usage tracking and health monitoring, so that the technology teams can focus on the actual business of the API functionality. With big players (the likes of Intel and IBM) entering the arena, the market is heating up and there are tall claims about what an API Management platform can do for the enterprise. For enterprises, one challenge certainly is to find the right API Management solution to suit their needs. Currently, none of the products in the market seems to address all the concerns of API Management. Admittedly, products are evolving fast and it is just a matter of time before the market sees products that cater to most of the needs in some way or another.

However, there are certain other aspects that need to be tackled by the business and technology leadership before they can take the leap to enterprise APIs. Most enterprise APIs need support from other processes and systems in order to complete the functionality being exposed. Some examples are audit control, transaction traceability, reconciliation reports, customer service, batch integration with partners, etc. These may not be able to keep up with the fast-forward manner in which APIs can be developed and exposed.

It is important for organizations to realize that just putting an API Management platform in place will not put them in the driver's seat. They have to take a more holistic view of their particular needs and ensure that all the supporting teams are able to join them in their API journey. It is not only a matter of just riding the bandwagon. It is also important to take all your stuff along to ensure you don't have to jump off the bandwagon half-way through.

October 22, 2013

Take your business where your digital consumers are - on the cloud! Part3

In my first blog, I spoke about the consumerization of IT with cloud, and in my second blog, I spoke about how enterprises can leverage cloud and big data for pervasive intelligence. In this blog, let's talk about how cloud can enable consumer responsiveness.

Cloud is also becoming a distinct enabler of improved reach and responsiveness to consumers. Consumers today demand high responsiveness to their needs. Dealing with such aggressive demands is possible only with cloud features such as on-demand scaling, agility and elasticity. For instance, manufacturers use Cloud to manage direct and indirect sales channels, giving them instant visibility into field intelligence. The most significant revelation is that the tremendous time and cost savings driven by Cloud-based customer service have created high Cloud adoption levels within the industry (3).

It isn't surprising, then, that luxury car brands such as Mercedes and BMW take it one step further, investing in Cloud technologies to accurately track the digital footprints of customers and update contact information to stay in touch with their customer base. They also keep track of maintenance information for their cars even when they are serviced outside the primary dealer networks. Significant facets like buyer perceptions, brand loyalty and buying patterns can also be charted while studying markets to develop consumer-focused products.

Ultimately, enterprises need to scale and meet or even exceed the expectations of the digital consumer to be able to succeed in their marketplace. That will be possible through superior, real-time analytics applied to sell to and service the digital consumer better - every day, every hour, and every minute. Nothing can be more powerful than leveraging consumer behaviour to tailor products. Cloud offers an excellent platform to do this, and Big Data-based analytics becomes the core engine for it. Big Data streamlines your massive data while cloud helps you optimize your resources efficiently. Cloud has the potential to blur the lines between physical and online space by integrating potential opportunities with existing data.

To conclude, by scaling their businesses to the cloud, enterprises equip themselves to succeed by taking their business to the digital consumer and winning in the marketplace. 

 

October 17, 2013

Riding the Band Wagon for Enterprise APIs - the Technology Leadership Dilemma (Part 2)

In the previous blog we had looked at the importance of APIs and the API economy and had outlined the challenge that the technology leadership of traditional businesses face.  In this part we talk about the nuances of Enterprise APIs and what makes them more challenging to be exposed by traditional businesses.

Traditional businesses are a mix of different technology platforms, and legacy systems are still a reality for most financial, health and travel industries. While most distributed teams can easily adopt agile or rapid development methodologies, it is a bit more difficult for legacy systems to take that leap. Additionally, enterprise APIs present unique challenges of their own, challenges that are very different from those of consumer APIs. Most enterprise APIs deal with Personally Identifiable Information (PII) or company confidential data. Consequently, there are higher security, compliance and regulatory needs. Many enterprise APIs are also transactional in nature, requiring heavy integration with multiple back-end systems. Hence delivering such APIs comes with the additional baggage of ensuring transactional handling capabilities, complete audit, traceability and non-repudiation characteristics.

The dilemma of the technology leadership of such traditional businesses is formidable. On one hand, they have to deliver to remain relevant and competitive, while on the other, they have to take care of the various facets of the enterprise APIs they are exposing.

Enterprise SOA readiness plays a key role in the ability for the enterprise to deliver such APIs. Organizations which have got their act together in gaining SOA maturity will definitely find themselves a few steps ahead in their API journey. Trying to fix a broken SOA strategy with a new API strategy might not be a promising idea.

In the last part of this three-part series, we will discuss how API Management solutions can help address some of the concerns for these businesses and where such solutions fall short. 

October 11, 2013

Take your business where your digital consumers are - on the cloud! Part2

In my previous blog, I spoke about how digitization is taking place these days in all realms with cloud. Enterprises have started embracing the new phenomenon of "consumerization of enterprises" for business. In this blog, I will share some thoughts on how cloud and big data can be two pillars of an organizational strategy.

 

My essential tenet is that Cloud and Big Data are interdependent: as more and more information resides on the Cloud, it becomes easier to access and analyse, providing valuable business intelligence for companies. In fact, Gartner predicts that 35% of customer content will reside on the Cloud by 2016, up from 11% in 2011 (1).

 

Customers are leveraging this easy and instant access to rich data to make smarter decisions. Many in-store shoppers tend to use their mobile devices to compare product prices on online channels such as Amazon or eBay. Retailers who have the capacity to track this action can immediately offer customers a better package/ deal, thereby delighting the customer and closing the sale instantly.

 

To achieve such pervasive intelligence and instant actionable insights, one should be able to sift through large amounts of data pertaining to each customer in quick time. Businesses will need to verify whether the information they gather about their customers is accurate. Coupled with all this, there are large scale technology related changes, and costs, that need to be considered. And this elasticity of compute at an affordable cost is quite possible when you leverage the cloud effectively.

 

Information that resides across multiple locations can be collated, accessed, analysed, and verified for accuracy at much lower costs on the Cloud. Further, through Cloud-based media, brands can track consumer opinion as well as follow critical consumer behaviour actions/ changes. Take for example the manufacturing industry. Cloud can drive shorter product lifecycles and faster time-to-market as well as enhance their product design, development and marketing campaigns (3).

 

In my next blog, I will talk about how consumer responsiveness can be accelerated by cloud.

October 10, 2013

Riding the Band Wagon for Enterprise APIs - the Technology Leadership Dilemma (Part 1)

Application Programming Interfaces (APIs) assume a lot of significance in today's enterprise digital strategy. However, as businesses rush towards the API economy, they often overlook certain subtle aspects that are key to success. In this three-part series, we take a technology leadership view of the challenges facing traditional businesses in their journey towards establishing their digital identity.

This part will talk about how APIs are shaping the IT landscape.

The pressures of time-to-market have been ever increasing. Traditional software development cycles measured in months are no longer the reality. The expectation from technology leadership now is to be able to deliver in a matter of weeks. 'Drive innovation and drive it fast' seems to be the mantra.

The API economy presents a unique opportunity to remain agile while driving revenue and innovation, and businesses across the globe are quickly realizing it. The API economy has given birth to an entire generation of newbies whose business models revolve completely around their APIs and apps (think Twitter, Siri). Traditional businesses have also tried to catch up with the newer generation of API-centric businesses in realizing new ways of customer engagement and new streams of revenue. The impact of the API economy has been such that it is now being compared to what the internet was in the '90s.

As traditional businesses push their limits to ride the API bandwagon, it presents a fundamental challenge to the technology leadership of these organizations - How to deliver agility at the speeds expected for Enterprise APIs? The demands from business and pressures from industry peers have to be balanced against the ground realities.

In the next part, we will take a look at how enterprise APIs differ from consumer APIs and why it is a bigger challenge for traditional businesses to expose such APIs.

October 4, 2013

Building Tomorrow's Digital Enterprise on Cloud - Part 2

In my last blog on Building Tomorrow's Digital Enterprise on Cloud we looked at enterprise cloud adoption trends and how the Digital transformation is influencing Cloud adoption models like PaaS. In this blog, we will look at how enterprises can leverage Cloud to reinvent themselves into a Digital enterprise.

We see that enterprises wanting to take on this Digital transformation challenge are evaluating and on-boarding new technology and business solutions leveraging Cloud, Mobility and social media. However, enterprises should be careful to avoid merely repackaging old capabilities in new technology solutions. Merely moving applications and not services (a lack of service orientation), not business capabilities that can be offered as a Service, and not exposing APIs that allow third parties and partners to build innovative services to enhance consumer experience, is like offering "old wine in a new bottle" that no longer appeals to tomorrow's Gen-Y digital consumers.


Take your business where your digital consumers are - on the cloud! Part 1

The CTO of a leading automaker recently asked me how he could access personal information such as birthdays, anniversary dates, or other significant events in their customers' lives to be able to serve them better. More particularly, he was interested in getting this information in real-time on occasions such as when they approach a POS terminal or an associate in a dealership or merchandise store. He was looking for ways to drive a more personalized customer engagement: mechanisms that could extract relevant and useful insights from multiple sources of customer data to empower the sales team and make highly customized and compelling offers to the customer.

 

Essentially, his question represents an ongoing paradigm shift in the mindset of CXOs in today's enterprises: from traditional ways of doing business to leveraging the power of digital channels. And the main reason for this is the rise of the digital consumer, fuelled by the big bang of Cloud. In this three-part blog series, I would like to discuss how cloud is driving the "consumerization of enterprises".

 

High-speed connectivity and information transparency of online and mobile channels has spawned a new breed of Gen Y customers that are well-connected through multiple devices, expressive and ready to engage with an attitude. This changing customer demographic means that enterprises have to find new ways of getting their attention - and winning their trust.

 

Today's customers are sharing and accessing tremendous amounts of data, a bulk of which resides on the Cloud, through social media sites and various other interactive channels. In 2012 alone, consumer-generated content accounted for 68% of the information available through various devices (TV, mobiles, tablets, etc.) and channels (social media, videos, texts, etc.). Much of this information is extremely valuable if we can access and analyze it. In fact, by 2020, 33% of the information that resides in this digital universe will have critical analytical value, compared to today, where one-fourth of the information has critical analytical value. This will be an increase of 7% in 8 years on a total data volume that is growing at 40% year on year (2).

 

In my next blog, I will talk about how cloud and Big Data are driving actionable business insights for enterprises' digital transformation.

September 27, 2013

Building Tomorrow's Digital Enterprise on Cloud - Part 1

IDC in its 2013 Predictions "Competing on the 3rd Platform" predicts that the Next Generation of Platform opportunities would be in the intersection of Cloud, Mobile, Social and Big Data.

My discussions with client IT executives across industries over the last few months have clearly shown a growing interest in building such a platform to address their digital transformation needs. They realize the importance of looking at this strategically from a Digital Consumer engagement perspective and the need to align their internal projects and initiatives focusing on Mobility, Cloud and Social Integration to maximize business value. Consumers today demand seamless access across devices/channels and the ability to integrate into the context of their digital experience. The next generation of platforms needs to address this transforming consumer engagement model.

The foundations of such a platform, I believe, would leverage Cloud-based applications with API Management at the core to integrate the enterprise with its digital ecosystem and with consumers on mobile apps and devices and social media, leveraging Cloud services for elastic scaling and analytics on API usage trends to generate business insights.

In this blog series we will talk about Cloud adoption trends that we are seeing in the marketplace and how this Digital convergence is influencing cloud adoption models and the evolving need for Enterprise API Management on Cloud for value realization.
