The commoditization of technology has reached its pinnacle with the advent of the recent paradigm of Cloud Computing. The Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions with Infosys experts on Cloud Computing.

August 24, 2017

Managing vendor product licenses during large scale migration to cloud

Public cloud services have matured, and enterprises are adopting cloud to optimize costs, introduce agility and modernize the IT landscape. But public cloud adoption presents a significant challenge in handling existing vendor licensing arrangements. The commercial impact varies with the cloud delivery model (IaaS through SaaS) and with how flexible each license is. The business case for cloud transformation therefore needs careful consideration of existing software licenses.

In our experience, software vendors' licensing and support for cloud are at varying stages of maturity. At times, a software licensing model can become expensive when moving to cloud. Typically, on-premise licenses are contracted for a number of cores, processor points or users, whereas the definition of a core in the virtualized/cloud world is different.

When enterprises assess their licenses at the start of the cloud journey, they should also carry out a high-level assessment of license-related risks while formulating the business case.

Before formulating a business case, it is important to build the following aspects into the enterprise's license transition strategy:

·         Conduct due diligence on major software vendors to identify any absolute 'show stoppers' for the use of their products, such as:

o   The support level on new platform services, license portability, and the license-unit derivation mechanism in cloud.

o   The commercial impact of re-use on a multi-tenant cloud platform.

o   Flexibility to reassign licenses as often as needed.

o   A mechanism to check and report compliance on public cloud, on an ongoing basis, across product vendor licenses.

·         Inventory management of licenses and the commercials around them.

·         'Future state' services and application stacks should balance license cost against performance requirements:

o   Negotiate product licensing that is unfavorably bound to the socket or physical hardware level.

o   Evaluate existing licensing terms and conditions for potential increases in licensing costs.

o   Evaluate mitigation controls and options available on public cloud.

o   Plan ahead for the cost implications of reusing converged-stack or appliance-based licenses on public cloud.

o   Translate on-premise licenses into public cloud terms (virtual cores).

o   Where the cloud service provider includes operating system licenses, examine the option of removing them from existing vendor agreements.

o   Leverage the continuous availability capability of public cloud platforms to eliminate disaster recovery licenses and the costs associated with them.
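The license-unit translation from on-premise cores to cloud virtual cores can be sketched with some simple arithmetic. The core factor and vCPU-per-core ratio below are invented for illustration; every vendor defines its own conversion rules, so treat this only as the shape of the calculation:

```python
import math

def licenses_needed_on_cloud(on_prem_cores, core_factor, vcpus_per_core=2):
    """Estimate cloud license units for a workload licensed per core on premise.

    on_prem_cores  -- physical cores licensed on premise
    core_factor    -- hypothetical vendor weighting per vCPU
    vcpus_per_core -- how many cloud vCPUs map to one physical core
    """
    vcpus = on_prem_cores * vcpus_per_core   # size of the cloud footprint
    return math.ceil(vcpus * core_factor)    # round up: licenses are whole units

# A 16-core on-premise server, with an assumed 0.5 factor per vCPU:
print(licenses_needed_on_cloud(16, 0.5))  # 16
```

Running the numbers like this per product, before migration, is what turns the license inventory into a usable input for the business case.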

Approaches to overcome public cloud licensing challenges:

To overcome these licensing challenges, IT teams can optimize the target-state architecture, solution blueprints and frameworks with license and cost models in mind. A few approaches:

·         Re-architect existing solutions leveraging event-driven services, Function as a Service, PaaS, containers and microservices to achieve agility and significant license cost reduction.

·         Consider dedicated hosts, dedicated instances or bare-metal options when socket-level visibility is required to comply with license usage on public cloud, but also weigh the cost impact of these machine types.

·         Adopt open source for platforms such as databases, application servers and web servers.

·         If a traditionally deployed platform must be moved to cloud, consider creating a pool of platform services (such as a common database service) rather than dedicated services for individual application requirements. For example, lines of business can consume business applications through centralized platform services shared across business units to achieve the greatest cost and agility benefits.

·         Consider solutions with licenses bundled into usage-based pricing models, such as SaaS, PaaS, marketplace offerings and public cloud native services.

As for reusing on-premise licenses, all major software vendors are changing their license policies to allow licenses to be ported to cloud, but the flexibility is neither uniform nor all-inclusive yet. One vendor may allow certain product licenses on cloud but not others, another may allow all of them on public cloud, and some vendors allow porting onto authorized cloud environments only.

In summary, a like-for-like migration will have an impact on licensing costs on public cloud. Understanding current licensing agreements and models, optimizing application architectures for cloud, negotiating a cloud-suitable position with vendors, and building compliance processes into the target-state model should hold the organization in good stead. And as cloud native services and open source innovation continue to grow rapidly, enterprises can leverage them to mitigate traditional licensing constraints.

August 31, 2016

The End of P1 as we Know it - Immutable Infrastructure and Automation


A P1 (Priority 1 incident) means different things to different people: sleepless nights for some, lost revenue and opportunities for others, and for some it even triggers a P1 in their personal life :)

Software systems have gradually evolved from business support to business enabler. In their early days, software systems helped enterprises become more productive and efficient, but today they have become synonymous with business operations and their very existence; the recent Delta Air Lines outage that grounded its entire fleet reflects this fact. Software uptime today is oxygen to the business.

With ever-more-connected systems, devices and global customers, business always demands to be "lights on". The cost of downtime only increases as technology keeps adding value to the business.

According to a Gartner study, the cost of downtime ranges anywhere from $140K to $540K per hour depending on the nature of the business, and then there is the impact on reputation and brand image that comes with it.

Hardware/software failure and human error tend to be the top causes of most downtime.

With the advancement of technology and the quest for ever more 9's of availability, things are changing. In my previous blog I wrote about the paradigm of Infrastructure as Code and its potential; this blog is a natural extension of one of its possibilities.

Immutable Infrastructure is the outcome of infrastructure automation and evolved application architecture, where a component is always redeployed rather than updated from its previous state, effectively rendering it immutable. This model frees a component from any pre-existing dependency and gives you the capability to deploy the service from a clean, pre-tested image; a process that can be repeated, automated, and triggered on demand in response to monitoring tools.

Immutable Infrastructure usually has these common attributes:

  • A stateless application architecture, or more specifically an architecture whose state is isolated and redundant.

  • Automated deployment and configuration that's testable.  

In addition, cloud and virtualization amplify the whole effect by providing unmatched resiliency.

Though not a silver bullet, immutable infrastructure along with self-healing techniques can provide a stable foundation for critical IT operations, so the familiar P1 becomes a far rarer event.
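The deploy-rather-than-update model can be sketched in a few lines. The image names, load balancer and health flags below are stand-ins for real cloud primitives, not any particular platform's API:

```python
def deploy(image_id, load_balancer, count):
    """Immutable-style release: build fresh instances from a clean,
    pre-tested image and cut traffic over; never patch in place."""
    fresh = [{"image": image_id, "healthy": True} for _ in range(count)]
    load_balancer["pool"] = fresh   # old instances are discarded, not updated
    return fresh

# A degraded fleet, as a monitoring tool might report it:
lb = {"pool": [{"image": "app-v1", "healthy": False}]}
deploy("app-v2", lb, count=2)       # the automated, on-demand response
print([i["image"] for i in lb["pool"]])  # ['app-v2', 'app-v2']
```

Because every deploy starts from the same pre-tested image, the repair action is identical to the release action, which is exactly what makes it safe to automate.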

August 8, 2016

Melting your Infrastructure - Infrastructure as Code


We are all familiar with the well-known states of matter: solid, liquid and gas. As matter changes from solid to gas, its volume changes and it becomes easier to shape as desired.


Analogous to this, IT infrastructure has evolved through the maturity states of bare metal (single tenant), virtualized and codified. Over this evolution, infrastructure has increased its value proposition.


Infrastructure as Code (IaC), also called programmable infrastructure, is the automated process of building, configuring and provisioning infrastructure programmatically. It enables you to serialize your infrastructure to code, with the capability to rehydrate it on demand. The whole process eliminates human intervention and can be part of your deployment pipeline, so in a continuous delivery scenario, not only your application but also the infrastructure that supports it is built on the fly. With manual steps out of the equation, you have a more predictable and repeatable process that is maintained and managed for reuse. This paradigm not only helps you audit the infrastructure; it gives you version control over your infrastructure changes and the ability to revert to a specific version, just as you can with your code.
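As a minimal, tool-agnostic sketch of serializing infrastructure to code and rehydrating it on demand: the spec format and converge() below are invented for illustration; real tools are far richer but follow this same shape.

```python
# The infrastructure, "serialized" as a version-controlled declarative spec.
DESIRED = {
    "web": {"count": 2, "size": "small"},
    "db":  {"count": 1, "size": "large"},
}

def converge(desired, actual):
    """Drive the actual environment to the desired spec. Running it a
    second time changes nothing (idempotent), so it can sit safely in
    a deployment pipeline."""
    for name, spec in desired.items():
        actual[name] = dict(spec)      # create or reconcile each resource
    for name in list(actual):
        if name not in desired:        # remove anything not in the spec
            del actual[name]
    return actual

env = {}                   # an empty environment...
converge(DESIRED, env)     # ...rehydrated from code
converge(DESIRED, env)     # second run is a no-op
print(env == DESIRED)      # True
```

Because the spec is just code, it can be diffed, reviewed and reverted like any other change, which is where the audit and version-control benefits come from.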


What I call "melting your infrastructure" in this blog is the concept of transforming the state of infrastructure into code that can be used to resurrect it to the state it was in. Codifying the infrastructure (or melting it to code) gives rise to immense possibilities, some already being realized and others with great potential in the near future.


Continuing the states-of-matter analogy, just as two different metals in the liquid state can form an alloy, with IaC your infrastructure can be an alloy consisting of a hybrid or multi-cloud environment.


Since it can be invoked programmatically, IaC can be used to incorporate a self-healing mechanism into your application, where failure detection triggers the spin-up of new infrastructure on the fly.
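That self-healing loop can be sketched as follows; provision() stands in for a real IaC invocation, and all names are invented:

```python
def provision(name):
    """Spin up a fresh node from the codified definition."""
    return {"name": name, "healthy": True}

def heal(fleet):
    """Replace every unhealthy node with a freshly provisioned one."""
    return [node if node["healthy"] else provision(node["name"])
            for node in fleet]

fleet = [{"name": "web-1", "healthy": True},
         {"name": "web-2", "healthy": False}]   # failure detected by monitoring
fleet = heal(fleet)
print(all(n["healthy"] for n in fleet))  # True
```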


With IaC, unit testing can be extended to cover the infrastructure configuration that supports your applications.


IaC tools could also extend the concept of code contracts, with applications defining their infrastructure preconditions and postconditions as part of the deployment process.


IaC is an idea that is still maturing, growing and gaining momentum. It is implemented using tools like Chef, Ansible, Puppet and Salt (CAPS), and natively supported on some platforms (VMware) through APIs. The examples above are only some of the familiar areas where IaC brings value, and they are just the beginning.


IaC brings speed to value creation, agility to respond to change, and the ability to reuse and extend. In today's innovation-based economy, IaC provides the needed synergy for ideas that are transformed into software solutions, which in turn run on infrastructure.


Infrastructure in the state of IaC is well positioned to be augmented with AI for the next phase of innovation. Just as with the states of matter we have solid, liquid, gas and then "plasma", so with infrastructure we have bare metal, virtualized, codified, and then?

March 17, 2016

From Open Sesame to Biometrics - Authentication has been Vulnerable


We have all come across the famous old fable of "Ali Baba and the Forty Thieves": Ali Baba happens upon a band of thieves and secretly watches the leader of the gang open a cave by uttering the words "Open Sesame". The cave opens to reveal vast treasure amassed by the thieves. With the magical phrase disclosed, Ali Baba is able to make it into the cave and strike a fortune. This is probably the oldest known instance of a password being compromised :)

Times and characters have changed, but the theme remains the same. Today the treasure-holding caves have shrunk to systems, hard drives, phones and chips, and the keys to unlock them have been transformed: oral "magical" spells have evolved into characters that need to be keyed in.

Passwords have long been the most traditional form of authentication; I consider them nothing but "Open Sesame 2.0", where phrases or characters must be keyed in to get access. This approach mitigates the risk of having to speak the secret aloud, but it has its own problems: passwords are guessable or hard to remember, and prone to keylogging. Being the de facto option, it has been the target of most current innovation.

Recently, biometrics have become a popular alternative to traditional passwords, and most recent consumer electronics products have embraced the feature. So are they really secure, or at least better than traditional passwords? I doubt it.

Biometrics as a password are an irony in themselves, for the obvious reason that the password is in plain sight. So why the hype? The only factor that makes it safe (or rather gives you that feeling) is that it is unique to the user and cannot be replicated by others. Is that enough? The examples below will help illustrate my point.

When it comes to fingerprint sensors (those found in smartphones and laptops), there may be complicated ways to imitate fingerprints, which you can find by googling. But my little son, who knows nothing about security, finds it much easier to break this barrier: while I am asleep or busy with something else, he simply brings my phone into contact with my finger and, presto, he has access to his games. Isn't that as easy as child's play? From a security perspective, your unique key is just too obvious and quite vulnerable.

Next I happened to come across the Surface Pro 4, the latest offering from Microsoft, equipped with state-of-the-art facial recognition. Once set up, all you have to do is appear in front of the Surface Pro 4 and it recognizes you and logs you in without your pressing a single key. So that's great, right? Keep reading.

With my earlier experience of my son's innovative method, I paused for a moment and started thinking like a kid. First, I took a selfie on my smartphone and showed it to the Surface camera, but that did not work. I figured the reason was either the image size or that the camera was looking for a three-dimensional image; the latter would have been difficult to mimic given my time and resources, but the former was easy to fix. Next I took a selfie on a tablet, held the tablet's picture in front of the Surface camera and, presto, it worked! The Surface Pro quickly let me in without any issue. This is a bigger risk: unlike a fingerprint sensor, where physical contact is required, in today's world of social media anyone can gain access to photos and thereby fool facial recognition.

Say Mr. X has access to Mr. Y's device: all Mr. X has to do is get hold of Mr. Y's picture, load it onto a tablet and show it to the device's camera, and it's done.

The moral of the story is that nothing is one hundred percent secure, and biometrics definitely are not. The story of Open Sesame and Ali Baba is here to stay.

August 27, 2015

Software is the New Hardware

The "Humanics", "Mechanics" and "Economics" of the new enterprise world


The enterprise world seems to be poised at an interesting inflection point today. There no longer seems to be any such thing as a "known competitor" or an "industry adjacency" in enterprise business.


A Google can come from nowhere and reimagine, redefine and rewrite the rules of the entire advertisement industry. An Apple can come from nowhere and reimagine, redefine and rewrite the rules of the entire entertainment industry. A Facebook and Twitter can create absolutely new spaces that did not exist a few years ago. An Amazon and/or Alibaba can come from nowhere and reimagine, redefine and rewrite the rules of the way commerce is done around the world. And then there are Uber, Tesla and others.


In each of these examples, three elements seem to combine to perfection: 

  • Humanics: This is about using the power of imagination to discover new possibilities and create new experiences. All the companies mentioned above have done this par excellence in their respective contexts.
  • Mechanics: The new possibilities powered by imagination have to be converted into reality and, more often than not, in today's world, all of this is being driven by software. All the examples mentioned above, have leveraged the power of software in reimagining, redefining and rewriting the rules of their respective games. 
  • Economics: And finally, of course, there is the economics - the right business model for the right context. Businesses and business plans need to find the right balance between "Humanics", "Mechanics" and "Economics" to scale new horizons and convert possibilities into realities - leveraging the power of software!


At a biomedicine conference last year, venture capitalist Vinod Khosla famously declared that healthcare would be better off with fewer doctors. And then he delivered the same advice to IT at a tech conference the following month. Needless provocation? Far-fetched fantasy? Datacenter utopia, actually. Because that's exactly what most of the traditional and large G2K companies would dearly love to achieve.

Not too long ago, the Director of Data Center Operations at Facebook said each of their administrators managed at least 20,000 servers. Contrast that with the 1:500 or 1:1,000 (admin to server) ratio that a typical G2K company manages. At best. A couple of years earlier - as if to prove a point - Facebook had launched the Open Compute project to make their highly efficient hardware design "open source" for everyone's benefit.

The reason for this lopsided infrastructural evolution is mainly historical. Most G2K companies have been around long enough to accumulate a legacy of disparate, non-interoperating, generations of technologies that seem to be operating in silos. These enterprises are forced to dedicate the technology budget, not to mention large human resources, to simply keep the lights on. On the other hand, the GAFTA (Google-Apple-Facebook-Twitter-Amazon) group - with a scant 97 years between them - found a way to abstract and codify this complexity using the power of software to build highly scalable and highly automated solutions to the same problem.

The stark difference in productivity means that many G2K enterprises struggle with most of their resources being stuck with "keeping the lights on." This also means that very limited resources are allocated to reimagining, redefining and rewriting possibilities and converting these into newer realities for business.

Now, what if, somehow magically, this could be completely turned upside down? The possibilities would be immense. The probability of converting these possibilities into realities would be immense.

The key question is, how can G2K organizations do a GAFTA? Especially in the world of infrastructure management.

Software is the new hardware

The basis of the hypothesis of G2K doing a GAFTA, especially in the field of infrastructure management, seems to be encapsulated in a mere five words: "software is the new hardware".

G2K companies must find a way to emulate their GAFTA counterparts to leverage the power of software to reimagine, redefine and rewrite the way the current infrastructure is managed and convert possibilities into realities.

They must find a way to run their operations noiselessly leveraging the power of software. To achieve this, they must find a way to abstract the complexities and heterogeneity of their environments through the power of software and drive extreme standardization and extreme automation to achieve extreme productivity - by an order of magnitude, not incrementally. This will help them take costs out - and large chunks of it.

They must find a way to: 

  • Drive extreme visibility and control across not only the "horizontal elements" spanning various businesses, geographies, applications, partners, and functions but also "vertical elements" across all infrastructural elements to applications to business processes. And all of this in a "single pane".
  • Modernize their infrastructure with the possibilities that software offers - hyper-converged infrastructure, software defined everything, Open Compute, and a good mix of public, private and hybrid clouds so that agility increases by leaps and bounds and costs decrease by an order of magnitude.
  • Modernize and move their existing workloads to take advantage of the new software-powered underlying infrastructure.
  • Reimagine their processes to make DevOps an integral part of the new ways of working.
  • Reimagine their security with "hazy perimeters", collaborative work models to counter ever-increasing vulnerabilities and risks - all this through the power of software.
  • Reskill and reorganize talent. In a world where software is the new hardware, there will be a need for a massive change in skills and structure.
  • Change the organizational culture.

While the existing and mature businesses within the enterprise will demand relentless excellence in efficiency, control, certainty, and variance reduction, the foundational cultural constructs of the "newer" lines of business of the enterprise will be based on exploration, discovery, autonomy, and innovation. Building an ambidextrous organization and driving a culture of purpose, creativity and learning would be paramount.

All said and done, this journey is best undertaken with partners who are able and aligned - not alone. G2K companies must find a way to leverage partners who have firmly based their strategies and their businesses on the fact that "software is the new hardware". Not just by talking about it, but by actually making it a way of life: using software to help their clients "run" operations, "build" next-gen infrastructure, "modernize/migrate" workloads, and "secure" them against the new threats.

The last word

The approaches to technology infrastructure at G2K and GAFTA companies belong to different eras. There exists a clear blueprint for G2K enterprises to leverage the benefits of the GAFTA world in terms of agility and freed-up human and financial resources that can be promptly plowed back into re-imagination, innovation and new business models.

GAFTA has shown the way on how new business models can be "Powered by imagination. Driven by software".  

Software is indeed the new hardware!


February 27, 2014

Is your service ready to be 'API'-fied? (Part 2)

In my previous blog, we looked at the data and security aspects that might hinder an internal service from being API-fied. In this concluding part, we will look at the other two important aspects that need to be addressed before internal services can be certified for API exposure.


Scalability is the next key area. Exposing APIs for external consumption can quickly lead to explosive growth in traffic, whether from genuine or rogue usage. Will the back-end applications, databases and infrastructure be able to scale as traffic swells? APIs exposed for external consumption usually carry stricter service level agreements, with financial implications for breaching them, but that should not come at the expense of internal consumers. Imposing usage quotas and throttling limits is just one way of managing growth in API traffic; a complete discussion of API scalability merits a dedicated blog.
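For instance, throttling is often implemented with a token bucket. The sketch below is illustrative (the capacity and refill rate are arbitrary), not any particular API gateway's implementation:

```python
class TokenBucket:
    """Allow short bursts up to `capacity`, then sustain `refill_per_sec`."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = 0.0

    def allow(self, now):
        """Return True if a request at time `now` (seconds) is within quota."""
        # Top up tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # throttled: the client should back off and retry

bucket = TokenBucket(capacity=2, refill_per_sec=1)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])  # [True, True, False, True]
```

A gateway would keep one bucket per API key, which is how per-consumer quotas protect the shared back end from both rogue and merely enthusiastic clients.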

Last, but not least, is service design. Service granularity is a tricky subject, and in many cases consumers of internal services have to make several calls to the back end to complete a transaction or achieve a piece of functionality. While this might not be a big challenge for internal consumption, when these APIs are externalized it makes the consuming apps unnecessarily "chatty".

Internal services are also often infested by the 'pass-me-back' syndrome, i.e. sending pieces of data absolutely irrelevant to the consumer but required to be passed back by the consumer on future calls. The problem creeps in during early service design for ease of processing and lives on through internal consumption without much fuss. However, when the same services are exposed for external consumption, these design issues may become barriers to API adoption.
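A coarse-grained facade can counter both problems: it aggregates several fine-grained internal calls into one external call and strips internal "pass-me-back" fields from the payload. The service names and fields below are invented for illustration:

```python
def get_profile(customer_id):       # fine-grained internal service
    return {"id": customer_id, "name": "Ada", "_session_ref": "xyz"}

def get_orders(customer_id):        # another fine-grained internal service
    return [{"order": 1}, {"order": 2}]

def customer_summary(customer_id):
    """One external call instead of two, with internal-only data removed."""
    profile = {k: v for k, v in get_profile(customer_id).items()
               if not k.startswith("_")}      # drop pass-me-back fields
    return {"profile": profile, "orders": get_orders(customer_id)}

print(customer_summary(42))
```

The internal services stay unchanged; only the externally published contract is redesigned around what the consumer actually needs.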

API Management products available in the market provide a lot of capabilities, especially in the areas of security and traffic management. But as you can see from this blog series, it takes a lot more than a product to make an API strategy successful. There is no one-size-fits-all solution.

February 12, 2014

Is your service ready to be 'API'-fied? (Part 1)

A question I come across pretty often, especially with clients who are early in their API journey, is "Can I expose my internal service as an API?" The answer, unfortunately, is not a simple yes or no. Even though APIs are supposed to build on SOA, something the industry has been doing for quite a while now and many have mastered, there are several considerations that should be examined before an 'internal' service can be 'API'-fied (a new word I just coined :) meaning "exposed as an API"). In this two-part series, we take a brief look at the aspects that are key to answering that question.

To begin with, examine the data being exposed through the service. Since internal services are meant to be consumed within the organization, data security and governance are in most cases relaxed. However, when it comes to exposing the service to external entities, the equation changes. It is therefore important to carefully review the service, ascertain the type and sensitivity of the data being exposed, and make sure that you are ready to expose it to the external world.

Security is the next key aspect to delve into. Internal services mostly have no security built in, or not enough for external consumption. Even those that do might use a proprietary security mechanism dating back to their early SOA days. All of these are dampers for APIs. The API economy is meant to be open, so it is important to have a security architecture that is both robust and conformant to commonly accepted industry standards (e.g. OAuth). It is also important to abstract security out of the service: security should be managed by experts through policies, freeing service developers to focus only on the business logic.
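One way to abstract security out of the service is to wrap the business logic in an externally managed policy check. The sketch below is illustrative: the token set stands in for a real OAuth 2.0 token validation, and all names are invented:

```python
VALID_TOKENS = {"token-abc"}       # stand-in for an OAuth token introspection

def secured(handler):
    """Policy layer: enforced outside the handler, owned by the security team."""
    def wrapper(token, *args):
        if token not in VALID_TOKENS:
            return {"status": 401}             # rejected before business logic runs
        return {"status": 200, "body": handler(*args)}
    return wrapper

@secured
def get_account(account_id):       # the developer writes only business logic
    return {"account": account_id}

print(get_account("token-abc", 7))   # {'status': 200, 'body': {'account': 7}}
print(get_account("bad-token", 7))   # {'status': 401}
```

In practice the policy layer lives in a gateway rather than a decorator, but the separation of concerns is the same: the handler never sees an unauthenticated request.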

In the next part, we will look at how scalability and service design play a key role in answering the question.

October 24, 2013

Riding the Band Wagon for Enterprise APIs - the Technology Leadership Dilemma (Part 3)

This is the final blog of a three-part series discussing the challenges facing the technology leadership of traditional businesses in their API adoption journey. In the first part we talked about the importance of APIs and the API economy. In the second part we explored the unique challenges of enterprise APIs and the importance of an enterprise SOA strategy. In this blog, we will see how API Management solutions come to the rescue, but, more importantly, we will also talk about where such solutions might fall short.

API Management solutions go a long way toward addressing many of these concerns. They abstract out a lot of operational aspects such as security, API publishing, traffic management, user onboarding and access management, usage tracking and health monitoring, so that technology teams can focus on the actual business functionality of the API. With big players (the likes of Intel and IBM) entering the arena, the market is heating up, and there are tall claims about what an API Management platform can do for the enterprise. For enterprises, one challenge is certainly to find the right API Management solution for their needs. Currently no product in the market seems to address all the concerns of API Management. Admittedly, products are evolving fast, and it is only a matter of time before the market sees products that cater to most needs in one way or another.

However, there are certain other aspects that need to be tackled by business and technology leadership before they can take the leap to enterprise APIs. Most enterprise APIs need support from other processes and systems to complete the functionality being exposed. Some examples are audit control, transaction traceability, reconciliation reports, customer service and batch integration with partners. These supporting functions may not be able to keep up with the fast-forward pace at which APIs can be developed and exposed.

It is important for organizations to realize that just putting an API Management platform in place will not put them in the driver's seat. They have to take a more holistic view of their particular needs and ensure that all the supporting teams are able to join them on the API journey. It is not only a matter of riding the bandwagon; it is also important to take all your stuff along, so you don't have to jump off half-way through.

October 22, 2013

Take your business where your digital consumers are - on the cloud! Part 3

In my first blog, I spoke about the consumerization of IT with cloud, and in my second blog about how enterprises can leverage cloud and big data for pervasive intelligence. In this blog, let's talk about how cloud can enable consumer responsiveness.

Cloud is also becoming a distinct enabler in improving reach and responsiveness to consumers. Consumers today demand high responsiveness to their needs, and dealing with such aggressive demands is possible only with cloud features such as on-demand scaling, agility and elasticity. For instance, manufacturers use cloud to manage direct and indirect sales channels, giving them instant visibility into field intelligence. The most significant revelation is that the tremendous time and cost savings driven by cloud-based customer service have created high cloud adoption levels within the industry (3).

It isn't surprising, then, that luxury car brands such as Mercedes and BMW take it one step further, investing in cloud technologies to accurately track the digital footprints of customers and update contact information to stay in touch with their customer base. They also keep track of maintenance information for their cars even when they are serviced outside the primary dealer networks. Significant facets like buyer perceptions, brand loyalty and buying patterns can also be charted while studying markets to develop consumer-focused products.

Ultimately, enterprises need to scale to meet or even exceed the expectations of the digital consumer to succeed in their marketplace. That will be possible through superior, real-time analytics applied to sell to and service the digital consumer better - every day, every hour and every minute. Nothing is more powerful than leveraging consumer behaviour to tailor products. Cloud offers an excellent platform to do this, and Big Data based analytics becomes the core engine for it: Big Data streamlines your massive data while cloud helps you optimize your resources efficiently. Cloud has the potential to blur the lines between physical and online space by integrating potential opportunities with existing data.

To conclude, by scaling their businesses to the cloud, enterprises equip themselves to succeed by taking their business to the digital consumer and winning in the marketplace. 


October 17, 2013

Riding the Band Wagon for Enterprise APIs - the Technology Leadership Dilemma (Part 2)

In the previous blog we looked at the importance of APIs and the API economy, and outlined the challenge that the technology leadership of traditional businesses face. In this part we talk about the nuances of enterprise APIs and what makes them more challenging for traditional businesses to expose.

Traditional businesses are a mix of different technology platforms, and legacy systems are still a reality for most financial, health and travel companies. While most distributed teams can easily adopt agile or rapid development methodologies, it is a bit more difficult for legacy systems to take that leap. Additionally, enterprise APIs present unique challenges of their own, very different from those of consumer APIs. Most enterprise APIs deal with Personally Identifiable Information (PII) or company-confidential data; consequently, there are higher security, compliance and regulatory needs. Many enterprise APIs are also transactional in nature, requiring heavy integration with multiple back-end systems. Delivering such APIs therefore comes with the additional baggage of ensuring transaction-handling capabilities and complete audit, traceability and non-repudiation characteristics.

The dilemma of the technology leadership of such traditional businesses is formidable. On one hand they have to deliver in order to remain relevant and competitive; on the other they have to take care of the various facets of the enterprise APIs they are exposing.

Enterprise SOA readiness plays a key role in the enterprise's ability to deliver such APIs. Organizations that have got their act together in gaining SOA maturity will find themselves a few steps ahead in their API journey. Trying to fix a broken SOA strategy with a new API strategy is not a promising idea.

In the last part of this three-part series, we will discuss how API Management solutions can help address some of the concerns for these businesses and where such solutions fall short.