The commoditization of technology has reached its pinnacle with the advent of the recent paradigm of Cloud Computing. The Infosys Cloud Computing blog is a platform to exchange thoughts, ideas and opinions with Infosys experts on Cloud Computing.

September 30, 2017

Artificial Intelligence (AI) in the Security Landscape

The world is becoming more and more innovative and intelligent, with a mesh of digitalized people, things and disruptive technologies.

At one end, human brain power is being infused into machines, making them artificially intelligent so they can solve human problems for good. At the other end, unethical hackers are instilling their intelligence into malicious worms that attack IT systems, posing security threats to one and all.

In short, human brain power is mimicked in machines for both good and evil purposes. This has given rise to a long-running debate: is AI (Artificial Intelligence) a force for good or evil, a threat or an opportunity for IT security? There is no single answer. Good and evil are like two sides of a coin, inseparable. Every invention carries both good and bad potential, be it fire, the knife, the engine, fuel, our beloved Internet, and so on. Good wins over evil when we as humans strive to maximize the positive potential of an invention, thereby automatically weakening its negative potential.

With this worthy intent, let's move forward and see how AI can be leveraged to its best for positive use cases. In this blog I want to take up one such use case: the "Adaptive Security Model".

The Adaptive Security Model is all about combatting IT security threats in real time by employing AI technology. It is a transition from traditional detective and preventive security models to NextGen security models that are increasingly intelligent, predictive and adaptive. These models scrutinize real-time network traffic and activities, continuously learn from the data patterns, classify them as normal or malicious, raise alerts on potential attacks, and adapt automatically by implementing end-point security.

Enterprises with Adaptive Security Models possess four key competencies:

o   Preventive: precautionary policies, processes and products (e.g. firewalls) that keep attack threats away

o   Detective: detect the attacks that bypass the preventive layer

o   Retrospective: deep analysis of issues that were not detected at the detective layer; preventive and detective measures are then enhanced to accommodate these learnings

o   Predictive: continuously learns and observes patterns in network traffic, and keeps the security team alerted to potential anomalies and attacks

Machine Learning (ML) algorithms and techniques are at the core of the predictive competency of the adaptive security model. The ML field, whether in the security arena or elsewhere, is vast and continuously evolving through numerous research efforts. The intention in this blog is just to scratch the surface of ML in the adaptive security context.

Among the many types of predictive models in the security context, the most popular ones are Network Intrusion Detection models. These models focus on anomaly detection and thus differentiate between normal and malicious data.

The two broad types of machine learning for anomaly detection are Supervised and Unsupervised.

o   In Supervised Machine Learning, the model is trained with a dataset that contains both normal and anomalous samples which are explicitly labelled. These methods use classification techniques to classify observations based on their attributes. Key algorithms for the adaptive security model are the decision tree, naïve Bayesian classifier, neural network, genetic algorithm, support vector machine, etc. (see the sketch after this list).

o   Unsupervised Machine Learning is not based on labelled training data. These methods use clustering techniques to group data with similar characteristics. They differentiate normal and malicious data based on a) the assumption that most of the network traffic is normal and only a small percentage is abnormal, and b) variations in statistical parameters between the two clusters.

The most common unsupervised algorithms are self-organizing maps (SOM), K-means, C-means, the expectation-maximization meta-algorithm (EM), adaptive resonance theory (ART), and the one-class support vector machine.
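
As a rough illustration of the two approaches, here is a minimal sketch using scikit-learn on synthetic data; the feature names, values and algorithm choices are illustrative assumptions, not a production intrusion detector.

```python
# Minimal sketch: supervised vs. unsupervised anomaly detection on synthetic
# "network traffic" features. Feature columns and values are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic features: [bytes_sent, duration, failed_logins]
normal = rng.normal(loc=[500, 2.0, 0], scale=[100, 0.5, 0.2], size=(950, 3))
malicious = rng.normal(loc=[5000, 0.2, 6], scale=[800, 0.1, 2.0], size=(50, 3))
X = np.vstack([normal, malicious])
y = np.array([0] * 950 + [1] * 50)          # 0 = normal, 1 = malicious (labels)

# Supervised: train a classifier on explicitly labelled samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("supervised test accuracy:", clf.score(X_test, y_test))

# Unsupervised: cluster unlabelled data; the smaller cluster is treated as anomalous,
# on the assumption that most traffic is normal and only a small percentage is not.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
sizes = np.bincount(km.labels_)
anomalous_cluster = int(np.argmin(sizes))
print("flagged as anomalous:", int(np.sum(km.labels_ == anomalous_cluster)), "records")
```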

Theoretically, supervised methods are believed to provide a better detection rate than unsupervised methods.

Main phases in building predictive models (assuming supervised ML):

• Data Set Building: Creation of a rich dataset to be used for training and testing the model. Data sources may range from retrospective network traffic, past malicious attack patterns, audit logs, normal activity profile patterns, attack signatures and so on.

• Predictive Attributes Selection: Popularly known as 'feature engineering'. A dataset will have numerous attributes, and the success of a predictive model depends on an impactful combination of attributes, or features as they are called in ML terminology. Irrelevant and redundant attributes have to be eliminated from the feature set. There are many theorems and techniques for this, PCA (Principal Component Analysis) being one of the popular ones. PCA is a common statistical method used in multivariate optimization problems to reduce the dimensionality of data while retaining a large fraction of the data's characteristics.

• Classifier Model Construction: Build and train the model based on one or more algorithms, then test it with test data. The model should classify the data as either the normal class or the anomaly (malicious) class.

• Test and Optimize the Model: The performance of the model depends on two parameters: the detection rate (DR) for malicious activities and false positives (FP). DR is defined as the number of intrusion instances detected by the system divided by the total number of intrusion instances present in the test dataset. FP refers to false alarms raised for something that is not really an attack. Model optimization should target maximizing DR and minimizing FP (see the sketch after this list).

• Employ the Model for Real-Time Network Traffic: Model performance in production will depend on the accuracy and maturity of the trained model. The model should be kept up to date through repeated retraining, which should accommodate changing attack patterns and activities.
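
To make these phases concrete, here is a minimal sketch of a supervised pipeline with PCA-based feature reduction and DR/FP computed from the confusion matrix; the synthetic data and model choices are illustrative assumptions rather than a recommended configuration.

```python
# Minimal sketch of the phases above: PCA for feature reduction, a classifier,
# and DR / FP computed from the confusion matrix. Data is synthetic and
# illustrative; a real pipeline would use labelled network-traffic records.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)

# Data set building: 900 normal and 100 malicious records with 20 attributes each.
X = np.vstack([rng.normal(0.0, 1.0, size=(900, 20)),
               rng.normal(2.0, 1.0, size=(100, 20))])
y = np.array([0] * 900 + [1] * 100)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Predictive attribute selection (PCA) + classifier model construction.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC())
model.fit(X_train, y_train)

# Test and optimize: detection rate (DR) and false-positive rate (FP).
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
detection_rate = tp / (tp + fn)          # detected intrusions / all intrusions
false_positive_rate = fp / (fp + tn)     # false alarms / all normal records
print(f"DR={detection_rate:.2f}  FP rate={false_positive_rate:.2f}")
```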

 

Multiple industry leaders are striving to provide smart adaptive security architecture solutions for enterprises. Infosys, too, has a strong presence in this space.

Conclusion:

Whatever the technology revolution, there is no silver bullet to future-proof security. The security fence always has to be one level above some of the most devious minds. Though innovative AI-based Predictive-Adaptive Models are gaining momentum, security hackers and predators too are advancing in maliciously attacking these models. We have to wait and watch which intelligence reigns... the Threat or the Protection.

September 25, 2017

Microservices and Secrets management - How to comply with security must-dos

Microservices - The light of every modern developer's life:

Microservices is now the most preferred method for creating distributed, component-based applications on the cloud. This architectural style allows developers to develop, deploy, test and integrate modular components with much greater ease. When an application is built using the microservices model, smaller modular services are created instead of one monolithic unit. These modular services are then tied together with HTTP or REST interfaces. But this distributed model results in a proliferation of interfaces, and the communication between them creates several secrets management challenges. Some application secrets that need to be secured in a microservices deployment model are:

  • Environment variables - if not secured, they can pose a security risk and affect the smooth running of processes.
  • Database credentials - usernames and strong passwords used to connect to a resource.
  • API keys - keys used to restrict access to applications.
  • SSL/TLS certificates - essential to avoid data or security breaches.

Secrets management in the monolithic applications world:

In a monolithic application, secrets are stored in various places like:

  • Application code and configuration files
  • Passed as environment variables
  • Stored in data bags and database tables
  • Scripts and machine images
Gaps in secrets management in a monolithic model:

Some of the gaps can be summarized as below:

  • Secrets sprawl - on several occasions, companies are unaware of being compromised.
  • Decentralized secrets - secrets remain confined to a limited set of operators with no central repository to store them; if a secret is compromised, it cannot be easily revoked or rotated.
  • Limited auditing - limited or no insight into who is accessing a secret; limited logging makes it difficult to track who has access to confidential data.

Microservices require a robust secrets management system:

Microservices bring with them a host of security and secrets management challenges:

• Each microservice module has its own database and credentials, thereby increasing the number of secrets to be managed.
• Several developers, operators and applications have access to the databases, making certificate management, credential storage, API keys and the like extremely difficult to manage.
• With automated deployment of microservices, there are additional credentials for creating resources (mostly in the cloud), accessing code and artifact repositories, and machine credentials to install components.

There is a need for a centralized secrets management system so that enterprises adopting a microservices model can effectively manage secrets and handle security breaches by adhering to these must-dos:

• Secure storage of various types of secrets (API tokens, keys, certificates, usernames and passwords)
• Reliable API-based access to secrets
• Dynamic secret distribution for automated encryption and authentication of keys
• Full audit of access to secrets
• Multi-level, role-based access to secrets
• Centralized revocation and redistribution of secrets

The diagram below illustrates how centralized secrets management helps manage a large repository of secrets:

      How to keep your microservices secrets safe without compromising on security and automation?

• A secrets hierarchy design should account for secrets isolation per application and environment, and allow fail-proof revocation of secrets when required.
• To further strengthen the secrets structure, access policies and role-based mappings need to be version controlled and automated so that they can support emergencies.

      Let's take a look at some secrets management scenarios and examples:

• Servers on which microservices need to be deployed with certificates - on the cloud, as servers come and go, a centralized certificate management system helps generate certificates on the fly, allowing immediate deployment to servers. Certificate keyStores and trustStores need to be secured with passwords, which can be kept safe in and retrieved from a secrets management solution. A PKI secrets backend and generic secrets storage come in handy to automate all of this with minimum risk to security.
• Microservices and applications need access to their own databases or data stores. It makes sense to isolate the database and data-access credentials in a generic secrets store so that renewal, rotation and revocation can be handled easily as required.
• Automated environment provisioning needs access to a software installable repository - for example, Apache server provisioning can be automated with the Apache installable fetched from a software repository. The repository can be accessed using generic credentials or an API key. A centralized secrets management solution is the right place to store these credentials and achieve automation with no compromise on security.

In conclusion: to simplify and automate secrets management, solutions are available from cloud providers, such as AWS KMS and Azure Key Vault, and from specialized security solutions like HashiCorp Vault. Enterprises adopting microservices need to understand this paradigm shift in secrets management to ensure that their transformation journey delivers the agility they require in the most secure manner possible.
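
As a rough illustration of API-based access to a centralized secrets store, here is a minimal sketch that reads a database credential from HashiCorp Vault's KV version 2 HTTP API at service start-up; the Vault address, token source and secret path are assumptions for the example, not a prescribed layout.

```python
# Minimal sketch: fetch a database credential from HashiCorp Vault (KV v2 HTTP API)
# instead of hard-coding it. VAULT_ADDR/VAULT_TOKEN and the secret path are
# illustrative assumptions for this example.
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200")
VAULT_TOKEN = os.environ["VAULT_TOKEN"]          # injected by the platform, never committed
SECRET_PATH = "secret/data/orders-service/db"    # KV v2 path: <mount>/data/<secret>

def get_db_credentials() -> dict:
    """Read username/password for this microservice from the central secrets store."""
    resp = requests.get(
        f"{VAULT_ADDR}/v1/{SECRET_PATH}",
        headers={"X-Vault-Token": VAULT_TOKEN},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["data"]["data"]           # KV v2 nests the payload under data.data

if __name__ == "__main__":
    creds = get_db_credentials()
    print("connecting as", creds["username"])    # password stays out of logs
```

In a real deployment the token itself would typically come from a short-lived authentication method rather than a static environment variable, so that revocation and rotation remain centralized.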



      August 24, 2017

      Managing vendor product licenses during large scale migration to cloud

Public cloud services are mature, and enterprises are adopting cloud to achieve cost optimization, introduce agility and modernize the IT landscape. But public cloud adoption presents a significant challenge in handling existing vendor licensing arrangements. The commercial impact varies with the cloud delivery model, from IaaS to SaaS, and with the licensing flexibility. The business case for cloud transformation therefore needs careful consideration of existing software licenses.

In our experience, software licensing and support by software vendors are at varying stages of maturity. At times, a software licensing model can become expensive when moving to cloud. Typically, on-premise licenses are contracted for a number of cores, processor points or users, whereas the definition of a core in the virtualized/cloud world is different.

While enterprises assess their licenses when undertaking the cloud journey, they should also carry out a high-level assessment of license-related risks while formulating the business case.

Before formulating a business case, it is important to factor the following aspects into the enterprise's license transition strategy:

·   Conduct due diligence on major software vendors to identify any absolute 'show stoppers' for the use of their products, such as:

      o   Support level in new platform services, license portability and license unit derivation mechanism in cloud.

      o   Commercial impact for re-use on multi-tenant cloud platform.

      o   Flexibility to reassign licenses as often as needed.

o   Mechanism to check and report compliance on public cloud on an ongoing basis across product vendor licenses.

      ·         Inventory management of licences and the commercials around these licences.

·   'Future state' services and application stacks should strike a balance between license cost and performance requirements.

o   Negotiate product licensing terms that are unfriendly because they are bound to sockets or physical hardware.

      o   Evaluate existing licensing terms and conditions for increase in licensing costs.

      o   Evaluate / check for mitigation controls and options on public cloud.

      o   Plan ahead for the cost implications for reusing converged stack or appliance based licenses on public cloud.

      o   Translate the on-premise licenses to public cloud (virtual core).

o   Where the cloud service provider includes operating system licenses, examine the option to remove these from existing vendor agreements.

o   Leverage the continuous availability capability of public cloud platforms to eliminate disaster recovery licenses and the costs associated with them.

      Approaches to overcome public cloud licensing challenges:

To overcome these licensing challenges, IT teams can optimize the target state architecture, solution blueprints and frameworks with license and cost models in mind. A few such approaches:

·   Re-architect existing solutions leveraging event-driven services, Function as a Service, PaaS, containers and microservices to achieve agility and significant license cost reduction.

·   Consider dedicated hosts, dedicated instances or bare-metal options when socket-level visibility is required for complying with license usage on public cloud, but also weigh the cost impact of these machine types.

·   Embrace open source for platforms like databases, application servers and web servers.

·   If a traditional platform deployment must be moved to cloud, consider creating a pool of platform services rather than separate services for individual application requirements, such as common database services. For example, lines of business can consume business applications through centralised platform services across business units in order to achieve the greatest cost and agility benefits.

·   Consider solutions with bundled licenses under usage-based pricing models such as SaaS, PaaS, marketplace offerings and public cloud native services.

When it comes to reusing "on-premise" licenses, all major software vendors are changing their license policies to allow the licenses to be ported to cloud, but this flexibility is neither uniform nor all-inclusive yet. One vendor may allow certain product licenses on cloud but not all, another may allow all of them on public cloud, while some vendors allow porting onto authorized cloud environments only.

In summary, migrating like for like will have an impact on licensing costs on public cloud. Understanding the current licensing agreements and models, optimizing application architectures for cloud, and negotiating a cloud-suitable position with vendors, along with compliance processes in the target state model, should hold the organization in good stead. As cloud native services and open source innovation continue to grow rapidly, enterprises can mitigate traditional licensing constraints by leveraging these technology innovations.

      August 31, 2016

      The End of P1 as we Know it - Immutable Infrastructure and Automation

       

P1s (Priority 1 incidents) mean different things to different people: sleepless nights for some, lost revenues and opportunities for others, and for some they even trigger a P1 in their personal life.

Software systems have gradually evolved from being business support to being business enablers. In their earlier days, software systems helped enterprises become more productive and efficient, but today they have become synonymous with business operations and existence; the recent Delta Airlines outage that grounded its entire fleet is a reflection of this fact. Software uptime today functions as oxygen for the business.

With ever-connected systems, devices and global customers, the business always demands to be "Lights ON". The cost of downtime only increases as technology keeps adding value to the business.

According to a Gartner study, the cost of downtime ranges anywhere from $140K to $540K per hour depending on the nature of the business, and then there is the impact on reputation and brand image that comes with it.

Hardware/software failures and human error tend to be the top causes of most downtime.

With the advancement of technology and the quest for ever more 9's, things are changing. In my previous blog I wrote about the whole paradigm of Infrastructure as Code and its potential; this blog is a natural extension of one of its possibilities.

Immutable Infrastructure is the outcome of infrastructure automation and evolved application architecture, where a component is always redeployed rather than updated from its previous state, effectively rendering it immutable. This model frees the component from any pre-existing dependency and gives you the capability to deploy the service from a clean, pre-tested image - a process that can be repeated, automated and triggered on demand in response to monitoring tools.
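
As a minimal sketch of that "replace, never patch" idea (assuming AWS and boto3; the AMI ID, instance type and tags are placeholders, and a real setup would typically sit behind an auto scaling group or a deployment pipeline triggered by monitoring):

```python
# Minimal sketch: respond to a failed health check by launching a fresh instance
# from a clean, pre-tested image and terminating the old one, instead of patching
# it in place. AMI ID, instance type and tags are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

GOLDEN_AMI = "ami-0123456789abcdef0"   # pre-baked, tested image (placeholder)

def replace_instance(old_instance_id: str) -> str:
    """Deploy a new immutable instance from the golden image, then retire the old one."""
    new = ec2.run_instances(
        ImageId=GOLDEN_AMI,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "web"}],
        }],
    )
    new_id = new["Instances"][0]["InstanceId"]

    # The old instance is never patched; it is simply discarded.
    ec2.terminate_instances(InstanceIds=[old_instance_id])
    return new_id
```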

Immutable Infrastructure usually has these common attributes:

• A stateless application architecture or, to be specific, an architecture whose state is isolated and redundant.

      • Automated deployment and configuration that's testable.  

In addition, cloud and virtualization amplify the whole effect by providing unmatched resiliency.

Though not a silver bullet, immutable infrastructure along with self-healing techniques can provide stable infrastructure for critical IT operations; hence the usual P1 that one is used to sinks down to the least common denominator.

      August 8, 2016

      Melting your Infrastructure - Infrastructure as Code

       

We are all familiar with the well-known states of matter - solid, liquid and gas. As the state changes from solid to gas, the volume changes and the matter becomes easier to shape as desired.

       

Analogous to this, IT infrastructure has matured through the states of bare metal (single tenant), virtualized and codified, and over this evolution infrastructure has increased its value proposition.

       

Infrastructure as Code (IaC), also called programmable infrastructure, is the automated process of building, configuring and provisioning infrastructure programmatically. It enables the mechanism of serializing your infrastructure to code, with the capability to rehydrate it on demand. The whole process eliminates human intervention and can be part of your deployment pipeline, so in a continuous delivery scenario, not only your application but also the infrastructure that supports it is built on the fly. With manual intervention and manual steps out of the equation, you have a more predictable and repeatable process that is maintained and managed for reuse. This paradigm not only helps you audit the infrastructure, it gives you version control for your infrastructure changes and the ability to revert to a specific version, just as you can with your code.
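
As a minimal sketch of infrastructure serialized to code, the snippet below describes a desired state in a version-controlled structure and applies it idempotently using Python and boto3; the names, CIDR range and region are placeholders, and dedicated IaC tools (such as the CAPS tools mentioned later) express the same idea declaratively and far more completely.

```python
# Minimal sketch of "infrastructure serialized to code": the desired state lives in
# a version-controlled structure and is applied idempotently. Names, CIDR range
# and region are placeholders; real IaC tooling is richer and declarative.
import boto3

DESIRED_STATE = {
    "vpc_cidr": "10.0.0.0/16",
    "tag": {"Key": "Name", "Value": "demo-vpc"},
}

def apply(state: dict) -> str:
    """Create the VPC only if it does not already exist (repeatable, no manual steps)."""
    ec2 = boto3.client("ec2", region_name="us-east-1")
    existing = ec2.describe_vpcs(
        Filters=[{"Name": "tag:Name", "Values": [state["tag"]["Value"]]}]
    )["Vpcs"]
    if existing:
        return existing[0]["VpcId"]                  # already converged, nothing to do

    vpc = ec2.create_vpc(CidrBlock=state["vpc_cidr"])["Vpc"]
    ec2.create_tags(Resources=[vpc["VpcId"]], Tags=[state["tag"]])
    return vpc["VpcId"]

if __name__ == "__main__":
    print("VPC id:", apply(DESIRED_STATE))           # same result on every run
```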

       

In this blog, what I call "melting your infrastructure" is the concept of transforming the state of infrastructure into code that can be used to resurrect it to the state it was in. The whole concept of codifying the infrastructure (or melting it into code) gives rise to immense possibilities, some of which are already being realized and others with great potential in the near future.

       

Continuing the analogy with the states of matter, just as two different metals in a liquid state facilitate the creation of an alloy, with IaC your infrastructure could be an alloy consisting of a hybrid or multi-cloud environment.

       

With the ability to be invoked programmatically, IaC can be used to incorporate a self-healing mechanism into your application, where failure detection spins up new infrastructure on the fly.

       

With IaC, unit testing can be extended to test the infrastructure configuration that supports your applications.
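
A minimal sketch of what such an infrastructure test could look like, assuming pytest and boto3; the tag names, expected instance count and instance type are illustrative assumptions.

```python
# Minimal sketch of unit-testing infrastructure configuration (pytest + boto3).
# Tag names and the expected values are illustrative assumptions.
import boto3
import pytest

@pytest.fixture(scope="module")
def web_instances():
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:Role", "Values": ["web"]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )
    return [i for r in resp["Reservations"] for i in r["Instances"]]

def test_web_tier_exists(web_instances):
    assert len(web_instances) >= 2, "expected at least two web instances for redundancy"

def test_web_tier_instance_type(web_instances):
    # The codified infrastructure promised t3.micro; verify reality matches the code.
    assert all(i["InstanceType"] == "t3.micro" for i in web_instances)
```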

       

IaC tools could be leveraged to extend the concept of Code Contracts, where applications could define their infrastructure preconditions and postconditions as part of the deployment process.

       

IaC is an idea that is still maturing, growing and gaining momentum. It is implemented using tools like Chef, Ansible, Puppet and Salt (CAPS), and natively supported on some platforms (VMware) through APIs. The examples above are only some of the familiar areas where IaC brings value, and they are just the beginning.

       

IaC brings speed to value creation, agility to respond to change, and the ability to reuse and extend. In today's innovation-based economy, IaC helps bring the required synergy to your ideas, which get transformed into software solutions, which in turn run on infrastructure.

       

Infrastructure in the state of IaC is well positioned to be augmented with AI to enter the next phase of innovation. Just as the states of matter go solid, liquid, gas and then "plasma", with infrastructure we have bare metal, virtualized, codified - and then?

      March 17, 2016

      From Open Sesame to Biometrics - Authentications have been Vulnerable

       

We have all come across the famous old fable of "Ali Baba and the Forty Thieves": Ali Baba happens upon a band of thieves and secretly watches the leader of the gang open a cave by uttering the words "Open Sesame"; the cave opens to reveal the vast treasure amassed by the thieves. With the magical phrase disclosed, Ali Baba was able to make it into the cave and strike a fortune. This is probably the oldest known instance of a password being compromised.

Times and characters have changed, but the theme remains the same. Today, the caves holding treasures have shrunk to systems, hard drives, phones and chips, and the keys to unlock them have been transformed. Oral "magical" spells have evolved into characters that need to be keyed in.

Passwords have been the most traditional form of authentication for a long time - I consider them nothing but "Open Sesame 2.0", where phrases or characters have to be keyed in to get you access. This approach mitigates the risk of having to speak the secret out loud, but it has its own problems: passwords can be weak and guessable, or hard to remember, and they are prone to being keylogged. Being the de facto option, the password has been the target of most of the current innovations.

Recently, biometrics have become a popular alternative to traditional passwords, and most recent consumer electronics products have embraced the feature. So are they really secure? Or, to be safe, are they at least better than traditional passwords? I doubt it.

Biometrics as a password is an irony in itself, for the obvious reason that the password is in plain sight. So why the hype? The only factor that makes it safe (or rather gives you that feeling) is that it is unique to the user and cannot be replicated by others. Is that enough, or worth having? The example below will help you see my point.

When it comes to fingerprint sensors (those found in smartphones and laptops), there might be complicated ways to imitate fingerprints, which you can find by googling. But for my little son, who knows nothing about security, it is much easier to break this barrier: when I am asleep or busy with something else, he simply brings my phone into contact with my finger and, presto, he has access to his games. Isn't that as easy as child's play? From a security perspective, your unique key is just too obvious and quite vulnerable.

Next, I happened to come across the Surface Pro 4, the latest offering from Microsoft, equipped with state-of-the-art, innovative facial recognition. Once it is set up, all you have to do is appear in front of the Surface Pro 4 and it recognizes you and logs you in without your needing to press a single key. So that's great, right? Keep reading to find out more.

With my earlier experience of my son's innovative method, I thought for a moment and started thinking like a kid. First, I took a selfie using my smartphone and showed it to the Surface camera, but that did not work. I thought the reason could be either the size of the image or that the camera was looking for a three-dimensional image; the latter would have been difficult to mimic with my time and resources, but the former was not an issue. Next, I used a tablet to take a selfie and held the picture on the tablet in front of the Surface camera, and presto, it worked. The Surface Pro quickly let me in without any issue. Now this looks like a bigger risk: unlike a fingerprint sensor, where physical contact is required, in today's world of social media anyone can get access to photos and hence can fool facial recognition.

Say Mr. X wants access to Mr. Y's device; all Mr. X has to do is get hold of Mr. Y's picture, load it onto a tablet and show it to the device camera, and it's done.

The moral of the story is that nothing is a hundred percent secure, and when it comes to biometrics, definitely not. So the story of Open Sesame and Ali Baba is here to stay.

      August 27, 2015

      Software is the New Hardware

      The "Humanics", "Mechanics" and "Economics" of the new enterprise world

       

The enterprise world seems to be poised at an interesting inflection point today. There no longer seems to be anything called a "known competitor" or an "industry adjacency" in enterprise business.

       

      A Google can come from nowhere and reimagine, redefine and rewrite the rules of the entire advertisement industry. An Apple can come from nowhere and reimagine, redefine and rewrite the rules of the entire entertainment industry. A Facebook and Twitter can create absolutely new spaces that did not exist a few years ago. An Amazon and/or Alibaba can come from nowhere and reimagine, redefine and rewrite the rules of the way commerce is done around the world. And then there are Uber, Tesla and others.

       

      In each of these examples, three elements seem to combine to perfection: 

      • Humanics: This is about using the power of imagination to discover new possibilities and create new experiences. All the companies mentioned above have done this par excellence in their respective contexts.
      • Mechanics: The new possibilities powered by imagination have to be converted into reality and, more often than not, in today's world, all of this is being driven by software. All the examples mentioned above, have leveraged the power of software in reimagining, redefining and rewriting the rules of their respective games. 
• Economics: And finally, of course, there is the economics - the right business model for the right context. Businesses and business plans need to find the right balance between "Humanics", "Mechanics" and "Economics" to scale new horizons and convert possibilities into realities - leveraging the power of software!

      GAFTA vs G2K

      At a biomedicine conference last year, venture capitalist Vinod Khosla famously declared that healthcare would be better off with fewer doctors. And then he delivered the same advice to IT at a tech conference the following month. Needless provocation? Far-fetched fantasy? Datacenter utopia, actually. Because that's exactly what most of the traditional and large G2K companies would dearly love to achieve.

      Not too long ago, the Director of Data Center Operations at Facebook said each of their administrators managed at least 20,000 servers. Contrast that with the 1:500 or 1:1,000 (admin to server) ratio that a typical G2K company manages. At best. A couple of years earlier - as if to prove a point - Facebook had launched the Open Compute project to make their highly efficient hardware design "open source" for everyone's benefit.

The reason for this lopsided infrastructural evolution is mainly historical. Most G2K companies have been around long enough to accumulate a legacy of disparate, non-interoperating generations of technologies that operate in silos. These enterprises are forced to dedicate their technology budget, not to mention large human resources, to simply keeping the lights on. On the other hand, the GAFTA (Google-Apple-Facebook-Twitter-Amazon) group - with a scant 97 years between them - found a way to abstract and codify this complexity using the power of software to build highly scalable and highly automated solutions to the same problem.

      The stark difference in productivity means that many G2K enterprises struggle with most of their resources being stuck with "keeping the lights on." This also means that very limited resources are allocated to reimagining, redefining and rewriting possibilities and converting these into newer realities for business.

Now, what if, somehow magically, this could be completely turned upside down? The possibilities would be immense. The probability of converting these possibilities into realities would be immense.

      The key question is, how can G2K organizations do a GAFTA? Especially in the world of infrastructure management.

      Software is the new hardware

      The basis to the hypothesis of G2K doing a GAFTA, especially in the field of infrastructure management, seems to be encapsulated in a mere 5 words: "software is the new hardware". 

      G2K companies must find a way to emulate their GAFTA counterparts to leverage the power of software to reimagine, redefine and rewrite the way the current infrastructure is managed and convert possibilities into realities.

      They must find a way to run their operations noiselessly leveraging the power of software. To achieve this, they must find a way to abstract the complexities and heterogeneity of their environments through the power of software and drive extreme standardization and extreme automation to achieve extreme productivity - by an order of magnitude, not incrementally. This will help them take costs out - and large chunks of it.

      They must find a way to: 

      • Drive extreme visibility and control across not only the "horizontal elements" spanning various businesses, geographies, applications, partners, and functions but also "vertical elements" across all infrastructural elements to applications to business processes. And all of this in a "single pane".
      • Modernize their infrastructure by possibilities that software offers - hyper-converged infrastructure, software defined everything, Open Compute, and a good mix of public, private and hybrid clouds so that agility increases by leaps and bounds and costs decrease by an order of magnitude.
      • Modernize and move their existing workloads to take advantage of the new software-powered underlying infrastructure.
      • Reimagine their processes to make DevOps an integral part of the new ways of working.
      • Reimagine their security with "hazy perimeters", collaborative work models to counter ever-increasing vulnerabilities and risks - all this through the power of software.
• Reskill and reorganize talent. In a world where software is the new hardware, there will be a need for a massive change in skills and structure.
      • Change the organizational culture.

      While the existing and mature businesses within the enterprise will demand relentless excellence in efficiency, control, certainty, and variance reduction, the foundational cultural constructs of the "newer" lines of business of the enterprise will be based on exploration, discovery, autonomy, and innovation. Building an ambidextrous organization and driving a culture of purpose, creativity and learning would be paramount.

      All said and done, this journey is best undertaken with partners who are able and aligned - not alone. G2K companies must find a way to leverage partners who have firmly based their strategies and their businesses on the fact that "software is the new hardware". Not just by talking about it but actually making it a way of life of using software to help their clients "run" operations, "build" next-gen infrastructure, "modernize/migrate" workloads, and "secure" them against the new threats. 

      The last word

      The approach to technology infrastructure at G2K and GAFTA companies belong to different eras. There exists a clear blueprint for G2K enterprises to leverage the benefits of the GAFTA world in terms of agility, and freed-up man and money resources that can be promptly plowed back into re-imagination, innovation and new business models. 

      GAFTA has shown the way on how new business models can be "Powered by imagination. Driven by software".  

      Software is indeed the new hardware!


      February 27, 2014

      Is your service ready to be 'API'-fied? (Part 2)

In my previous blog, we looked at the data and security aspects that might hinder an internal service from being API-fied. In this concluding part, we will look at the other two important aspects that need to be addressed before internal services can be certified for API exposure.

       

Scalability is the next key area to talk about. Exposing APIs for external consumption can quickly lead to explosive growth in traffic through genuine or rogue usage. Will the back-end applications, databases and infrastructure be able to scale as traffic swells? APIs exposed for external consumption usually carry stricter service level agreements and financial implications. But that should not come at the expense of internal consumers. Imposing usage quotas and throttling limits is just one way of managing growth in API traffic (a minimal throttling sketch follows below). A complete discussion on API scalability, however, merits a dedicated blog.
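
To make the quota-and-throttling idea concrete, here is a minimal sketch of a per-consumer token-bucket limiter of the kind an API gateway might apply; the rates, consumer keys and in-memory store are illustrative assumptions, and API Management products provide distributed, policy-driven versions of this out of the box.

```python
# Minimal sketch of per-consumer throttling with a token bucket. Rates, consumer
# keys and the in-memory store are illustrative; API Management products provide
# distributed, policy-driven versions of this.
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    rate: float                      # tokens added per second
    capacity: float                  # maximum burst size
    tokens: float = 0.0
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                 # caller should return HTTP 429 Too Many Requests

# One bucket per API consumer key (external partners get a tighter quota here).
buckets = {
    "internal-app": TokenBucket(rate=100.0, capacity=200.0, tokens=200.0),
    "partner-xyz": TokenBucket(rate=5.0, capacity=10.0, tokens=10.0),
}

def handle_request(api_key: str) -> int:
    bucket = buckets.get(api_key)
    if bucket is None:
        return 401                   # unknown consumer
    return 200 if bucket.allow() else 429
```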

Last, but not least, is service design. Service granularity is a tricky subject, and in many cases consumers of internal services have to make several calls to the back end to complete a transaction or achieve a piece of functionality. While this might not be a big challenge for internal consumption, when these APIs are externalized it makes the consuming apps unnecessarily "chatty".

Internal services are also often infested with the 'pass-me-back' syndrome, i.e. sending pieces of data that are absolutely irrelevant to the consumer but required to be passed back by the consumer in future calls. The problem creeps in during early service design for ease of processing and lives on through internal consumption without much fuss. However, when the same services are exposed for external consumption, these design issues can become barriers to API adoption.

      API Management products that are available in the market provide a lot of capabilities especially in the area of security and traffic management. But as you can see from this blog series, it takes a lot more than just a product to make an API strategy successful. There's no one-size-fits-all solution.


      February 12, 2014

      Is your service ready to be 'API'-fied? (Part 1)

A question that I come across pretty often, especially with clients who are early in their API journey, is "Can I expose my internal service as an API?" The answer, unfortunately, is not a simple yes or no. Even though APIs are supposed to be built over SOA - something the industry has been doing for quite a while now and many have mastered - there are several considerations that should be looked into before an 'internal' service can be 'API'-fied (a new word I just coined, meaning "exposed as an API"). In this two-part series, we take a brief look at the aspects that are key to answering that question.

To begin with, examine the data being exposed through the service. Since internal services are meant to be consumed within the organization, data security and governance are in most cases relaxed. However, when it comes to exposing the service to external entities, the equation changes. It is therefore important to carefully review the service, ascertain the type and sensitivity of the data being exposed, and make sure that you are ready to expose it to the external world.

Security is the next key aspect that must be delved into. Internal services mostly have no security, or not enough security, built into them for external consumption. Even those that do might carry a proprietary security mechanism dating back to their early SOA days. All of these are dampers for APIs. The API economy is meant to be open, so it is important to have a robust security architecture that conforms to commonly accepted industry standards (e.g. OAuth). It is also important to abstract security out of the service: security should be managed by experts through policies, freeing the service developers to focus only on the business logic (a small token-validation sketch follows below).
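
As a rough illustration of standards-based security enforced outside the business logic, here is a minimal sketch that validates an incoming bearer token via OAuth 2.0 token introspection (RFC 7662); the introspection URL and client credentials are assumptions, and in practice this check sits in a gateway or policy layer rather than in the service itself.

```python
# Minimal sketch: validate a consumer's bearer token against an OAuth 2.0
# introspection endpoint (RFC 7662) before the call reaches business logic.
# The endpoint URL and gateway client credentials are illustrative assumptions.
import requests

INTROSPECTION_URL = "https://auth.example.com/oauth2/introspect"
GATEWAY_CLIENT = ("api-gateway", "gateway-secret")   # placeholder client credentials

def is_token_valid(bearer_token: str, required_scope: str = "orders:read") -> bool:
    resp = requests.post(
        INTROSPECTION_URL,
        data={"token": bearer_token},
        auth=GATEWAY_CLIENT,          # the gateway authenticates itself to the auth server
        timeout=5,
    )
    resp.raise_for_status()
    claims = resp.json()
    # RFC 7662: "active" must be true; scope is a space-separated string when present.
    return claims.get("active", False) and required_scope in claims.get("scope", "").split()
```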

      In the next part, we will look at how scalability and service design play a key role in answering the question.


      October 24, 2013

      Riding the Band Wagon for Enterprise APIs - the Technology Leadership Dilemma (Part 3)

This is the final blog in this three-part series discussing the challenges facing the technology leadership of traditional businesses in their API adoption journey. In the first part, we talked about the importance of APIs and the API economy. In the second part, we explored the unique challenges of enterprise APIs and the importance of an enterprise SOA strategy. In this blog, we will see how API Management solutions come to the rescue but, more importantly, we will also talk about where such solutions might fall short.

API Management solutions go a long way towards addressing many of these concerns. They abstract out a lot of operational aspects such as security, API publishing, traffic management, user onboarding and access management, usage tracking and health monitoring, so that technology teams can focus on the actual business functionality of the API. With big players (the likes of Intel and IBM) entering the arena, the market is heating up and there are tall claims about what an API Management platform can do for the enterprise. For enterprises, the challenge is slightly bigger: one challenge is certainly to find the right API Management solution for their needs. Currently, none of the products in the market seems to address all the concerns of API Management. Admittedly, products are evolving fast, and it is just a matter of time before the market sees products that cater to most of the needs in one way or another.

However, there are certain other aspects that need to be tackled by the business and technology leadership before they can take the leap to enterprise APIs. Most enterprise APIs need support from other processes and systems in order to complete the functionality being exposed through them. Some examples are audit control, transaction traceability, reconciliation reports, customer service, batch integration with partners, etc. These supporting capabilities may not be able to keep up with the fast-forward manner in which APIs can be developed and exposed.

It is important for organizations to realize that just putting an API Management platform in place will not put them in the driver's seat. They have to take a more holistic view of their particular needs and ensure that all the supporting teams are able to join them in their API journey. It is not just a matter of riding the bandwagon; it is also important to take all your stuff along so you don't have to jump off the bandwagon halfway through.