Enterprise architecture at Infosys works at the intersection of business and technology to deliver tangible business outcomes and value in a timely manner by leveraging architecture and technology innovatively, extensively, and at optimal costs. Stay up-to-date on the latest trends and discussions in architecture, business capabilities, digital, cloud, application programming interfaces (APIs), innovations, and more.

January 16, 2018

Adoption Strategies for Cloud Native and Cloud Hosted Applications

Author: Ramkumar Krishnamurthy Dargha
            AVP, Senior Principal Technology Architect

Cloud technologies have been at the forefront for nearly a decade now. Many enterprises have adopted cloud as one of their key technology strategies, and in today's world no IT strategy discussion happens without cloud technologies in the mix. Should an enterprise go for a cloud native application strategy or a cloud hosted application strategy? If you are grappling with such questions, read on.

Cloud Hosted applications and Cloud Native applications: what are they?

Cloud hosted applications are those which are found, or made, suitable to run 'in the cloud', so that enterprises can take advantage of the underlying cloud infrastructure. They have the following characteristics:

  • Hosted on standard platforms: They run on non-proprietary, standard platforms such as UNIX, Linux, or Windows. These applications are either remediated or refactored to make them suitable to migrate to cloud infrastructure; some may be able to move as-is.

  • Hosted on an on-demand, as-a-service cloud infrastructure: They run on standard cloud infrastructure offered by cloud vendors (public clouds) or on private cloud infrastructure. They leverage the infrastructure services and features provided by the cloud vendor for security, high availability, reliability, and other non-functional requirements.

Cloud native applications, on the other hand, are those which are designed 'for the cloud'. These applications are designed in such a way as to derive the best overall advantage from cloud technologies.

In addition to the characteristics of cloud hosted applications, cloud native applications display certain unique characteristics:

  • Services based: Cloud native applications are services based. They use service-oriented architectures (SOA) and microservice-based architectures. Such architectures make cloud native applications loosely coupled, self-contained, independently deployable, and portable (able to move around). These characteristics also enable cloud native applications to integrate more easily with other services and applications, which is important in multi-cloud, multi-vendor environments, and they are key prerequisites for the auto-scaling needs of cloud applications.
  • Container based: Containers encapsulate specific components of an application, provisioned with only the minimal OS resources needed to do their job. Virtual machines (VMs) encapsulate the guest OS, whereas containers reuse the host OS and encapsulate the application logic plus the binaries, libraries, and configurations required for the application to run. This makes containers lightweight and independently deployable. The Docker engine is one example of such a container engine.

    In addition, we need a way to orchestrate the multiple components encapsulated in individual containers to form one holistic application. This is where container clusters come into play; Docker Swarm and Kubernetes are some of the popular container orchestration engines that do this job. Microservices need not run on containers; they can run on traditional virtual servers. But microservices and containers bring synergies together: while microservices enable loosely coupled application architectures, containers enable such applications to be deployed and moved more seamlessly across multiple cloud infrastructures. Thus microservices together with containerization enable agility and flexibility.
  • API first: APIs relate to the integration characteristics of microservices. API first means that before implementing a microservice or application, you think about how, and for what purpose, it will be consumed. Are you duplicating the efforts of other developers who may be building the same functionality behind similar APIs? Is there a specific need for the functionality exposed through those APIs? Your implementation of a microservice or application may change, but your APIs should not. If you really must change an API, give it a new version.

    Though APIs are not a new architectural concept, an API-first strategy is specifically important in the cloud native context. If you want to make your applications truly seamless, loosely coupled, reusable, and self-contained, not only the 'how' part of the design (satisfied by a microservices-based implementation) but also the 'what' part (the APIs) is crucial. APIs should adhere to established standards (e.g., REST over HTTP) for seamless consumption.
  • Security: Security is important whether an application is cloud native or not. However, cloud native applications place specific focus on security within the application. What does this mean? Cloud native applications do not assume that an application is secure just because it resides behind a firewall; they build security into the application as well. That means employing application-specific authentication, authorization, ACLs, security controls, data security (covering data at rest and data in transit), the latest and more stringent encryption algorithms, multi-factor authentication mechanisms, and so on.
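The 'API first' principle above can be sketched in a few lines of Python. This is a hypothetical illustration, not any specific framework's API: the function and route names (`get_order_v1_initial`, `/v1/orders`, etc.) are invented for the example. The point is that the `/v1` response contract stays stable while the implementation behind it changes, and a breaking change gets a new version instead.

```python
# Hypothetical sketch of the "API first" principle: the /v1 contract stays
# stable while the implementation behind it is free to change.
import json

# Version 1 contract: GET /v1/orders/{id} -> {"id": ..., "status": ...}
def get_order_v1_initial(order_id):
    # First implementation: reads from an in-memory dict (stand-in for a DB).
    orders = {"42": {"id": "42", "status": "shipped"}}
    return json.dumps(orders[order_id])

def get_order_v1_refactored(order_id):
    # Later implementation: different internals (e.g. a downstream microservice
    # call), but it MUST return the same shape so /v1 consumers keep working.
    record = {"id": order_id, "status": "shipped"}  # fetched elsewhere
    return json.dumps(record)

# A breaking change in the response shape gets a new version instead:
def get_order_v2(order_id):
    return json.dumps({"order": {"id": order_id, "state": "SHIPPED"}})

# Minimal routing table mapping versioned endpoints to implementations.
ROUTES = {
    ("GET", "/v1/orders"): get_order_v1_refactored,  # swapped in transparently
    ("GET", "/v2/orders"): get_order_v2,
}
```

Consumers bound to `/v1` never notice the internal refactoring; only consumers who opt into `/v2` see the new shape.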

There are additional design principles which cloud native applications are expected to follow, for example, stateless rather than stateful design, and design for failure. One good reference for these additional design principles is an AWS white paper.
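As a minimal illustration of the 'design for failure' principle mentioned above, here is a hypothetical Python sketch of retrying a flaky remote dependency with exponential backoff. The `flaky_service` stand-in and the parameter values are invented for the example; a real implementation would also consider jitter, timeouts, and idempotency.

```python
# Hypothetical sketch of "design for failure": retry a transient failure
# with exponential backoff instead of assuming the network always works.
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.01):
    """Run `operation`, retrying on ConnectionError with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** (attempt - 1)))  # 1x, 2x, 4x, ...

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retries(flaky_service)  # succeeds on the third attempt
```

A stateless service makes this pattern safe: because no in-memory session state is lost between attempts, any instance can serve the retried request.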

Cloud Hosted applications or Cloud Native applications?

Should enterprises go for cloud native applications or cloud hosted applications? Here are some points to consider:

  1. For applications with a higher life expectancy, a cloud native application strategy is better suited.

  2. For applications expected to retire soon or be replaced by better alternatives in the near term, a cloud hosted strategy is more suitable.

  3. For applications built on legacy technologies or platforms (like mainframes), the effort and cost required to make them cloud native may be prohibitive and/or risky. Such candidates may be better suited to remain in their existing environment. If there is indeed an attractive business case to re-engineer those applications and move them off legacy platforms, then adopt a cloud native application strategy.

  4. For applications expected to undergo more frequent changes, a cloud native application strategy is a better fit.

  5. For applications that will go through an agile development or DevOps process, a cloud native application strategy is a better fit.

  6. For new applications being freshly developed, a cloud native application strategy is a better fit.

  7. When enterprises are undertaking a large-scale movement of applications to the cloud, they may be better off adopting a two-phase approach to mitigate the risks of such large transformations. In phase one, move the applications which are suitable to be hosted on cloud; this way, the enterprise benefits from cloud infrastructure (on-demand provisioning, spend flexibility, etc.) early. Once the applications stabilize in the cloud hosted environment, adopt a cloud native application strategy in phase two.

While a cloud native application strategy is attractive in the long term, enterprises will have to leverage a combination of cloud native and cloud hosted application strategies as per the specific needs and circumstances listed above.

If you have any comments or suggestions, let me know.

October 28, 2017

Purposeful Enterprise -- some thoughts on shaping the enterprise of the future

Author: Sarang Shah, Principal Consultant

We are in the midst of an exciting stage in the evolution of the modern economy, where accelerating technological change, a highly networked world, demographic shifts, and rapid urbanization are leading to a disruption that is not ordinary [1]. The effects of these disruptive changes are impacting the prime movers of our modern economy - businesses and corporations. In this blog post, I would like to share some of my thoughts and questions about the future of these prime movers.

Let us take a step back and talk about the corporation itself. What is a corporation? What is its relationship with the market? What determines its boundary, size and scope? Economists attempt to answer these questions using various approaches - I would like to specifically point out the idea of the cost of market transactions, or transaction costs, as described in Ronald Coase's seminal work 'The Nature of the Firm'. Coase points out that 'people begin to organize their production in firms when the transaction cost of coordinating production through the market exchange, given imperfect information, is greater than within the firm', as illustrated in the diagram below [2].


Recent technological advances like mobile, cloud, social media, the internet of things, augmented reality, blockchain, and many more are causing disintermediation and dematerialization at an unprecedented speed and scale. These technologies directly decrease the transaction costs mentioned above and hence influence the nature of the corporation as well. We see these changes manifesting themselves in new digital business models, the unbundling of corporations and the redrawing of industry boundaries, e.g., a mobile company provides payment services, an e-commerce retailer provides credit facilities, and so on.

Along with the technological changes, we are also seeing demographic and behavioral shifts in our economy. For instance, today's customers are more demanding in terms of the value they get from a product or service than in the past, as technological advances give them easier access and the ability to consult and compare the various products and services in the market. In fact, regulations are also promoting behaviors that allow customers more choice - e.g., the sharing of payments data by banks (Open Banking, PSD2) or mobile phone number portability across network carriers. The same demographic and behavioral shifts that affect customers also influence the staff employed by the enterprise. Large parts of the workforce are now digital natives, who have access to information and are networked like never before. I believe these shifts impact the way the enterprise functions and is architected.

We are already seeing these shifts impacting the way enterprises function - enterprises that empathize with their customers and put them at the center more so than ever before; enterprises that understand that taking a long view on capital may be more beneficial for all stakeholders; enterprises that are responsible towards the social and natural ecosystems they operate in and take a circular economy approach to the future; and enterprises that understand that in the future humans and intelligent machines will work together collaboratively.

These changes lead us to ask some fundamental questions: What will enterprises look like in the future? How will enterprises transform and adapt under volatile, uncertain, complex and ambiguous conditions? How should enterprise processes and policies be designed for digital natives and the gig economy? How should enterprise ethics evolve as intelligent machines become integral to the enterprise? And many more.

I believe that a holistic and systemic perspective is required in shaping the purposeful enterprises of the future, and we as enterprise architects have a unique opportunity to provide it. My colleagues and I at Infosys will be writing more about this in future blogs here.


I would like to thank A. Nick Malik & Steven Schilders for providing their suggestions for this post.



[1] No Ordinary Disruption: The Four Global Forces Breaking All the Trends by Dobbs, R. and Manyika, J.

[2] The nature of the firm (http)

[3] The nexus of forces is creating the digital business, Dec 2014, Gartner (http)

[4] Unbundling the corporation, Mar 1999, Harvard Business Review (http)

[5] The self-tuning enterprise, June 2015, Harvard Business Review (http)



The terms 'business', 'company', 'corporation', 'enterprise' and 'firm' have been used interchangeably. The primary focus of this blog is for-profit enterprises, though some of the ideas described above are applicable to other types of enterprises as well.

October 21, 2017

The benefits of leveraging information-centric enterprise architecture - Part 3

Dr. Steven Schilders
AVP and Senior Principal Consultant, Enterprise Architecture
Marie-Michelle Strah, Ph.D.
Senior Principal Consultant, Enterprise Architecture

Continuing our three-blog series on information-centric architecture, this blog highlights the benefits of the data-first approach. While explaining how this approach drives agility, we want to emphasize that these blogs do not advocate a complete implementation of information-centric architecture. Rather, we are presenting an alternate view on the two most prevalent architecture paradigms.

In Part 1 and Part 2 of this series, we explored how organizations typically implement systems based on business capabilities rather than data. Such an approach invariably creates extreme data segmentation because system capabilities dictate what data is stored, how many copies are stored and how it is accessed. In today's age, no organization can succeed with fragmented data as data and its relationships - both direct and indirect - are the lifeblood of an organization.

Integration challenges in data warehousing solutions

Data warehousing solutions are quite popular for data integration. However, these solutions involve lengthy processing, making it difficult to forge business-critical data connections and thereby diminishing the value of data. Further, data warehousing approaches - and the assigned 'data architects' - become tied to vendor data models. We use the term data architect loosely here: invariably, these architects behave as vendor-specific master data management (MDM) or enterprise data warehouse (EDW) specialists rather than actual 'enterprise information architects'. Needless to say, this type of centralized and hierarchical approach nullifies any benefit that could be achieved from indirect data relationships through techniques such as artificial intelligence (AI) and machine learning.

To be able to make real-time decisions and scale quickly in highly competitive markets, you need to transform your enterprise into a hyper-connected and composable organization. The danger of delayed decisions cannot be overstated in such an environment. To give you an idea of how important this is, we have put together a graph that illustrates the extent of value lost when there is a delay between a business event and the action taken.


Despite these acute disadvantages, application data architecture is often prioritized over enterprise information architecture. In some cases, this is because vendor-provided platforms and COTS products pre-determine data models and data access. In other cases, capability-based architectures that claim to represent business capabilities are actually application or technical architectures that collapse business capabilities. For example, consider how ERP systems tend to represent either finance, accounts payable (AP) or human capital capabilities.

This traditional approach exponentially delays the delivery of business insights and decision-making because data must be collected and copied across silos to get actionable information. Further, point-to-point integrations across multiple applications with disparate data architecture becomes an effort-intensive process for enterprise architecture as well as data teams. Finally, developing and maintaining these brittle and tightly-coupled architectures exacerbates the delay in the decision-to-value cycle.

Now, let us see how information-centric architecture unlocks value from hidden data to enable business-as-a-service capabilities in digital ecosystems.

Step 1: Integrate data across the organization

First, organizations must integrate data whether it resides in commercial-off-the-shelf (COTS) products, custom applications or microservices. In our earlier blog, we proposed a layered information architecture approach (see figure below). Here, information architecture is not tied to application or platform architectures that prioritize technical architecture. Instead, it lays the foundation for composable architecture by leveraging a hub model.



Information Centric EA: Layered Information Architecture


Step 3: Use fit-for-purpose data hub models to gain business-specific insights


Our previous blog also illustrated how information-centric architecture can be used in COTS as well as custom-built applications. Here is how the data integration hub architecture works in both cases (see figure below). The data hubs provide representations of data that are optimized for the specific needs of the business. For example, key-based data is leveraged for key-based entity relationships, graph-based data is used to analyze complex interdependencies, time-series data is used for sequential analysis, search-based data can be used for complex queries, and so on. Thus, information-centric enterprise architecture shortens the decision-to-value curve because data is grouped contextually and data hubs provide the relevant data attributes in a form that optimizes value creation.
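A minimal sketch of the fit-for-purpose hub idea: the same core records are projected into a key-based hub (optimized for key lookups) and a graph-based hub (optimized for relationship analysis). This is a hypothetical in-memory illustration; the record fields and hub shapes are invented for the example, and real hubs would be backed by purpose-built stores.

```python
# Hypothetical sketch: one set of core records, two fit-for-purpose hubs.
from collections import defaultdict

core_records = [
    {"id": "c1", "name": "Acme", "refers": "c2"},   # c1 referred c2
    {"id": "c2", "name": "Globex", "refers": None},
]

# Key-based hub: optimized for key-based entity lookups.
key_hub = {r["id"]: r for r in core_records}

# Graph-based hub: adjacency list for analyzing interdependencies.
graph_hub = defaultdict(list)
for r in core_records:
    if r["refers"]:
        graph_hub[r["id"]].append(r["refers"])
```

Each hub is a contextual projection of the same data, so consumers query the representation that fits their need rather than reshaping data at query time.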



Step 4: Apply AI and BI on insights to achieve decision-as-a-service


Data integration hubs and contextual data grouping allow enterprises to design business intelligence (BI) capabilities and machine learning systems that merge programmed intelligence and AI. Further, BI capabilities can extend the base data with the specific data requirements needed for analytics. They are exposed through BI services, or decision-as-a-service, executed for a consumer-specific data context. The key aspect of this design is that business intelligence capabilities and services can be created, modified or removed without impacting the core and contextual data assets.
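The decision-as-a-service idea above can be sketched as a read-only service computing a decision from hub data for a specific consumer context. This is a hypothetical example; the hub contents, field names and the `credit_decision` rule are invented for illustration.

```python
# Hypothetical sketch of decision-as-a-service: a BI service derives a
# decision from hub data without modifying the core or contextual assets.
hub = {  # contextual data grouped in a hub (illustrative records)
    "c1": {"segment": "enterprise", "overdue_invoices": 0},
    "c2": {"segment": "smb", "overdue_invoices": 3},
}

def credit_decision(customer_id):
    """Read-only decision service: recommend an action from hub data."""
    record = hub[customer_id]
    return "approve" if record["overdue_invoices"] == 0 else "review"
```

Because the service only reads from the hub, the decision logic can be created, changed or retired without any impact on the underlying data assets.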


The end result?

  • Transitioning from traditional data warehouses to a fit-for-purpose model of multiple data hubs helps organizations leverage traditional BI capabilities and next-generation AI and machine learning
  • Prioritizing layered information-centric enterprise architecture makes data and decision-making organizational and architectural priorities


Simply put, adopting a strategic model instead of a retrofit model enables AI, faster access to enterprise insights and real-time decision-making. In an era where data is king, these are the key capabilities that enterprises need to become service-enabled.


Keep watching this space for enterprise-level case studies and best-practices of information-centric design in microservices, AI and data science.


In case you missed the previous blogs in this series, here are the links:

Part 1

Part 2

The differences between data-centric and capability-centric architecture - Part 2

Dr. Steven Schilders
AVP and Senior Principal Consultant, Enterprise Architecture
Marie-Michelle Strah, Ph.D.
Senior Principal Consultant, Enterprise Architecture

Capability-centric architecture and information-centric architecture are the two most prevalent models in today's organizations.

In Part 1 of this three-blog series, we outlined an information-centric approach to architecture that places business data at the core by decoupling data from application and platform architecture. In this blog, we take a deep dive into some of the concepts mentioned in the previous blog. First, we describe the key differences between capability-centric and data-centric approaches to architecture. We also explain how each influences the design of commercial-off-the-shelf (COTS) and custom-built solutions. To learn how to expose reusable business capabilities as services, check out the next two blogs in this series.

First, a quick summary of our previous blog: Typically, organizations buy COTS solutions that match their business capabilities, irrespective of how data is stored, accessed or made available for reuse. Such solutions are often marketed as capability-centric architecture. In reality, a COTS solution should be able to ingest external data and extract internal business data. Historically, both these processes used batch processing; in today's age of services, however, there is a greater focus on run-time integration through APIs. Nevertheless, data primarily remains within the domain of the COTS application.

COTS applications: Capability-centric versus information-centric approaches

In information-centric enterprise architecture, the above model is inverted. The COTS solution must integrate across the enterprise information landscape in a solution and vendor-agnostic manner. This approach also decouples data architecture and enterprise information architecture, which are often collapsed when application-specific data architectures are reinforced.

The following illustration and table will clarify the differences between these two architecture models.

Capability-centric versus information-centric architecture for a COTS application


Differences between capability-centric and information-centric architecture for a COTS application



| | Capability-centric architecture | Information-centric architecture |
|---|---|---|
| Data architecture | A black-box that is designed and optimized to support application needs | Remains a black-box; all application-agnostic data is externalized and synchronized through event-based integration |
| Relation between application-specific and agnostic data | | Decoupled from each other when externalized |
| Data taxonomy | Application-specific | Externalized; depends on business domain and/or functional context |
| Exposing data | Done through application APIs or data replication | Done through data services using APIs or data integration/replication adaptors |
| Removing/replacing applications | Cannot be done without affecting data architecture | Causes minimal impact to enterprise information architecture |
| Data consumption/extension | Data is not readily available | Data is readily available |
| Interaction between data and applications | Application is the source and master of its own data | Applications act as systems-of-record by supporting all create-retrieve-update-delete-search (CRUDS) capabilities; data services act as systems-of-reference with only read and search capabilities (these may change in multi-system architectures) |
| Using external data sources | Cannot act on an externalized data source: application-agnostic data must be replicated into the internal store before process execution. While some applications can call externalized data sources during execution, the data still needs to be translated and transformed into the application taxonomy before execution. Such integration creates performance issues and is unsuitable for high-volume, performance-bound transactions | Applications can support inbound and outbound data synchronization through event-based integration (in the illustration, integration is one-way as no other application manipulates the data) |
| How applications access data | Several applications use the same data, resulting in data proliferation, multiple access points for the same data and the lack of a single, accurate source of truth | Data is accessed and shared through the system-of-reference data services; applications are redesigned to support single-application mastering, where possible, by restricting access (read-and-search only) or by removing capabilities within the application |
| Data integration | Requires significant effort to move and synchronize data between various applications | Implementation focuses on integrating data with a reusable core through services |
| Real-time analytics | Limited to in-built application capabilities; can be applied only once data is moved to the data warehouse | Data is exposed for real-time analytics and AI-based processing |
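The event-based integration described above can be sketched minimally: the application (system-of-record) publishes change events, and the externalized, application-agnostic store subscribes to stay synchronized. This is a hypothetical in-process illustration; in practice the bus would be a message broker and the key/value shapes shown here are invented for the example.

```python
# Hypothetical sketch of event-based integration between an application
# (system-of-record) and an externalized, application-agnostic data store.
class EventBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

bus = EventBus()
externalized_store = {}  # application-agnostic data, outside the application

# The externalized store synchronizes itself on application change events.
bus.subscribe(lambda e: externalized_store.__setitem__(e["key"], e["value"]))

# The application performs an update and publishes the change event.
bus.publish({"key": "customer:42", "value": {"name": "Acme"}})
```

Because the store reacts to events rather than being written directly by each application, applications can be removed or replaced with minimal impact on the enterprise information architecture.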


At first glance, information-centric architecture implementation appears more complex, doesn't it? But, consider the advantages of using information-centric architecture instead of capability-centric architecture when decommissioning applications or building new capabilities to leverage data.

Custom-built applications: Capability-centric versus information-centric approaches

Unfortunately, COTS solutions have distorted organizational priorities by prioritizing capabilities over information architecture and data reuse. As a result, custom-built solutions have followed the COTS architecture model whereby business capabilities are built over an application-specific data repository (we will discuss service-based architecture in the final blog).


Capability-centric versus information-centric architecture for a custom-built application



In the information-centric approach, the key differences between a COTS implementation and a custom-built application are how data storage is controlled and how data interaction is designed. Custom-built solutions can be designed to use externalized data stores and integration services.

Ever wondered why there is so much recent emphasis on enabling microservices? This is because more and more enterprises are realizing the value of an information-first approach. Such an approach simplifies the design of enterprise architecture, making it easier to execute digital strategies. As custom applications, microservices have complete control over data as well as business capabilities, leading to greater agility.

Capability-first versus Information-first architecture for custom-built applications



| | Capability-first architecture | Data-first architecture |
|---|---|---|
| Data architecture | A white-box that is specifically designed and optimized to support application needs | Application-agnostic data is externalized and integrated with the application through data service APIs; application-specific data acts as an extension to externalized data and can be designed and optimized to support application needs |
| Relation between application-specific and agnostic data | | Decoupled from each other when externalized |
| Data taxonomy | Application-specific | Externalized; dependent on the business domain and/or functional context; can be translated to application-specific taxonomy if needed |
| Exposing data | Done through application APIs or data replication | Done through data services using APIs or data integration/replication adaptors |
| Removing/replacing applications | Significantly impacts data architecture | Has minimal impact on data architecture |
| Data consumption/extension | Data is not readily available | Data is readily available |
| Interaction between data and applications | Application is the source and master of its own data | Data services are the system-of-record for all application-agnostic data |
| Using external data sources | Application-agnostic data may need to be copied into the internal store before process execution. Data services can integrate externalized data | Data services interact with system-of-record data |
| How applications access data | Several applications use the same data, resulting in data proliferation, multiple access points for the same data and the lack of a single, accurate source of truth | Data services APIs access and share data |
| Data integration | Requires significant effort to move and synchronize data between various applications | Implementation focuses on integrating data with a reusable core through services |
| Real-time analytics | Limited to in-built application capabilities; can be applied only once data is moved to the data warehouse | Data is exposed for real-time analytics and AI-based processing |


Release data for AI-driven processing

Now, let us take a look at the target state of the capability-centric and information-centric approaches when we combine both types of applications. The main difference between the two architectures is how data is consolidated and constructed, which leads to varying levels of business agility. On the one hand, data-centric architecture consolidates data instantly, providing market differentiation. For example, enterprises can process business intelligence (BI) or artificial intelligence (AI) logic in real time using the application, the user context and the complete data set. On the other hand, capability-centric architecture requires data integration and mediation/post-processing for data consolidation, making it nearly impossible to leverage BI/AI-based processing capabilities.

Combined view of COTS and custom-built applications for capability-centric approach and information-centric approach



So, if you want a sophisticated and data-driven digital strategy, adopting information-centric architecture is the way forward. Interestingly, many organizations know this and are planning strategic initiatives to rectify issues arising from capability-centric architecture. However, actually inverting existing systems into data-centric ones can be challenging. While some may sidestep this process by wrapping existing systems with an additional layer of 'digital concrete' such as services and/or APIs, this will inevitably hinder agility and the ability to proactively compete in the market. 

Discover how information-centric architecture delivers agility for service-enabled enterprises in our next blog.

Why data should drive your enterprise architecture model - Part 1

Dr. Steven Schilders
AVP and Senior Principal Consultant, Enterprise Architecture
Marie-Michelle Strah, Ph.D.
Senior Principal Consultant, Enterprise Architecture

Information-centric enterprise architecture is about putting data first during assessment, strategy, planning, and transformation. To create a 'composable enterprise', data must be mobile, local and global across departments, partners and joint ventures. This is important if enterprises are to liberate data to improve insight and develop disruptive and differentiated services. 

To achieve this, enterprises must first decouple business data from application and platform architecture. Decoupling business data gives organizations flexibility as well as valuable insights, which are very important during digital transformation, mergers, acquisitions, and divestiture journeys.

A case of the tail wagging the dog

Today, most enterprise architecture follows a variation of The Open Group Architecture Framework (TOGAF) model (business/information/technology architectures). Here, strategic planning and sourcing recommendations for application portfolios are based on decision-making flows such as buy-before-build or reuse-before-buy-before-build. In the TOGAF hierarchy, organizations are meant to define business capabilities, align information management strategies to the business and choose application portfolios that support those strategies.

However, if you were to speak to any experienced enterprise architect, they will tell you that this is not what actually happens. In reality, most decisions are driven by technical aspects such as applications and platforms rather than by the actual business. Invariably, applications and platforms are retrofitted to meet business needs - a proverbial example of 'the tail wagging the dog'.

For example, portfolio rationalization is often marketed as business capability-driven. In reality, the focus is on purchasing commercial off-the-shelf (COTS) products or components with out-of-the-box (OOTB) business or technical capabilities (or both) that can meet predefined business and technical requirements. To minimize extreme customization when choosing a COTS product, most organizations will actively seek OOTB capabilities that fit nearly 85% of their requirements and offer vendor-supported configuration changes.

In case there are gaps in the OOTB capabilities, the COTS solution will undergo some level of customization, which may or may not be recommended or even supported by the vendor. The organization may also build custom solutions, either in the beginning or over time, to support or enhance the COTS solution. In our experience, architects have traditionally designed custom-built solutions around business (or technical) capabilities - whichever is the priority. 

Tipping the scales for business vs. technology

Clearly, capability-driven enterprise architecture has an advantage over technology-driven approaches. It aligns business with IT and focuses on business processes and capabilities. However, it is also inextricably linked to specific applications and their inherent legacy architecture. In case you haven't noticed already, there is irony here: platform and cloud-first approaches often reinforce application architecture instead of business capabilities! This is because the business has to adopt COTS data models and storage options for their data. As a result, business capabilities are collapsed into the vendor or COTS applications rather than standing alone.

Let's see why this is alarming. While business requirements and their associated capabilities may change over time, core organizational data does not. Of course, technology also changes, thereby impacting the longevity of COTS and custom-built solutions, but let us ignore this for now. Thus, the common denominator in using COTS as well as custom capability-driven solutions is that information architecture is not a top priority during design. When data models become tied to vendor-provided models, they are unable to reflect organizational enterprise data models (if they exist) or offer flexible and adaptive information capabilities.

The dog wags its tail

Now, data is the most valuable asset that an organization can leverage to achieve market differentiation and success. No matter the condition - be it enhanced, embellished or partly redundant - core data is the lifeblood of any organization. In fact, one can argue that organizations may still exist if they lost all their applications but retained access to their data. The corollary is dangerously true, too: Without data, an organization would cease to exist.

So, if we were to switch these enterprise architecture paradigms, we could make a case to enable 'the dog to wag its tail'. Here, we would establish information architecture as the primary driver of enterprise architecture. This approach will disengage business capabilities from application platforms and vendor lock-in.

Creating an information-centric enterprise architecture

You may be wondering what the primary requirements of an information-centric enterprise architecture are. Here's a short list:

  • Business data should be segmented from business capabilities. This allows us to change/remove capabilities without impacting the underlying data and add new capabilities that can utilize the data when needed.

  • Business data should be separated from application-specific data that is artificially-coupled and may cause unnecessary bloat. This allows us to remove or add applications without impacting business data.

  • Business data should be segmented appropriately based on its domain.

  • Each business data domain should be consolidated into a single version of truth.

  • Business capabilities should be designed based on the domain-specific business data and associated functionality requirements.

  • Wherever possible, business capabilities should be implemented as reusable services, either in COTS or custom-developed applications.

  • New or composite capabilities can be added by consuming these services.
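
As a sketch of this separation, consider a domain data service that owns the 'single version of truth' while business capabilities consume it as a reusable service; applications can then be added or retired without touching the data layer. A minimal, illustrative Python example (the `CustomerRecord` entity, service and capability names are all invented):

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class CustomerRecord:
    """Business data in its own domain model, independent of any application."""
    customer_id: str
    name: str
    segment: str

class CustomerDataService:
    """Single version of truth for the 'customer' data domain."""
    def __init__(self) -> None:
        self._store: Dict[str, CustomerRecord] = {}

    def upsert(self, record: CustomerRecord) -> None:
        self._store[record.customer_id] = record

    def get(self, customer_id: str) -> Optional[CustomerRecord]:
        return self._store.get(customer_id)

class SegmentationCapability:
    """A reusable business capability built on the domain data service,
    not on any particular application's private data model."""
    def __init__(self, data: CustomerDataService) -> None:
        self._data = data

    def is_premium(self, customer_id: str) -> bool:
        record = self._data.get(customer_id)
        return record is not None and record.segment == "premium"
```

New or composite capabilities can be layered on `CustomerDataService` in the same way, without the data ever being coupled to a COTS application's internal model.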


Figure 1: Information-centric enterprise architecture model 



Ultimately, such an architecture model will incorporate the perspectives of business, information, application, and technology. We prefer to liken it to the layers of an onion: the business sits at the core; information designed to support core business needs surrounds it; applications that address business capabilities and leverage that information sit above; and the technology mapped to those capabilities forms the outermost layer. The key premise of this architecture model is that the technology and applications are free to change without impacting the core business data and business architecture.

Deep-dive into the differences between capability-driven and data-centric approaches to architecture in our next blog.


September 12, 2017

Calling all telcos: Monetize the power of digital using B2B2X model

Author: Vishvajeet Saraf, Principal Technology Architect, Enterprise Architecture

Nowadays, every enterprise is adopting digital initiatives to better serve its customers - and differentiation is key. Typically, most enterprises collaborate with communication service providers or telecom companies to provide access and mobile solutions to their employees. Now, the healthcare, retail, government, insurance, energy, and transport industries are looking for trusted partners who can help them create industry-specific, unique propositions that delight customers through innovative digital services. Think about the emerging use cases of partnering with telcos for smart cities, remote health monitoring, and location-based services. There is significant merit in such a model - partnering with telcos offers significant cost benefits compared to building specific capabilities from the ground up.


In my opinion, this presents telcos with a very lucrative opportunity to become such trusted partners. They already possess the digital infrastructure needed to provide content, network analytics, access/connectivity, mobility, and IoT services to customers. All they need is the right solution and the right environment. 

telecom digital incubator.jpg

Digital incubators - Test before you invest

All innovation begins with testing. Only by failing early and failing fast can new ideas improve and grow to become value-generating services. So, firstly, enterprises need an ecosystem where they can test their proposition before they invest - and this is precisely what telcos can offer. Telcos can provide enterprises across industries with an ecosystem that prototypes, builds and rolls out propositions for their customers - a 'digital incubator', one might say. Such digital incubators are a win-win opportunity: On the one hand, enterprises can test ideas and roll-out successful propositions. On the other hand, telcos gain a new revenue stream that accelerates the B2B2C model.

Here is how telcos can monetize this opportunity for maximum returns:

  • Position themselves as digital service providers (DSPs) of traditional telco services such as telephony, broadband and fixed data, while partnering with content providers to offer content and media services, along with access to NFV/SDN, the IoT ecosystem, and data/network probes and analytics for enterprise customers
  • Create a digital platform that delivers these capabilities as lightweight digital services or APIs for enterprise customers 
  • Become a digital incubator by creating an ecosystem based on open API methodology that allows enterprises to carry out trials, test and build unique propositions for their customers

digital incubator ecosystem.png

How to build an incubator ecosystem?

Below are the key elements of an open API-based digital incubator ecosystem that is built around a digital platform.

incubator ecosystem components.png

I want to highlight that, in such an environment, it is critical to make APIs publicly available. Let me explain why: publicly available APIs provide developers and enterprises with programmatic access to the digital capabilities of the chosen telco. In this way, enterprises can develop custom offerings through apps and other channels instead of relying on their telco partner to develop industry-specific offerings. In a manner of speaking, it is about creating a 'You do, I support' model instead of an 'I do everything' model.

Here are the key components for building an incubator ecosystem:

  • Open API specs portal: Through this portal, enterprises and developers can access publicly available API details such as API specifications, sample payloads, error codes, access details, and limitations (if any)
  • Developer portal and accelerators: This portal not only allows developers to register their apps, but also educates them. Through this channel, telcos can receive feedback and suggestions from enterprises and developers about the APIs exposed by them. Several API gateway products provide this feature, thereby helping DSPs build developer portals
  • Test environments: These allow enterprises to trial and prototype services with the exposed APIs
  • Digital studios: This is important and requires some investment by enterprises. Digital studios act as demo centers where enterprises can demonstrate their propositions to their customers (B2B2X). Telcos can also provide value-added services such as rapid prototyping tools as well as designers to help enterprises roll out demos
  • Revenue models, API monetization and partner on-boarding: Needless to say, an open API ecosystem will trigger new revenue models. So, it is important to plan for these new revenue models and create partner/enterprise on-boarding processes that cater to different scenarios and industry verticals. Many API gateway products provide key capabilities for API monetization
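
As a toy illustration of the monetization and on-boarding mechanics above, consider a gateway check that authenticates a partner's API key and meters each call against its plan. This is only a sketch; all partner names, plans and limits are invented:

```python
# Toy API-gateway check: authenticate a partner's API key and meter each
# call for usage-based billing.
PARTNERS = {
    "acme-health-key": {"plan": "trial", "calls": 0, "limit": 100},
}

def gateway_call(api_key, endpoint):
    partner = PARTNERS.get(api_key)
    if partner is None:
        # On-boarding gate: only registered partners get access.
        return {"status": 401, "error": "unknown API key"}
    if partner["calls"] >= partner["limit"]:
        # Monetization gate: trial plans are capped; upsell beyond this.
        return {"status": 429, "error": "plan limit reached"}
    partner["calls"] += 1  # metered usage feeds the billing/revenue model
    return {"status": 200, "endpoint": endpoint, "plan": partner["plan"]}
```

Real API gateway products implement exactly these concerns (key validation, quotas, usage metering) as configuration rather than code.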

Infosys has done it already!

The Infosys Enterprise Architecture team has helped several telcos build digital platforms for successful digital transformation. We help telcos develop the building blocks for digital incubators by leveraging our proprietary accelerators and frameworks such as:

  • Infosys Digital Enterprise Architecture (I-DEA) for defining the ecosystem architecture
  • Infosys Cornerstone Platform to accelerate building microservices as well as digital platform delivery
  • Experienced Infosys enterprise architects and reusable artifacts to strategize and build an open API ecosystem for large telcos and government organizations
  • Infosys Digital Studios, a unique differentiator, to help telcos showcase initial demos to their enterprise customers

The inspiration behind this blog is Binooj Purayath.





Infosys EA Blogging Series

Our Enterprise Architecture blog series covers all aspects of business, information and technical architecture in order to demonstrate how we work with all teams across Infosys to provide innovative and coherent technology strategy and Chief Architect expertise to our clients worldwide. For more information on our Enterprise Architecture services, please find us here 

August 30, 2017

Simple definitions to get smarter: From 'big data' to 'AI'

Author: Ramkumar Dargha, AVP and Senior Principal Technology Architect, Enterprise Architecture

Today, business parlance is peppered with words such as big data, data analytics, data science, machine learning, artificial intelligence, and automation. We have all heard these terms being used; some even interchangeably. For those of you wondering what these terms actually mean, fear not. In this blog, I will attempt to demystify these new trends, highlight their significance and, most importantly, explain how they play a vital role when it comes to automation and artificial intelligence. I want to point out that none of these explanations come from industry-standard definitions or existing literature. They are drawn from my experience and shared with you in the hope of making these complex terms more comprehensible.

Business jargon 101

Let me begin with an illustration. The following diagram depicts how each trend works individually and within an ecosystem. While this may seem confusing now, I recommend you refer to it once each term is better understood.

Figure: How big data, data science, machine learning, data analytics, and AI work individually and within an ecosystem

Data science - Firstly, the word 'data' refers to all types (like unstructured data) and all sources of data (like traditional data warehouses). So, data science is a field that encompasses the entire journey or lifecycle of data. It includes steps such as ingesting data, processing data, applying algorithms, generating insights, and visualizing actionable insights

Big data - This refers to data characterized by the four Vs, namely, high volume, high speed (velocity), high diversity (variety), and high veracity (abnormality or ambiguity). On second thought, we may even say five Vs, since big data adds significant value to enterprise operations! Much like data science, big data also represents the entire data lifecycle, which may cause some confusion - but let us bear with this. This brings me to the next important term.

Machine learning - Machine learning is the ability of machines to learn on their own through data - just as humans do through their environment. In machine learning, machines understand and learn from data, apply the learning and, based on the results, revise previous learning from new data. All this is done iteratively. Here, learning refers to the process by which machines convert the data to insights and apply those insights to take action. As you may have observed, data is key, particularly big data. However, ML can also use traditional data for algorithms like classification, linear regression, clustering, etc.

Data analytics - But, how do machines learn from data?  This is where data analytics comes in. Data analytics uses machine learning algorithms like those mentioned above to uncover patterns hidden in input data. These patterns are applied to new (but similar) datasets to create inferences based on past data. These inferences then become insights for future business actions. To know more about how to get data analytics right, check out my blog on "Data Analytics: Doing it Right!".
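
To make 'uncovering patterns in past data and applying them to new data' concrete, here is a toy least-squares regression in plain Python. The numbers are invented for illustration:

```python
# Toy least-squares fit: "learn" a pattern from past data,
# then apply it to new (but similar) data.
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# "Past" data hides the pattern y = 2x + 1; fitting uncovers it.
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])

def predict(x):
    # Inference: apply the learned pattern to unseen input.
    return a * x + b
```

Production data analytics uses richer algorithms (classification, clustering, etc.) and libraries, but the learn-then-infer loop is the same.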

AI - In 5 steps

In my opinion, artificial intelligence (AI) has five main steps, which are described below: 

  1. Curate/acquire knowledge using approaches such as natural language processing (NLP), optical character recognition (OCR), etc.
  2. Generate business rules using knowledge gained through the knowledge curation process or from insights/intelligence acquired through various machine learning techniques (as mentioned above)
  3. Leverage an automation engine that stores the collected knowledge and insights as code
  4. Take business actions either automatically through the automation engine or manually where human intervention is required
  5. Use the feedback loop for continuous improvement by learning new patterns and un-learning old ones (when needed) in an iterative manner, just as humans learn, unlearn and re-learn on a continuous basis
One clarification is needed here: some literature considers the generation of insights through machine learning and data analytics (Step 2) as part of knowledge curation (Step 1). I have intentionally separated the two. In my view, knowledge curation is about acquiring knowledge from an information source such as literature and existing documents through NLP, search, OCR, etc., whereas gaining insights through machine learning is done by applying ML algorithms to existing machine data. These are distinct processes of acquiring knowledge. There are also traditional sources of knowledge, such as human research and discovery, that can be used to create business rules. This is represented as 'other knowledge source' in the diagram and does not necessarily come under the scope of AI.
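
To make the loop concrete, here is a deliberately simplified sketch of Steps 3-5: rules derived from curated knowledge drive actions, and feedback revises the rules. All event and action names are invented:

```python
# Toy automation engine: business rules stored as code (Step 3), automatic
# or human-routed actions (Step 4), and a feedback loop (Step 5).
rules = {"invoice_overdue": "send_reminder"}

def decide(event):
    # Step 4: act automatically if a rule exists; otherwise escalate to a human.
    return rules.get(event, "route_to_human")

def feedback(event, better_action):
    # Step 5: learn a new pattern (or overwrite an old one) from outcomes.
    rules[event] = better_action
```

Real systems replace the dictionary with a rules engine and ML-derived models, but the curate-act-learn cycle is the same.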

I hope this piece has helped you better understand these complex concepts. Any thoughts or suggestions on how to improve these definitions? Please feel free to leave your comments and suggestions below.


August 16, 2017

Architecture for digital world of ecosystems, platforms and the mesh

Author: Bora Uran, DPS, Senior Principal Consultant, Enterprise Architecture


I have noticed new buzzwords mushrooming in business literature thanks to the surge of digital transformation programs - these include 'digital ecosystems', 'digital platforms', 'platform ecosystems', and 'platform economy' to name a few. Most of these terms refer to ways through which we can increase value by forging relationships between organizations, systems and connected things.

"The Mesh" is another one that has been added recently.  In the "Top 10 Strategic Technology Trends for 2017" research note, Gartner mentions "An intelligent digital mesh is emerging to support the future of digital business and its underlying technology platforms and IT practices. The mesh focuses on people and the Internet of Things (IoT) endpoints, as well as the information and services that these endpoints access".1

Depending on the business or technology context, each of these three words may have a different subjective meaning. However, one cannot argue that they all represent a common theme. Together, they symbolize a future of seamless connectivity where digital and physical worlds merge and traditional boundaries between organizations, systems and technologies are broken.

New architecture for new enterprise ecosystems

I am excited at the prospect of such a future and I am certain that successful digital transformation programs will help us get there. But first, enterprises need architecture that:

  • Blends different technologies with varying levels of maturity
  • Integrates a diverse range of applications, services and devices
  • Supports multiple client channels with optimized user experiences
  • Enables continuous and agile delivery of new features
  • Increases performance and scalability for high-volume and real-time interactions

In order to satisfy these requirements, enterprise application architecture is evolving from monolithic to modular and flexible structures. New deployment models are emerging with higher degrees of agility and scalability. From what I have observed, two major shifts are happening in the areas of:

  • Service-oriented architecture (SOA) - SOA is adopting new architecture styles, patterns and protocols such as web-oriented architecture, microservices architecture and REST
  • Application development - This is transforming to provide:
      o Modern software application architecture and frameworks such as back-end and front-end Java or JavaScript frameworks
      o Lightweight infrastructure with cloud-based and containerized workloads
      o Automation and agility with DevOps and agile development practices

In the "Top 10 Strategic Technology Trends for 2017: Mesh App and Service Architecture" report, Gartner mentions the following: "Applications need a different architectural approach to support digital business ecosystems. That approach is mesh app and service architecture (MASA)". 2 Here, some of the key architecture styles are cloud-based, serverless, service-oriented, API-led, and event-driven. Implementing such architecture can be challenging. So, enterprises must focus on increasing the maturity of services-based computing and exploring new technologies such as containerization of workloads and serverless architecture. Most importantly, they should build new skills in areas of modern software frameworks, hybrid platforms and continuous delivery and integration.  
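
Of the styles listed, the event-driven one is easy to miniaturize: services stay decoupled by publishing and subscribing to business events instead of calling each other directly. A toy in-process event bus, with topic and payload names invented:

```python
# Minimal publish/subscribe bus illustrating the event-driven style:
# producers and consumers never reference each other, only the topic.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, payload):
    for handler in subscribers[topic]:
        handler(payload)

# Two independent "services" reacting to the same business event.
audit_log, notifications = [], []
subscribe("order.created", lambda o: audit_log.append(o["id"]))
subscribe("order.created", lambda o: notifications.append(f"order {o['id']} placed"))

publish("order.created", {"id": 42})
```

In a real mesh, the bus would be a message broker or event stream, and the handlers would be independently deployed services; the decoupling principle is identical.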

Infosys is a key partner with leaders in enterprise architecture and innovation. We help clients realize value from their transformation journey by offering services such as insights-driven enterprise transformation, digitization, ecosystem integration and management, and hybrid cloud enterprise transformation3. Our value proposition stems from our expertise in areas of microservices architecture, API-led connectivity, DevOps and Agile, open source, and cloud adoption.



  1. Top 10 Strategic Technology Trends for 2017 by David W. Cearley et al, Published 14 October 2016 ID: G00317560
  2. Top 10 Strategic Technology Trends for 2017: Mesh App and Service Architecture by David W. Cearley, Gartner, Published: 21 March 2017 ID: G00319580
  3. Infosys Smart EA offering   
  4. Service Technology Architecture Guidance by Steven Schilders, Infosys    


August 9, 2017

DevOps as key catalyst in digital transformation

Author: Sridhar Murthy J, Principal Technology Architect, Enterprise Architecture 


Today's digital marketplace is governed by fierce competition - and falling behind is a risk no business can afford. The key to succeeding as a digital business is to embrace emerging technologies and innovation. With digital transformation programs sweeping across industries, enterprises are finding new ways to deliver top-quality products and services to end-users, manage vendors and optimize internal processes.  


When it comes to digital transformation, IT has a significant role to play as a strategic driver of digitization, agility and innovation. The DevOps approach is already revolutionizing software delivery for many companies. However, enabling DevOps is challenging, and requires commitment and careful planning. Often, enterprises choose to adopt DevOps because they truly want to differentiate their market offerings and increase speed-to-market. However, without a clear goal, the drivers for DevOps remain elusive and, in such cases, enterprise IT is unable to support true transformation.


My suggestion is this: If you want to gain a competitive edge through digital transformation, you must first re-evaluate how you develop software and how you harness the talent of development and IT teams.

Transforming the IT landscape

Meeting ever-changing customer needs means having the ability to build innovative, quality and scalable products. However, businesses cannot innovate in isolation. This 'collaboration for innovation' is exactly what DevOps offers.

The DevOps methodology combines agile with constant feedback to continuously roll out new changes. The outcomes of each rollout are meticulously assessed against feedback from customers and end-users, often using a technique known as the 'canary release'. DevOps also helps enterprises leverage microservices architecture to achieve modular services aligned with business capabilities. In doing so, DevOps ensures constant innovation and refined digital transformation.
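
The canary technique mentioned above can be thought of as deterministic traffic splitting: a small, fixed share of users is routed to the new release so its outcomes can be assessed before full rollout. A toy router, with the percentage purely illustrative:

```python
import hashlib

def route(user_id: str, canary_percent: int = 5) -> str:
    """Deterministically send ~canary_percent% of users to the new release.

    Hashing the user ID keeps each user on the same version across requests,
    so their experience is consistent while outcomes are compared.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"
```

If the canary cohort's metrics hold up, the percentage is raised step by step until the new release serves all traffic; if not, it is rolled back with minimal blast radius.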


So how do you, as an enterprise, adopt DevOps to revolutionize IT?

Roadmap for digital transformation

In my opinion, there is a mutually dependent relationship between cloud and DevOps. They share common patterns such as infra-as-code, auto-scaling, proactive monitoring, infra-resiliency, and immutable environments. Even as cloud maximizes the value of DevOps, the benefits of cloud can't be fully realized without DevOps-driven automation.


Step 1: Redefine your enterprise vision to align with the short-term and long-term goals of digital transformation.

Step 2: Create a roadmap to transition from the current methodology to DevOps. Here, it is better to start with specific projects and then scale up progressively.

Step 3: Define the tools required to automate workflows and underlying activities to achieve agility, higher quality and shift-left testing.

Step 4: Carefully select the technology that will maximize automation and collaboration.

Step 5: Automate everything you can in workflows and implement a pattern where 'everything is code'.
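
Step 5's 'everything is code' idea can be illustrated with the delivery pipeline itself expressed as a versioned, executable artifact rather than hand-maintained configuration. A deliberately simplified sketch, with stage names invented:

```python
# Pipeline-as-code: the workflow is an ordered list of plain functions,
# versioned alongside the application it delivers.
def build(state):
    state["artifact"] = "app-1.0"
    return state

def run_tests(state):
    state["tests_passed"] = True
    return state

def deploy(state):
    # Shift-left quality gate: never promote an untested build.
    assert state.get("tests_passed"), "tests must pass before deploy"
    state["deployed"] = state["artifact"]
    return state

PIPELINE = [build, run_tests, deploy]

def run(pipeline):
    state = {}
    for stage in pipeline:
        state = stage(state)
    return state
```

Because the pipeline is code, it can be reviewed, tested and rolled back exactly like the application, which is what gives DevOps its repeatability.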

Securing the last mile

Finally, let's not forget security. DevOps promotes secure development lifecycles where security is in-built rather than tested separately. It requires security checks to be established across the software development lifecycle with a seamless feedback loop into development.

I think the best feature about DevOps is how it enables 'operation by design'. This means that DevOps builds resilience and testability while providing proactive monitoring and self-healing capabilities. Further, it ensures that all environments are stable. Development, quality assurance, user acceptance testing, and production remain consistent, giving you top-quality software.

With all its unique capabilities, I believe that DevOps is one of the key catalysts for digital transformation. Are there others? Let me know what you think.


July 31, 2017

5G to unleash new wave of disruption - are you ready?

Author: Shreshta Shyamsundar, Principal Technology Architect, Enterprise Architecture 

The 5th generation of wireless systems - abbreviated as 5G - is almost here! While it is expected to be commercially available in 2019, testing is scheduled to begin in the winter of 2018 in Seoul, South Korea. Many companies are already taking steps to prepare for its rollout. In the last quarter, Qualcomm announced that a commercially viable 5G modem will be available. Meanwhile, Qualcomm is supporting OEMs in building next-gen cellular devices and aiding operators with early 5G trials and deployments.

Initially, 5G will be designed for multimode mobile broadband that works across 4G LTE and 5G devices. But it will also be extensible to fixed wireless broadband. With 5G, customers will experience and consume higher-value content at much greater speeds. By enabling interworking and cohesive connectivity, 5G promises to enhance the quality of the broadband experience.

As a step above 4G, 5G will radically increase the speed of data transfers across the network - going beyond merely sending texts, making calls and browsing the web. Digital users will be able to instantly and easily download and upload Ultra HD and 3D videos. Sustaining the hyper-connectedness brought on by the Internet of Things (IoT), 5G will help seamlessly connect and support thousands of connected devices across personal and work environments.

Getting the right hardware

However, 5G will come with its own challenges. While we can look forward to greater data rates, more devices and lower latency, rolling out 5G means re-thinking the entire network stack, particularly radio access networks and network hierarchies. Our current network communication hardware cannot support the 1 ms latency that 5G demands. Providers will have to consider software-defined networking (SDN) to manage traffic and network function virtualization (NFV) to virtualize network functions.


These thoughts have been well-articulated in an NGMN white paper. According to the authors, 5G will demand extreme innovation in 6 key areas, namely 1) user experience, 2) system performance, 3) devices, 4) enhanced services, 5) business models, and 6) network deployment and operations. The paper talks about building "architecture that leverages the structural separation of hardware and software, as well as the programmability offered by SDN/NFV. As such, 5G will be a native SDN/NFV architecture covering aspects ranging from devices, (mobile/fixed) infrastructure, network functions, value-enabling capabilities and all the management functions to orchestrate the 5G system. [2]"

The authors add, "On the radio access side, it will be essential to provide enhanced antenna technologies for massive MIMO at frequencies below 6GHz and to develop new antenna designs within practical form factors for large number of antenna elements at higher frequencies[2]."

Energy savings from 5G


5G demands less energy to power devices, thereby supporting (theoretically, at least) a greater density of endpoints. In fact, Telstra and Ericsson are collaborating to create "the first national IoT-enabled mobile network[3]". With this technology, the average upload speed will rise to 200-400 kbps. We can also expect to see a greater number of cost-effective and energy-efficient sensor devices. Here, the use cases are far-reaching. According to the news report, "a sensor network deployed at Pooley Wines in regional Tasmania [can] collect data like soil moisture and temperature, rainfall, solar radiation and wind speed.[3]"

Opportunity for enterprise architects


5G is already in the trial phase and pre-user production testing is expected to commence within 18 months. This means there isn't any time to waste: telcos must focus on building a roadmap for 5G enterprise architecture, particularly for time-bound technology refreshes. They need to prepare for increased data volumes, faster time-to-market and a 5x reduction in latency, from 5 ms to 1 ms.


In fact, many network operators across continental US, Europe and Asia are already assessing the impact of 5G on existing systems, applications, processes, and operating models. Insights from these assessments can give enterprise architects a head-start on designing offerings that implement 5G in organizations. This will equip them with a competitive edge when it comes to presenting a business case, performing gap analysis and recommending strategic initiatives.


The Infosys SMART EA offering [4] helps operators find numerous ways to implement 5G technology for greater value. However, they need to start thinking about the potential IT roadmap now to reap benefits and gain a substantial market advantage.


[1] - Ofcom 2017 report - 15 June 2017 - http://telecoms.com/479494/ofcom-publishes-beginners-guide-to-5g/

[2] - Next generation mobile networks - 27 Feb 2017 - https://www.ngmn.org/5g-white-paper/5g-white-paper.html

[3] - Telstra gets a Jump in 5G Race - 27 Feb 2017 - http://www.theaustralian.com.au/business/technology/telstra-gets-jump-in-5g-race/news-story/c5cfe570b75a8adb6f198602af01ae21

[4] - Infosys Enterprise Architecture practice - 15 June 2017 https://www.infosys.com/enterprise-architecture/Pages/offerings.aspx#Strategize

