Enterprise architecture at Infosys works at the intersection of business and technology to deliver tangible business outcomes and value in a timely manner by leveraging architecture and technology innovatively, extensively, and at optimal costs. Stay up-to-date on the latest trends and discussions in architecture, business capabilities, digital, cloud, application programming interfaces (APIs), innovations, and more.

May 7, 2019

How to Evolve an Aging eCommerce Platform: Suggestions to Frame a Way Forward

By Pierluca Riminucci, Sr. Principal Enterprise Strategic Architect

Many companies in the retail sector face a common dilemma: how to evolve their existing, and now aging, eCommerce platforms.  This is a key technical, but also business, decision most often faced by established "brick & mortar" retail companies.  Such traditional retailers typically embraced eCommerce a decade or so ago, slowly building the capability in a predominantly tactical way, perhaps more as followers than as leaders.

In my opinion, a good way to tackle this far-reaching strategic decision is to start from one fundamental truth: the future holds many difficult-to-predict disruptions and changes.  If we truly believe that statement, which is only seemingly hollow, then we are better positioned to uncover its less obvious consequences.  The main one is that the real end goal should be building an innovation-enabling technical foundation, rather than simply selecting another eCommerce platform.

And the reason is simple: one of the truly key strategic advantages of companies will be their ability to evolve their business models quickly, reliably, and effectively, with minimal additional investment.

That will probably be the game of the next few years, and it will determine the winners and the losers.

It should now be clearer why, to make an informed decision regarding "what to do with my current eCommerce platform," it is beneficial to broaden the perspective and start addressing - or at least framing in the background - the deeper underlying question: what is my company's vision for the future of "digital retail"?  However, it is often very difficult for a company to articulate a sharp, coherent, and concrete-enough-to-be-executable vision able to shed light into the depths of uncertainty brought on by rapidly approaching technology-led disruption.

And that brings us back to the foundational opening statement: the future holds a lot of uncertainty.  Amidst that uncertainty, however, there is one recognizable common trend: the once-sharp distinction between eCommerce and "Commerce" is becoming less and less meaningful due to technology advances.

The in-store experience will likely be augmented by an ever-increasing digital dimension.  Products will be "experienced" through Virtual Reality (VR) headsets[1], and garments will be tried on both physically and digitally[2].  Store visits will be enriched by virtual-aisle technologies; garments could be tried on in store to see how they fit, captured in 3D pictures, and then "saved for later," allowing that garment to be bought online with confidence, which in turn helps relieve eCommerce's notoriously high return rates. Digital information points (or "hooks") about products, or the lifestyle associated with them, could be disseminated throughout the store, offering customers a truly immersive experience.

At the same time, the online channel can be enhanced by leveraging the capabilities that a "brick-and-mortar" infrastructure offers.  For instance, body measurements could be scanned in store by special-purpose equipment, enabling unprecedented accuracy in matching garment wearability to customers' unique body shapes. This in turn opens up a raft of new and more powerful ways to provide product recommendations - such as sending customers "virtual pictures" of themselves wearing a new garment - and could even enable new quasi-mass-market made-to-measure manufacturing approaches.

I am aware that many of these technologies are not quite ready yet; however, enough is happening to be persuaded that the winners will be those able to quickly and seamlessly integrate new pieces of innovative, best-in-class technology into their technical ecosystems to enable rapid business model innovation.

In light of all that, the initial question, "what to do with my aging eCommerce platform," takes on a distinctly different perspective. It is no longer a matter of simply comparing the pros and cons of different vendors' eCommerce platforms, but rather of identifying the key quality attributes of an enterprise architecture that allows seamless integration (between eCommerce and Commerce operations) along with the ability to evolve rapidly in new, unknown directions.

So hopefully by now it is clearer that a more effective way to frame the initial question is to first identify a set of guiding architecture principles and then properly spell out their key consequences, building a robust and far-reaching decision framework.

In the remainder of this article I will illustrate, by way of example, what these architecture principles and their consequences might look like. Obviously, I am aware that the specifics of each company's situation may require some fine-tuning, given that a one-size-fits-all solution is unlikely to be truly effective.

A guiding set of architecture principles for evolving your eCommerce platform

AP 1.  Architecture decisions should aim at minimizing technical constraints:

·         Consequence 1: Carefully trade off the convenience of a comprehensive out-of-the-box - but also constraining - eCommerce platform against future, yet-unknown requirements.

·         Consequence 2: Watch out for vendor lock-in warning flags (e.g., proprietary interfaces, lack of extensibility, a hegemonic vendor strategy, proprietary languages).

·         Consequence 3: Take control of your integration strategy and enabling interfaces (use open standards to hide products' proprietary protocols).

·         Consequence 4: Carefully examine the extensibility and integration mechanisms of each eCommerce platform (e.g., prefer mainstream mechanisms).
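To make the idea of hiding proprietary protocols behind your own interfaces concrete, here is a minimal sketch. The platform name, SDK, and payload fields are invented for illustration; the pattern, not the specifics, is the point: callers depend only on a contract you own, so swapping vendors means writing a new adapter rather than rewriting every caller.

```python
from abc import ABC, abstractmethod

# The open, company-owned contract: callers depend only on this interface,
# never on any vendor's proprietary API.
class OrderGateway(ABC):
    @abstractmethod
    def submit_order(self, order: dict) -> str:
        """Submit an order; return an order reference."""

# A hypothetical vendor SDK with a proprietary call shape (invented here).
class AcmeCommerceSDK:
    def push_sales_doc(self, doc_type, payload):
        return f"ACME-{doc_type}-{payload['sku']}"

# Adapter: the only place that knows the vendor's proprietary protocol.
class AcmeOrderGateway(OrderGateway):
    def __init__(self, sdk: AcmeCommerceSDK):
        self._sdk = sdk

    def submit_order(self, order: dict) -> str:
        # Translate the open model into the vendor's proprietary shape.
        return self._sdk.push_sales_doc("ORD", {"sku": order["sku"]})

gateway: OrderGateway = AcmeOrderGateway(AcmeCommerceSDK())
print(gateway.submit_order({"sku": "A123"}))  # ACME-ORD-A123
```

The rest of the estate integrates against OrderGateway; the vendor-specific translation stays confined to one replaceable adapter.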

AP 2. Reduce execution friction to innovation:

·         Consequence 1: Aim at achieving a loosely coupled architecture.

·         Consequence 2: Preserve the ability to easily plug & play specialised modules (COTS or bespoke) that you may need to acquire or develop in the future.

·         Consequence 3: Institute a lean and proactive architecture governance function to avoid architecture degradation.

·         Consequence 4: Preserve strong semantics, both in each module's remit and in overall coherency and clearly defined interfaces.

·         Consequence 5: Carefully evaluate the need to introduce new technology paradigms against the resulting fragmentation of required skills (stay mainstream).
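A loosely coupled, plug-and-play architecture can be sketched in a few lines. The capability slot and module names below are hypothetical; the sketch only illustrates the principle that modules register against well-defined slots, so a bespoke module can later be replaced by a best-in-class one without touching its consumers.

```python
# A minimal plug-and-play sketch: specialised modules (COTS or bespoke,
# names invented here) register against well-defined capability slots.
registry = {}

def provides(capability):
    """Decorator registering a module implementation for a capability slot."""
    def register(cls):
        registry[capability] = cls
        return cls
    return register

@provides("recommendations")
class BasicRecommender:
    def recommend(self, customer):
        return ["bestseller-1"]

# Later, a best-in-class module replaces the bespoke one in one line,
# simply by registering against the same capability slot.
@provides("recommendations")
class VendorRecommender:
    def recommend(self, customer):
        return ["tailored-for-" + customer]

module = registry["recommendations"]()   # resolves the current plug-in
print(module.recommend("alice"))         # ['tailored-for-alice']
```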

AP 3. Get more for less and reduce costs (TCO & OPEX):

·         Consequence 1: Leverage existing investments wherever possible. There are no silver-bullet platforms that alone can solve your business strategy execution problems.

·         Consequence 2: Keep your enterprise architecture aligned to a semantically strong domain model (refactor it whenever feasible during transformation programmes).

·         Consequence 3: Consider open-source platforms, since they tend to be less "hegemonic" and more interoperable.

·         Consequence 4: Consider adopting a cloud strategy, but only after a careful analysis of the benefits and related economics.

AP 4. Think in terms of an omni-channel architecture rather than an eCommerce platform:

·         Consequence 1: The eCommerce function is unlikely to stand alone in the future.

·         Consequence 2: Unify all the capabilities enabling your commerce function, whilst providing well-defined variation points to cater for channel flexibility.

·         Consequence 3: Strive to build a coherent end-to-end data flow as the main fabric for achieving seamless end-to-end business integration.

·         Consequence 4: Preserve your ability to leverage eCommerce capabilities to digitize the brick & mortar experience.

·         Consequence 5: Collect customer data through each channel and pull it all together to execute a mutually reinforcing cross-channel communication strategy.


In this article I have argued the need to adopt a holistic view of eCommerce and Commerce capabilities when considering how to evolve your existing legacy eCommerce platform.  I have presented a set of guiding architecture principles that can help frame the decision. The key point behind these principles is the need to preserve your ability to stay in control, since your company will likely need to evolve and rapidly introduce innovation simply to stand still in the marketplace.


Inder Sidhu with T.C. Doyle, The Digital Revolution. Pearson 2016

[1] Indeed, this is already happening for instance with Audi. Cf. Inder Sidhu et al. "The Digital Revolution" (2016)

[2] This is what "magic mirror" technology is promising to deliver, though not quite there yet in terms of realism.

Reducing Disruptions in Manufacturing using the Internet of Things

By Sivan Veera, Sr. Principal Enterprise Strategic Architect

Shop floors have evolved from simple job-card-based push scheduling to advanced Kanban-based pull scheduling. However, even the most sophisticated Kanban systems suffer from various interruptions. A technique called fluid scheduling leverages the Internet of Things (IoT) to solve this problem.  Fluid scheduling is a fundamental change in which scheduling is driven by the parts currently available rather than by what should be available according to a predetermined schedule.  IoT-based fluid shop floors also move scheduling out of the shop floor to centralized command centers, where multiple physical shop floors can work as a single logical shop floor. This aggregation can plug multiple shop floors together in a "shop floor as a service" model, creating a pathway for disruptive business models.

What is a Fluid Shop floor and how does IoT help?

Figure 1 illustrates an IoT-based fluid shop floor. Sensors mounted on pallets count the parts inside each pallet at regular intervals and stream these counts to a Central Scheduling Center, which is external to the shop floor. Scheduling centers are networked to the machines and send machine schedules in near real time. They consider the availability of parts in real time, continuously update schedules, and push them to the machines. Scheduling centers also direct the movement of parts from pallet to pallet based on the machine schedule. This differs from Kanban-based scheduling, where the parts required are computed for each machine and updated on the assumption that they will arrive at the right time, at the right machine, in the right quantity. This IoT-based situational scheduling also differs from conventional push scheduling, where parts are scheduled for a long period - say a week or a month - and not updated in between to reflect disruptions.


Figure 1: IoT based Fluid Shop floor

Role of IoT in Fluid Shop Floors

IoT sensors mounted on every pallet count the parts inside them and stream the data to the Central Scheduling Centers. Knowing the exact parts available at each machine, as well as the parts in the shop floor's incoming area, the scheduling centers can calculate schedules in real time and update the machines. Since schedules must be calculated in real time and the calculation is compute-intensive, cloud computing fits scheduling centers well. When scheduling centers integrate across many shop floors, the high-velocity ingestion from multiple sensors can also be handled effectively in the cloud. For the sensors to identify parts in a pallet, each part on the shop floor needs to be tagged with an RFID tag. Sensors then stream the part identification and pallet information to the scheduling centers.
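The core of fluid scheduling can be illustrated with a deliberately simplified sketch (part names, job structure, and the one-pass dispatch loop are all assumptions made for illustration): jobs are dispatched only when live, sensor-reported pallet counts show the required parts are actually on hand, and anything short of parts is deferred rather than blocking the line.

```python
from collections import Counter

def fluid_schedule(jobs, pallet_counts):
    """Dispatch jobs in priority order, but only when the parts each job
    needs are actually available according to live pallet counts.

    jobs: list of (job_id, {part: qty}) in priority order
    pallet_counts: {part: qty} streamed from pallet-mounted sensors
    """
    available = Counter(pallet_counts)
    dispatched, deferred = [], []
    for job_id, needs in jobs:
        if all(available[p] >= q for p, q in needs.items()):
            for p, q in needs.items():
                available[p] -= q          # reserve the parts
            dispatched.append(job_id)
        else:
            deferred.append(job_id)        # re-evaluated on the next sensor update
    return dispatched, deferred

# Live counts show gears are short, so job B is deferred instead of stalling C.
jobs = [("A", {"gear": 2}), ("B", {"gear": 3}), ("C", {"bolt": 4})]
print(fluid_schedule(jobs, {"gear": 4, "bolt": 10}))  # (['A', 'C'], ['B'])
```

A push scheduler would have held job C behind the gear shortage; re-running this function on every sensor update is what makes the schedule "fluid."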

What Business Models can IoT Fluid Shop Floors disrupt?

In today's business model, large shop floors are built around producing a small set of products and are optimized for mass production. Introducing new products to market takes years of building custom-designed shop floors. Fluid scheduling lifts this limitation.  Many IoT-based fluid shop floors can be assembled into one logical shop floor and produce different parts in a short time. Startups can build platforms to assemble these shop floors together quickly. New companies will emerge that build and launch products in a shorter time, disrupting large established corporations. Product pricing will also change significantly, since large investments in large shop floors will no longer be needed.

What can we expect in the future?

The Internet of Things enables shop floors to reach the next level of agility and sophistication by dynamically modifying machine schedules based on the availability of parts at that moment. This eliminates waste even beyond the gains made by Kanban-based scheduling. With IoT-based fluid shop floors, customers will be able to order parts by supplying designs and have those parts delivered within weeks at lower upfront cost. Will there be a marketplace where multiple shop floors come and go as a "shop floor as a service"? Can customers bring new products to market every quarter with such a service? Can shop floors be completely automated and controlled externally?

With the advent of IoT-enabled fluid shop floors, we may witness disruption in business models at a rate we have not seen since the industrial revolution.  What do you think?


January 25, 2019

The Architecture of Choices

By Neel Mani Kumar, Lead Consultant, Infosys Ltd.

Decision making is at the heart of management. The success of a firm depends on the quality of the decisions its management takes. Every success or mishap, every opportunity grabbed or missed, is the result of a decision. Making decisions at the right time and implementing them swiftly are hallmarks of high-performing management. Successful firms implement strategic decisions without delay.

In our fast-changing business environment, management has to meet the expectations of stakeholders, align with rapidly changing government policies in each country where it does business internationally, and cope with the disruptive forces created by competitors' adoption of advancing technology.

Executives typically use a set of management tools to visualize the information needed to make good decisions. These tools help managers address two key questions:

1.       What is right for the enterprise?

2.       Why and what needs to be done?

Successful decision makers align work to a meaningful purpose.  In doing so, they seek to answer the question: "What is right for the enterprise?" Going further, they also provide a clear vision, priorities, and a mission statement (addressing "What needs to be done?"), along with their support to accomplish the mission. Most managers feel that they do this well.  However, in many cases it is difficult for their staff to understand how these key items relate to their day-to-day work.  Placing these items into a model gives a team visibility into the information and context needed to make detailed decisions at each successive level of management. Using a modeling language, teams can verify that their decisions align with strategies at higher levels.

Business Operating Model

The Enterprise Architecture function empowers a firm to align decision making at successive levels of management. Enterprise architects are a rare resource, so it is effective to use them on strategic tasks such as modeling your business operating model and aligning strategic planning. Enterprise architecture work starts with a discussion of the operating model, which aligns governance, processes, and the required inputs with the business's needs.  This discussion promotes effective collaboration between stakeholders. Using architects to describe the business operating model helps the leader identify "What is right for the enterprise?"

Industry standards for modeling the business

Modeling languages like ArchiMate™ from The Open Group and the Decision Model and Notation™ (DMN) from OMG™ can be used to communicate the key elements of business decisions.  These languages add value by grounding discussions in easily readable diagrams.  They also enable the development of a library of reusable decision-making components. These models can be quickly consumed and understood by the different types of stakeholders involved in decision making.

Architectural practices are already familiar with taking the same information (the model) and presenting it to different stakeholders according to their needs (the view). Extending these ideas to decision making is a powerful use of the architecture paradigm. Using the "view" and "model" concepts provides the opportunity to visually select the choices and answers to the questions "What needs to be done?" and "Why does it need to be done?"


Capability Mapping

Transformation can be disruptive for a firm.  Every minute that the staff spend confused about their roles, or learning “reworked” processes is a minute that is not spent generating value.  It is imperative that leaders, especially those interested in leading their company through a transformation, focus on minimizing disruption.  The key here is to realize that while many processes will have to be broken up, the “parts” of the process often remain unchanged.  Breaking up a large process into parts, and resequencing the parts, or focusing transformation efforts on a small set of high value parts, can help the organization minimize disruption.  This allows transformation to be taken on “one bite at a time.”

The parts of a transformation are called "capabilities," and an architect adds value by helping to map them out.  One of the most prominent frameworks for identifying capabilities and resources is the VRIO framework. Another prominent framework for understanding capabilities derives from Michael Porter's Value Chain concept. The key to understanding a value chain is to separate the process into parts that add value and parts that support value.  Each needs to be optimized, but in different ways. Breaking down these parts and mapping them is very useful for driving decisions around reutilization and process improvement.

Diagrams aligning current capabilities with business needs become useful communication aids for enterprise architects. They help decision makers decide which capabilities need improvement and which new capabilities the firm requires. Capability models provide a clear view of investment opportunities and can help a business analyst understand and convert business needs into high-level requirements.  Some IT-centric examples of breaking up larger processes into capabilities are shown in the diagrams below.


Figure 1 IT Value chain


Figure 2 Procurement & Logistic Value Chain
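A capability map of the kind shown in these value chains can be sketched as a simple taxonomy with a "heat map" score per capability. The group names, capability names, and scores below are purely illustrative assumptions, not taken from the figures; the point is how such a structure lets decision makers mechanically surface the capabilities that need improvement.

```python
# A hypothetical capability taxonomy for an IT value chain, with a simple
# heat-map score (1 = weak, 5 = strong) attached to each leaf capability.
# All names and scores are illustrative, not drawn from the article's figures.
capability_map = {
    "Plan": {"Demand Management": 2, "Portfolio Planning": 4},
    "Build": {"Solution Design": 3, "Development": 4, "Testing": 2},
    "Run": {"Incident Management": 5, "Capacity Management": 3},
}

def capabilities_below(threshold, taxonomy):
    """Return (group, capability) pairs scoring below the threshold -
    the investment candidates a capability heat map would highlight."""
    return [(group, cap)
            for group, caps in taxonomy.items()
            for cap, score in caps.items()
            if score < threshold]

print(capabilities_below(3, capability_map))
# [('Plan', 'Demand Management'), ('Build', 'Testing')]
```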

Mitigating Viewpoint difference

Strategic planning often has to make predictions about various business outcomes.  This is especially true when attempting to gauge the expected return on a potential investment.  Different outcomes often have to be weighed against one another and investment priorities have to be aligned. To gauge the “probability of each outcome,” managers start the discussion with multiple internal and external stakeholders. Different people and different teams hold a different point of view about the outcome.

The architectural concepts of "model" and "view" outlined earlier can help peel the shared information apart from the viewpoints.  Modeling can be used to create a view for each stakeholder based on a model that stands as the "single source of truth." This exercise helps decision makers reach the correct decision based on the different viewpoints and views of the stakeholders, and it also helps with stakeholder management.


Figure 3: Point of view disagreement

Modeling for Case-based decision analysis

Another key use of architectural models in decision making is understanding the relationships among the items that influence an outcome. An influence diagram is one of the techniques used in Case-based decision analysis. Specifically, influence relationship modeling was introduced into the ArchiMate™ modeling language to enable Case-based decision analysis. By tapping into the right information sources, tools of this type can help decision makers reach correct decisions even with a limited understanding of the causal business operating model. The points collected during Case-based decision analysis can then feed Case-based reasoning for solving new problems.


Figure 4 Sample Influence Diagram


Enterprise architects are skilled at creating models that support business decisions. Modeling languages can help accelerate the decision-making process and support different decision-making management tools. Models help decision makers engage with the context of a decision, and they can build the buy-in needed to extend the influence of key stakeholders.


April 6, 2018

Design Thinking in Business Capability Modeling

A. Nicklas Malik, Senior Principal Enterprise Architect, Infosys Ltd.


One of the most interesting and difficult challenges of Business Architecture is creating the capability model for an enterprise.  In this article, I'll explore how to use the practices of Design Thinking to support the difficult and sometimes contentious process of creating a business capability model for an enterprise.  First, let's understand the problem a little.


For an enterprise that doesn't have a capability model, developing the first one can be tough.  Fortunately, the efforts of the Business Architecture Guild have started to produce value in the form of Reference Models.  Even with a reference model, the challenges can be substantial.  That is because a capability model is not a foregone conclusion.  There are many ways to frame the capabilities of an organization in a capability taxonomy.  Framing is important.  Framing the capabilities in a particular way can drive conversations (both good and bad).  For example, if we describe different capability groups for Marketing, Sales, and Customer Services, in which group do we put "Customer Record Management"?  Will stakeholders argue about it?  Will someone choose not to take responsibility for their data if the capability is aligned to their job title?  Getting this right may be important in getting key executives to step up to their responsibilities in the organization.


A capability model is supposed to be independent of the politics and processes and structure of an organization, but to be honest, the most effective capability models reflect the needs, ownership, and strategy of the organization in subtle ways.  I've seen capability models with dependency connections, with ownership groupings, and with budgetary groupings, all as "overlays" that are both useful to the planning efforts, and which influence the capability model itself.  It's a complex problem but one that we can begin to solve with Design Thinking.


Design Thinking is an interesting technique that can be used to approach complex problems.  It is a method of creative focus that allows excellent ideas to emerge in a repeatable way, often with conflicting inputs.  Design Thinking has emerged as a model for bringing together many excellent practices for fostering creativity in a results-driven world.

Design thinking makes some basic assumptions: (a) We start without actually knowing what the destination is, (b) we center our solutions around deep empathy for the customer, and (c) we refine our creations through rapid prototyping and iteration.  Design Thinking can be used to design a bicycle, a space ship, a house, a business process, a software package, a vacation, and yes, a Business Capability Model.


In many ways, the techniques of design thinking are well suited for the task of generating a capability model.  In most of the situations I've been aware of, stakeholders for a capability model have never seen one before and have no idea how to use one.  It's tough to assist in designing something that you've never used before.  Consider: if you had never taken an airplane trip and someone asks you to design the perfect passenger cabin for an airplane... how well would you do?  That's a tough challenge, but design thinking can help.

Design thinking does not assume that you have experience with the solution before you start.  As a result, you can be comfortable that your "novice-developed airplane cabin" will at least be a reasonable one, even before you take your first flight.


With design thinking, there are five phases: Empathy, Problem Definition, Ideation, Prototype, and Test (with the last two in a quickly spinning cycle).  So let's apply these five stages to Capability Model Development.




Empathize -- The foremost value of the empathize stage is to put the customer at the center of your work, and remove yourself from it.  Your preconceived notions of the "right way" to build something, or "what something should look like according to Expert X" just gets in the way.  Your customer will describe their "conceptual space" in their own way, and different people will do it differently.  To truly empathize, you have to make sure you are representing the right stakeholders, and that you are actually listening to their issues and concerns.


One thing that I find, often, is that most people have "typical" problems.  If someone has typical problems, their concerns will be well understood.  But there are always outliers -- people who seem to always have unusual problems.  These folks can provide greater insight when you are building your understanding of the problem space, because their problems challenge the status quo.  They don't fit neatly into the box.  Look for these people.  Listen to them.


Empathy in capability modeling means, in my experience, to listen to how a team describes themselves and to capture it their way.  Do they discuss processes and procedures?  Do they discuss assets? Locations? Events?  Information? Documents? Workflows? Your capability model will need to reflect a wide array of stakeholders, so as you move between those stakeholders, don't begin by forcing them into your box.   Step into theirs.


Sketch (literally, with pencil) a simple non-structured diagram that represents their way of thinking of their space.  Yes, eventually you will build a capability model, but don't start with the strict definition of capabilities.  Empathize with where your customers actually live, and what they actually live with. 

I wouldn't expect your initial results to be any more "sensible" than something like the following diagram.




Problem Definition -- How many times, when we get to the end of our efforts, do we look back and say "We should have asked better questions?"  Design Thinking puts this problem right up front.  We believe we understand the stakeholders through our empathy, but before we put the capability model onto paper and start hashing it out, let's be clear about what problem we are trying to solve.


Capability models, in my experience, are excellent tools for planning.  We can manage a portfolio and plan for changes.  We can observe processes and plan for improvements.  We can evaluate readiness and plan for training.  We can find overlaps in application functionality and plan for consolidation.  It's planning.  But not every organization plans the same way, and few organizations have a mature planning process.  So as you build your capability model, think about who will own specific capabilities, how those owners will use their parts of the model to develop those plans, and how those plans will roll together.  Think about the inputs to planning: trends, strategies, changes in the ecosystem, changes in the competition, problems with existing systems, and technical debt. 


Your result needs to be a question that frames the problem you are trying to solve.  As you pose this question to your stakeholders, their reaction will tell you if your question was effective.  Don't be afraid to drop your attachment to specific terms or processes or methods.  Let things flow a little.  Phrase the question in terms of the customer's needs.  One good technique is to use the phrase "How can we ..." in your problem statement.


"How can we frame all the abilities needed by our business model so that we can best plan and coordinate our forward march?"


Please don't use my example as anything more than an example.  The problem statement you create should "feel" like it evolves out of the language, terminology, and culture of the organization itself.


Ideation -- A capability model created by a business architect and thrust upon the organization will be dead on arrival.  That is not a prediction.  That is a foregone conclusion.  How do I know?  I've done it.  School of hard knocks.  Let the stakeholders build their capability model through a series of collaborative sessions.  Ideation is the first step in that process.


The ideation step can use any of a number of techniques to open the stakeholders up to different ways to frame the capability model.  At this stage, you are creating capabilities, so we are applying the first series of constraints on the process.  There are a dozen different ways to frame ideas that do NOT end up with a capability model, but for the sake of this exercise, feel free to write them down and not pursue them.  We need our end result to be constrained to capabilities.  Other stuff will fall out. 


If the company has a process model that they actively use, you can start there.   If they don't have one (or don't actually use the one they have), consider using one of the capability reference models from the Business Architecture Guild (businessarchitectureguild.org) as a starting point.  This is far quicker than starting from scratch.  However, it is only a source of ideas.  Let the team reword, rename, join, split, and shred the starting "thing" any way they want. 


To keep ideation from becoming a long, involved process, I suggest a series of simple exercises to expand the number of possibilities, and then consolidate that list to the most feasible ones.  Then expand again, and consolidate again, with each iteration considering a different aspect of your thinking or understanding. 


An excellent technique is the SCAMPER method, which pushes participants through seven different ways of thinking about the "starting" product to create a new "ending" product.  Those seven ways of thinking are: Substitute, Combine, Adapt, Modify, Put to another use, Eliminate, and Reverse.  There are a number of online resources available if you want to go deeper on using SCAMPER.  Other methods may include brainstorming, worst possible idea, paint by numbers, and many more.  All of these are designed to get creative juices flowing, especially for a group of people, while keeping the results controlled.


Prototype and Test -- Capability models are a unique bird because they represent the abilities needed by an enterprise to achieve a purpose.  For all intents and purposes, you are creating a list.  That list of abilities often exceeds the organization's internal capabilities.  This is why we talk about the capabilities of an "enterprise" and not the capabilities of a "business".  An enterprise may involve many businesses, suppliers, partners, regulators, and even customers in providing the required list of capabilities.


Creating a prototype capability model is complex if done by hand (without a capability modeling tool).  This is because you may have twenty stakeholders, and most of them do NOT want to see the entire organization in the capability model!  They want to see THEIR PART represented in gory detail and everyone else's minimized.  For this reason, you need the ability to prototype a complete model (for the enterprise) but to review segment-level capability models with the stakeholders.  Without a tool of some kind, this can create a great deal of manual effort.  (This is not a problem unique to Design Thinking... it happens with all capability model generation.)
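The "full detail for my part, minimized for everyone else" idea can be made concrete with a tiny data-structure sketch. The capability tree below is a hypothetical two-node example, not a real model; a tool would do this over hundreds of nodes.

```python
# Sketch: derive a segment-level view from one enterprise capability model.
# A stakeholder sees their own branches in detail; other branches collapse.

enterprise_model = {
    "Customer Management": {
        "owner": "Sales",
        "children": {"Customer Onboarding": {"owner": "Sales", "children": {}}},
    },
    "Supply Chain": {
        "owner": "Operations",
        "children": {"Inbound Logistics": {"owner": "Operations", "children": {}}},
    },
}

def segment_view(model, owner):
    """Keep the full branch for capabilities this owner holds; minimize the rest."""
    view = {}
    for name, node in model.items():
        if node["owner"] == owner:
            view[name] = node                                   # gory detail
        else:
            view[name] = {"owner": node["owner"], "children": {}}  # collapsed
    return view

sales_view = segment_view(enterprise_model, "Sales")
print(sales_view["Customer Management"]["children"])  # full detail retained
print(sales_view["Supply Chain"]["children"])         # {} (minimized)
```

The single enterprise model stays the source of truth; each stakeholder review is just a projection of it.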


I've found that the prototype effort actually begins during ideation.  Since we are building a knowledge product, and not a physical product or even a software product, the first prototype is actually developed in the collaborative session to some extent.  It is the synthesis of that prototype with the work done by other stakeholders, in their own ideation processes, that creates the enterprise model. 


Resist the temptation to go dark, work for a while, and spring an enterprise model on the organization.  Work your way up from stakeholders who buy in, showing them models that are domain specific.  Put four or five of the domain specific models onto paper and get feedback before attempting to create the first synthesized model.  Otherwise, one domain will have undue influence over the entire structure of the capability model.


With each prototype, you are producing a new enterprise capability model and a complete refresh of domain-specific models.  Run them past key stakeholders for quick responses.  Remember their needs: this is a planning framework.  Can they use the capability model to develop their plans?  Keep asking the core question. 


When you have sufficient representation across the enterprise to have created the enterprise-wide model, you can circulate that model with the core planning teams in your organization: these teams may go by names like Strategy Development, Organizational Development, PMO, Enterprise Architecture, Finance, and Strategy Execution.




Design thinking is certainly not the only way to design a product, and it is relatively novel in specific areas of organizational and strategic planning.  However, as this example illustrates, design thinking can be applied to purely knowledge-based products like a capability model in a manner that hopefully builds better buy-in for the final result.


And who couldn't use a little more buy-in?


Useful links

Business Architecture, Setting the Record Straight - William Ulrich and Whynde Kuehn -- http://www.businessarchitectureguild.org/resource/resmgr/BusinessArchitectureSettingt.pdf

Design Thinking Blog - Tim Brown, Ideo - https://designthinking.ideo.com/

Design Thinking and the Enterprise - Pramod Prakash Panda, AVP, Infosys https://www.infosys.com/insights/human-potential/pages/design-thinking.aspx

Design Thinking, What is that? - Fast Company, 20 March 2006 https://www.fastcompany.com/919258/design-thinking-what

A guide to the SCAMPER technique for Creative Thinking - Rafiq Elmansy, Designorate http://www.designorate.com/a-guide-to-the-scamper-technique-for-creative-thinking/


March 21, 2018

IT Architecture Principles for Digital Architecture

By Ramkumar Dargha, AVP, Senior Principal Technology Architect

The term 'Digital' can mean many things. However, there are certain key characteristics that define whether an application or a service offering is truly digital. I find it helpful to take an architectural view to capture some of these key characteristics.

Here is my take on some of the key IT architecture principles an application or a service offering should follow.

Principle 1: Online, Multi-channel and Rich User-Centric Experience. An enterprise should offer its services through online, multi-channel interfaces that are rich, intuitive, responsive, easy to use and visually appealing. Separate the UI look and feel from data. Create an omnichannel, multi-device experience with appropriate personalization and multilingual features.

Why? An intuitive, consistent and easy to use interface enhances user experience and improves stickiness.


Principle 2: Service Oriented Architecture. Features and functionality should be available as loosely coupled, self-contained, standards-based and configurable services. Services could be

  • a domain-based service or an aggregation service (aggregating underlying services to provide the right abstractions),
  • a technical service (common technical services like logging, security, etc.), or
  • an integration service or a data service (abstracting underlying data access and management).

These services should follow a granularity (traditional SOA and/or microservices) appropriate to the particular business functionality. Combine this with asynchronous messaging and processing.

Why? Digital systems need to be agile, loosely coupled, ubiquitous and easily scalable. Service-oriented and microservices architectures enable these needs.


Principle 3: API First Approach. When designing services, think about what APIs these services will expose. What is the purpose of those APIs? Who will consume these APIs, and how? Are the APIs too granular, or at the right abstraction level? What standard interfaces (REST, RPC, etc.) will the services expose? Follow API versioning to enable backward compatibility and flexibility.

Why? This principle helps find the right abstraction level for services. It avoids redundant or unusable services and chatty interactions.
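The API versioning point above can be shown with a minimal sketch: old consumers keep working against v1 while v2 changes the response shape. The endpoints, payloads and the dictionary-based router are hypothetical illustrations, not a framework recommendation.

```python
# Sketch of backward-compatible API versioning. /v1 is frozen for existing
# consumers; /v2 introduces a new response shape for new consumers.

def get_customer_v1(customer_id):
    # Original contract: flat name field.
    return {"id": customer_id, "name": "Ada Lovelace"}

def get_customer_v2(customer_id):
    # New contract: structured name. v1 stays untouched.
    return {"id": customer_id, "name": {"first": "Ada", "last": "Lovelace"}}

ROUTES = {
    ("GET", "/v1/customers"): get_customer_v1,
    ("GET", "/v2/customers"): get_customer_v2,
}

def dispatch(method, path, customer_id):
    handler = ROUTES.get((method, path))
    if handler is None:
        return {"error": "not found"}
    return handler(customer_id)

print(dispatch("GET", "/v1/customers", 7))  # old shape, backward compatible
print(dispatch("GET", "/v2/customers", 7))  # new shape for new consumers
```

The same idea applies whatever the transport: the version in the path (or header) is what lets the implementation evolve without breaking existing consumers.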


Principle 4: Leverage Data Analytics and Insights for Differentiation. Leverage data analytics and insights for process contextualization, personalized campaigns, targeting, marketing automation and behavior-based segmentation. Adopt the right combination of traditional and big data management approaches (a polyglot approach).

Why? The availability of diverse data sets (big, traditional, streaming, structured, and unstructured) and of data analytics creates the opportunity to use analytics-driven insights for differentiation and customer contextualization, resulting in the ability to personalize and contextualize.

Principle 5: Contextual Awareness. Acquire and leverage user and context data, including user preferences, location, etc.

Why? This helps provide context-based content, personalized interactions and services through the application of data analytics and insights. It improves customer intimacy and loyalty.


Principle 6: Secure by Design. Ensure security is addressed end to end and considered upfront. This includes security considerations across multiple dimensions: authentication, multi-factor authentication, key management, single sign-on (SSO), authorization, auditing, logging, and encryption of data in transit and at rest.

Why? Secure access enhances users' confidence in adopting digital online channels. Inadequate security features and compliance issues result in lost customers and heavy penalties.
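One small, concrete instance of "secure by design" is protecting data integrity in transit at the application layer rather than trusting the network. The sketch below uses only the Python standard library; the secret, payload and function names are hypothetical, and a production system would use vetted mechanisms (TLS, a standard token format, a key-management service) rather than hand-rolled code.

```python
# Illustration: the application itself signs and verifies a payload so that
# tampering in transit is detectable, independent of any firewall.

import hmac
import hashlib
import secrets

SECRET_KEY = secrets.token_bytes(32)   # in practice, from key management

def sign(payload: bytes) -> str:
    """Produce a MAC over the payload."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(payload), signature)

token = b"user=42;role=customer"
sig = sign(token)
print(verify(token, sig))                  # True
print(verify(b"user=42;role=admin", sig))  # False: tampering detected
```

The design point is the placement of the check: the service validates its own inputs instead of assuming the perimeter did.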


Principle 7: Cloud First Approach. Think cloud first. This could be a private cloud (hosted using commercial stacks or OpenStack), a public cloud (AWS, Azure, etc.), or a combination.

Why? Digital systems are expected to be ubiquitous across geographies and locations, and to be agile and flexible. Cloud-based principles and systems are a prerequisite for IT automation, infrastructure as code and agile approaches like DevOps. Cloud-based services and deployments enable the flexibility, agility, scalability and performance needed to deliver services.


Principle 8: DevOps for Agility. Adopt DevOps as an enabler of agile development and deployment for digital systems. DevOps is a combination of continuous integration (including build management, test management and automation), continuous delivery (including environment management and deployment management), infrastructure as code and an iterative development approach.

Why? Quick time to market and agility are key tenets of Digital systems. DevOps approach enables these.


Principle 9: Non-Functional Requirements (NFR) Considerations. Give due consideration to all non-functional requirements (NFRs, or quality-of-service parameters) and design for them through the entire development cycle. NFRs include high availability (HA), disaster recovery (DR), scalability, reusability, maintainability, localization, configurability, security and compliance needs.

Why? Digital systems are required to be mission critical. Operating a digital enterprise requires industrial-grade, highly available systems that operate 24/7 with minimal support. For example, a scalable architecture should be based on scale-out rather than scale-up mechanisms.

Did I miss any principles that your organization focuses on?  Let me know at ramkumar_dargha@infosys.com.

January 16, 2018

Adoption Strategies for Cloud Native and Cloud Hosted Applications

Author: Ramkumar Krishnamurthy Dargha
            AVP, Senior Principal Technology Architect

Cloud technologies have been at the forefront for nearly a decade now. Many enterprises have adopted cloud as one of their key technology strategies, and in today's world no IT strategy discussion happens without cloud technologies in the mix. Should an enterprise go for a cloud native application strategy or a cloud hosted application strategy? If you are grappling with such questions, read on.

Cloud hosted applications and cloud native applications: what are they?

Cloud hosted applications are those which are found or made suitable to be 'in the cloud', so that enterprises can take advantage of the underlying cloud infrastructure. They have the following characteristics:

  • Hosted on standard platforms: They run on non-proprietary, standard platforms: standard UNIX, Linux or Windows. These applications are remediated or refactored to make them suitable to move or migrate to cloud infrastructure; some may be able to move as-is.

  • Hosted on an on-demand, as-a-service cloud infrastructure: They run on standard cloud infrastructure offered by cloud vendors (public clouds) or on private cloud infrastructure. They leverage the cloud infrastructure services and features provided by the cloud vendor for security, high availability, reliability and other non-functional requirements.

Cloud native applications, on the other hand, are those which are designed 'for the cloud'. These applications are designed in such a way as to derive the best overall advantage from cloud technologies.

In addition to the characteristics of cloud hosted applications, cloud native applications display certain unique characteristics:

  • Services based: Cloud native applications are services based. They use service-oriented architectures (SOA) and microservice-based architectures. Such architectures make cloud native applications loosely coupled and self-contained. They also make cloud native applications independently deployable and portable (able to move around). These characteristics let cloud native applications integrate more easily with other services and applications, which is important in multi-cloud, multi-vendor environments. They are also key prerequisites for the auto-scalability needs of cloud applications.
  • Container based: Containers encapsulate specific components of an application, provisioned with only the minimal OS resources needed to do their job. Virtual machines (VMs) encapsulate a guest OS, whereas containers reuse the host OS and encapsulate only the application logic, binaries, libraries and configuration required for the application to run. This makes containers lightweight and independently deployable. The Docker engine is one example of such a container engine.

    In addition, we need a way to orchestrate the multiple components encapsulated in separate containers to form one holistic application. This is where container clusters come into play: Docker Swarm and Kubernetes are popular container orchestration engines that do this job. Microservices need not run on containers; they can run on traditional virtual servers. But microservices and containers bring synergies together: while microservices enable loosely coupled application architectures, containers enable such applications to be deployed and moved seamlessly across multiple cloud infrastructures. Thus microservices together with containerization enable agility and flexibility.
  • API first: APIs are related to the integration characteristics of microservices. When we say API first, we mean: before implementing a microservice or application, think about how and for what purpose it will be used or consumed. Are you duplicating the efforts of other developers who may be developing the same functionality exposed through similar APIs? Do you see a specific need for the functionality exposed through those APIs? Your implementation of a microservice or application may change, but that should not change your APIs. If you really must change the APIs, give them different versions.

    Though APIs are not a new architectural concept, an API first strategy is especially important in the cloud native context. The reason is that if you want to make your applications truly seamless, loosely coupled, reusable and self-contained, not only the 'how' part of the design (satisfied by a microservices-based implementation) but also the 'what' part (the APIs) is crucial. APIs should adhere to established standards (e.g., REST over HTTP) for seamless consumption.
  • Security: Security is important, cloud native or otherwise. However, cloud native applications place specific focus on security within the application. What does that mean? Cloud native applications do not assume they are secure just because they reside behind a firewall. They build security into the application as well: application-specific authentication and authorization, ACLs, security controls, data security for data at rest and in transit, the latest and most stringent encryption algorithms, multi-factor authentication mechanisms and so on.

There are additional design principles that cloud native applications are expected to follow, for example stateless rather than stateful design, and design for failure. One good reference for these additional design principles is an AWS white paper.
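The "implementation may change, but your APIs should not" idea above can be sketched as a stable contract with swappable implementations. All class and method names here are hypothetical; the point is that the consumer depends only on the contract, so the internals can be replaced without touching consumers.

```python
# Sketch: a stable API contract (the 'what') with two interchangeable
# implementations (the 'how'). Consumers code against the contract only.

from abc import ABC, abstractmethod

class CustomerAPI(ABC):
    """The stable contract consumers depend on."""
    @abstractmethod
    def get_customer(self, customer_id: int) -> dict: ...

class InMemoryCustomerService(CustomerAPI):
    def __init__(self):
        self._db = {1: {"id": 1, "name": "Ada"}}
    def get_customer(self, customer_id):
        return self._db[customer_id]

class CachedCustomerService(CustomerAPI):
    """A later re-implementation; the API itself is unchanged."""
    def __init__(self, backend: CustomerAPI):
        self._backend, self._cache = backend, {}
    def get_customer(self, customer_id):
        if customer_id not in self._cache:
            self._cache[customer_id] = self._backend.get_customer(customer_id)
        return self._cache[customer_id]

def consumer(api: CustomerAPI):
    # The consumer never knows (or cares) which implementation it got.
    return api.get_customer(1)["name"]

print(consumer(InMemoryCustomerService()))                         # Ada
print(consumer(CachedCustomerService(InMemoryCustomerService())))  # Ada
```

In a cloud native system the contract would be an HTTP API rather than a Python class, but the decoupling discipline is the same.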

Cloud Hosted applications or Cloud Native applications?

Should enterprises go for cloud native applications or cloud hosted applications? Here are some points to consider:

  1. For applications with higher life expectancy, a cloud native application strategy is better suited.

  2. For applications expected to retire soon or be replaced by better alternatives in the near term, a cloud hosted strategy is more suitable.

  3. For applications on legacy technologies or platforms (like mainframes), the effort and cost required to make them cloud native may be prohibitive and/or risky. Such candidates may be better suited to remain in their existing environment. If there is indeed an attractive business case to re-engineer those applications and take them off legacy platforms, then adopt a cloud native application strategy.

  4. For applications expected to undergo frequent changes, a cloud native application strategy suits better.

  5. For applications that will go through an agile development or DevOps process, a cloud native application strategy suits better.

  6. For any new applications being freshly developed, a cloud native application strategy suits better.

  7. When an enterprise is moving applications to the cloud at large scale, it may be better off adopting a two-phase approach to mitigate the risks of such a large transformation. In phase one, move the applications that are suitable to be hosted on the cloud; the enterprise benefits from the cloud infrastructure (on-demand provisioning, spend flexibility, etc.) right away. Once the applications stabilize in the cloud hosted environment, adopt a cloud native application strategy in phase two.
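The decision points above can be collapsed into a rough rule of thumb. This is only a sketch of the heuristics, not a formal decision model; the attribute names are hypothetical, and point 7 (the phased approach) is represented simply as the default.

```python
# A rough encoding of the cloud native vs. cloud hosted decision points.

def suggest_strategy(app):
    """Return 'cloud native', 'cloud hosted', or 'stay put' for an application."""
    if app.get("legacy_platform") and not app.get("reengineering_case"):
        return "stay put"            # point 3: prohibitive cost/risk
    if app.get("retiring_soon"):
        return "cloud hosted"        # point 2
    if (app.get("new_build") or app.get("frequent_changes")
            or app.get("agile_devops") or app.get("long_lived")):
        return "cloud native"        # points 1, 4, 5, 6
    return "cloud hosted"            # point 7: default first phase

print(suggest_strategy({"new_build": True}))        # cloud native
print(suggest_strategy({"retiring_soon": True}))    # cloud hosted
print(suggest_strategy({"legacy_platform": True}))  # stay put
```

In practice these judgments involve business cases and risk appetite, so treat this as a checklist, not an algorithm.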

While a cloud native application strategy is attractive in the long term, enterprises will have to use a combination of cloud native and cloud hosted application strategies, as per the specific needs and circumstances listed above.

Any comments or suggestions? Let me know.

October 28, 2017

Purposeful Enterprise -- some thoughts on shaping the enterprise of the future

Author: Sarang Shah, Principal Consultant

We are in the midst of an exciting stage in the evolution of the modern economy, where accelerating technological change, a highly networked world, demographic shifts and rapid urbanization are leading to a disruption that is not ordinary [1]. The effects of these disruptive changes are impacting the prime movers of our modern economy: businesses and corporations. In this blog post, I would like to share some of my thoughts and questions about the future of these prime movers.

Let us take a step back and talk about the corporation itself. What is a corporation? What is its relationship with the market? What determines its boundary, size and scope? Economists attempt to answer these questions using various approaches; I would like to specifically point out the idea of the cost of market transactions, or transaction costs, as described in Ronald Coase's seminal work 'The Nature of the Firm'. Coase points out that 'people begin to organize their production in firms when the transaction cost of coordinating production through the market exchange, given imperfect information, is greater than within the firm', as illustrated in the diagram below. [2]


Recent technological advances like mobile, cloud, social media, the internet of things, augmented reality, blockchain and many more are causing disintermediation and dematerialization at an unprecedented speed and scale. These technologies directly decrease the transaction costs mentioned above, and hence they influence the nature of the corporation. We see these changes manifesting themselves in new digital business models, the unbundling of corporations and the redrawing of industry boundaries: a mobile company provides payment services, an e-commerce retailer provides credit facilities, and so on.

Along with the technological changes, we are also seeing demographic and behavioral shifts in our economy. For instance, today's customers are more demanding in terms of the value they get from a product or service than in the past, as technology gives them easier access and the ability to consult and compare various products and services in the market. In fact, regulations are also promoting behaviors that allow customers more choice, e.g. the sharing of payments data by banks (Open Banking, PSD2) or mobile phone number portability across network carriers. The same demographic and behavioral shifts that affect customers also influence the staff employed by the enterprise. Large parts of the workforce are now digital natives, who have access to information and are networked like never before. I believe these shifts impact the way the enterprise functions and is architected.

We are already seeing these shifts impacting the way enterprises function: enterprises that empathize with their customers and put them at the center more than ever before; enterprises that understand that taking a long view on capital may be more beneficial for all stakeholders; enterprises that are responsible towards the social and natural ecosystems they operate in and take a circular-economy approach to the future; and enterprises that understand that in the future humans and intelligent machines will work together collaboratively.

These changes lead us to ask some fundamental questions: What will enterprises look like in the future? How will enterprises transform and adapt under volatile, uncertain, complex and ambiguous conditions? How should enterprise processes and policies be designed for digital natives and the gig economy? How should enterprise ethics evolve as intelligent machines become integral to the enterprise? And many more.

I believe that a holistic and systemic perspective is required to shape the purposeful enterprises of the future, and we as enterprise architects have a unique opportunity to provide it. My colleagues at Infosys and I will write more about this in future blogs here.


I would like to thank A. Nick Malik & Steven Schilders for providing their suggestions for this post.



[1] No Ordinary Disruption: The Four Global Forces Breaking All the Trends by Dobbs, R. and Manyika, J.

[2] The nature of the firm (http)

[3] The nexus of forces is creating the digital business, Dec 2014, Gartner (http)

[4] Unbundling the corporation, Mar 1999, Harvard Business Review (http)

[5] The self-tuning enterprise, June 2015, Harvard Business Review (http)



The terms 'business', 'company', 'corporation', 'enterprise' & 'firm' have been used interchangeably. The primary intent of this blog is for-profit enterprises, though some of the ideas described above are applicable to other types of enterprises also.

October 21, 2017

The benefits of leveraging information-centric enterprise architecture - Part 3

Dr. Steven Schilders
AVP and Senior Principal Consultant, Enterprise Architecture
Marie-Michelle Strah, Ph.D.
Senior Principal Consultant, Enterprise Architecture

Continuing our three-blog series on information-centric architecture, this blog highlights the benefits of the data-first approach. While explaining how this approach drives agility, we want to emphasize that these blogs do not advocate a complete implementation of information-centric architecture. Rather, we are presenting an alternate view on the two most prevalent architecture paradigms.

In Part 1 and Part 2 of this series, we explored how organizations typically implement systems based on business capabilities rather than data. Such an approach invariably creates extreme data segmentation because system capabilities dictate what data is stored, how many copies are stored and how it is accessed. In today's age, no organization can succeed with fragmented data as data and its relationships - both direct and indirect - are the lifeblood of an organization.

Integration challenges in data warehousing solutions

Data warehousing solutions are quite popular for data integration. However, these solutions involve lengthy processing, making it difficult to forge business-critical data connections and thereby diminishing the value of data. Further, data warehousing approaches, and the assigned 'data architects', become tied to vendor data models. We use the term data architect loosely here: invariably, these architects behave as vendor-specific master data management (MDM) or enterprise data warehouse (EDW) specialists rather than actual 'enterprise information architects'. Needless to say, this type of centralized, hierarchical approach nullifies the benefits that indirect data relationships could unlock through artificial intelligence (AI) and machine learning.

To make real-time decisions and scale quickly in highly competitive markets, you need to transform your enterprise into a hyper-connected and composable organization. The danger of delayed decisions cannot be overstated in such an environment. To give you an idea of how important this is, we have put together a graph that illustrates the extent of value lost when there is a delay between a business event and the action taken.


Despite these acute disadvantages, application data architecture is often prioritized over enterprise information architecture. In some cases, this is because vendor-provided platforms and COTS products pre-determine data models and data access. In other cases, capability-based architectures that claim to represent business capabilities are actually application or technical architectures that collapse business capabilities. For example, consider how ERP systems tend to represent either finance, accounts payable (AP) or human capital capabilities.

This traditional approach exponentially delays the delivery of business insights and decision-making because data must be collected and copied across silos to get actionable information. Further, point-to-point integration across multiple applications with disparate data architectures becomes an effort-intensive process for enterprise architecture as well as data teams. Finally, developing and maintaining these brittle, tightly-coupled architectures exacerbates the delay in the decision-to-value cycle.

Now, let us see how information-centric architecture unlocks value from hidden data to enable business-as-a-service capabilities in digital ecosystems.

Step 1: Integrate data across the organization

First, organizations must integrate data whether it resides in commercial-off-the-shelf (COTS) products, custom applications or microservices. In our earlier blog, we had proposed a layered information architecture approach (see figure below). Here, information architecture is not tied to either application or platform architectures that prioritize technical architecture. Instead, it lays the foundation for composable architecture by leveraging a hub model.



Information Centric EA: Layered Information Architecture


Step 3: Use fit-for-purpose data hub models to gain business-specific insights


Our previous blog also illustrated how information-centric architecture can be used in COTS as well as custom-built applications. Here is how the data integration hub architecture works in both cases (see figure below). The data hubs provide representations of data that are optimized for the specific needs of the business. For example, key-based data is leveraged for key-based entity relationships, graph-based data is used to analyze complex interdependencies, time-series-based data is used for sequential analysis, search-based data can be used for complex queries, and so on. Thus, information-centric enterprise architecture reduces the decision-to-value curve because data is grouped contextually and data hubs provide the relevant data attributes in a form that optimizes value creation.
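The fit-for-purpose idea above can be illustrated with a toy projection: the same core records feed a key-based representation (for entity lookups) and a time-series representation (for sequential analysis). The event records and field names are hypothetical; real hubs would be backed by purpose-built stores, not in-memory structures.

```python
# Sketch: one core data set projected into two fit-for-purpose hub views.

from collections import defaultdict

core_events = [
    {"ts": 3, "customer": "C1", "amount": 40},
    {"ts": 1, "customer": "C2", "amount": 15},
    {"ts": 2, "customer": "C1", "amount": 25},
]

# Key-based hub: optimized for key-based entity-relationship lookups.
key_hub = defaultdict(list)
for event in core_events:
    key_hub[event["customer"]].append(event)

# Time-series hub: the same data, optimized for sequential analysis.
time_series_hub = sorted(core_events, key=lambda e: e["ts"])

print([e["amount"] for e in key_hub["C1"]])  # all of C1's events
print([e["ts"] for e in time_series_hub])    # events in time order
```

The core data is grouped once and projected many ways, which is exactly how the data hubs shorten the decision-to-value curve: each consumer gets the representation its analysis needs.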



Step 4: Apply AI and BI on insights to achieve decision-as-a-service


Data integration hubs and contextual data grouping allow enterprises to design business intelligence (BI) capabilities and machine learning systems that merge programmed intelligence and AI. Further, BI capabilities can extend the base data with specific data requirements needed for analytics. They are exposed through BI services or decision-as-a-service executed for a consumer-specific data context. The key aspect of this design is that business intelligence capabilities and services can be created, modified or removed without impacting the core and contextual data assets.


The end result?

  • Transitioning from traditional data warehouses to a fit-for-purpose model of multiple data hubs helps organizations leverage traditional BI capabilities and next-generation AI and machine learning
  • Prioritizing layered information-centric enterprise architecture makes data and decision-making organizational and architectural priorities


Simply put, adopting a strategic model instead of a retrofit model enables AI, faster access to enterprise insights and real-time decision-making. In an era where data is king, these are the key capabilities that enterprises need to become service-enabled.


Keep watching this space for enterprise-level case studies and best practices of information-centric design in microservices, AI and data science.


In case you missed the previous blogs in this series, here are the links:

Part 1

Part 2

The differences between data-centric and capability-centric architecture - Part 2

Dr. Steven Schilders
AVP and Senior Principal Consultant, Enterprise Architecture
Marie-Michelle Strah, Ph.D.
Senior Principal Consultant, Enterprise Architecture

Capability-centric architecture and information-centric architecture are the two most prevalent models in today's organizations.

In Part 1 of this three-blog series, we outlined an information-centric approach to architecture, which places business data at the core by decoupling data from application and platform architecture. In this blog, we take a deep dive into some of the concepts mentioned in the previous blog. First, we describe the key differences between capability-centric and data-centric approaches to architecture. We also explain how each influences the design of commercial-off-the-shelf (COTS) and custom-built solutions. To learn how to expose reusable business capabilities as services, check out the final blog in this series.

First, a quick summary of our previous blog: typically, organizations buy COTS solutions that match their business capabilities, irrespective of how data is stored, accessed or made available for reuse. Such solutions are often marketed as capability-centric architecture. In reality, a COTS solution should be able to ingest external data and extract internal business data. Historically, both processes used batch processing; in today's age of services, there is greater focus on run-time integration through APIs. Nevertheless, data primarily remains within the domain of the COTS application.

COTS applications: Capability-centric versus information-centric approaches

In information-centric enterprise architecture, the above model is inverted. The COTS solution must integrate across the enterprise information landscape in a solution- and vendor-agnostic manner. This approach also decouples data architecture from enterprise information architecture, which are often collapsed together when application-specific data architectures are reinforced.

The following illustration and table will clarify the differences between these two architecture models.

Capability-centric versus information-centric architecture for a COTS application


Differences between capability-centric and information-centric architecture for a COTS application



| | Capability-centric architecture | Information-centric architecture |
|---|---|---|
| Data architecture | A black box that is designed and optimized to support application needs | Remains a black box; all application-agnostic data is externalized and synchronized through event-based integration |
| Relation between application-specific and agnostic data | | Decoupled from each other when externalized |
| Data taxonomy | Application-specific | Externalized; depends on the business domain and/or functional context |
| Exposing data | Done through application APIs or data replication | Done through data services using APIs or data integration/replication adaptors |
| Removing/replacing applications | Cannot be done without affecting the data architecture | Causes minimal impact to the enterprise information architecture |
| Data consumption/extension | Data is not readily available | Data is readily available |
| Interaction between data and applications | The application is the source and master of its own data | Applications act as systems-of-record by supporting all create-retrieve-update-delete-search (CRUDS) capabilities; data services act as systems-of-reference with only read and search capabilities (this may change in a multi-system architecture) |
| Using external data sources | Cannot act on an externalized data source. Application-agnostic data must be replicated into the internal store before process execution. While some applications can call externalized data sources during execution, the data still needs to be translated and transformed into the application taxonomy first. Such integration creates performance issues and is unsuitable for high-volume, performance-bound transactions | Applications can support inbound and outbound data synchronization through event-based integration. In the illustration, integration is one-way as no other application manipulates the data |
| How applications access data | Several applications use the same data, resulting in data proliferation, multiple access points for the same data and no single, accurate source of truth | Data is accessed and shared through the system-of-reference data services. Applications are redesigned to support single-application mastering, where possible, by restricting access (read-and-search only) or by removing capabilities within the application |
| Data integration | Requires significant effort to move and synchronize data between various applications | Implementation focuses on integrating data with a reusable core through services |
| Real-time analytics | Limited to in-built application capabilities; can be applied only once data is moved to the data warehouse | Data is exposed for real-time analytics and AI-based processing |
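The system-of-record versus system-of-reference split in the table can be sketched in a few lines of Python. This is an illustrative sketch only, assuming in-memory stores and a simple callback-based event channel; the class names and event shape are our assumptions, not any vendor's API.

```python
class CustomerRecordService:
    """System-of-record: owns customer data and supports full CRUDS."""

    def __init__(self):
        self._store = {}
        self._subscribers = []  # targets of event-based integration

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def create(self, customer_id, data):
        self._store[customer_id] = data
        self._publish("created", customer_id, data)

    def update(self, customer_id, data):
        self._store[customer_id].update(data)
        self._publish("updated", customer_id, self._store[customer_id])

    def _publish(self, event, customer_id, data):
        # Push a copy of the changed record to every subscriber
        for cb in self._subscribers:
            cb({"event": event, "id": customer_id, "data": dict(data)})


class CustomerReferenceService:
    """System-of-reference: read and search only, kept in sync via events."""

    def __init__(self, record_service):
        self._view = {}
        record_service.subscribe(self._on_event)

    def _on_event(self, event):
        self._view[event["id"]] = event["data"]

    def read(self, customer_id):
        return self._view.get(customer_id)

    def search(self, **criteria):
        return [c for c in self._view.values()
                if all(c.get(k) == v for k, v in criteria.items())]
```

Note that the reference service never writes back to the record service; the one-way event flow mirrors the single-application mastering described in the table.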


At first glance, an information-centric implementation appears more complex, doesn't it? But consider its advantages over capability-centric architecture when decommissioning applications or building new capabilities that leverage data.

Custom-built applications: Capability-centric versus information-centric approaches

Unfortunately, COTS solutions have distorted organizational priorities by prioritizing capabilities over information architecture and data reuse. As a result, custom-built solutions have followed the COTS architecture model whereby business capabilities are built over an application-specific data repository (we will discuss service-based architecture in the final blog).


Capability-centric versus information-centric architecture for a custom-built application



In the information-centric approach, the key difference between a COTS implementation and a custom-built application is how data storage is controlled and how data interaction is designed. Custom-built solutions can be designed to use externalized data stores and integration services.

Ever wondered why there is so much recent emphasis on enabling microservices? This is because more and more enterprises are realizing the value of an information-first approach. Such an approach simplifies the design of enterprise architecture, making it easier to execute digital strategies. As custom applications, microservices have complete control over data as well as business capabilities, leading to greater agility.
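The information-first idea behind microservices can be sketched as follows: one service owns both its domain data and the capability over it, and other capabilities are composed through its narrow API instead of copying its data. All names here are illustrative assumptions.

```python
class OrderService:
    """Owns the 'order' domain: data and business capability together."""

    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def place_order(self, customer_id, amount):
        order = {"id": self._next_id, "customer": customer_id,
                 "amount": amount, "status": "placed"}
        self._orders[self._next_id] = order
        self._next_id += 1
        return order["id"]

    # Data API: other services read orders here instead of replicating them.
    def get_order(self, order_id):
        return self._orders.get(order_id)


class InvoicingService:
    """A separate capability composed from the order service's data API."""

    def __init__(self, orders):
        self._orders = orders

    def invoice_total(self, order_ids):
        return sum(self._orders.get_order(i)["amount"] for i in order_ids)
```

Because invoicing never stores its own copy of order data, decommissioning or rewriting either service leaves the other's data untouched, which is the agility claim made above.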

Capability-first versus Information-first architecture for custom-built applications



| | Capability-first architecture | Data-first architecture |
|---|---|---|
| Data architecture | A white box that is specifically designed and optimized to support application needs | Application-agnostic data is externalized and integrated with the application through data service APIs; application-specific data acts as an extension to the externalized data and can be designed and optimized to support application needs |
| Relation between application-specific and agnostic data | | Decoupled from each other when externalized |
| Data taxonomy | Application-specific | Externalized; depends on the business domain and/or functional context and can be translated into the application-specific taxonomy, if needed |
| Exposing data | Done through application APIs or data replication | Done through data services using APIs or data integration/replication adaptors |
| Removing/replacing applications | Significantly impacts the data architecture | Has minimal impact on the data architecture |
| Data consumption/extension | Data is not readily available | Data is readily available |
| Interaction between data and applications | The application is the source and master of its own data | Data services are the system-of-record for all application-agnostic data |
| Using external data sources | Application-agnostic data may need to be copied into the internal store before process execution | Data services interact with system-of-record data; they can integrate externalized data directly |
| How applications access data | Several applications use the same data, resulting in data proliferation, multiple access points for the same data and no single, accurate source of truth | Data service APIs access and share data |
| Data integration | Requires significant effort to move and synchronize data between various applications | Implementation focuses on integrating data with a reusable core through services |
| Real-time analytics | Limited to in-built application capabilities; can be applied only once data is moved to the data warehouse | Data is exposed for real-time analytics and AI-based processing |
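The taxonomy row above can be made concrete with a small sketch: an externalized, domain-level taxonomy is mapped to an application-specific one at the integration boundary, and back again when publishing changes to the core. The field names and the mapping are illustrative assumptions, not a real product schema.

```python
# Mapping between a domain-level taxonomy and a hypothetical
# application-specific taxonomy (all names are assumptions).
DOMAIN_TO_APP = {
    "customer_name": "cust_nm",
    "loyalty_tier": "tier_cd",
    "email_address": "email",
}

def to_application_taxonomy(domain_record, mapping=DOMAIN_TO_APP):
    """Translate domain field names into the application's field names."""
    return {app_field: domain_record[domain_field]
            for domain_field, app_field in mapping.items()
            if domain_field in domain_record}

def to_domain_taxonomy(app_record, mapping=DOMAIN_TO_APP):
    """Inverse translation, used when publishing data back to the core."""
    inverse = {app: dom for dom, app in mapping.items()}
    return {inverse[f]: v for f, v in app_record.items() if f in inverse}
```

Keeping the mapping at the boundary, rather than baking the application's field names into the shared data model, is what lets the application be replaced without disturbing the domain taxonomy.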


Release data for AI-driven processing

Now, let us take a look at the target state of the capability-centric and information-centric approaches when we combine both types of applications. The main difference between the two architectures is how data is consolidated and constructed, which leads to varying levels of business agility. On the one hand, data-centric architecture consolidates data instantly, providing market differentiation. For example, enterprises can process business intelligence (BI) or artificial intelligence (AI) logic in real-time using the application, the user context and the complete data set. On the other hand, capability-centric architecture requires data integration and mediation/post-processing for data consolidation, making it nearly impossible to leverage BI/AI-based processing capabilities.
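A minimal sketch of the real-time BI point, assuming a hypothetical `order_placed` event feed from the consolidated data services: an analytics view updates as events arrive, instead of waiting for a batch load into the data warehouse. The event shape and names are our assumptions.

```python
class RealTimeRevenueView:
    """Maintains a live revenue-per-customer aggregate from order events."""

    def __init__(self):
        self.revenue_by_customer = {}

    def on_event(self, event):
        # Called by the event-based integration layer as changes occur
        if event["type"] == "order_placed":
            cust = event["customer"]
            self.revenue_by_customer[cust] = (
                self.revenue_by_customer.get(cust, 0) + event["amount"])

    def top_customer(self):
        """Answer a BI question against always-current data."""
        if not self.revenue_by_customer:
            return None
        return max(self.revenue_by_customer,
                   key=self.revenue_by_customer.get)
```

In a capability-centric landscape, the same question can typically only be answered after the nightly warehouse load; here the answer is current as of the last event.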

Combined view of COTS and custom-built applications for capability-centric approach and information-centric approach



So, if you want a sophisticated and data-driven digital strategy, adopting information-centric architecture is the way forward. Interestingly, many organizations know this and are planning strategic initiatives to rectify issues arising from capability-centric architecture. However, actually inverting existing systems into data-centric ones can be challenging. While some may sidestep this process by wrapping existing systems with an additional layer of 'digital concrete' such as services and/or APIs, this will inevitably hinder agility and the ability to proactively compete in the market. 

Discover how information-centric architecture delivers agility for service-enabled enterprises in our next blog

Why data should drive your enterprise architecture model - Part 1

Dr. Steven Schilders
AVP and Senior Principal Consultant, Enterprise Architecture
Marie-Michelle Strah, Ph.D.
Senior Principal Consultant, Enterprise Architecture

Information-centric enterprise architecture is about putting data first during assessment, strategy, planning, and transformation. To create a 'composable enterprise', data must be mobile, local and global across departments, partners and joint ventures. This is important if enterprises are to liberate data to improve insight and develop disruptive and differentiated services. 

To achieve this, enterprises must first decouple business data from application and platform architecture. Decoupling business data gives organizations flexibility as well as valuable insights, which are very important during digital transformation, mergers, acquisitions, and divestiture journeys.

A case of the tail wagging the dog

Today, most enterprise architecture follows a variation of The Open Group Architecture Framework (TOGAF) model (business/information/technology architectures). Here, strategic planning and sourcing recommendations for application portfolios are based on decision-making flows such as buy-before-build or reuse-before-buy-before-build. In the TOGAF hierarchy, organizations are meant to define business capabilities, align information management strategies to the business and choose application portfolios that support those strategies.

However, any experienced enterprise architect will tell you that this is not what actually happens. In reality, most decisions are driven by technical considerations such as applications and platforms rather than by the business itself. Invariably, applications and platforms are retro-fitted to meet business needs. Put differently, it is a proverbial case of 'the tail wagging the dog'.

For example, portfolio rationalization is often marketed as business capability-driven. In practice, the focus is on purchasing commercial-off-the-shelf (COTS) products or components with out-of-the-box (OOTB) business or technical capabilities (or both) that can meet predefined business and technical requirements. To minimize extreme customization when choosing a COTS product, most organizations will actively seek OOTB capabilities that fit nearly 85% of their requirements and offer vendor-supported configuration changes.

In case there are gaps in the OOTB capabilities, the COTS solution will undergo some level of customization, which may or may not be recommended or even supported by the vendor. The organization may also build custom solutions, either in the beginning or over time, to support or enhance the COTS solution. In our experience, architects have traditionally designed custom-built solutions around business (or technical) capabilities - whichever is the priority. 

Tipping the scales for business vs. technology

Clearly, capability-driven enterprise architecture has an advantage over technology-driven approaches. It aligns business with IT and focuses on business processes and capabilities. However, it is also inextricably linked to specific applications and their inherent legacy architecture. In case you haven't noticed already, there is irony here: platform and cloud-first approaches often reinforce application architecture instead of business capabilities! This is because the business has to adopt COTS data models and storage options for their data. As a result, business capabilities are collapsed into the vendor or COTS applications rather than standing alone.

Let's see why this is alarming. While business requirements and their associated capabilities may change over time, core organizational data does not. Of course, technology also changes, thereby impacting the longevity of COTS and custom-built solutions, but let us ignore this for now. Thus, the common denominator in using COTS as well as custom capability-driven solutions is that information architecture is not a top priority during design. When data models become tied to vendor-provided models, they are unable to reflect organizational enterprise data models (if they exist) or offer flexible and adaptive information capabilities.

The dog wags its tail

Now, data is the most valuable asset that an organization can leverage to achieve market differentiation and success. No matter the condition - be it enhanced, embellished or partly redundant - core data is the lifeblood of any organization. In fact, one can argue that an organization could still operate if it lost all its applications but retained access to its data. The converse is dangerously true, too: without data, an organization would cease to exist.

So, if we were to switch these enterprise architecture paradigms, we could make a case for 'the dog wagging its tail'. Here, we would establish information architecture as the primary driver of enterprise architecture. This approach decouples business capabilities from application platforms and frees them from vendor lock-in.

Creating an information-centric enterprise architecture

You may be wondering what the primary requirements of an information-centric enterprise architecture are. Here's a short list:

  • Business data should be segmented from business capabilities. This allows us to change/remove capabilities without impacting the underlying data and add new capabilities that can utilize the data when needed.

  • Business data should be separated from application-specific data that is artificially-coupled and may cause unnecessary bloat. This allows us to remove or add applications without impacting business data.

  • Business data should be segmented appropriately based on its domain.

  • Each business data domain should be consolidated into a single version of truth.

  • Business capabilities should be designed based on the domain-specific business data and associated functionality requirements.

  • Wherever possible, business capabilities should be implemented as reusable services, either in COTS or custom-developed applications.

  • New or composite capabilities can be added by consuming the services.
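The first two requirements above can be sketched in Python: business data lives in an externalized, domain-segmented store, while an application keeps only disposable application-specific state, so removing the application leaves the business data intact. All class and field names are illustrative assumptions.

```python
class BusinessDataStore:
    """Externalized, domain-segmented business data (single version of truth)."""

    def __init__(self):
        self.domains = {}  # e.g. {"customer": {...}, "product": {...}}

    def put(self, domain, key, record):
        self.domains.setdefault(domain, {})[key] = record

    def get(self, domain, key):
        return self.domains.get(domain, {}).get(key)


class CampaignApp:
    """An application holding only app-specific, disposable state.

    Business data is read from the external store, never mastered here,
    so the app can be added or removed without impacting that data.
    """

    def __init__(self, store):
        self.store = store
        self.app_state = {"last_run": None}  # app-specific only

    def greet(self, customer_id):
        customer = self.store.get("customer", customer_id)
        return f"Hello {customer['name']}!"
```

Discarding `CampaignApp` loses nothing but its own disposable state; the customer domain remains available to whatever capability replaces it.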


Figure 1: Information-centric enterprise architecture model 



Ultimately, such an architecture model incorporates the business, information, application, and technology perspectives. We liken it to the layers of an onion: the business sits at the core; information designed to support core business needs surrounds it; applications that deliver business capabilities by leveraging that information come next; and the technology that realizes those capabilities forms the outermost layer. The key premise of this model is that the technology and applications are free to change without impacting the core business data and business architecture.

Deep-dive into the differences between capability-driven and data-centric approaches to architecture in our next blog.
