Enterprise architecture at Infosys works at the intersection of business and technology to deliver tangible business outcomes and value in a timely manner by leveraging architecture and technology innovatively, extensively, and at optimal costs. Stay up-to-date on the latest trends and discussions in architecture, business capabilities, digital, cloud, application programming interfaces (APIs), innovations, and more.

April 6, 2018

Design Thinking in Business Capability Modeling

A. Nicklas Malik, Senior Principal Enterprise Architect, Infosys Ltd.

 

One of the most interesting and difficult challenges of Business Architecture is creating the capability model for an enterprise.  In this article, I'll explore how to use the practices of Design Thinking to support the difficult and sometimes contentious process of creating a business capability model for an enterprise.  First, let's understand the problem a little.

 

For an enterprise that doesn't have a capability model, developing the first one can be tough.  Fortunately, the efforts of the Business Architecture Guild have started to produce value in the form of Reference Models.  Even with a reference model, the challenges can be substantial.  That is because a capability model is not a foregone conclusion.  There are many ways to frame the capabilities of an organization in a capability taxonomy.  Framing is important.  Framing the capabilities in a particular way can drive conversations (both good and bad).  For example, if we describe different capability groups for Marketing, Sales, and Customer Services, in which group do we put "Customer Record Management"?  Will stakeholders argue about it?  Will someone choose not to take responsibility for their data if the capability is aligned to their job title?  Getting this right may be important in getting key executives to step up to their responsibilities in the organization.

 

A capability model is supposed to be independent of the politics and processes and structure of an organization, but to be honest, the most effective capability models reflect the needs, ownership, and strategy of the organization in subtle ways.  I've seen capability models with dependency connections, with ownership groupings, and with budgetary groupings, all as "overlays" that are both useful to the planning efforts, and which influence the capability model itself.  It's a complex problem but one that we can begin to solve with Design Thinking.

 

Design Thinking is an interesting technique that can be used to approach complex problems.  It is a method of creative focus that allows excellent ideas to emerge in a repeatable way, often with conflicting inputs.  Design Thinking has emerged as a model for bringing together many excellent practices for fostering creativity in a results-driven world.


Design thinking makes some basic assumptions: (a) We start without actually knowing what the destination is, (b) we center our solutions around deep empathy for the customer, and (c) we refine our creations through rapid prototyping and iteration.  Design Thinking can be used to design a bicycle, a space ship, a house, a business process, a software package, a vacation, and yes, a Business Capability Model.

 

In many ways, the techniques of design thinking are well suited to the task of generating a capability model.  In most of the situations I've been aware of, stakeholders for a capability model have never seen one before and have no idea how to use one.  It's tough to assist in designing something that you've never used before.  Consider: if you had never taken an airplane trip and someone asked you to design the perfect passenger cabin for an airplane... how well would you do?  That's a tough challenge, but design thinking can help.


Design thinking does not assume that you have experience with the solution before you start.  As a result, you can be comfortable that your "novice-developed airplane cabin" will at least be a reasonable one, even before you take your first flight.

 

With design thinking, there are five phases: Empathy, Problem Definition, Ideation, Prototype, and Test (with the last two in a quickly spinning cycle).  So let's apply these five stages to Capability Model Development.

 

[Figure: design-thinking-amalik-1.png]

 

Empathize -- The foremost value of the empathize stage is to put the customer at the center of your work, and remove yourself from it.  Your preconceived notions of the "right way" to build something, or "what something should look like according to Expert X", just get in the way.  Your customer will describe their "conceptual space" in their own way, and different people will do it differently.  To truly empathize, you have to make sure you are representing the right stakeholders, and that you are actually listening to their issues and concerns.

 

One thing that I find, often, is that most people have "typical" problems.  If someone has typical problems, their concerns will be well understood.  But there are always outliers -- people who seem to always have unusual problems.  These folks can provide greater insight when you are building your understanding of the problem space, because their problems challenge the status quo.  They don't fit neatly into the box.  Look for these people.  Listen to them.

 

Empathy in capability modeling means, in my experience, to listen to how a team describes themselves and to capture it their way.  Do they discuss processes and procedures?  Do they discuss assets? Locations? Events?  Information? Documents? Workflows? Your capability model will need to reflect a wide array of stakeholders, so as you move between those stakeholders, don't begin by forcing them into your box.   Step into theirs.

 

Sketch (literally, with pencil) a simple non-structured diagram that represents their way of thinking of their space.  Yes, eventually you will build a capability model, but don't start with the strict definition of capabilities.  Empathize with where your customers actually live, and what they actually live with. 


I wouldn't expect your initial results to be any more "sensible" than something like the following diagram.

 

[Figure: design-thinking-amalik-2.png]

 


Problem Definition -- How many times, when we get to the end of our efforts, do we look back and say "We should have asked better questions?"  Design Thinking puts this problem right up front.  We believe we understand the stakeholders through our empathy, but before we put the capability model onto paper and start hashing it out, let's be clear about what problem we are trying to solve.

 

Capability models, in my experience, are excellent tools for planning.  We can manage a portfolio and plan for changes.  We can observe processes and plan for improvements.  We can evaluate readiness and plan for training.  We can find overlaps in application functionality and plan for consolidation.  It's planning.  But not every organization plans the same way, and few organizations have a mature planning process.  So as you build your capability model, think about who will own specific capabilities, how those owners will use their parts of the model to develop those plans, and how those plans will roll together.  Think about the inputs to planning: trends, strategies, changes in the ecosystem, changes in the competition, problems with existing systems, and technical debt. 

 

Your result needs to be a question that frames the problem you are trying to solve.  As you pose this question to your stakeholders, their reaction will tell you if your question was effective.  Don't be afraid to drop your attachment to specific terms or processes or methods.  Let things flow a little.  Phrase the question in terms of the customer's needs.  One good technique is to use the phrase "How can we ..." in your problem statement.

 

"How can we frame all the abilities needed by our business model so that we can best plan and coordinate our forward march?"

 

Please don't use my example as anything more than an example.  The problem statement you create should "feel" like it evolves out of the language, terminology, and culture of the organization itself.

 

Ideation -- A capability model created by a business architect and thrust upon the organization will be dead on arrival.  That is not a prediction.  That is a foregone conclusion.  How do I know?  I've done it.  School of hard knocks.  Let the stakeholders build their capability model through a series of collaborative sessions.  Ideation is the first step in that process.

 

The ideation step can use any of a number of techniques to open the stakeholders up to different ways to frame the capability model.  At this stage, you are creating capabilities, so we are applying the first series of constraints on the process.  There are a dozen different ways to frame ideas that do NOT end up with a capability model, but for the sake of this exercise, feel free to write them down and not pursue them.  We need our end result to be constrained to capabilities.  Other stuff will fall out. 

 

If the company has a process model that they actively use, you can start there.   If they don't have one (or don't actually use the one they have), consider using one of the capability reference models from the Business Architecture Guild (businessarchitectureguild.org) as a starting point.  This is far quicker than starting from scratch.  However, it is only a source of ideas.  Let the team reword, rename, join, split, and shred the starting "thing" any way they want. 

 

To keep ideation from becoming a long, involved process, I suggest a series of simple exercises to expand the number of possibilities, and then consolidate that list to the most feasible ones.  Then expand again, and consolidate again.  Each iteration considers a different aspect of your thinking or understanding. 

 

An excellent technique is the SCAMPER method, which pushes participants through seven different ways of thinking about the "starting" product to create a new "ending" product.  Those seven ways of thinking are: Substitute, Combine, Adapt, Modify, Put to another use, Eliminate, and Reverse.  There are a number of online resources available if you want to go deeper on using SCAMPER.  Other methods may include brainstorming, worst possible idea, paint by numbers, and many more.  All of these are designed to get creative juices flowing, especially for a group of people, while keeping the results controlled.

 

Prototype and Test -- Capability models are a unique bird because they represent the abilities needed by an enterprise to achieve a purpose.  For all intents and purposes, you are creating a list.  That list of abilities often exceeds the organization's internal capabilities.  This is why we talk about the capabilities of an "enterprise" and not the capabilities of a "business".  An enterprise may involve many businesses, suppliers, partners, regulators, and even customers in providing the required list of capabilities.

 

Creating a prototype capability model is a process that is complex if done by hand (without a capability modeling tool).  This is because you may have twenty stakeholders, and most of them do NOT want to see the entire organization in the capability model!  They want to see THEIR PART represented in gory detail and want to see everyone else minimized.  For this reason, you need the ability to prototype a complete model (for the enterprise) but to review segment-level capability models with the stakeholders.  Without a tool of some kind, this can create a great deal of manual effort.  (This is not a problem unique to Design Thinking... this happens with all capability model generation). 
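To make the enterprise-versus-segment idea concrete, here is a minimal sketch of how a single capability tree can be rendered as stakeholder-specific views, expanding one domain in detail while minimizing the rest. All capability names here are hypothetical, not drawn from any reference model.

```python
# Hypothetical sketch: an enterprise capability model as a simple tree,
# with per-stakeholder views that expand one domain and collapse the rest.
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    children: list = field(default_factory=list)

def segment_view(model, focus, depth=0):
    """Render the model, expanding only the stakeholder's focus domain."""
    lines = ["  " * depth + model.name]
    for child in model.children:
        if depth == 0 and child.name != focus:
            lines.append("  " + child.name + " (collapsed)")  # minimized for this view
        else:
            lines.extend(segment_view(child, focus, depth + 1))
    return lines

enterprise = Capability("Enterprise", [
    Capability("Sales", [Capability("Opportunity Management")]),
    Capability("Marketing", [Capability("Campaign Management")]),
])

# A Sales stakeholder sees their branch in detail; Marketing is minimized.
for line in segment_view(enterprise, "Sales"):
    print(line)
```

Even a lightweight script like this can reduce the manual effort of regenerating segment-level views after every prototype iteration.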

 

I've found that the prototype effort actually begins during ideation.  Since we are building a knowledge product, and not a physical product or even a software product, the first prototype is actually developed in the collaborative session to some extent.  It is the synthesis of that prototype with the work done by other stakeholders, in their own ideation processes, that creates the enterprise model. 

 

Resist the temptation to go dark, work for a while, and spring an enterprise model on the organization.  Work your way up from stakeholders who buy in, showing them models that are domain specific.  Put four or five of the domain specific models onto paper and get feedback before attempting to create the first synthesized model.  Otherwise, one domain will have undue influence over the entire structure of the capability model.

 

With each prototype, you are producing a new enterprise capability model and a complete refresh of domain-specific models.  Run them past key stakeholders for quick responses.  Remember their needs: this is a planning framework.  Can they use the capability model to develop their plans?  Keep asking the core question. 

 

When you have sufficient representation across the enterprise to have created the enterprise wide model, you can circulate that model with the core planning teams in your organization: these teams may go by names like Strategy Development, Organizational Development, PMO, Enterprise Architecture, Finance, and Strategy Execution. 

 

Conclusion

 

Design thinking is certainly not the only way to design a product, and it is relatively novel in specific areas of organizational and strategic planning.  However, as this example illustrates, design thinking can be applied to purely knowledge based products like a capability model in a manner that hopefully builds better buy-in for the final result. 

 

And who couldn't use a little more buy in?

 

Useful links

Business Architecture, Setting the Record Straight - William Ulrich and Whynde Kuehn -- http://www.businessarchitectureguild.org/resource/resmgr/BusinessArchitectureSettingt.pdf

Design Thinking Blog - Tim Brown, Ideo - https://designthinking.ideo.com/

Design Thinking and the Enterprise - Pramod Prakash Panda, AVP, Infosys https://www.infosys.com/insights/human-potential/pages/design-thinking.aspx

Design Thinking, What is that? - Fast Company, 20 March 2006 https://www.fastcompany.com/919258/design-thinking-what

A guide to the SCAMPER technique for Creative Thinking - Rafiq Elmansy, Designorate http://www.designorate.com/a-guide-to-the-scamper-technique-for-creative-thinking/


March 21, 2018

IT Architecture Principles for Digital Architecture

By Ramkumar Dargha, AVP, Senior Principal Technology Architect

The term 'Digital' often means different things to different people. However, there are certain key characteristics that define whether an application or a service offering is truly digital. I find it helpful to take an architectural view to capture some of these key characteristics.

Here is my take on some of the key IT architectural principles an application or a service offering should follow.

Principle 1: Online, Multi-channel, and Rich User-Centric Experience. An enterprise should offer its services through online, multi-channel interfaces that are rich, intuitive, responsive, easy to use, and visually appealing. Separate the UI look and feel from the data. Create an omnichannel, multi-device experience with appropriate personalization and multilingual features.

Why? An intuitive, consistent and easy to use interface enhances user experience and improves stickiness.

 

Principle 2: Service Oriented Architecture. Features and functionality should be available as loosely coupled, self-contained, standards-based, and configurable services. Services could be:

  • a domain-based service or an aggregation service (an aggregation of underlying services to provide the right abstractions),
  • a technical service (common technical services such as logging and security), or
  • an integration service or a data service (abstracting underlying data access and management).

These services should follow a granularity (traditional SOA and/or microservices) appropriate to the particular business function. Combine this with asynchronous messaging and processing.

Why? Digital systems need to be agile, loosely coupled, ubiquitous, and easily scalable. Service-oriented and microservices architectures enable these qualities.
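As a hedged illustration of what "loosely coupled plus asynchronous messaging" can look like in practice, the sketch below uses an in-process queue to stand in for a real message broker; the service names and event format are invented for the example.

```python
# Minimal sketch of loose coupling via asynchronous messaging: the producer
# (an order service) and consumer (an audit service) share only a message
# format, not each other's code. All names are illustrative.
import json
import queue

message_bus = queue.Queue()  # stands in for a real broker (Kafka, MQ, etc.)

def order_service_place_order(order_id, amount):
    """Publish an event; this service knows nothing about its consumers."""
    event = {"type": "OrderPlaced", "order_id": order_id, "amount": amount}
    message_bus.put(json.dumps(event))

def audit_service_drain():
    """Consume events independently of when or how they were produced."""
    records = []
    while not message_bus.empty():
        records.append(json.loads(message_bus.get()))
    return records

order_service_place_order("A-100", 250.0)
order_service_place_order("A-101", 99.5)
print(audit_service_drain())  # two decoupled events, processed asynchronously
```

The key point: the producing and consuming services share only the message contract, so either can be replaced or scaled independently.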

 

Principle 3: API First Approach. When designing services, think about what APIs those services will expose. What is the purpose of those APIs? Who will consume these APIs, and how? Are the APIs too granular, or at the right abstraction level? What standard interfaces (REST, RPC, etc.) will the services expose? Follow API versioning to enable backward compatibility and flexibility.

Why? This principle helps to find the right abstraction level for services. It avoids redundant or unusable services and chatty interactions.
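A minimal sketch of API versioning for backward compatibility follows; the routes, fields, and dispatch table are hypothetical, not a prescribed design.

```python
# Hypothetical sketch of API versioning: v1 stays stable for existing
# consumers while v2 changes the response shape.

def get_customer_v1(customer_id):
    # Original contract: a flat name field.
    return {"id": customer_id, "name": "Ada Lovelace"}

def get_customer_v2(customer_id):
    # New contract: structured name; v1 consumers are unaffected.
    return {"id": customer_id, "name": {"first": "Ada", "last": "Lovelace"}}

ROUTES = {
    "/v1/customers": get_customer_v1,
    "/v2/customers": get_customer_v2,
}

def handle(path, customer_id):
    """Dispatch a request to the handler for the requested API version."""
    return ROUTES[path](customer_id)

print(handle("/v1/customers", 42))  # old consumers keep working
print(handle("/v2/customers", 42))  # new consumers opt in
```

Because /v1 remains untouched when /v2 is introduced, existing consumers keep working while new consumers opt in to the changed contract.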

 

Principle 4: Leverage Data Analytics and Insights for Differentiation. Leverage data analytics and insights for process contextualization, personalized campaigns, targeting, marketing automation, and behavior-based segmentation. Adopt the right combination of traditional and big data management approaches (a polyglot approach).

Why? The availability of diverse data sets (big, traditional, streaming, structured, and unstructured) creates opportunities to use analytics-driven insights for differentiation and customer contextualization, resulting in the ability to personalize services.


Principle 5: Contextual Awareness. Acquire and leverage user and context data including user preferences, location etc.

Why? This helps to provide context-based content and personalized interactions and services through the application of data analytics and insights. It improves customer intimacy and loyalty.

 

Principle 6: Secure by Design. Ensure security is addressed end to end and considered upfront. This includes security considerations across multiple dimensions, such as authentication, multi-factor authentication, key management, SSO, authorization, auditing, logging, and encryption of data in transit and at rest.

Why? Secure access enhances users' confidence in adopting digital online channels. Inadequate security features and compliance issues result in lost customers and heavy penalties.

 

Principle 7: Cloud First Approach. Think cloud first. This could be a private cloud (hosted using commercial stacks or OpenStack), a public cloud (AWS, Azure, etc.), or a combination.

Why? Digital systems are expected to be ubiquitous across geographies and locations. They are also expected to be agile and flexible. Cloud-based principles and systems are a prerequisite for IT automation, infrastructure as code, and agile approaches like DevOps. Cloud-based services and deployments enable the flexibility, agility, scalability, and performance needed to deliver services.

 

Principle 8: DevOps for Agility. Adopt DevOps as an enabler of agile development and deployment for digital systems. DevOps is a combination of continuous integration (including build management, test management, and automation), continuous delivery (including environment management and deployment management), infrastructure as code, and an iterative development approach.

Why? Quick time to market and agility are key tenets of Digital systems. DevOps approach enables these.

 

Principle 9: Non-Functional Requirement (NFR) Considerations. Give due consideration to all non-functional requirements (NFRs, or quality-of-service parameters) in design and throughout the entire development cycle. NFRs include high availability (HA), disaster recovery (DR), scalability, reusability, maintainability, localization, configurability, security, and compliance needs.

Why? Digital systems are often mission critical. Operating a digital enterprise requires industrial-grade, highly available systems that operate 24/7 with minimal support. For example, a scalable architecture should be based on scale-out rather than scale-up mechanisms.
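As a sketch of the scale-out idea, keeping request handlers stateless lets any replica serve any request. A plain dict stands in for an external state store such as Redis, and all names are invented for illustration.

```python
# Stateless handlers: session state lives in a shared external store, so a
# load balancer can route any request to any replica. Illustrative only.
SHARED_STORE = {}  # stands in for Redis or a database shared by all replicas

def make_handler(replica_name):
    def handle(session_id):
        # No instance-local state: read and update the shared store instead.
        count = SHARED_STORE.get(session_id, 0) + 1
        SHARED_STORE[session_id] = count
        return f"{replica_name} served request {count} for {session_id}"
    return handle

replica_a, replica_b = make_handler("replica-a"), make_handler("replica-b")

# The same session can hit different replicas without losing state.
print(replica_a("session-1"))
print(replica_b("session-1"))  # continues the same session's state
```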

Did I miss any principles that your organization focuses on?  Let me know at ramkumar_dargha@infosys.com.


January 16, 2018

Adoption Strategies for Cloud Native and Cloud Hosted Applications

Author: Ramkumar Krishnamurthy Dargha
            AVP, Senior Principal Technology Architect

Cloud technologies have been at the forefront for nearly a decade now. Many enterprises have adopted cloud as one of their key technology strategies. In today's world, no IT strategy discussion happens without cloud technologies in the mix. Should an enterprise go for a cloud native application strategy or a cloud hosted application strategy? If you are grappling with such questions, read on.

Cloud Hosted applications and Cloud Native applications. What are these?

Cloud hosted applications are those which are found, or made, suitable to run 'in the cloud', so that enterprises can take advantage of the underlying cloud infrastructure. They have the following characteristics:

  • Hosted on standard platforms: They run on non-proprietary, standard platforms such as UNIX, Linux, or Windows. These applications are either remediated or refactored to make them suitable to migrate to cloud infrastructure; some may be able to move as-is.

  • Hosted on on-demand, as-a-service cloud infrastructure: They run on standard cloud infrastructure offered by cloud vendors (public clouds) or on private cloud infrastructure. They leverage the infrastructure services and features provided by the cloud vendor for security, high availability, reliability, and other non-functional requirements.

Cloud native applications, on the other hand, are those which are designed 'for the cloud'. These applications are designed in such a way as to derive the best overall advantage from cloud technologies.

In addition to the characteristics of cloud hosted applications, the cloud native applications display certain unique characteristics. They are:

  • Services based: Cloud native applications are services based. They use service-oriented architectures (SOA) and microservices-based architectures. Such architectures make cloud native applications loosely coupled and self-contained. They also make cloud native applications independent (independently deployable) and portable (able to move around). These characteristics enable cloud native applications to integrate more easily with other services and applications, which is important in multi-cloud, multi-vendor environments. They are also key prerequisites for the auto-scaling needs of cloud applications.
  • Container based: Containers encapsulate specific components of an application but are provisioned with only the minimal OS resources needed to do their job. Virtual machines (VMs) encapsulate a guest OS, whereas containers reuse the host OS and encapsulate only the application logic and the binaries, libraries, and configuration required for the application to run. This makes containers lightweight and independently deployable. The Docker engine is one example of such a container engine.

    In addition, we need a way to orchestrate the multiple components encapsulated in separate containers into one holistic application. This is where container clusters come into play. Docker Swarm and Kubernetes are popular container orchestration engines that do this job. Microservices need not run in containers; they can even run on traditional virtual servers. But microservices and containers bring synergies together: while microservices enable loosely coupled application architectures, containers enable such applications to be more seamlessly deployed and moved across multiple cloud infrastructures. Thus microservices together with containerization enable agility and flexibility.
  • API first: APIs are related to the integration characteristics of microservices. When we say API first, we mean that before implementing a microservice or application, you should think about how, and for what purpose, it will be consumed. Are you duplicating the efforts of other developers who may be building the same functionality behind similar APIs? Is there a specific need for the functionality exposed through those APIs? Your implementation of a microservice or application may change, but your APIs should not. If you really must change an API, give it a new version.

    Though APIs are not a new architectural concept, an API first strategy is especially important in a cloud native context. The reason: if you want your applications to be truly seamless, loosely coupled, reusable, and self-contained, then not only the 'how' part of the design (satisfied by a microservices-based implementation) but also the 'what' part (the API itself) is crucial. APIs should adhere to established standards (e.g., REST over HTTP) for seamless consumption.
  • Security: Security is important whether an application is cloud native or not. However, cloud native applications place specific focus on security within the application. What does that mean? Cloud native applications do not assume that the application is secure just because it resides behind a firewall; they build security into the application as well. That means employing application-specific authentication, authorization, ACLs, security controls, data security covering data at rest and in transit, up-to-date and stringent encryption algorithms, multi-factor authentication mechanisms, and so on.
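As one small, hedged illustration of building security into the application rather than trusting the perimeter, the sketch below signs messages with an application-level HMAC so tampering in transit is detectable regardless of any firewall. Key management is deliberately oversimplified here.

```python
# Application-level message integrity via HMAC (illustrative only; in a real
# system the key would come from a key manager, never a hard-coded constant).
import hmac
import hashlib

SECRET_KEY = b"demo-key-use-a-real-key-manager"  # illustrative only

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the message payload."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Constant-time check that the payload matches its signature."""
    return hmac.compare_digest(sign(payload), signature)

msg = b'{"account": "A-100", "amount": 250.0}'
sig = sign(msg)

print(verify(msg, sig))                                        # intact message
print(verify(b'{"account": "A-100", "amount": 9999.0}', sig))  # tampered message
```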

There are additional design principles which cloud native applications are expected to follow, for example, statelessness rather than statefulness, and design for failure. One good reference for these additional design principles is an AWS white paper.

Cloud Hosted applications or Cloud Native applications?

Should enterprises go for cloud native applications or cloud hosted applications? Here are some points to consider:

  1. For applications with a longer life expectancy, a cloud native application strategy is better suited.

  2. For applications that are expected to be retired soon or replaced by better alternatives in the near term, a cloud hosted strategy is more suitable.

  3. For applications built on legacy technologies or platforms (such as mainframes), the effort and cost required to make them cloud native may be prohibitive and/or risky. Such candidates may be better suited to remain in their existing environment. If there is indeed an attractive business case to re-engineer those applications off their legacy platforms, then adopt a cloud native application strategy.

  4. For applications that are expected to undergo frequent changes, a cloud native application strategy is the better fit.

  5. For applications that will go through an agile development or DevOps process, a cloud native application strategy is the better fit.

  6. For new applications being developed from scratch, a cloud native application strategy is the better fit.

  7. When an enterprise is moving a large number of applications to the cloud, it may be better off adopting a two-phase approach to mitigate the risks of such a large transformation. In phase one, move the applications that are suitable to be hosted on the cloud; this way, the enterprise benefits from cloud infrastructure (on-demand capacity, spend flexibility, etc.) early. Once the applications stabilize in cloud hosted environments, adopt a cloud native application strategy in phase two.

While a cloud native application strategy is attractive in the long term, enterprises will have to leverage a combination of cloud native and cloud hosted application strategies, as per the specific needs and circumstances listed above.

If you have any comments or suggestions, let me know.

October 28, 2017

Purposeful Enterprise -- some thoughts on shaping the enterprise of the future

Author: Sarang Shah, Principal Consultant

We are in the midst of an exciting stage in the evolution of the modern economy, where accelerating technological change, a highly networked world, demographic shifts, and rapid urbanization are leading to a disruption that is not ordinary [1]. The effects of these disruptive changes are impacting the prime movers of our modern economy: businesses, or corporations. In this blog post, I would like to share some of my thoughts and questions about the future of these prime movers.

Let us take a step back and talk about the corporation itself. What is a corporation? What is its relationship with the market? What determines its boundary, size, and scope? Economists attempt to answer these questions using various approaches. I would like to specifically point out the idea of the cost of market transactions, or transaction costs, as described in Ronald Coase's seminal work 'The Nature of the Firm'. Coase points out that 'people begin to organize their production in firms when the transaction cost of coordinating production through the market exchange, given imperfect information, is greater than within the firm', as illustrated in the diagram below. [2]

[Figure: purposefulEA-Shah.png]

Recent technological advances like mobile, cloud, social media, the internet of things, augmented reality, blockchain, and many more are causing disintermediation and dematerialization at unprecedented speed and scale. These technologies directly decrease the transaction costs mentioned above and hence influence the nature of the corporation. We see these changes manifesting in new digital business models, the unbundling of corporations, and the redrawing of industry boundaries, e.g., a mobile company provides payment services, an e-commerce retailer provides credit facilities, and so on.

Along with the technological changes, we are also seeing demographic and behavioral shifts in our economy. For instance, today's customer is more demanding in terms of the value they get from a product or service than in the past, since technological advances give them easier access and the ability to consult and compare various products and services in the market. In fact, regulations are also promoting behaviors that give customers more choice, e.g., the sharing of payments data by banks (Open Banking, PSD2) or mobile phone number portability across network carriers. The same demographic and behavioral shifts that affect customers also influence the staff employed by the enterprise: large parts of the workforce are now digital natives, who have access to information and are networked like never before. I believe these shifts impact the way the enterprise functions and is architected.

We are already seeing these shifts impacting the way enterprises function: enterprises that empathize with their customers and put them at the center more than ever before; enterprises that understand that taking a long view on capital may be more beneficial for all stakeholders; enterprises that are responsible toward the social and natural ecosystems they operate in and take a circular-economy approach to the future; and enterprises that understand that, in the future, people and intelligent machines will work together collaboratively.

These changes lead us to ask some fundamental questions: What will enterprises look like in the future? How will enterprises transform and adapt under volatile, uncertain, complex, and ambiguous conditions? How should enterprise processes and policies be designed for digital natives and the gig economy? How should enterprise ethics evolve as intelligent machines become integral to the enterprise? And many more.

I believe that a holistic and systemic perspective is required to shape the purposeful enterprises of the future, and we as enterprise architects have a unique opportunity to do exactly that. My colleagues at Infosys and I will write more about this in future blogs here.

 

I would like to thank A. Nick Malik & Steven Schilders for providing their suggestions for this post.

 

References:

[1] No Ordinary Disruption: The Four Global Forces Breaking All the Trends by Dobbs, R. and Manyika, J.

[2] The nature of the firm (http)

[3] The nexus of forces is creating the digital business, Dec 2014, Gartner (http)

[4] Unbundling the corporation, Mar 1999, Harvard Business Review (http)

[5] The self-tuning enterprise, June 2015, Harvard Business Review (http)

 

Note:

The terms 'business', 'company', 'corporation', 'enterprise', and 'firm' have been used interchangeably. This blog is primarily intended for for-profit enterprises, though some of the ideas described above apply to other types of enterprises as well.

October 21, 2017

The benefits of leveraging information-centric enterprise architecture - Part 3

Authors:
Dr. Steven Schilders
AVP and Senior Principal Consultant, Enterprise Architecture
Marie-Michelle Strah, Ph.D.
Senior Principal Consultant, Enterprise Architecture


Continuing our three-blog series on information-centric architecture, this blog highlights the benefits of the data-first approach. While explaining how this approach drives agility, we want to emphasize that these blogs do not advocate a wholesale implementation of information-centric architecture. Rather, we are presenting an alternative view of the two most prevalent architecture paradigms.

In Part 1 and Part 2 of this series, we explored how organizations typically implement systems based on business capabilities rather than data. Such an approach invariably creates extreme data segmentation because system capabilities dictate what data is stored, how many copies are kept, and how the data is accessed. No organization today can succeed with fragmented data, because data and its relationships - both direct and indirect - are the lifeblood of an organization.

Integration challenges in data warehousing solutions

Data warehousing solutions are popular for data integration. However, they involve lengthy processing, making it difficult to forge business-critical data connections and thereby diminishing the value of the data. Further, data warehousing approaches - and the assigned 'data architects' - become tied to vendor data models. We use the term data architect loosely here: invariably, these architects operate as vendor-specific master data management (MDM) or enterprise data warehouse (EDW) specialists rather than as true 'enterprise information architects'. Needless to say, this centralized and hierarchical approach nullifies the benefits that indirect data relationships can unlock through techniques such as artificial intelligence (AI) and machine learning.

To make real-time decisions and scale quickly in highly competitive markets, you need to transform your enterprise into a hyper-connected and composable organization. In such an environment, the danger of delayed decisions cannot be overstated. To illustrate how important this is, we have put together a graph showing the extent of value lost when there is a delay between a business event and the action taken.

information_centric_EA_Shilders_Strah_3a_sm.png

Despite these acute disadvantages, application data architecture is often prioritized over enterprise information architecture. In some cases, this is because vendor-provided platforms and COTS products predetermine data models and data access. In other cases, capability-based architectures that claim to represent business capabilities are actually application or technical architectures that collapse business capabilities together - consider, for example, how ERP systems tend to represent finance, accounts payable (AP), or human capital capabilities.

This traditional approach exponentially delays the delivery of business insights and decision-making because data must be collected and copied across silos to produce actionable information. Further, point-to-point integration across multiple applications with disparate data architectures becomes an effort-intensive process for enterprise architecture as well as data teams. Finally, developing and maintaining these brittle, tightly coupled architectures exacerbates the delay in the decision-to-value cycle.

Now, let us see how information-centric architecture unlocks value from hidden data to enable business-as-a-service capabilities in digital ecosystems.

Step 1: Integrate data across the organization

First, organizations must integrate data wherever it resides - in commercial-off-the-shelf (COTS) products, custom applications, or microservices. In our earlier blog, we proposed a layered information architecture approach (see figure below). Here, information architecture is not tied to application or platform architectures that prioritize technical concerns. Instead, it lays the foundation for composable architecture by leveraging a hub model.
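As an illustration of the hub model, here is a minimal sketch of event-based integration into a shared hub. All class, domain, and field names are hypothetical, invented for this example rather than taken from any product or framework:

```python
from collections import defaultdict

class DataHub:
    """Toy event-based integration hub: applications publish domain
    events, and the hub keeps one consolidated, application-agnostic
    record per entity while notifying downstream consumers."""

    def __init__(self):
        self._store = defaultdict(dict)        # domain -> {entity_id: record}
        self._subscribers = defaultdict(list)  # domain -> list of callbacks

    def publish(self, domain, entity_id, attributes):
        # Merge the event payload into the consolidated record,
        # then notify subscribers (BI, AI, other applications).
        record = self._store[domain].setdefault(entity_id, {})
        record.update(attributes)
        for callback in self._subscribers[domain]:
            callback(entity_id, dict(record))

    def subscribe(self, domain, callback):
        self._subscribers[domain].append(callback)

    def get(self, domain, entity_id):
        return dict(self._store[domain].get(entity_id, {}))

# A hypothetical COTS CRM and a custom billing app both contribute
# attributes to the same application-agnostic customer record.
hub = DataHub()
hub.publish("customer", "C-1", {"name": "Acme Corp", "segment": "enterprise"})
hub.publish("customer", "C-1", {"credit_limit": 50000})
print(hub.get("customer", "C-1"))
```

The point of the sketch is that neither application owns the consolidated record; each contributes events, and consumers read one merged view.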

 

 information_centric_EA_Shilders_Strah_3b_sm.png

Information Centric EA: Layered Information Architecture

 


Step 3: Use fit-for-purpose data hub models to gain business-specific insights

 

Our previous blog also illustrated how information-centric architecture applies to COTS as well as custom-built applications. Here is how the data integration hub architecture works in both cases (see figure below). The data hubs provide representations of data optimized for specific business needs: key-based data for key-based entity relationships, graph-based data to analyze complex interdependencies, time-series data for sequential analysis, search-based data for complex queries, and so on. Thus, information-centric enterprise architecture shortens the decision-to-value curve because data is grouped contextually and the data hubs provide the relevant data attributes in a form that optimizes value creation.
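The fit-for-purpose representations described above can be sketched as projections of a single event stream into several hub views, each optimized for a different access pattern. The event shape and view names below are illustrative only:

```python
from collections import defaultdict

key_view = {}                     # key-based lookup by order id
series_view = defaultdict(list)   # time-series per customer, for sequential analysis
search_index = defaultdict(set)   # inverted index over item names, for queries

def ingest(event):
    # One canonical event is fanned out to every fit-for-purpose view.
    key_view[event["order_id"]] = event
    series_view[event["customer"]].append((event["ts"], event["total"]))
    for item in event["items"]:
        search_index[item].add(event["order_id"])

ingest({"order_id": "O-1", "customer": "C-1", "ts": 1,
        "total": 120.0, "items": ["laptop"]})
ingest({"order_id": "O-2", "customer": "C-1", "ts": 2,
        "total": 40.0, "items": ["mouse", "laptop"]})

print(key_view["O-2"]["total"])        # key-based access
print(series_view["C-1"])              # sequential (time-series) analysis
print(sorted(search_index["laptop"]))  # search-based query
```

A graph view would be one more projection of the same stream; the design choice is that each view is disposable and rebuildable from the events.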


information_centric_EA_Shilders_Strah_3c_sm.png

 

Step 4: Apply AI and BI on insights to achieve decision-as-a-service

 

Data integration hubs and contextual data grouping allow enterprises to design business intelligence (BI) capabilities and machine learning systems that merge programmed intelligence with AI. Further, BI capabilities can extend the base data with the specific data required for analytics. These capabilities are exposed as BI services, or decision-as-a-service, executed for a consumer-specific data context. The key aspect of this design is that business intelligence capabilities and services can be created, modified, or removed without impacting the core and contextual data assets.
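As a hedged sketch of decision-as-a-service, the following registers decision logic as named, replaceable services executed against a consumer-specific context. The service name and approval rule are invented for illustration:

```python
# Registry of decision services; entries can be added or retired
# without touching the underlying data assets.
decision_services = {}

def register(name):
    def wrap(fn):
        decision_services[name] = fn
        return fn
    return wrap

@register("credit_approval")
def credit_approval(context):
    # Programmed intelligence over contextual data; an ML score
    # could be merged into the same decision in the same way.
    exposure = context["open_balance"] + context["requested"]
    return {"approve": exposure <= context["credit_limit"]}

def decide(name, context):
    # Decision-as-a-service entry point: execute the named decision
    # for one consumer-specific data context.
    return decision_services[name](context)

print(decide("credit_approval",
             {"open_balance": 10000, "requested": 5000,
              "credit_limit": 50000}))  # → {'approve': True}
```

Removing or replacing `credit_approval` changes only the registry; the core and contextual data remain untouched, which is the property the paragraph above describes.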

 

The end result?

  • Transitioning from traditional data warehouses to a fit-for-purpose model of multiple data hubs helps organizations leverage traditional BI capabilities and next-generation AI and machine learning
  • Prioritizing layered information-centric enterprise architecture makes data and decision-making organizational and architectural priorities

 

Simply put, adopting a strategic model instead of a retrofit model enables AI, faster access to enterprise insights and real-time decision-making. In an era where data is king, these are the key capabilities that enterprises need to become service-enabled.

 

Keep watching this space for enterprise-level case studies and best practices of information-centric design in microservices, AI, and data science.

 

In case you missed the previous blogs in this series, here are the links:

Part 1

Part 2



The differences between data-centric and capability-centric architecture - Part 2

Authors:
Dr. Steven Schilders
AVP and Senior Principal Consultant, Enterprise Architecture
Marie-Michelle Strah, Ph.D.
Senior Principal Consultant, Enterprise Architecture


Capability-centric architecture and information-centric architecture are the two most prevalent models in today's organizations.

In Part 1 of this three-blog series, we outlined an information-centric approach to architecture that places business data at the core by decoupling data from application and platform architecture. In this blog, we dive deeper into some of the concepts mentioned previously. First, we describe the key differences between capability-centric and data-centric approaches to architecture. We also explain how each influences the design of commercial-off-the-shelf (COTS) and custom-built solutions. To learn how to expose reusable business capabilities as services, check out the next blog in this series.

First, a quick summary of our previous blog: typically, organizations buy COTS solutions that match their business capabilities, irrespective of how data is stored, accessed, or made available for reuse. Such solutions are often marketed as capability-centric architecture. In reality, a COTS solution must be able to ingest external data and extract internal business data. Historically, both processes have used batch processing; in today's age of services, there is greater focus on run-time integration through APIs. Nevertheless, data primarily remains within the domain of the COTS application.

COTS applications: Capability-centric versus information-centric approaches

In information-centric enterprise architecture, the above model is inverted: the COTS solution must integrate across the enterprise information landscape in a solution- and vendor-agnostic manner. This approach also decouples data architecture from enterprise information architecture - two layers that are often collapsed when application-specific data architectures are reinforced.
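One way to picture this vendor-agnostic inversion is a thin translation adapter per application that maps each COTS schema onto an application-agnostic enterprise taxonomy, so the enterprise model never depends on any one vendor's field names. The taxonomy and mappings below are hypothetical:

```python
# Application-agnostic enterprise taxonomy (illustrative fields only).
ENTERPRISE_FIELDS = {"customer_id", "legal_name", "country"}

# Per-application mapping from a hypothetical CRM's internal schema
# to the enterprise taxonomy; each COTS product gets its own mapping.
CRM_A_MAPPING = {"CustNo": "customer_id", "AcctName": "legal_name", "Ctry": "country"}

def to_enterprise(record, mapping):
    """Translate one application record into the enterprise taxonomy,
    dropping application-specific fields and rejecting unknown ones."""
    translated = {mapping[k]: v for k, v in record.items() if k in mapping}
    unknown = set(translated) - ENTERPRISE_FIELDS
    if unknown:
        raise ValueError(f"fields outside enterprise taxonomy: {unknown}")
    return translated

print(to_enterprise({"CustNo": "C-1", "AcctName": "Acme", "Ctry": "NL"},
                    CRM_A_MAPPING))
# → {'customer_id': 'C-1', 'legal_name': 'Acme', 'country': 'NL'}
```

Replacing the CRM then means writing a new mapping, not reworking every consumer of the data.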

The following illustration and table will clarify the differences between these two architecture models.

Capability-centric versus information-centric architecture for a COTS application

information_centric_EA_Shilders_Strah_2a_sm.png


Differences between capability-centric and information-centric architecture for a COTS application

 

  • Data architecture - Capability-centric: a black box designed and optimized to support application needs. Information-centric: the application store remains a black box, while all application-agnostic data is externalized and synchronized through event-based integration
  • Relation between application-specific and application-agnostic data - Capability-centric: intermixed. Information-centric: decoupled from each other when externalized
  • Data taxonomy - Capability-centric: application-specific. Information-centric: externalized; depends on the business domain and/or functional context
  • Exposing data - Capability-centric: through application APIs or data replication. Information-centric: through data services using APIs or data integration/replication adaptors
  • Removing/replacing applications - Capability-centric: cannot be done without affecting the data architecture. Information-centric: minimal impact on enterprise information architecture
  • Data consumption/extension - Capability-centric: data is not readily available. Information-centric: data is readily available
  • Interaction between data and applications - Capability-centric: the application is the source and master of its own data. Information-centric: applications act as systems of record by supporting all create-retrieve-update-delete-search (CRUDS) capabilities, while data services act as systems of reference with only read and search capabilities (these may change in a multi-system architecture)
  • Using external data sources - Capability-centric: cannot act on an externalized data source; application-agnostic data must be replicated into the internal store before process execution. While some applications can call externalized data sources during execution, the data still needs to be translated and transformed into the application taxonomy first, which creates performance issues and is unsuitable for high-volume, performance-bound transactions. Information-centric: applications support inbound and outbound data synchronization through event-based integration (in the illustration, integration is one-way, as no other application manipulates the data)
  • How applications access data - Capability-centric: several applications use the same data, resulting in data proliferation, multiple access points for the same data, and no single, accurate source of truth. Information-centric: data is accessed and shared through the system-of-reference data services; applications are redesigned to support single-application mastering, where possible, by restricting access (read and search only) or by removing capabilities within the application
  • Data integration - Capability-centric: requires significant effort to move and synchronize data between applications. Information-centric: implementation focuses on integrating data with a reusable core through services
  • Real-time analytics - Capability-centric: limited to built-in application capabilities and possible only once data is moved to the data warehouse. Information-centric: data is exposed for real-time analytics and AI-based processing

 

At first glance, implementing information-centric architecture appears more complex, doesn't it? But consider its advantages over capability-centric architecture when decommissioning applications or building new capabilities that leverage data.

Custom-built applications: Capability-centric versus information-centric approaches

Unfortunately, COTS solutions have distorted organizational priorities by elevating capabilities over information architecture and data reuse. As a result, custom-built solutions have followed the COTS architecture model, whereby business capabilities are built over an application-specific data repository (we will discuss service-based architecture in the final blog).

 

Capability-centric versus information-centric architecture for a custom-built application

information_centric_EA_Shilders_Strah_2b_sm.png

 

In the information-centric approach, the key differences between a COTS implementation and a custom-built application are how data storage is controlled and how data interaction is designed. Custom-built solutions can be designed to use externalized data stores and integration services.

Ever wondered why there is so much emphasis on microservices lately? It is because more and more enterprises are realizing the value of an information-first approach. Such an approach simplifies enterprise architecture design, making it easier to execute digital strategies. As custom applications, microservices have complete control over their data as well as their business capabilities, leading to greater agility.
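A minimal sketch of this microservice pattern, with illustrative names: the service owns both its business capability and its data behind a single interface, so it can be replaced without disturbing the rest of the landscape.

```python
class InvoiceService:
    """Hypothetical microservice: the invoice data lives inside the
    service and is reachable only through its capability methods."""

    def __init__(self):
        self._invoices = {}  # data owned by the service, never shared directly

    def create(self, invoice_id, amount):
        self._invoices[invoice_id] = {"amount": amount, "paid": False}

    def mark_paid(self, invoice_id):
        self._invoices[invoice_id]["paid"] = True

    def outstanding_total(self):
        # A business capability computed over the service's own data.
        return sum(i["amount"] for i in self._invoices.values() if not i["paid"])

svc = InvoiceService()
svc.create("INV-1", 100.0)
svc.create("INV-2", 250.0)
svc.mark_paid("INV-1")
print(svc.outstanding_total())  # → 250.0
```

In a real deployment the methods would sit behind an HTTP or messaging API, but the agility comes from the same property: data and capability change together, behind one boundary.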

Capability-first versus Information-first architecture for custom-built applications

 

  • Data architecture - Capability-first: a white box specifically designed and optimized to support application needs. Data-first: application-agnostic data is externalized and integrated with the application through data service APIs; application-specific data acts as an extension to the externalized data and can be designed and optimized to support application needs
  • Relation between application-specific and application-agnostic data - Capability-first: intermixed. Data-first: decoupled from each other when externalized
  • Data taxonomy - Capability-first: application-specific. Data-first: externalized, dependent on the business domain and/or functional context, and translatable to an application-specific taxonomy if needed
  • Exposing data - Capability-first: through application APIs or data replication. Data-first: through data services using APIs or data integration/replication adaptors
  • Removing/replacing applications - Capability-first: significantly impacts the data architecture. Data-first: minimal impact on the data architecture
  • Data consumption/extension - Capability-first: data is not readily available. Data-first: data is readily available
  • Interaction between data and applications - Capability-first: the application is the source and master of its own data. Data-first: data services are the system of record for all application-agnostic data
  • Using external data sources - Capability-first: application-agnostic data may need to be copied into the internal store before process execution. Data-first: data services can integrate externalized data and interact with system-of-record data
  • How applications access data - Capability-first: several applications use the same data, resulting in data proliferation, multiple access points for the same data, and no single, accurate source of truth. Data-first: data service APIs access and share data
  • Data integration - Capability-first: requires significant effort to move and synchronize data between applications. Data-first: implementation focuses on integrating data with a reusable core through services
  • Real-time analytics - Capability-first: limited to built-in application capabilities and possible only once data is moved to the data warehouse. Data-first: data is exposed for real-time analytics and AI-based processing

 

Release data for AI-driven processing

Now, let us look at the target state of the capability-centric and information-centric approaches when we combine both types of applications. The main difference between the two architectures is how data is consolidated and constructed, which leads to varying levels of business agility. On the one hand, data-centric architecture consolidates data instantly, providing market differentiation: for example, enterprises can process business intelligence (BI) or artificial intelligence (AI) logic in real time using the application, the user context, and the complete data set. On the other hand, capability-centric architecture requires data integration and mediation/post-processing for data consolidation - making it nearly impossible to leverage BI/AI-based processing capabilities.

Combined view of COTS and custom-built applications for capability-centric approach and information-centric approach

information_centric_EA_Shilders_Strah_2c_sm.png

information_centric_EA_Shilders_Strah_2d_sm.png


So, if you want a sophisticated, data-driven digital strategy, adopting information-centric architecture is the way forward. Interestingly, many organizations know this and are planning strategic initiatives to rectify issues arising from capability-centric architecture. However, actually inverting existing systems into data-centric ones can be challenging. Some may sidestep this process by wrapping existing systems in an additional layer of 'digital concrete' such as services and/or APIs, but this inevitably hinders agility and the ability to compete proactively in the market.

Discover how information-centric architecture delivers agility for service-enabled enterprises in our next blog


Why data should drive your enterprise architecture model - Part 1

Authors:
Dr. Steven Schilders
AVP and Senior Principal Consultant, Enterprise Architecture
Marie-Michelle Strah, Ph.D.
Senior Principal Consultant, Enterprise Architecture


Information-centric enterprise architecture is about putting data first during assessment, strategy, planning, and transformation. To create a 'composable enterprise', data must be mobile - locally and globally - across departments, partners, and joint ventures. This is important if enterprises are to liberate data to improve insight and develop disruptive, differentiated services.

To achieve this, enterprises must first decouple business data from application and platform architecture. Decoupling business data gives organizations flexibility as well as valuable insights, which are very important during digital transformation, mergers, acquisitions, and divestiture journeys.

A case of the tail wagging the dog

Today, most enterprise architecture follows a variation of The Open Group Architecture Framework (TOGAF) model (business/information/technology architectures). Here, strategic planning and sourcing recommendations for application portfolios are based on decision-making flows such as buy-before-build or reuse-before-buy-before-build. In the TOGAF hierarchy, organizations are meant to define business capabilities, align information management strategies to the business and choose application portfolios that support those strategies.

However, speak to any experienced enterprise architect and they will tell you this is not what actually happens. In reality, most decisions are driven by technical considerations such as applications and platforms rather than by the business itself. Invariably, applications and platforms are retrofitted to meet business needs - a proverbial case of 'the tail wagging the dog'.

For example, portfolio rationalization is often marketed as business-capability-driven. In practice, the focus is on purchasing commercial-off-the-shelf (COTS) products or components with out-of-the-box (OOTB) business or technical capabilities (or both) that meet predefined business and technical requirements. To minimize heavy customization when choosing a COTS product, most organizations actively seek OOTB capabilities that fit roughly 85% of their requirements and offer vendor-supported configuration changes.

Where gaps exist in the OOTB capabilities, the COTS solution undergoes some level of customization, which may or may not be recommended - or even supported - by the vendor. The organization may also build custom solutions, either at the outset or over time, to support or enhance the COTS solution. In our experience, architects have traditionally designed custom-built solutions around business (or technical) capabilities - whichever is the priority.

Tipping the scales for business vs. technology

Clearly, capability-driven enterprise architecture has an advantage over technology-driven approaches: it aligns business with IT and focuses on business processes and capabilities. However, it is also inextricably linked to specific applications and their inherent legacy architecture. In case you haven't noticed, there is irony here: platform and cloud-first approaches often reinforce application architecture instead of business capabilities! This is because the business has to adopt COTS data models and storage options for its data. As a result, business capabilities are collapsed into the vendor or COTS applications rather than standing alone.

Let's see why this is alarming. While business requirements and their associated capabilities may change over time, core organizational data does not. Of course, technology also changes, affecting the longevity of COTS and custom-built solutions, but let us set that aside for now. The common denominator across COTS and custom capability-driven solutions is that information architecture is not a top priority during design. When data models become tied to vendor-provided models, they cannot reflect organizational enterprise data models (where these exist) or offer flexible, adaptive information capabilities.

The dog wags its tail

Today, data is the most valuable asset an organization can leverage for market differentiation and success. No matter its condition - enhanced, embellished, or partly redundant - core data is the lifeblood of any organization. In fact, one can argue that an organization could still function if it lost all its applications but retained access to its data. The corollary is dangerously true, too: without its data, an organization would cease to exist.

So, if we were to invert these enterprise architecture paradigms, we could make the case for letting 'the dog wag its tail': establishing information architecture as the primary driver of enterprise architecture. This approach disentangles business capabilities from application platforms and vendor lock-in.

Creating an information-centric enterprise architecture

You may be wondering what the primary requirements of an information-centric enterprise architecture are. Here's a short list:

  • Business data should be segmented from business capabilities. This allows us to change/remove capabilities without impacting the underlying data and add new capabilities that can utilize the data when needed.

  • Business data should be separated from application-specific data that is artificially-coupled and may cause unnecessary bloat. This allows us to remove or add applications without impacting business data.

  • Business data should be segmented appropriately based on its domain

  • Each business data domain should be consolidated into a single version of truth

  • Business capabilities should be designed based on the domain-specific business data and associated functionality requirements

  • Wherever possible, business capabilities should be implemented as reusable services either in COTS or custom-developed applications

  • New or composite capabilities can be added by consuming the services
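The requirements above can be sketched, under the assumption of one data service per domain acting as the single version of truth, with capability services as thin consumers on top. All names and the churn rule are illustrative:

```python
class DomainDataService:
    """Hypothetical single-version-of-truth store for one business
    data domain, exposed only through its read/upsert interface."""

    def __init__(self, domain):
        self.domain = domain
        self._records = {}

    def upsert(self, key, record):
        self._records[key] = record

    def read(self, key):
        return self._records.get(key)

# Business capabilities are thin, replaceable functions over the
# domain data service; they can be added or removed without touching
# the underlying data, as the bullet list above requires.
def churn_risk(customer_data_service, key):
    record = customer_data_service.read(key)
    return "high" if record["complaints"] > 3 else "low"

customers = DomainDataService("customer")
customers.upsert("C-1", {"complaints": 5})
print(churn_risk(customers, "C-1"))  # → high
```

A composite capability would simply consume several domain data services the same way, which is the last bullet's point.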

 

Figure 1: Information-centric enterprise architecture model 


information_centric_EA_Shilders_Strah_1.png

 

Ultimately, such an architecture model incorporates the perspectives of business, information, application, and technology. We like to liken it to the layers of an onion: the business at the core; information designed to support core business needs around it; applications addressing the business capabilities that leverage that information above it; and the technology those capabilities map to on top. The key premise of this model is that technology and applications are free to change without impacting the core business data and business architecture.

Deep-dive into the differences between capability-driven and data-centric approaches to architecture in our next blog.


September 12, 2017

Calling all telcos: Monetize the power of digital using B2B2X model

Author: Vishvajeet Saraf, Principal Technology Architect, Enterprise Architecture


Nowadays, every enterprise is adopting digital initiatives to serve its customers better - and differentiation is key. Typically, enterprises collaborate with communication service providers, or telecom companies, for access and mobile solutions for their employees. Now, the healthcare, retail, government, insurance, energy, and transport industries are looking for trusted partners who can help them create unique, industry-specific propositions that delight customers through innovative digital services. Think of emerging use cases such as partnering with telcos for smart cities, remote health monitoring, and location-based services. There is significant merit in this model: partnering with telcos offers significant cost benefits compared with building these capabilities from the ground up.

 

In my opinion, this presents telcos with a very lucrative opportunity to become such trusted partners. They already possess the digital infrastructure needed to provide content, network analytics, access/connectivity, mobility, and IoT services to customers. All they need is the right solution and the right environment. 


telecom digital incubator.jpg


Digital incubators - Test before you invest


All innovation begins with testing. Only by failing early and failing fast can new ideas improve and grow into value-generating services. So enterprises first need an ecosystem where they can test their propositions before they invest - and this is precisely what telcos can offer. Telcos can provide enterprises across industries with an ecosystem for prototyping, building, and rolling out propositions to their customers - a 'digital incubator', one might say. Such digital incubators are a win-win: enterprises can test ideas and roll out successful propositions, while telcos gain a new revenue stream that accelerates the B2B2C model.


Here is how telcos can monetize this opportunity for maximum returns:

  • Position themselves as digital service providers (DSPs) of traditional telco services (telephony, broadband, fixed data, etc.), while partnering with content providers to offer content and media services along with access to NFV/SDN, the IoT ecosystem, and data/network probes/analytics for enterprise customers
  • Create a digital platform that delivers these capabilities as lightweight digital services or APIs for enterprise customers
  • Become a digital incubator by creating an open API-based ecosystem that allows enterprises to trial, test, and build unique propositions for their customers

digital incubator ecosystem.png

How to build an incubator ecosystem?


Below are the key elements of an open API-based digital incubator ecosystem that is built around a digital platform.


incubator ecosystem components.png


I want to highlight that, in such an environment, it is critical to make APIs publicly available. Let me explain why: publicly available APIs give developers and enterprises programmatic access to the digital capabilities of the chosen telco. In this way, enterprises can develop custom offerings through apps and other channels instead of relying on their telco partner to develop industry-specific offerings. In a manner of speaking, it is about creating a 'you do, I support' model instead of an 'I do everything' model.


Here are the key components for building an incubator ecosystem:


  • Open API specs portal: Through this portal, enterprises and developers can access publicly available API details such as API specifications, sample payloads, error codes, access details, and limitations (if any)
  • Developer portal and accelerators: This portal not only allows developers to register their apps but also educates them. Through this channel, telcos can receive feedback and suggestions from enterprises and developers about the APIs they expose. Several API gateway products provide this feature, helping DSPs build developer portals
  • Test environments: These allow enterprises to trial and prototype services with the exposed APIs
  • Digital studios: These are important and require some investment by enterprises. Digital studios act as demo centers where enterprises can demonstrate their propositions to their customers (B2B2X). Telcos can also provide value-added services, such as rapid prototyping tools and designers, to help enterprises roll out demos
  • Revenue models, API monetization and partner on-boarding: Needless to say, an open API ecosystem will trigger new revenue models. It is therefore important to plan for them and create partner/enterprise on-boarding processes that cater to different scenarios and industry verticals. Many API gateway products provide key capabilities for API monetization

Infosys has done it already!

The Infosys Enterprise Architecture team has helped several telcos build digital platforms for successful digital transformation. We help telcos develop the building blocks for digital incubators by leveraging our proprietary accelerators and frameworks such as:


  • Infosys Digital Enterprise Architecture (I-DEA) for defining the ecosystem architecture
  • Infosys Cornerstone Platform to accelerate microservices development as well as digital platform delivery
  • Experienced Infosys enterprise architects and reusable artifacts to strategize and build open API ecosystems for large telcos and government organizations
  • Infosys Digital Studios, a unique differentiator, to help telcos showcase initial demos to their enterprise customers


The inspiration behind this blog is Binooj Purayath.


References

Models and partner on-boarding templates by TMForum

 

 

Infosys EA Blogging Series

Our Enterprise Architecture blog series covers all aspects of business, information, and technical architecture to demonstrate how we work with teams across Infosys to provide innovative, coherent technology strategy and chief architect expertise to our clients worldwide. For more information on our Enterprise Architecture services, please find us here.

August 30, 2017

Simple definitions to get smarter: From 'big data' to 'AI'

Author: Ramkumar Dargha, AVP and Senior Principal Technology Architect, Enterprise Architecture

Today, business parlance is peppered with terms such as big data, data analytics, data science, machine learning, artificial intelligence, and automation. We have all heard these terms used - some even interchangeably. For those of you wondering what they actually mean, fear not: in this blog, I will attempt to demystify these trends, highlight their significance and, most importantly, explain the vital role they play in automation and artificial intelligence. Note that none of these explanations comes from industry-standard definitions or existing literature; they are drawn from my experience and shared in the hope of making these complex terms more comprehensible.

Business jargon 101

Let me begin with an illustration. The following diagram depicts how each trend works individually and within an ecosystem. While this may seem confusing now, I recommend you refer to it once each term is better understood.

[Diagram: how each trend works individually and within an ecosystem]

Data science - Firstly, the word 'data' here refers to all data types (like unstructured data) and all data sources (like traditional data warehouses). Data science, then, is the field that encompasses the entire journey or lifecycle of data. It includes steps such as ingesting data, processing it, applying algorithms, generating insights, and visualizing actionable insights.

Big data - This refers to data characterized by the four Vs, namely, high volume, high speed (velocity), high diversity (variety), and high veracity (uncertainty or ambiguity in the data). On second thought, we may even say five Vs, since big data adds significant value to enterprise operations! Much like data science, big data also represents the entire data lifecycle, which may cause some confusion - but let us bear with this. This brings me to the next important term.

Machine learning - Machine learning is the ability of machines to learn on their own through data - just as humans do through their environment. In machine learning, machines understand and learn from data, apply that learning and, based on the results, revise it as new data arrives. All this is done iteratively. Here, learning refers to the process by which machines convert data into insights and apply those insights to take action. As you may have observed, data is key, particularly big data. However, ML can also use traditional data for algorithms like classification, linear regression, clustering, etc.

Data analytics - But how do machines learn from data? This is where data analytics comes in. Data analytics uses machine learning algorithms like those mentioned above to uncover patterns hidden in input data. These patterns are applied to new (but similar) datasets to create inferences based on past data. These inferences then become insights for future business actions. To know more about how to get data analytics right, check out my blog on "Data Analytics: Doing it Right!".
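To make the learn-then-infer cycle concrete, here is a minimal, self-contained sketch of a machine "learning" a pattern from past data and applying it to a new data point. The ad-spend scenario and every number in it are invented purely for illustration; real data analytics would use richer data and library implementations.

```python
# A toy example of "learning from data": fit a linear trend to past
# observations by ordinary least squares, then apply the learned
# pattern to new, similar data. All numbers are hypothetical.

def fit_linear(xs, ys):
    """Learn slope and intercept from past (x, y) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Past data": monthly ad spend (x) vs. sales (y), both made up
spend = [1.0, 2.0, 3.0, 4.0]
sales = [2.1, 3.9, 6.0, 8.1]

slope, intercept = fit_linear(spend, sales)

# Inference: apply the learned pattern to a new, unseen data point
predicted = slope * 5.0 + intercept
print(f"predicted sales at spend 5.0: {predicted:.2f}")
```

The "pattern" here is just a slope and an intercept, but the shape of the process - fit on past data, infer on new data - is the same one that more sophisticated algorithms follow.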

AI - In 5 steps

In my opinion, artificial intelligence (AI) has five main steps, which are described below: 

  1. Curate/acquire knowledge using approaches such as natural language processing (NLP), optical character recognition (OCR), etc.
  2. Generate business rules using knowledge gained through the knowledge curation process or from insights/intelligence acquired through various machine learning techniques (as mentioned above)
  3. Leverage an automation engine that stores the collected knowledge and insights as code
  4. Take business actions either automatically through the automation engine or manually where human intervention is required
  5. Use the feedback loop for continuous improvement by learning new patterns and un-learning old ones (when needed) in an iterative manner, just as humans learn, unlearn and re-learn on a continuous basis
One clarification to be made here: some literature considers the generation of insights through machine learning and data analytics (Step 2) as part of knowledge curation (Step 1). I have intentionally separated the two. In my view, knowledge curation is about acquiring knowledge from an information source, such as literature and existing documents, through NLP, search, OCR, etc., whereas gaining insights through machine learning is done by applying ML algorithms to existing machine data. These are two distinct processes of acquiring knowledge. There are also traditional sources of knowledge, such as human research and discovery, that can be used to create business rules. This is represented as 'other knowledge source' in the diagram, and it does not necessarily come under the scope of AI.
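The five steps above can be sketched as a small feedback loop. Everything in this sketch - the invoice scenario, the rule, the threshold - is hypothetical and chosen only to show how curated knowledge becomes executable rules, automated actions, and revised knowledge:

```python
# An illustrative sketch of the five AI steps as a feedback loop.
# All names, rules, and numbers here are hypothetical.

def curate_knowledge():
    # Step 1: acquire knowledge (stands in for NLP/OCR over documents)
    return {"overdue_days_threshold": 30}

def generate_rules(knowledge):
    # Step 2: turn curated knowledge into an executable business rule
    limit = knowledge["overdue_days_threshold"]
    return lambda invoice: "remind" if invoice["overdue_days"] > limit else "wait"

def automation_engine(rule, invoices):
    # Steps 3-4: store the rule as code and take actions automatically
    return {inv["id"]: rule(inv) for inv in invoices}

def feedback(knowledge, outcomes):
    # Step 5: revise the knowledge when results suggest the rule is too lax
    reminded = sum(1 for action in outcomes.values() if action == "remind")
    if reminded == 0:
        knowledge["overdue_days_threshold"] -= 5  # learn a stricter rule
    return knowledge

invoices = [{"id": "A1", "overdue_days": 12}, {"id": "A2", "overdue_days": 45}]
knowledge = curate_knowledge()
actions = automation_engine(generate_rules(knowledge), invoices)
knowledge = feedback(knowledge, actions)
print(actions)
```

In a real system, each function would be a substantial subsystem - an NLP pipeline, a rules engine, an orchestration layer - but the flow of knowledge into rules, rules into actions, and outcomes back into knowledge is the same.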

I hope this piece has helped you better understand these complex concepts. Any thoughts or suggestions on how to improve these definitions? Please feel free to leave your comments and suggestions below.




August 29, 2017

Blockchain - New kid on the block

By Ramanath Shanbhag, Senior Principal Technology Architect

In some of the conversations I've had with our clients, I get asked how to identify the applicability of blockchain to their organization. There are many articles on the technical aspects of blockchain as well as its implications in various industries. There is a very good PoV on the Infosys digital site on a few applications of blockchain in some industries.

Without going into the history of blockchain, which started with cryptocurrencies such as Bitcoin, blockchain at a basic level is a distributed ledger system. What you record within the ledger is a matter of your business. You could choose to record the value of money itself, in which case it becomes a cryptocurrency, or you could record a payments trail, in which case it becomes a payments system. One could choose to record proof of provenance, as in agriculture, demonstrating the origins of a particular produce, in which case it becomes a certificate of assurance. Or one could choose to record a contract, so that its terms and conditions are tracked, audited, and enforced. Whatever you log within the ledger, the blockchain guarantees immutability and an audit trail.
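To see why the ledger is tamper-evident, here is a toy sketch of a hash-linked chain of records. It deliberately omits consensus, networking, and mining, and the payment records are invented; the point is only that rewriting any past entry breaks the chain.

```python
# A minimal, illustrative hash-linked ledger: each block stores a hash
# of the previous block, so altering any past record is detectable.
import hashlib
import json

def make_block(record, prev_hash):
    """Create a block whose hash covers its record and its predecessor's hash."""
    body = {"record": record, "prev_hash": prev_hash}
    block = dict(body)
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Verify every link: stored hashes must match recomputed ones."""
    for prev, curr in zip(chain, chain[1:]):
        body = {"record": curr["record"], "prev_hash": curr["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != recomputed:
            return False
    return True

# Record a (hypothetical) payments trail, one payment per block
chain = [make_block("genesis", "0")]
chain.append(make_block("pay $100 to shipper", chain[-1]["hash"]))
chain.append(make_block("pay $40 customs duty", chain[-1]["hash"]))

print(chain_is_valid(chain))             # the untampered ledger verifies
chain[1]["record"] = "pay $1 to shipper" # try to rewrite history
print(chain_is_valid(chain))             # tampering is detected
```

Real blockchains add distributed consensus on top of this structure so that no single party can quietly rebuild the chain after tampering, which is what makes the audit trail trustworthy across organizations.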

The key characteristic of blockchain is its ability to establish trustworthiness by demonstrating an audit trail. So what are the key characteristics of a blockchain-worthy process, and how do we find processes in which blockchain can be implemented? I would define a worthy blockchain process as having a few distinct characteristics, which can be uncovered by asking the questions below.

  1. Does the process involve ecosystem players, especially outside of your organization's control?

  2. Are there existing processes which are costly in terms of operations, because of issues of trust between ecosystem players?

  3. Are there processes / businesses which are unexplored or not possible today because of issues of trust and provenance in the ecosystem?

The fundamental problem that blockchain solves is the problem of trust between multiple parties in an ecosystem, by providing immutability to distributed transactions. Let's take the example of the shipping industry. A simple process of shipping goods from one location to another involves multiple parties, including the sender, receiver, shipper, freight forwarders, ocean carriers, ports and customs, and banks. A lot of effort, time, and cost goes into managing and coordinating documents across these different parties, given the nature of the entire process. There are obviously significant cost savings to be had if we implement blockchain to resolve some of these issues of trust. But it's not just the system that needs to be built; it's the coordination within the entire ecosystem, so as to ensure that the cost, time, and transparency benefits are shared. That is a business process re-engineering exercise and involves business and management effort, not just technology.

Let's take another example from the agriculture industry. Within agriculture, a lot of focus is on sustainable agriculture practices. Companies have committed to adhere to good agricultural practices (GAP) and sustainable farming and procurement policies. Consumers are becoming more health conscious and may be willing to pay for products adhering to sustainable agriculture practices. The ecosystem has disparate players, from farmers to commodity processors to FMCG manufacturers and retail outlets. It is difficult to provide proof of the origin of the produce and adherence to GAP at a batch level. Will the ability to provide proof of provenance allow companies to create products and brands with higher value for the players? Will customers be willing to pay more for such brands? If technology can provide this proof of provenance, will companies be able to create new, differentiated products? These are questions that need to be validated with market research and refined using design thinking and business ideation. And if the answer to the above questions is that more value can be created by increasing trust and transparency in the ecosystem, then blockchain is the right building block for the process.

Blockchain can be an enabler, but by itself, it may not be sufficient to create value within the ecosystem. Supported by other technologies such as IoT and mobile, it can create new ecosystems. Take, for example, the energy industry, where microgrids can be created so that the supply of energy, which is typically centralized, can be decentralized. By leveraging blockchain, IoT, and mobile applications, a new ecosystem driven by local communities can be created to supplement an existing ecosystem of energy distribution, as has been demonstrated by the Brooklyn microgrid.

Blockchain is the new kid on the block. But it can be useful only if we are able to discover the right use cases for the tool and engage with the right ecosystem of partners to extract value from the value chain or to create a new one. At Infosys, we can help you find the right processes for your organization and help implement them. For more details on Infosys blockchain offerings, refer to https://www.infosys.com/blockchain/#offerings.

The question remains as to which organization will take the lead in creating this ecosystem, and what additional benefits can be accrued from building it. Well, the answer depends on which dominant player in the ecosystem wants to take the lead. Or perhaps on who wants to become the dominant player by taking the lead. Do share with us your views about the potential of blockchain for your organization.

