At Infosys Cards and Payments, we help our clients harness the power of technology-led innovation across the entire payments ecosystem, encompassing payment networks, merchant services, stored value, FI payment services, and payment aggregators. Our thought leadership and design thinking approach help us co-create solutions with our clients to address their business problems.


Why I called it fraud - the world of explainable AI

Artificial Intelligence (AI) and Machine Learning (ML) have made significant progress in predicting outcomes and prescribing proactive steps to address risks. Unfortunately, these systems still cannot reliably explain why they made a decision. This 'black box' approach to AI is a problem because we cannot trust a decision that cannot explain itself. For example, an AI model may catch fraudulent transactions better than a human operator, but it cannot explain why it thinks a transaction is fraudulent. The latest developments in AI are trying to address this explainability problem through a field called Explainable AI (XAI).
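To make the problem concrete, here is a minimal, purely illustrative sketch of a "black box" fraud scorer. The feature names, weights, and threshold are all invented stand-ins for parameters a trained model would learn; the point is that the caller receives only a fraud/no-fraud answer, with no reason attached.

```python
# Hypothetical illustration of a black-box fraud scorer.
# The weights below are invented for this sketch; a real model
# would learn them from data, and they would be far less legible.

def black_box_fraud_score(txn):
    # Opaque "learned" parameters (assumed values for illustration).
    weights = {"amount": 0.004, "foreign": 1.5, "night": 0.8, "new_merchant": 1.1}
    # Combine features into a single score; the internals are hidden
    # from the caller, just as a deep network's internals would be.
    return sum(weights[k] * txn[k] for k in weights)

def is_fraud(txn, threshold=3.0):
    # The caller gets a bare yes/no, with no explanation of which
    # feature drove the decision.
    return black_box_fraud_score(txn) > threshold

# A large foreign transaction at night from a known merchant.
txn = {"amount": 900, "foreign": 1, "night": 1, "new_merchant": 0}
print(is_fraud(txn))  # flagged as fraud, but the model cannot say why
```

An analyst asked to justify blocking this card holder's transaction would have nothing to point to beyond "the score crossed the threshold" - which is exactly the gap XAI tries to close.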

Explainability is not only a technology problem; human experts also take decisions on 'gut feel'. We are not always objective or fair - we have our own cognitive biases and blind spots. Many times we take a decision correctly but cannot fully explain it. Yet society and regulations demand explanations, and we are extremely biased against machines in this regard. Starting in 2018, the European Union is enforcing a law requiring that decisions made by a machine be explainable. This is a difficult proposition for the technology sector, as AI is still learning how to explain itself.

Over time, most decisions will be taken by machines, and hence they must learn to explain their decisions to gain our trust. Machines should also be able to flag their own limitations, so that we know where human intervention is potentially required. Researchers are pursuing multiple techniques to make the inner workings of AI explainable. One of them is to design new kinds of modular deep neural networks that can each be explained at the module level, and then to plug these together like Lego blocks to achieve a more complex decision.

The day is not far away when machines will be able to catch a fraudulent transaction and explain their decision. That will be the ultimate AI success story!
