

November 30, 2018

Explainable AI - Introduction and applications

AI systems have largely remained black boxes, with deep learning models in particular staying opaque. It has become imperative to build systems that can justify their decisions, much as humans do. Significant advances in this area will lead to autonomous systems that can learn, make decisions and act on them without the support of external agents. Explainable AI (XAI) is artificial intelligence that is designed to describe its purpose, rationale and decision-making process in a way the average person can understand. Powerful algorithms often churn out useful results without explaining how they arrived at them, so transparency is frequently sacrificed for sophisticated results. As AI models grow more complex, it is important that these systems provide verifiable explanations of the decisions they make. Key benefits of implementing XAI include:

· Aid faster and broader deployment of AI

· Bring convenience and speed to consumers while building trust

· Promote best practices around compliance, accountability and ethics

· Reduce the impact of biased algorithms

The figure below illustrates the concept of XAI as presented by the Defense Advanced Research Projects Agency (DARPA):


[Figure: Explainable AI (XAI) concept. Source: XAI concept by DARPA]

AI systems have applications across many industries. In financial services, for example, AI systems will need to explain their decision making before they can be fully embraced and trusted by the industry. If a loan application is denied by an AI-powered automated system, bank executives should be able to trace the decision to the specific step where the denial occurred and provide the reasoning behind the AI system's decision at that step.
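A minimal sketch of how such step-by-step tracing can work with an interpretable model is shown below. The feature names, toy data and resulting thresholds are hypothetical illustrations; scikit-learn's decision_path() is the actual API used here to recover the sequence of checks applied to one application.

```python
# Sketch: trace a single loan decision through an interpretable model.
# Features, toy data and thresholds are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

features = ["income", "credit_score", "debt_ratio"]

# Hypothetical historical applications: [income (k$), credit score, debt ratio]
X = np.array([[30, 580, 0.60], [85, 720, 0.20], [45, 640, 0.45],
              [95, 760, 0.15], [25, 550, 0.70], [70, 690, 0.30]])
y = np.array([0, 1, 0, 1, 0, 1])  # 0 = denied, 1 = approved

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

applicant = np.array([[40, 600, 0.55]])
node_path = model.decision_path(applicant).indices  # nodes visited, root to leaf

tree = model.tree_
for node in node_path:
    if tree.children_left[node] == tree.children_right[node]:
        continue  # leaf node: no test is applied here
    f, t = tree.feature[node], tree.threshold[node]
    went_left = applicant[0, f] <= t
    print(f"checked {features[f]} = {applicant[0, f]} "
          f"{'<=' if went_left else '>'} {t:.2f}")

print("decision:", "approved" if model.predict(applicant)[0] else "denied")
```

Each printed line corresponds to one step of the decision, so a denial can be attributed to the exact check where the application fell on the unfavorable side of a threshold.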

An AI system that determines car insurance premiums should likewise be able to provide the rationale behind its decision, based on factors such as age, gender, car type, accident history, address and mileage. It should also support a personalized experience by telling the customer what they can do to reduce their premium, for example by driving accident-free for the next year.
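A minimal sketch of such a factor-by-factor breakdown, using a simple additive rating model, might look as follows. The base rate, weights and the accident-free counterfactual are hypothetical illustrations, not actuarial values.

```python
# Sketch: per-factor premium breakdown from an additive rating model.
# All dollar amounts and weights below are hypothetical.
BASE_PREMIUM = 400.0  # hypothetical base rate in dollars

WEIGHTS = {
    "driver_age_under_25": 250.0,
    "accidents_last_3_years": 180.0,   # per recorded accident
    "annual_mileage_10k": 35.0,        # per 10,000 miles driven
    "sports_car": 120.0,
}

def explain_premium(profile: dict) -> float:
    """Print each factor's contribution and return the total premium."""
    total = BASE_PREMIUM
    print(f"base premium: ${BASE_PREMIUM:.2f}")
    for factor, weight in WEIGHTS.items():
        contribution = weight * profile.get(factor, 0)
        if contribution:
            print(f"  + ${contribution:.2f} for {factor} = {profile[factor]}")
        total += contribution
    print(f"total: ${total:.2f}")
    # Counterfactual nudge: what an accident-free record would save
    saving = WEIGHTS["accidents_last_3_years"] * profile.get("accidents_last_3_years", 0)
    if saving:
        print(f"driving accident-free would lower this by ${saving:.2f}")
    return total

explain_premium({"driver_age_under_25": 1,
                 "accidents_last_3_years": 2,
                 "annual_mileage_10k": 1.2})
```

Because the model is additive, every line of the explanation is exact rather than approximate, which is one reason simple models remain attractive where explanations are mandatory.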

An ethical risk is also prevalent in this scenario, as bias can unintentionally creep into algorithmic models and result in discriminatory practices. This puts organizations at risk, since consumers are likely to switch brands once they become aware of these prejudices. For example, certain existing algorithms have charged Asian Americans more for SAT tutoring. Facial recognition software is increasingly used in law enforcement and is also perpetuating racial and gender bias. Earlier this year, Joy Buolamwini from the Massachusetts Institute of Technology showed that gender-recognition AIs from IBM, Microsoft and the Chinese company Megvii could identify the gender of white men from a photograph with 99% accuracy, while for dark-skinned women the error rate ran as high as 35%. This raises the risk of false identification of women and minorities.
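A minimal sketch of the kind of subgroup audit behind such findings is shown below: it computes accuracy separately per demographic group and flags large gaps. The records, group labels and the 10% threshold are synthetic stand-ins, not data from the study above.

```python
# Sketch: per-group accuracy audit on synthetic stand-in data.
from collections import defaultdict

# (group, true_label, predicted_label) for a synthetic evaluation set
records = (
    [("lighter_male", "male", "male")] * 99
    + [("lighter_male", "male", "female")] * 1
    + [("darker_female", "female", "female")] * 65
    + [("darker_female", "female", "male")] * 35
)

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += (truth == pred)

accuracies = {g: correct[g] / total[g] for g in total}
for group, acc in accuracies.items():
    print(f"{group}: {acc:.0%} accuracy over {total[group]} samples")

gap = max(accuracies.values()) - min(accuracies.values())
if gap > 0.10:  # hypothetical fairness threshold
    print(f"warning: {gap:.0%} accuracy gap across groups")
```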

Explainable AI will thus help build models that identify the relevant stakeholders and the information they require about how the model arrives at its decisions. It will also surface any bias that has crept in and help data scientists weed it out at an early stage. As humans and machines work together more closely, it will be imperative for us to understand the underlying machine logic.

Transparency will also become an important compliance requirement. For example, the General Data Protection Regulation (GDPR), with its focus on the right to explanation, mandates that users be able to demand the data behind algorithmic decisions made by recommendation engines. This puts the onus on companies to translate the complicated reasoning behind AI algorithms into simple, easily interpretable language.