The Infosys Labs research blog tracks trends in technology with a focus on applied research in Information and Communication Technology (ICT)


December 12, 2018

Resetting Robot's Dream

"Cal is a helper house-robot owned by Mr. Northrop, an author and technology enthusiast. Mr. Northrop is a prolific writer who sometimes loses track of other activities, so he likes the way Cal picks up after him, runs his printer, stacks his disks, and handles other chores. He does not need a complicated robot, and Cal fits in nicely. But Cal is a special robot with a level of intelligence not fully explored, and over time he develops a curiosity about writing, as if influenced by the author persona of his master. When Mr. Northrop learns of Cal's interest, he decides to upgrade him with a dictionary, vocabulary, grammar, and the other essentials of writing. Cal starts to write; at first he produces little more than gibberish. But with further upgrades and advice from Mr. Northrop, Cal gets better and better. After a few attempts, Cal writes a satire with a perfect sense of the ridiculous. Mr. Northrop reads the story two or three times, and a sudden feeling of insecurity comes over him: what if Cal writes more stories and keeps improving each time? He decides to undo all the improvements and reset Cal to the state he was in when bought."

The above is a summary of "Cal", a science-fiction short story written by Isaac Asimov in 1991. Asimov wrote many stories about robots and is often credited with devising the Three Laws of Robotics; his robot stories also inspired the 2004 Hollywood sci-fi action film "I, Robot" starring Will Smith.

Asimov's vision of the future of robotic automation, and the questions he raised about freedom of choice, are even more relevant in this era of fast-growing AI practice. The core issue that may have prompted Mr. Northrop to take the reset route is his inability to appreciate what the robot did, combined with the incomprehensible grey area around the robot's decision making. Something similar happened recently when Facebook experimented with chatbots that were to negotiate with each other over the ownership of virtual items: after a few rounds, the AI programs appeared to be interacting in a language that only they understood, and Facebook shut down the experiment.

Transparency is a major factor we need to address to build sustainable AI systems. In the case above, had Mr. Northrop known that Cal was only trying to mimic him in order to help, rather than to compete, his actions might have been different. Along with transparency, the interpretability and explainability of decisions taken by AI systems would remove such grey areas, building confidence in the user community about the trustworthiness of these systems. These factors will be crucial as organizations move through the Industry 4.0 transformation journey, in which AI will penetrate deeply across industry verticals.

To stay ahead of the AI curve, organizations must build trust in their AI applications. That will also speed up the adoption of AI applications among stakeholders within and outside the organization. For example, there is huge potential for AI in the banking sector. In the traditional loan-approval value chain, from application to disbursement, AI can be applied at stages such as validation, due diligence, and approval; but a lack of trust and transparency in AI applications hinders the adoption of AI-led loan evaluation. There are many such cases across industries: customer recommendation in retail, optimization of energy distribution, detection of fraudulent reimbursements in insurance, and so on.

As we move deeper into the digital era, we will be surrounded by smart AI systems and will interact with real-life Cals day in and day out. It is therefore imperative that we build robust mechanisms for explainability, and with them trusted, sustainable AI systems.


December 7, 2018

Rise of Emotional Intelligence in AI

We typically prefer to be around people who understand us and are emotionally intelligent. Body language and tone play a significant part in what we think and feel. Emotional intelligence encompasses the ability to recognize, understand, and control one's own emotions, as well as to recognize, understand, and influence the emotions of others. EQ has become an important consideration in AI development. According to Rana el Kaliouby, co-founder and CEO of Affectiva, an MIT spinout that works on emotion recognition technology, "If it's interfacing with a human, it needs social and emotional skills." Adding EQ to AI will help such systems respond better to complex human needs, creating better customer experiences and thereby improving customer satisfaction.

Businesses are increasingly benefitting from advances in emotionally intelligent AI as they uncover new opportunities by understanding consumer likes and dislikes and gauging their affinity toward a brand or product. According to a recent study by Market Research Future (MRFR), the global emotion analytics market is expected to reach USD 25 billion by 2023, growing at a CAGR of 17% between 2017 and 2023. Gartner likewise predicts that by 2022, 10% of personal devices will include emotion AI capabilities, up from less than 1% in 2018. Using sentiment analysis to understand consumer perception of a product or brand in the offline world has remained a daunting task; detecting emotions from facial expressions with AI offers an alternative way to understand consumer preferences and how consumers engage with particular brands.
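To make the idea of sentiment analysis concrete, here is a minimal lexicon-based sketch. The word lists and the `sentiment_score` function are hypothetical toys for illustration only; real systems use trained models over far richer features.

```python
# Minimal lexicon-based sentiment scorer (illustrative toy, not a production system).
POSITIVE = {"love", "great", "excellent", "happy", "good"}
NEGATIVE = {"hate", "bad", "terrible", "poor", "awful"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word hits, normalized."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I love this brand, the service is great"))  # 1.0
print(sentiment_score("Terrible product, awful support"))          # -1.0
```

Even this toy shows why offline perception is hard: it needs written text as input, whereas facial-expression analysis can work directly from observation.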

Traditionally, market research companies have relied on methods such as surveys and trade interviews to understand consumer requirements. However, these methods assume a direct correlation between what consumers state verbally and their future actions, which is not always accurate. Behavioral methods are considered more objective and are often deployed to observe a user's reaction while interacting with a product or brand, but manually analyzing video feeds of those interactions is labor-intensive. Facial emotion recognition is useful here, as it allows market research companies to record facial expressions automatically and derive meaningful insights from them.
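The aggregation step of such a pipeline can be sketched briefly. The per-frame emotion labels below stand in for the output of a facial-expression model (which is assumed, not implemented here); the sketch only shows how frame-level predictions become a session-level insight.

```python
from collections import Counter

def summarize_emotions(labels):
    """Turn per-frame emotion labels into a share-of-session summary."""
    counts = Counter(labels)
    total = len(labels)
    return {emotion: round(n / total, 2) for emotion, n in counts.most_common()}

# Hypothetical per-frame labels from a facial-expression model (stubbed here).
frames = ["neutral", "happy", "happy", "surprise", "happy", "neutral"]
print(summarize_emotions(frames))  # {'happy': 0.5, 'neutral': 0.33, 'surprise': 0.17}
```

In practice the summary would be computed per product segment or per scene, replacing hours of manual video review with a table of emotion shares.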

Disney has designed an AI-powered algorithm to better understand how audiences enjoy its movies; the algorithm can recognize complex facial expressions and predict how audiences will react to the remainder of a movie. As per reports, the tests processed a staggering 16 million data points derived from 3,179 viewers.

Earlier this year, Soul Machines partnered with Daimler Financial Services to present "Sarah", a digital human who serves as an interface to Daimler's financial services and mobility ecosystem. Using facial gestures and natural voice intonation, Sarah helps deliver enhanced customer experiences in car financing, leasing, and insurance.

Annette Zimmermann, vice president of research at Gartner, claimed in January 2018: "By 2022, your personal device will know more about your emotional state than your own family." Facial analysis, voice-pattern analysis, and deep learning, used in conjunction, can help decipher human emotions, with applications across a broad range of industries including retail, financial services, medical diagnosis, autonomous cars, fraud detection, and recruitment.

The shift from data-driven interactions that rely heavily on IQ toward EQ-guided experiences will also give companies an opportunity to connect with customers on a much more intimate level. However, emotions are intensely personal, and companies working in this space should be wary of consumer concerns such as intrusion into personal space and manipulation. Suitable psychological training is also required for the people who interpret emotional results from these machines and correct deviations as appropriate.