
Summoning the Demon

Recently, we published an article on the "AI Coworker". Thanks for all your views. We talked about the role "Buddy" might play: increasing the productivity of field operations, helping decision support center analysts make sense of all the data being generated, and getting a jump on predictive decisions that could create more value from your existing assets. It sounds like a great opportunity to integrate human experience with artificial intelligence, but there are a few challenges along the way, which we discuss in this article. Will artificial intelligence take over in a digital world, pushing humans to the side? Will "Buddy" be a valuable partner, or will AI prove to be "our biggest existential threat," as the entrepreneur Elon Musk put it when he compared the research under way to "summoning the demon"?

Warnings about the potential impact of artificial intelligence have recently come from prominent business and technology leaders. Some warn the technology will destroy jobs, while others point to ways it will create new ones. Speaking to the BBC in December 2014, Professor Stephen Hawking said, "the development of full artificial intelligence could spell the end of the human race." Hawking went on to explain that artificial intelligence "would take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

Now, we are not going to go as far as suggesting we are nearing the "end of the human race." We have a few more practical challenges to consider first. Any successful implementation of new technology will be planned and rolled out in stages, as people and the existing business culture grow more comfortable with the AI agent. It is a matter of trust, a viable business case, and leadership direction and encouragement. All of this assumes, of course, that the technology works.

First, does the AI agent understand what I am asking for? Putting a natural language speech interface on the AI system is a start. This requires adopting a common language, or semantic ontology, that covers even very specific terms and acronyms (can Buddy pick up the buzzwords as he reads the comment columns of the field reports?).

Second, start with easier challenges. Give Buddy responsibility for specific pieces of equipment or common problems (can Buddy recognize the precursors of equipment maintenance requirements, degraded performance or potential failure, for example?). Does Buddy know how to navigate your data ecosystem and retrieve the right data for your query from the multiple data stores where the best answers are kept? Does Buddy know how to recognize alarm conditions from your SCADA system and begin the response and triage process, based on established best practices?
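To make this concrete, Buddy's earliest responsibilities might look like simple rule-based checks against established alarm limits, long before any machine learning is involved. The sketch below illustrates that kind of starting point; every tag name, threshold and suggested action here is hypothetical, not drawn from any real SCADA system or vendor API:

```python
# A minimal, rule-based sketch of alarm recognition and triage.
# All tags, limits and actions are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Reading:
    tag: str      # sensor tag, e.g. "PUMP-101.VIB"
    value: float  # measured value

# Hypothetical per-tag alarm limits and the best-practice action to suggest
ALARM_RULES = {
    "PUMP-101.VIB": (7.1, "schedule vibration inspection"),
    "PUMP-101.TEMP": (85.0, "check bearing lubrication"),
}

def triage(readings):
    """Return (tag, suggested action) for each reading that breaches its limit."""
    actions = []
    for r in readings:
        rule = ALARM_RULES.get(r.tag)
        if rule and r.value >= rule[0]:
            actions.append((r.tag, rule[1]))
    return actions

readings = [Reading("PUMP-101.VIB", 7.5), Reading("PUMP-101.TEMP", 60.0)]
print(triage(readings))  # only the vibration reading breaches its limit
```

Starting from transparent rules like these gives operators something they can audit and trust; the learned, predictive behavior can be layered on once that trust is earned.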

As comfort and trust expand and the AI agent (Buddy) performs actions that meet expectations, more of the AI's capability can be implemented, including closed-loop automation of some field processes.

There are skeptics out there, and their concerns are relevant. With the Big Data explosion and the digitization of almost everything, large amounts of data are being collected and fed to algorithms to make predictions. What happens when a computer makes predictions accurate enough to beat the decisions humans are making? What if we come to depend too much on those algorithms, and their models go in the wrong direction, mistaking a statistical correlation for a causal relationship and causing an industrial accident? Or worse? We still need humans in the loop to keep things on the right track.

How far could AI really go? The world, digital or not, is a complex environment; little is standard, and many processes are difficult to model. A day in the life is full of uncertainty and reactions to unexpected events. The futurist and inventor Ray Kurzweil thinks true, human-level AI will arrive in less than two decades. It may well take longer, but it remains a real possibility: not a question of if, but when.

Here's something to think about. Researchers at the Facebook Artificial Intelligence Research (FAIR) lab describe using machine learning to train their dialog agents (bots) to negotiate. It turns out bots are actually quite good at deal making. At one point, the researchers write, they had to tweak one of their models because the bot-to-bot conversation "led to divergence from human language as the agents developed their own language for negotiating." Allowing two bots to converse, and to use machine learning to constantly iterate their negotiating strategies, had led them to communicate in their own non-human language, so the researchers switched to a fixed, supervised model instead. A bit scary, don't you think? Anybody remember what Stephen Hawking said?

Some worry about who will "own" or control the emerging technology. Many advanced startups in Silicon Valley are backed by massive investments from international sponsors, including Chinese investors. Does this become a political issue around intellectual property? Or a national security issue? Will the solutions be open to all, or controlled by a very few individuals, companies or governments? As data becomes recognized as a valued asset, will algorithms become the new competitive advantage, alongside physical assets?

This journey will be shaped more by human operators' change-management acceptance of the AI coworker's capabilities than by advances in the algorithms (those will proceed faster on the data science team than in the operations center). It will take several steps, over several years, and proceed at different paces in different industries and companies, but the transformation has already started.

I believe humans do not get cut out of the picture; rather, our ability to monitor and optimize complex operations will only grow as Buddy becomes a trusted coworker. Our productivity will rise as Buddy helps us sift through more data, faster, and analyze exceptions to predicted behavior. Maybe we really can do more with less with Buddy's help.

Even if we are not of the digital native generation, we can still learn new lessons. While "resistance may be futile" (to borrow from Star Trek), collaboration might be productive. Digital technology should be a welcome partner, enabling us to become more productive and gain greater insight into our business challenges. We need not fight the demon like some digital Luddite, but embrace the coming changes.

OK, Buddy let's give this a chance.


Jim Crompton is a thought leader for Noah Consulting, an Infosys Company, helping pioneer the use of automation across complex Upstream processes and enterprises to create competitive advantage. His decades of experience, combined with the development capability of Infosys, are working to ensure the successful alignment of man and machine.
