
April 16, 2015

Great Strides in A.I. Come From Video Games

Posted by Dr. Srinivas Padmanabhuni (View Profile | View All Posts) at 9:54 AM

Computer teaches itself to play games - BBC News [Source: https://www.youtube.com/watch?v=nwx96e7qck0]

Artificial Intelligence (A.I.) has been on our minds and our innovation agendas since at least the 1950s, when friendly robots began appearing in science fiction TV shows. But what the public really thought about A.I. was probably best captured in the late 1960s classic, 2001: A Space Odyssey, in which the computer HAL becomes smart enough to take over the spaceship.

I never subscribed to the notion that A.I. would result in a sinister plot by computers to take over the world, or even to the recent furore over 'safe A.I.' However, I do believe it is important to focus on what the core task of A.I. has always been - trusted, self-learning machines that free humans from menial tasks so that they can concentrate on solving larger, more complex problems.

So it was with great interest that I read about a new computer that has mastered some 50 video games on the classic 1980s gaming console, the Atari 2600. Those of you who are reading this and are under the age of 40 probably won't even know what the Atari 2600 is, or how popular it was in the late 1970s and early 1980s. It was an important device in that it brought video gaming into people's homes - you could hook up the console to the television set and play a number of well-known games that first appeared in public arcades. Remember Pong, Space Invaders, Asteroids, and Pac-Man?

Each game took a while to figure out and (eventually) master. That's what made the 2600 such a commercial success. Atari and third-party developers were constantly releasing new game cartridges, which children added to their collections once they tired of a previous game. It appears as though a program known as the deep Q-network (DQN) has mastered many of these vintage video games by teaching itself.

At first, aficionados of A.I. compared DQN to IBM's supercomputer Deep Blue, the machine that beat chess champion Garry Kasparov in 1997. But remember that Deep Blue was programmed to be a chess-playing machine. Likewise, I was also reminded of the work on the computer program Chinook, which was declared the Man-Machine World Champion in checkers in 1994. Chinook was developed at the University of Alberta (which also happens to be my alma mater!) well before Deep Blue.

DQN, on the other hand, taught itself how to play each Atari game and eventually became really good at playing them - no outside assistance provided. In fact, DQN is giving rise to a new term: 'general A.I.' It's ushering in a new era of computers that can indeed teach themselves how to do certain things on their own.

One expert wrote that what made the DQN such a success with 1980s video games is that it combines two A.I. techniques: deep neural networks, a cornerstone of A.I. research since the early 1980s (when, ironically, the Atari was at the height of its popularity), and reinforcement learning, a method modeled on behavioral psychology in which a program learns from trial, error, and reward.
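To make the reinforcement-learning half of that pairing concrete, here is a minimal, self-contained sketch of tabular Q-learning on a tiny invented corridor "game": the agent starts at the left end and earns a reward only at the right end. This is purely illustrative - the environment, hyperparameters, and names are my own, not DeepMind's; DQN's innovation was replacing this lookup table with a deep neural network that reads raw screen pixels.

```python
import random

N_STATES = 5          # corridor cells 0..4; the reward waits at the right end
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: estimated value of taking each action in each state
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Advance the game one move; reward 1 only on reaching the rightmost cell."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore at random (breaking ties randomly too)
            if rng.random() < EPSILON:
                action = rng.choice(ACTIONS)
            else:
                qmax = max(Q[(state, a)] for a in ACTIONS)
                action = rng.choice([a for a in ACTIONS if Q[(state, a)] == qmax])
            nxt, reward, done = step(state, action)
            # the Q-learning update: nudge the estimate toward
            # (immediate reward + discounted value of the best next action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = nxt

if __name__ == "__main__":
    train()
    # after training, the greedy policy in cells 0..3 should be "move right" (+1)
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The key point is that nobody tells the agent the rules: it starts out moving at random, and the reward signal alone gradually shapes the Q-values until the best action in every cell is to head toward the goal - the same learn-by-playing loop that, scaled up with a deep network, let DQN master Atari games.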

What watchers of A.I. progress are so excited about is that the DQN is adaptable, much like humans and animals. The big difference between a brain and a computer has been that the latter had to be programmed with a specific set of instructions on how to behave; without them, it could not teach itself to overcome problems or think for itself. Now that the DQN has shown us that a machine can indeed be adaptable, the floodgates have truly opened.

My prediction is that enterprises will be asking how they can apply this self-learning technology to their own business models. Think of all the tasks currently accomplished by humans that could someday be performed by A.I.-enabled machines. Across verticals, A.I. has already proven itself at tasks such as fraud detection, diagnosis, language translation, and social media analytics. Even in IT services, a wave of robotic automation is helping to take on repetitive tasks in remote IT support and remote process support.

It's yet another sign that we're living through an era defined by a new human revolution.
