If robots had a brain

Gerald E. Loeb

In the 1960s and 1970s, a great wave of enthusiasm for artificial intelligence (AI) modelled on human thought processes crashed against our sheer ignorance of those processes. Engineers mostly stopped talking about “intelligence” and instead adopted algorithms and relational databases that appear to perform intelligently, but only within highly scripted tasks; industrial robots are a prime example. That is why moving robots from the mindless assembly line into the real world usually produces pitiful, sometimes catastrophic results. If a robot encounters an unfamiliar object, or even a familiar one in the wrong position, it is likely to proceed extremely cautiously or to damage the object or itself by handling it inappropriately. A human worker that inept would be fired immediately.

Good design starts with a clear understanding of requirements. If robots are going to work with humans in the real world, we need to formally recognize the human capabilities that we often take for granted. In particular, humans are aware of their surroundings and can draw on a lifetime of experience to identify and interact with other entities.

Awareness requires sensors, but also something else. It was relatively easy to replace biological ears with microphones and biological eyes with video cameras, but it has been much harder to achieve useful speech recognition or machine vision, especially to understand context and to interact in real time with a dynamically changing world. Our work on the way robots could be made to sense, touch and pick up objects forced us to confront a general feature of intelligent decision-making: iterative behaviour. Humans (and other animals) don’t collect all the data in an unbiased way before making a decision. Instead, they start with a contextual guess and try to confirm it as quickly as possible so they can get on with their lives.
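
This guess-and-confirm loop maps naturally onto sequential Bayesian inference. Below is a minimal sketch in Python; the object classes, the contextual prior and the toy likelihood table are all invented for illustration and are not SynTouch's actual model. The idea is simply to start from a contextual guess, take one quick measurement at a time, update the belief, and act as soon as confidence is high enough.

```python
# A minimal sketch of "guess first, confirm quickly" as sequential Bayesian
# inference. All names and numbers here are hypothetical placeholders.
import random

OBJECTS = ["ceramic mug", "styrofoam cup", "glass tumbler"]

# Prior belief from context: on an office desk, a ceramic mug is most likely.
belief = {"ceramic mug": 0.6, "styrofoam cup": 0.3, "glass tumbler": 0.1}

def likelihood(measurement, obj):
    """P(sensor reading | object hypothesis); stand-in numbers only."""
    table = {
        ("hard", "ceramic mug"): 0.90,   ("soft", "ceramic mug"): 0.10,
        ("hard", "styrofoam cup"): 0.05, ("soft", "styrofoam cup"): 0.95,
        ("hard", "glass tumbler"): 0.90, ("soft", "glass tumbler"): 0.10,
    }
    return table[(measurement, obj)]

def sense(true_obj):
    """Simulated noisy compliance reading from one quick touch."""
    return "hard" if random.random() < likelihood("hard", true_obj) else "soft"

true_obj = "styrofoam cup"
for _ in range(100):                     # cap exploration time
    if max(belief.values()) >= 0.95:     # confident enough: get on with life
        break
    m = sense(true_obj)                  # one quick exploratory touch
    # Bayes update: scale each hypothesis by how well it predicts the reading
    belief = {o: belief[o] * likelihood(m, o) for o in OBJECTS}
    z = sum(belief.values())
    belief = {o: p / z for o, p in belief.items()}

print("best guess:", max(belief, key=belief.get))
```

Note that the loop never waits to collect all possible data; it stops the moment the leading hypothesis is good enough to act on, which is the bias the paragraph above describes.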

In order to understand what an object is like, you have to touch it, or “actively explore” it. What you learn depends as much on the way you decide to touch it (the “selection of the exploratory movement”) as on the physical properties of the object. Thus, presumptions about the object inform the decision about which movement to make next, whether by a human or a robot. Human vision and machine vision, by contrast, work quite differently from each other. Human high-resolution vision covers only a tiny, central part of the retina, so we move our eyes about three times a second as we interpret a complex scene, using the previously obtained information to guide this exploration. A computer with a video camera can receive uniformly high-resolution images as long as it happens to be pointed in the right direction. Machine touch will be fundamentally more complicated than machine vision because there is no way to cheat by processing all the sensory information at once.
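
The same logic drives the selection of the exploratory movement: the current belief determines which touch would be most informative next. Here is a hedged sketch, again with an invented outcome model; it scores each candidate movement by the expected uncertainty (entropy) left after observing its result and picks the minimum. This is one common formalization of belief-guided exploration, not necessarily the author's own method.

```python
# A sketch of choosing the next exploratory movement from the current belief:
# pick the movement whose predicted readings best separate the remaining
# hypotheses (minimum expected posterior entropy). Illustrative numbers only.
import math

OBJECTS = ["ceramic mug", "styrofoam cup", "glass tumbler"]
MOVEMENTS = ["press", "slide", "tap"]

# P(reading | movement, object): a toy discrete outcome model.
OUTCOME = {
    "press": {"ceramic mug":   {"hard": 0.90, "soft": 0.10},
              "styrofoam cup": {"hard": 0.10, "soft": 0.90},
              "glass tumbler": {"hard": 0.90, "soft": 0.10}},
    "slide": {"ceramic mug":   {"rough": 0.70, "smooth": 0.30},
              "styrofoam cup": {"rough": 0.80, "smooth": 0.20},
              "glass tumbler": {"rough": 0.05, "smooth": 0.95}},
    "tap":   {"ceramic mug":   {"ring": 0.80, "thud": 0.20},
              "styrofoam cup": {"ring": 0.05, "thud": 0.95},
              "glass tumbler": {"ring": 0.90, "thud": 0.10}},
}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_posterior_entropy(belief, movement):
    """Average uncertainty remaining after making this movement."""
    readings = next(iter(OUTCOME[movement].values())).keys()
    total = 0.0
    for r in readings:
        p_r = sum(belief[o] * OUTCOME[movement][o][r] for o in OBJECTS)
        if p_r == 0:
            continue
        post = {o: belief[o] * OUTCOME[movement][o][r] / p_r for o in OBJECTS}
        total += p_r * entropy(post)
    return total

belief = {"ceramic mug": 0.6, "styrofoam cup": 0.3, "glass tumbler": 0.1}
best = min(MOVEMENTS, key=lambda m: expected_posterior_entropy(belief, m))
print("next exploratory movement:", best)
```

Given this prior, the chosen movement will be one that quickly separates the ceramic mug from the styrofoam cup, since those two hypotheses carry most of the probability mass; this is exactly the presumption-guided shortcut described above.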

Machine touch therefore requires both biomimetic sensors, which imitate the human sense of touch, and biomimetic information-processing algorithms and behaviours. This is the essence of intelligence that we must distill into computer algorithms for what is now called “strong AI”.

If robots and humans were using the same sort of intelligence, they would succeed and fail in the same ways. Because humans are always thinking as fast as possible in real time, they are easily tricked by something that seems familiar but is subtly different; this is the basis of optical and other sensory illusions. Evolution guarantees that these “mistakes” usually lead to functional, or at least rapidly correctable, behaviours. Computers and robots tend to be more objective but also to fail in laughable ways (e.g. crushing a Styrofoam coffee cup because it looks like a ceramic mug). Until we get robots and humans to succeed and fail in similar ways, we will continue to be amused by the shortcomings of robots even as we forgive our own illusions and prejudices.

Read the Technology Pioneers 2014 report.

Author: Gerald E. Loeb is Chief Executive Officer of SynTouch LLC, a World Economic Forum Technology Pioneer for 2014, and Professor of Biomedical Engineering at the University of Southern California.

Image: Children touch the hands of a humanoid robot in Zurich REUTERS/Michael Buholzer.
