As Picasso once observed, “Computers are useless, all they can do is answer questions.” He was right. We need humans to ask timely and important questions like, “Will machines make better decisions than humans?”
Lately it seems everyone from Stephen Hawking to Elon Musk is raising concerns about the “Singularity”: the point when machines will surpass us. But it’s not us versus the machines; it’s us and the machines. The best decisions will be made by diverse groups of humans working together with diverse groups of machines. Let’s call it the “Multiplicity”.
This idea has a long history. Three hundred years ago Swiss clockmakers used the latest advances in mechanics to build automata to explore the shifting boundaries between humans and machines. Today we routinely trust machines like auto-pilots and pacemakers to make important decisions that require precision and speed. Multiple modules and voting are often used to reduce errors. The cutting edge of research now is on machines that can learn.
The robot paradox
Research in ensemble learning has shown that a sufficiently diverse group of machines can learn to make better decisions than any single machine. The field explores mathematical models for the collective behavior of machines, in much the same way that political science studies models for the collective behavior of humans.
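The intuition behind this result is the Condorcet jury theorem applied to machines: if each model is independently right more often than not, a majority vote over many diverse models is right more often than any one of them. A minimal simulation sketch of that effect (the 70% per-model accuracy and the fifteen-model ensemble are illustrative assumptions, not figures from the research):

```python
import random

random.seed(0)

def vote(predictions):
    """Majority vote over binary predictions (true label taken to be 1)."""
    return 1 if sum(predictions) > len(predictions) / 2 else 0

def accuracy(n_models, p_correct, trials=10_000):
    """Fraction of trials in which a majority vote of n_models
    independent models, each correct with probability p_correct,
    recovers the true label."""
    hits = 0
    for _ in range(trials):
        preds = [1 if random.random() < p_correct else 0
                 for _ in range(n_models)]
        hits += vote(preds)
    return hits / trials

single = accuracy(1, 0.7)     # one model: right about 70% of the time
ensemble = accuracy(15, 0.7)  # fifteen diverse models voting: well over 90%
```

The key assumption is diversity: the models' errors must be (at least partly) independent. Fifteen copies of the same model would vote identically and gain nothing.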
Consider robots, the automata of our generation. There are now over a million robots working in factories around the world, but we still don’t have them in our homes. This is elegantly summarized in a paradox posed by Hans Moravec 30 years ago:
“Tasks that are hard for humans, like precision spot welding, are easy for robots, while tasks that are easy for humans, like clearing the dinner table, are very hard for robots.”
This is still true today, despite enormous advances in computing and theory.
Put yourself in the position of being a robot: your sensors and motors are noisy and inconsistent. Nothing is precise, not even your own body. Outside the factory, the central problem for robots is uncertainty.
Now consider Google’s robot car. Like an auto-pilot, this robot can make better decisions than a human, especially one who’s sleepy, intoxicated, or checking Instagram. This is because Google discovered that driving is similar to clearing the dinner table.
In both cases, robots can cope with uncertainty using spatial probability distributions, convolution, and statistical learning to maximize expected utility and make optimal decisions. Google’s insight is that the processing can be performed remotely in the cloud, and as a side-effect, robots share data so the collective learns to make better decisions over time.
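Maximizing expected utility can be sketched in a few lines: the robot’s noisy sensor yields a probability distribution over where an object really is, each action has a payoff that depends on the true state, and the robot picks the action whose payoff, averaged over its belief, is highest. All of the numbers below are illustrative assumptions, not data from any real robot:

```python
# P(object is at position i), e.g. the belief produced by a noisy sensor
belief = [0.1, 0.2, 0.4, 0.2, 0.1]

def utility(action, state):
    """Payoff of grasping at `action` when the object is truly at
    `state`; reward falls off linearly with distance."""
    return max(0.0, 1.0 - 0.5 * abs(action - state))

def expected_utility(action, belief):
    """Average the payoff of `action` over the belief distribution."""
    return sum(p * utility(action, s) for s, p in enumerate(belief))

# The optimal decision maximizes expected utility over all actions.
best = max(range(len(belief)), key=lambda a: expected_utility(a, belief))
```

Here the optimal grasp lands at the mode of the belief distribution; with a skewed belief or an asymmetric payoff, the same rule can prefer a position the sensor considers less likely but safer to act on.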
To study how Cloud Robots can assist surgeons, we are establishing the Center for Automation and Learning for Medical Robotics (Cal-MR) at UC Berkeley. In October, we showed for the first time that surgical robots can learn how to perform repetitive subtasks by analyzing a diverse set of examples provided by human surgeons.
A vital ingredient for both Google’s robot cars and surgical robots is diversity: learning from a sufficiently diverse set of examples, which in turn requires engaging a sufficiently diverse group of humans.
So in the spirit of Picasso, let’s ditch the “Singularity” and focus instead on “Multiplicity”, a much more practical and useful model where diverse groups of humans ask important questions and work together with diverse groups of machines to answer them.
This article is published in collaboration with Medium. Publication does not imply endorsement of views by the World Economic Forum.
Author: Ken Goldberg is an American artist, writer, inventor, and researcher in the field of robotics and automation.
Image: A man shakes hands with a robotic prosthetic hand in the Intel booth at the International Consumer Electronics show (CES) in Las Vegas, Nevada January 6, 2015. REUTERS/Rick Wilking.