Emerging Technologies

Could computers learn like humans?

James Devitt
Deputy Director of Media Relations, New York University

This article is published in collaboration with Futurity.

Scientists have developed an algorithm that captures our learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans.

The work, which appears in the journal Science, dramatically shortens the time it takes computers to “learn” new concepts and broadens their application to more creative tasks.

“Our results show that by reverse engineering how people think about a problem, we can develop better algorithms,” explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper’s lead author. “Moreover, this work points to promising methods to narrow the gap for other machine learning tasks.”

When humans are exposed to a new concept—such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet—they often need only a few examples to understand its make-up and recognize new instances. While machines can now replicate some pattern-recognition tasks previously done only by humans—ATMs reading the numbers written on a check, for instance—machines typically need to be given hundreds or thousands of examples to perform with similar accuracy.

“It has been very difficult to build machines that require as little data as humans when learning a new concept,” observes coauthor Ruslan Salakhutdinov, an assistant professor of computer science at the University of Toronto. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”

Examples of the letter ‘A’

Salakhutdinov helped launch the recent wave of interest in learning with “deep neural networks” in a paper published in Science almost 10 years ago with his doctoral advisor, Geoffrey Hinton. Their algorithm learned the structure of 10 handwritten character concepts—the digits 0-9—from 6,000 examples each, or a total of 60,000 training examples.

In the new work, the researchers sought to shorten the learning process and make it more akin to the way humans acquire and apply new knowledge—i.e., learning from a small number of examples and performing a range of tasks, such as generating new examples of a concept or generating whole new concepts.

To do so, they developed a “Bayesian Program Learning” (BPL) framework, where concepts are represented as simple computer programs. For instance, the letter ‘A’ is represented by computer code—resembling the work of a computer programmer—that generates examples of that letter when the code is run. Yet no programmer is required during the learning process: the algorithm programs itself by constructing code to produce the letter it sees.

Also, unlike standard computer programs that produce the same output every time they run, these probabilistic programs produce different outputs at each execution. This allows them to capture the way instances of a concept vary, such as the differences between how two people draw the letter ‘A.’
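
The paper’s BPL model represents each character with a structured program over strokes and sub-strokes; the Python sketch below is not that implementation, and its three-stroke “skeleton” for the letter ‘A’ is invented, but it illustrates the core idea of a probabilistic program: one stored recipe that produces a different exemplar on every run.

```python
import random

rng = random.Random(42)

def letter_a_program():
    """A toy probabilistic 'program' for the concept of the letter 'A'.

    The concept is a fixed recipe of three strokes; each run adds Gaussian
    noise to the stroke endpoints, so every execution yields a slightly
    different exemplar, much as two people's handwritten A's differ while
    remaining recognizably the same letter.
    """
    def jitter(point, scale=0.05):
        x, y = point
        return (x + rng.gauss(0, scale), y + rng.gauss(0, scale))

    skeleton = [                      # the shared, type-level description
        ((0.0, 0.0), (0.5, 1.0)),     # left diagonal
        ((1.0, 0.0), (0.5, 1.0)),     # right diagonal
        ((0.25, 0.5), (0.75, 0.5)),   # crossbar
    ]
    # The token-level output: a noisy rendering of each stroke.
    return [(jitter(start), jitter(end)) for start, end in skeleton]

# Two runs of the same program yield two distinct but related exemplars.
print(letter_a_program())
print(letter_a_program())
```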

Learning to learn

While standard pattern recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns “generative models” of processes in the world, making learning a matter of “model building” or “explaining” the data provided to the algorithm.
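
To make “explaining the data” concrete, here is a minimal sketch of Bayesian scoring under invented toy representations: the binary pixel lists, the candidate “programs,” and the stand-in render function are all hypothetical, and the paper’s actual search and image model are far richer. The point it shows is simply that learning amounts to picking the generative program that best explains the observed image, trading off simplicity (the prior) against fit (the likelihood).

```python
import math

def log_prior(program):
    # Toy prior: prefer simpler explanations (fewer strokes).
    return -float(len(program["strokes"]))

def log_likelihood(observed_pixels, program, render, noise=0.1):
    # Toy pixel-flip noise model: each rendered pixel matches the
    # observation with probability 1 - noise.
    rendered = render(program)
    return sum(
        math.log(1 - noise) if obs == ren else math.log(noise)
        for obs, ren in zip(observed_pixels, rendered)
    )

def best_explanation(observed_pixels, candidates, render):
    # Learning as model building: keep the candidate program whose
    # unnormalized posterior (prior x likelihood) best explains the image.
    return max(
        candidates,
        key=lambda p: log_prior(p) + log_likelihood(observed_pixels, p, render),
    )

# Hypothetical 4-pixel "images", just to exercise the scoring.
render = lambda p: p["pixels"]
candidates = [
    {"name": "two strokes",   "strokes": [0, 1],    "pixels": [1, 1, 0, 0]},
    {"name": "three strokes", "strokes": [0, 1, 2], "pixels": [1, 1, 1, 0]},
]
observed = [1, 1, 0, 0]
print(best_explanation(observed, candidates, render)["name"])  # "two strokes"
```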

In the case of writing and recognizing letters, BPL is designed to capture both the causal and compositional properties of real-world processes, allowing the algorithm to use data more efficiently. The model also “learns to learn” by using knowledge from previous concepts to speed learning on new concepts—e.g., using knowledge of the Latin alphabet to learn letters in the Greek alphabet.
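
As a loose illustration of that reuse (the primitive names and proposal rule below are invented, not the paper’s learned stroke inventory), a candidate program for a character in an unfamiliar alphabet can be assembled from stroke primitives already abstracted from familiar alphabets:

```python
import random

# Hypothetical stroke primitives abstracted from previously learned
# alphabets (e.g. the straight lines and arcs common in Latin letters).
PRIMITIVE_LIBRARY = ["short_line", "long_line", "arc_left", "arc_right", "hook"]

def propose_character_program(rng, max_strokes=3):
    """'Learning to learn' in miniature: candidate programs for a character
    in a new alphabet are composed from primitives learned on earlier
    alphabets, rather than rediscovered from raw pixels each time."""
    n_strokes = rng.randint(1, max_strokes)
    return [rng.choice(PRIMITIVE_LIBRARY) for _ in range(n_strokes)]

rng = random.Random(0)
print(propose_character_program(rng))  # a candidate built from reused parts
```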

The authors applied their model to over 1,600 types of handwritten characters in 50 of the world’s writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic—and even invented characters such as those from the television series Futurama.

Turing tests

In addition to testing the algorithm’s ability to recognize new instances of a concept, the authors asked both humans and computers to reproduce a series of handwritten characters after being shown a single example of each character, or in some cases, to create new characters in the style of those they had been shown. The scientists then compared the outputs from both humans and machines through “visual Turing tests.” Here, human judges were given paired examples of both the human and machine output, along with the original prompt, and asked to identify which of the symbols were produced by the computer.

While judges’ correct responses varied across characters, for each visual Turing test, fewer than 25 percent of judges performed significantly better than chance in assessing whether a machine or a human produced a given set of symbols.
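
As a rough guide to what “significantly better than chance” means here (the judge’s numbers below are hypothetical, not taken from the paper), an exact one-sided binomial test asks how likely a given hit rate would be under pure guessing:

```python
from math import comb

def binomial_p_value(correct, trials, chance=0.5):
    """Exact one-sided binomial test: the probability of scoring at least
    `correct` out of `trials` if the judge were guessing at random."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# A hypothetical judge who spotted the machine on 14 of 20 paired trials:
print(round(binomial_p_value(14, 20), 3))  # ~0.058, not significant at the 5% level
```

With these made-up numbers, a judge would need roughly 15 or more correct calls out of 20 before guessing becomes an implausible explanation of their score.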

“Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven’t seen,” notes coauthor Joshua Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.

“I’ve wanted to build models of these remarkable abilities since my own doctoral work in the late nineties. We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts—even simple visual concepts such as handwritten characters—in ways that are hard to tell apart from humans’.”

The work was supported by a National Science Foundation grant to MIT’s Center for Brains, Minds and Machines, and by the Army Research Office, the Office of Naval Research, and the Moore-Sloan Data Science Environment at New York University.

Publication does not imply endorsement of views by the World Economic Forum.

To keep up with the Agenda, subscribe to our weekly newsletter.

Author: James Devitt is the Deputy Director of Media Relations at New York University and a contributor for Futurity.

Image: A man types on a computer keyboard in Warsaw. REUTERS/Kacper Pempel.
