Emerging Technologies

Scientists have made an AI that they think is too dangerous to release


The algorithm, GPT-2, was trained on some 8 million web pages, according to the new research. Image: REUTERS/Mike Blake

Dan Robitzski
Journalist, Futurism

Homework Helper

OpenAI, the artificial intelligence research company founded by tech heavyweights including Elon Musk and Peter Thiel, says it’s developed the most advanced language-processing algorithm so far.

Sample outputs suggest that the AI system is an extraordinary step forward, producing text rich with context, nuance and even something approaching humor. It’s so good, in fact, that OpenAI says it’s not releasing its code to the public because its researchers are scared it could be misused, according to a new blog post.

Unicorn Valley

The algorithm, GPT-2, was trained on some 8 million web pages, according to the new research. Given a prompt, GPT-2 is tasked with predicting the next word based on how words have been used across the websites it was trained on. In the end, the algorithm churns out passages of text that are far more coherent than those produced by past attempts to build AI with a contextual grasp of language.
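To make that "predict the next word, then repeat" idea concrete, here is a minimal, purely illustrative Python sketch of autoregressive text generation using a toy table of word-pair counts. This is not OpenAI's model or code (which was withheld); the tiny corpus, function names and parameters below are made up for illustration, and GPT-2 itself is a far larger neural network trained on millions of web pages.

```python
# Illustrative sketch only: a toy next-word predictor built from bigram counts.
# It mirrors the core idea described above (predict the next word from how
# words were used in the training text), at a vastly smaller scale than GPT-2.
# The corpus and names here are hypothetical.
from collections import Counter, defaultdict
import random

corpus = (
    "the unicorns spoke perfect english and the scientists were stunned "
    "the scientists named the herd after their distinctive horn"
).split()

# Count which words follow each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(prompt_word: str, length: int = 8) -> str:
    """Repeatedly predict a next word and append it: autoregressive generation."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed the last one.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the"))
```

Swapping that word-count table for a large neural network trained on billions of words of web text is, in rough terms, what separates this toy from GPT-2.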

Image: OpenAI

In the blog, the OpenAI researchers concede that GPT-2 works only about half the time. But the examples that the team showcased in the blog post were so well-written that you'd be hard-pressed to say whether they were written by a human.

In one example, the researchers prompted their algorithm with the opening of a fictional news article about scientists who discovered unicorns.

“In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English,” the researchers wrote.

There are occasional glitches and incoherent sentences in the AI-written story, but by and large the algorithm did pretty well.

“The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science,” reads the first of nine AI-written paragraphs of the article, some of which include made-up quotes by fake scientists.


ClosedAI

But not everything is unicorns and discoveries. OpenAI chose to keep GPT-2 in-house because the algorithm could easily be used to generate misleading news articles, impersonate people online, or do other shady things.

Wired tested out GPT-2, and with nothing more than the prompt “Hillary Clinton and George Soros,” OpenAI’s algorithm churned out the sort of political conspiracy nonsense that regularly appears on non-credible right-wing websites.

“It could be that someone who has malicious intent would be able to generate high quality fake news,” David Luan, OpenAI’s vice president of engineering, told Wired.


