Scientists have made an AI that they think is too dangerous to release
Homework Helper
OpenAI, the artificial intelligence research company founded by tech heavyweights including Elon Musk and Peter Thiel, says it’s developed the most advanced language-processing algorithm so far.
Sample outputs suggest that the AI system is an extraordinary step forward, producing text rich with context, nuance and even something approaching humor. It’s so good, in fact, that OpenAI says it’s not releasing its code to the public because its researchers are scared it could be misused, according to a new blog post.
Unicorn Valley
The algorithm, GPT-2, was trained on some 8 million web pages, according to the new research. Given a prompt, GPT-2 is tasked with predicting the next word based on how words are used on the websites it read. In the end, the algorithm churns out passages of text that are far more coherent than past attempts to build AI with contextual knowledge of language.
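That "predict the next word, append it, repeat" loop can be sketched in a few lines. The toy below is not GPT-2 (which uses a large Transformer trained on millions of pages); it is a minimal bigram model over a made-up three-sentence corpus, just to show the autoregressive idea of extending a prompt one likely word at a time.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, prompt, max_words=10):
    """Greedily extend the prompt, one most-likely next word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        followers = counts.get(words[-1])
        if not followers:  # no known continuation: stop generating
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# Hypothetical toy corpus, loosely echoing the unicorn example below.
corpus = [
    "the unicorns spoke perfect english",
    "the unicorns lived in a remote valley",
    "scientists discovered the unicorns in the andes",
]
model = train_bigrams(corpus)
print(generate(model, "the unicorns"))
```

GPT-2 replaces the bigram counts with a neural network that conditions on the entire preceding context, which is why its output stays coherent across whole paragraphs rather than just word pairs.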
In the blog, the OpenAI researchers concede that GPT-2 works only about half the time. But the examples the team showcased in the blog post were so well-written that you'd be hard-pressed to say whether they were written by a human.
In one example, the researchers prompted their algorithm with the opening of a fictional news article about scientists who discovered unicorns.
“In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English,” the researchers wrote.
There are occasional glitches and incoherent sentences in the AI-written story, but by and large the algorithm did pretty well.
“The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science,” reads the first of nine AI-written paragraphs of the article, some of which include made-up quotes by fake scientists.
ClosedAI
But not everything is unicorns and discoveries. OpenAI chose to keep GPT-2 in-house because the algorithm could easily be used to generate misleading news articles, impersonate people, or do other shady things.
Wired tested out GPT-2, and with nothing more than the prompt “Hillary Clinton and George Soros,” OpenAI’s algorithm churned out the sort of political conspiracy nonsense that regularly appears on non-credible right-wing websites.
“It could be that someone who has malicious intent would be able to generate high quality fake news,” David Luan, OpenAI’s vice president of engineering, told Wired.