
Meet the world's ‘first psychopath AI’


He's called Norman and he takes a very dim view of the world. Image: REUTERS/Alessandro Bianchi

John McKenna
Senior Writer, Formative Content

Artificial intelligence has passed a new landmark in its development: its first psychopath.

Researchers at MIT Media Lab have developed Norman, a machine-learning algorithm fed on a data diet of dark subject matter.

Norman was plugged into dark discussions on Reddit, and then asked to interpret inkblots – a standard psychological test used to detect underlying thought disorders.

When Norman was shown the inkblots, the results were predictably disturbing, especially when compared with a standard AI’s interpretation:

Norman’s inkblot interpretations differ wildly from those of a standard AI. Image: MIT

The standard AI's interpretation of the images was provided by an image-captioning machine learning algorithm that has viewed more than 1 million objects in everyday situations.

Image captioning is a form of deep learning used to generate a textual description of an image. While the standard AI learns to caption images based on a wide range of data, Norman was subjected to a narrow dataset: image captions from an infamous subreddit that is dedicated to documenting and observing the disturbing reality of death.
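To make the technique concrete, here is a minimal sketch of image captioning using a publicly available pretrained model through the Hugging Face transformers library. The model, file name and output shown are illustrative assumptions – they are not the systems or data the MIT researchers used.

```python
# Minimal image-captioning sketch. BLIP is a public pretrained
# captioner chosen purely for illustration; it is not the MIT model.
from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")

# "flowers.jpg" is a placeholder path to any local image file.
result = captioner("flowers.jpg")
print(result[0]["generated_text"])  # e.g. "a vase of flowers on a table"
```

Swap in a different training diet, as the Norman experiment did, and the same kind of architecture will produce very different captions.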

As a result, where the standard AI saw a vase of flowers, Norman saw a man shot dead. Norman interpreted another inkblot, which the standard AI saw as a red and white umbrella, as a man being electrocuted while crossing the street.

It isn’t the first time that researchers at MIT have used AI to explore the darker side of life.

It’s accelerating

In 2016, MIT developed the Nightmare Machine, an AI capable of generating scary versions of faces and famous landmarks.

It has since developed Shelley – the first AI to write horror stories.

Both Shelley and the Nightmare Machine employed deep learning through collaboration, asking internet users to vote on images they found scary or feed in scary story ideas for the AI.

The team behind Norman now hopes that, by opening the algorithm up to input from internet users, it will become more balanced in its image interpretations.

Deep learning represents the cutting edge of machine learning – a branch of AI that has existed in simpler forms for nearly 30 years, with early examples including email sorting and predictive text.

While standard machine learning can parse data and extract information to make decisions, it must be guided to know when its predictions are correct or incorrect. Deep learning, on the other hand, can learn not only to make predictions, but also to gauge whether those predictions are likely to be accurate, and to adjust how it interprets data in order to improve them.
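As a rough illustration of that feedback loop, the toy PyTorch sketch below (with invented data – nothing here comes from the Norman project) shows a small neural network repeatedly measuring how wrong its predictions are and adjusting its own weights in response:

```python
# Toy deep learning loop: predict, measure the error, adjust the
# weights, repeat. All data here is random and purely illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(100, 4)          # 100 toy examples, 4 features each
y = torch.randint(0, 2, (100,))  # toy labels: class 0 or class 1

for step in range(200):
    logits = model(x)            # make predictions
    loss = loss_fn(logits, y)    # measure how wrong they are
    optimizer.zero_grad()
    loss.backward()              # work out how to adjust each weight
    optimizer.step()             # apply the adjustment
```

The loop also makes the article's central point visible: the network can only ever learn whatever patterns happen to be in the data it is given.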

Deep learning is the technology behind today’s major AI advances, such as self-driving cars and medical diagnostics.

However, as Norman proves, even deep learning AI is only ever as accurate in its predictions as the data it is fed.

Without great data, it’s flawed

The importance of having unbiased datasets was highlighted by work carried out by researchers at MIT and Stanford University.

They found that several different types of image recognition software were most accurate in recognizing the faces of white men, and least accurate in recognizing the faces of darker-skinned women.

This bias towards white men reflects the type of data being fed into the image recognition software, which was largely based on images of employees at western technology companies.


According to the paper, researchers at a major US technology company claimed an accuracy rate of more than 97% for a face-recognition system they had designed. But the data set used to assess its performance was more than 77% male and more than 83% white.

In the case of one of the image recognition systems, this biased data led it to fail to recognize one in three faces of darker-skinned women.
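To see how a headline accuracy of more than 97% can coexist with a one-in-three failure rate for one group, consider a hypothetical test set skewed in the same way. The numbers below are invented for illustration and do not come from the paper:

```python
# Hypothetical illustration: a skewed test set lets a high overall
# accuracy hide poor performance on an under-represented group.
share_majority = 0.95   # e.g. lighter-skinned male faces
acc_majority = 0.995    # near-perfect accuracy on the majority group
share_minority = 0.05   # e.g. darker-skinned female faces
acc_minority = 0.67     # roughly "fails one in three"

overall = share_majority * acc_majority + share_minority * acc_minority
print(f"Overall accuracy: {overall:.1%}")  # 97.9%, despite the failures
```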

While experiments like Norman may be fun, they have a serious point: without diverse datasets that truly reflect reality, future machines may at best reinforce existing social prejudices, and at worst be extremely warped.
