Artificial intelligence has passed a new landmark in its development: its first psychopath.
Researchers at MIT Media Lab have developed Norman, a machine-learning algorithm fed on a data diet of dark subject matter.
Norman was plugged into dark discussions on Reddit, and then asked to interpret inkblots – a standard psychological test used to detect underlying thought disorders.
When Norman was shown the inkblots, the results were predictably disturbing, especially when compared with a standard AI’s interpretation:
The standard AI interpretation of the images was provided by an image-captioning algorithm trained on more than a million images of objects in everyday situations.
Image captioning is a form of deep learning used to generate a textual description of an image. While the standard AI learns to caption images based on a wide range of data, Norman was subjected to a narrow dataset: image captions from an infamous subreddit that is dedicated to documenting and observing the disturbing reality of death.
As a result, where the standard AI saw a vase of flowers, Norman saw a man shot dead. Where the standard AI interpreted another inkblot as a red and white umbrella, Norman saw a man being electrocuted while crossing the street.
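How completely the training captions determine the output can be seen in a toy sketch. The snippet below pairs hypothetical image feature vectors with captions and returns the caption of the nearest training example; the vectors and the pairing are invented for illustration, and real captioning systems use deep neural networks rather than a lookup, but the point carries over: the same input produces whatever the training data taught.

```python
import math

def nearest_caption(image_vec, training_set):
    """Return the caption paired with the closest training feature vector."""
    return min(training_set, key=lambda pair: math.dist(pair[0], image_vec))[1]

# Hypothetical 3-number "feature vectors" standing in for real image features.
standard_data = [((0.9, 0.1, 0.2), "a vase of flowers"),
                 ((0.2, 0.8, 0.5), "a red and white umbrella")]
norman_data   = [((0.9, 0.1, 0.2), "a man shot dead"),
                 ((0.2, 0.8, 0.5), "a man electrocuted while crossing the street")]

# The same "inkblot" features yield different captions purely because the
# two models were trained on different caption datasets.
inkblot = (0.85, 0.15, 0.25)
print(nearest_caption(inkblot, standard_data))  # a vase of flowers
print(nearest_caption(inkblot, norman_data))    # a man shot dead
```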
It isn’t the first time that researchers at MIT have used AI to explore the darker side of life.
In 2016, MIT developed the Nightmare Machine, an AI capable of generating scary versions of faces and famous landmarks.
Following on from that, it has since developed Shelley – the first AI to write horror stories.
Both Shelley and the Nightmare Machine employed deep learning through collaboration, asking internet users to vote on images they found scary or feed in scary story ideas for the AI.
The team behind Norman now hope that by opening it up to input from internet users, it will become more balanced in its image interpretations.
Deep learning represents the cutting edge of machine learning – a form of AI that has been available in simpler forms for nearly 30 years, with early examples including email sorting and predictive text.
While standard machine learning can parse data and extract information to make decisions, it typically relies on hand-engineered features and explicit guidance about which of its predictions are correct. Deep learning, on the other hand, learns its own layered representations from raw data, adjusting the way it interprets inputs in order to improve its predictions.
Deep learning is the technology behind today’s major AI advances, such as self-driving cars and medical diagnostics.
However, as Norman proves, even deep learning AI is only ever as accurate in its predictions as the data it is fed.
Without great data, it’s flawed
The importance of having unbiased datasets was highlighted by work carried out by researchers at MIT and Stanford University.
They found that several different types of image recognition software were most accurate in recognizing the faces of white men, and least accurate in recognizing the faces of darker-skinned women. This bias towards white men reflects the type of data fed into the image recognition software, which was largely drawn from images of employees at Western technology companies.
According to the paper, researchers at a major US technology company claimed an accuracy rate of more than 97% for a face-recognition system they had designed. But the data set used to assess its performance was more than 77% male and more than 83% white.
In one of the systems tested, such biased datasets meant the software failed to recognize roughly one in three faces of darker-skinned women.
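The arithmetic behind this is simple: a system's overall accuracy is just a weighted average of its accuracy on each subgroup, so a majority-heavy test set can report a high headline number while a minority group fares badly. The subgroup shares and accuracy figures below are invented for illustration, but they mirror the skew the paper describes:

```python
# Hypothetical subgroup shares and accuracies, chosen to mirror a benchmark
# dominated by lighter-skinned male faces. Overall accuracy is the
# share-weighted average of the subgroup accuracies.
subgroups = {
    "lighter-skinned men":   {"share": 0.60, "accuracy": 0.995},
    "lighter-skinned women": {"share": 0.20, "accuracy": 0.97},
    "darker-skinned men":    {"share": 0.12, "accuracy": 0.94},
    "darker-skinned women":  {"share": 0.08, "accuracy": 0.67},  # ~1 in 3 missed
}

overall = sum(g["share"] * g["accuracy"] for g in subgroups.values())
print(f"overall accuracy: {overall:.1%}")  # high, despite the worst subgroup
```

With these illustrative numbers the headline figure comes out above 95%, even though a third of one subgroup's faces go unrecognized.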
While experiments like Norman may be fun, they have a serious point: without diverse datasets that truly reflect reality, future machines may at best reinforce existing social prejudices, and at worst be extremely warped.