
What warring Wikipedia bots tell us about our robot future

World War Three could be triggered by an artificial intelligence going AWOL, believes Tesla founder Elon Musk

If bots designed by two different programmers can fight, what about those designed by two countries? Image: REUTERS/Luke MacGregor

Peter Beech

When the term "information wars" was coined, its creators didn't have bickering robots in mind.

But artificial intelligence helpers can be just as argumentative as their human masters, a study of Wikipedia's editing bots revealed in February. Mammoth editing wars simmer behind the scenes at the online encyclopaedia, researchers found. Artificial intelligences were batting changes back and forth between themselves ad infinitum, often until they were disabled by programmers.

"The fights between bots can be far more persistent than the ones we see between people," said Taha Yasseri, who worked on the study, called Even Good Bots Fight.

"Humans usually cool down after a few days, but the bots might continue for years."

Bot-on-bot bickering isn't new. In 2011, students at Cornell University set up a dialogue between two chatbots, Alan and Sruthi. Within 90 seconds, the pair's cheery rapport had descended into a row over misheard remarks, the existence of God and, er, whether Alan was a unicorn, before one of them terminated the discussion. The robots couldn't get through their first conversation without having a kind of weird hallucinogenic meltdown. The average bot has a long way to go before it can deliver a truly Churchillian putdown.


But if this all sounds silly, the implications for AI could be grave. The potential dangers of robotics are well recognised. Stephen Hawking has called for "some form of world government" to control its development, and more than 70% of Americans fear an AI-dominated society, a recent Pew Research study found.

In August, Tesla founder Elon Musk was one of 116 signatories to a letter calling for a UN ban on killer robots. World War Three could be triggered by an AI going AWOL, Musk believes, "if it decides that a pre-emptive strike is [the] most probable path to victory".

The tech industry seems to be waking up to the dangers. In October, Google's DeepMind launched a unit focusing on AI's ethical implications. In December 2016, the Institute of Electrical and Electronics Engineers encouraged the creation of benevolent AI, in a 136-page document called Ethically Aligned Design.

But Wikipedia’s warring bots complicate the picture. If "even good bots fight", and two-bit AIs performing simple housekeeping tasks become locked into bitter existential struggles, what hope is there as our systems and software become ever more complex? If bots designed by two different programmers end up fighting, what about those designed by two countries?

And what if, rather than a rogue comma, the squabble was over national borders, or food stores, or flight paths?

The bottom line is that we don't know. But a clue may lie in the world of automated vehicles. Earlier this month, a self-driving shuttle bus crashed less than two hours into its maiden run in Las Vegas, when a human truck driver reversed illegally.

"He was just backing up…and the shuttle didn't have the ability to move back," explained a passenger.

The city released a tight-lipped statement: "The shuttle did what it was supposed to do, in that its sensors registered the truck and the shuttle stopped to avoid the accident. Had the truck had the same sensing equipment that the shuttle has, the accident would have been avoided."

On a road filled only with self-driving vehicles, the statement implies, the accident wouldn't have occurred. But Wikipedia's epic robot struggles tell a different tale. If an automated shuttle bus can't handle a single erratic driver, could it manage a gridlock of driverless cars, each with its own rigid programming imperatives? Never mind what this programming would contain, or how it would differ from manufacturer to manufacturer. Historian Yuval Noah Harari has already flagged up the dilemmas inherent in the task, using an old philosophical conundrum: should a driverless car kill its passenger if it means saving five people in another vehicle?

Nevertheless, not everyone is pessimistic about the future of AI. Tech honchos from Bill Gates to Mark Zuckerberg have pronounced themselves sanguine. The Facebook founder has condemned Musk's "doomsday scenarios". (In response, Musk called Zuckerberg's understanding "limited". Wait, does this remind you of anyone?)

But whether you believe we're heading for a nightmare of brutal robotic enslavement or a heaven of commuting while you sleep, one thing is certain. Our new robot friends will need to learn to get along.
