What happens if intelligent machines commit crimes?

David Yuratich
Lecturer, Bournemouth University
The fear of powerful artificial intelligence and technology is a popular theme, as seen in films such as Ex Machina, Chappie, and the Terminator series.

We may soon find ourselves dealing with fully autonomous technology that has the capacity to cause damage. This might be some form of military wardroid or law enforcement robot, but it could equally be something not created to cause harm, yet which does so by accident or error. What then? Who is culpable and liable when a robot or artificial intelligence goes haywire? Such machines clearly do not fit neatly into society's existing notions of guilt and justice.

While some may choose to dismiss this as too far into the future to concern us, remember that a robot has already been arrested for buying drugs. Dismissing the issue also ignores how quickly technology can evolve. Look at the lessons from the past – many of us still remember the world before the internet, social media, mobile technology, GPS – even phones or widely available computers. These once-dramatic innovations developed into everyday technologies which have created difficult legal challenges.

A guilty robot mind?

We quickly take technology for granted, but we should give some thought to its legal implications. One of the functions of our legal system is to regulate the behaviour of legal persons and to punish and deter offenders. It also provides remedies for those who have suffered, or are at risk of suffering, harm.

Legal persons – humans, but also companies and other organisations for the purposes of the law – are subject to rights and responsibilities. Those who design, operate, build or sell intelligent machines have legal duties – but what about the machines themselves? Our mobile phone, even with Cortana or Siri on board, does not fit the conventions for a legal person. But what if the autonomous decisions of their more advanced descendants cause harm or damage in the future?

Criminal law has two important concepts. First, liability arises when harm has been or is likely to be caused by an act or omission. Physical devices such as Google's driverless car clearly have the potential to harm, kill or damage property. Software also has the potential to cause physical harm, but the risks may extend to less immediate forms of damage such as financial loss.

Second, criminal law often requires culpability in the offender – what is known as the "guilty mind" or mens rea – the principle being that the offence, and subsequent punishment, reflects the offender's state of mind and role in proceedings. This generally means that deliberate actions are punished more severely than careless ones. It also poses a problem for treating autonomous intelligent machines under the law: how do we demonstrate the intentions of a non-human, and how can we do this within existing criminal law principles?

Robocrime?

This isn’t a new problem – similar considerations arise in trials of corporate criminality. Some thought needs to go into when, and in what circumstances, we make the designer or manufacturer liable rather than the user. Much of our current law assumes that human operators are involved.

For example, in the context of highways, the regulatory framework assumes that there is a human driver to at least some degree. Once fully autonomous vehicles arrive, that framework will require substantial changes to address the new interactions between human and machine on the road.

As intelligent technology that bypasses direct human control becomes more advanced and more widespread, these questions of risk, fault and punishment will become more pertinent. Film and television may dwell on the most extreme examples, but the legal realities are best not left to fiction.

This article is published in collaboration with The Conversation. Publication does not imply endorsement of views by the World Economic Forum.


Authors: Jeffrey Wale is a Lecturer in Law at Bournemouth University. David Yuratich is a Lecturer in Law at Bournemouth University.

Image: The DEKA Arm System is pictured in a handout image from the Pentagon's Defense Advanced Research Projects Agency (DARPA), released May 9, 2014. REUTERS/DARPA/Handout via Reuters.
