Artificial Intelligence

Algorithmic warfare is coming. Humans must retain control

Image: REUTERS/Luke MacGregor

Peter Maurer
Co-Chair, Humanitarian and Resilience Investing Initiative

Humanity is faced with a grave new reality – the rise of autonomous weapons. It may sound like a Hollywood script, but the risk is real: humans so far removed from wartime choices that life-and-death decision making is effectively left to sensors and software.

What sort of future looms if the taking of a human life is relegated to algorithms? Robots and computer systems lack the distinct and unique human ability to understand both the complex environment they’re operating in and the weapons they’re using. Moreover, the more complex weapon systems become – incorporating machine learning, for example – and the greater the freedom they are given to act without human intervention, the more unpredictable the consequences of their use.

We at the International Committee of the Red Cross believe that the responsibility for decisions to kill, injure and destroy must remain with humans. It’s the human soldier or fighter – not a machine – who understands the law and the consequences of violating it, and who is responsible for applying it. These obligations cannot be transferred to a computer program. Governments – with the input of civil society and the tech industry – must waste no time in agreeing to limits on autonomy in weapon systems.

Technological advances represent great opportunities. Whether in medicine, transport, agriculture, commerce, finance or virtually any other domain, robotics, AI and machine learning are having a transformative effect by advancing how we analyze and act upon data from the world around us. It makes sense that this technology be considered for national security and defense purposes, so it is no surprise that many countries are investing heavily in AI and in military robotic systems with greater autonomy.

But when it comes to armed conflict, we must not forget that even wars have limits. Governments must now define limits for autonomous weapon systems that ensure compliance with international humanitarian law and are firmly rooted in the principles of humanity and the dictates of public conscience.

The good news is that when the Group of Governmental Experts charged with examining autonomous weapon systems met in April this year in Geneva, Switzerland, there was broad agreement that human control must be retained over weapon systems.

In a sense, though, that’s the easy part. At their next meeting in late August, governments must now build on this agreement to answer the more difficult question: What level of human control is required to ensure both compatibility with our laws and acceptability to our values?


The answer matters because some weapon systems with autonomy in their “critical functions” of selecting and attacking targets are already in use in limited circumstances for very specific tasks, such as shooting down incoming missiles or rockets. After activation by a human operator, it is the weapon system itself that selects a target and launches a counterstrike.

However, the scope of possible future AI-powered weapon systems is much broader. It encompasses the full range of armed robotic systems, and potentially builds on the technology of AI “decision-aid” systems already being tested that analyze video feeds and detect possible targets.

With new advances happening at a breakneck pace, governments – with the support of the scientific and technical community – must take urgent action to agree on limits that will preserve the crucial human element, without stifling or slowing the technological progress that brings obvious benefits.

The alternative is the deeply unnerving prospect of a new arms race and a future where wars are fought with algorithms, with uncertain outcomes for civilians and combatants alike.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.