
Why cyberattacks could be war crimes


Cyberattacks avoid bombs and bullets, but they could be just as devastating. Image: REUTERS/Larry Downing

Patrick Lin
Director, Ethics and Emerging Sciences Group, California Polytechnic State University (Cal Poly)

Cyberattacks are the new normal, but when they come from abroad they can spark panic about an invisible cyberwar. If international conflicts are unavoidable, isn’t a cyberwar better than a physical war with bombs and bullets?

Sure, cyberwar is better than a kinetic or physical war in many ways, but it could also make war worse. Unless it’s very carefully designed, a cyberattack could be a war crime.

Imagine that you’re a political leader and you want to take out an enemy base. You suspect it’s a propaganda machine and is financing terrorist activities. How would you do it?

Well, you could go the old-fashioned way and call in airstrikes or send troops to blow up the building, but this would be an open declaration of war, worsening tensions. It would also be a political disaster if your troops, or even your drones, were captured.

Now, there is another way: you could launch a cyberattack against the facility. This is less visible and therefore less risky. It would take too long to hack directly into the facility’s secure network, but you’ve already created an email virus that can knock out the town’s energy grid, which would take out the base.

Let’s say you plan to disguise the malware as an official United Nations email to help ensure it’ll be opened by the local leaders. Once opened, the malware will spread autonomously across the town’s networks until it finds the energy grid, disables its controls and overloads its transformers.

Without power, the enemy headquarters has effectively been taken out, without a single boot on the ground or bullet fired. So, in this scenario, should you launch that cyberattack?

Before you do, your legal advisor might tell you: “Not so fast.”

By taking out an energy grid, you’re not only blacking out the enemy base but also all the local civilians. You will also infect innocent people’s computers with malware, using them as stepping stones to reach the energy grid. This seems to break a bedrock rule in the Laws of Armed Conflict: the principle of distinction, which requires that we never target non-combatants and that we spare them from the effects of an attack as far as possible.

A US Department of Defense team participating in a cybersecurity training exercise. Image: DoD News

Collateral damage is allowed, of course, but within limits. If a few nearby civilians are accidentally killed while some important target is blown up, that’s tragic, but not illegal in war, if the military advantage gained outweighs the deadly side effect. This is the rule of proportionality, which means that collateral damage must not be disproportionate or unreasonable.

Bombing an entire town to kill a lone sniper, for instance, would likely be disproportionate. Causing a blackout for an entire town or city? That could be excessive, too. Remember, electricity doesn’t just turn on the lights, it also keeps medicine and food refrigerated and runs air conditioning and heating units, without which hundreds of people — or more — could die in the summer or winter. Blowing up transformers could also start wildfires that affect or kill local residents.

Let’s say no town is nearby and no innocent civilians are affected in this scenario. There’s still a prior question of whether that enemy building is a legal target in the first place. If it’s only a propaganda machine and a bank for terrorists, then yes, it certainly plays a crucial role in enabling militants. But being crucial doesn’t make something a legal target. The Laws of Armed Conflict prohibit the targeting of media and financiers, permitting attacks only on people and objects that directly participate in hostilities.

Even if we can resolve all of these things (no collateral damage, no affected civilians and a confirmed legal target), there’s also a rule against perfidy, or treacherous deceit. Dressing up as a humanitarian worker or in a UN uniform to gain access and attack an enemy is an example of illegal perfidy. In your cyberattack, pretending that your email comes from UN offices might break that rule, because you’re cloaking your attack in what’s supposed to be a neutral or protected status in war.

And, even if we can somehow resolve this issue, unleashing an autonomous cyberweapon could be a problem. In ongoing debates about killer robots, a key argument is that autonomous robots are illegal if we can’t retain meaningful human control. Their autonomy may create a responsibility gap, where it’s hard to pin liability on a person if things go wrong. After all, we can’t punish artificial intelligence (AI) for its decisions and actions.

Responsibility aside, without meaningful human control, we could see “flash escalations”, as military AI interacts with other AI systems at digital speed and causes unpredictable, cascading effects too fast for us to stop. This is something like the “flash crashes” that still plague stock markets or “flash spikes” from competing price-bots that can drive the sale price of a textbook to $23 million.
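To see how quickly that kind of runaway feedback can compound, here is a minimal, purely illustrative sketch (not from the article) of two hypothetical repricing bots, each reacting to the other with a fixed multiplier, similar in spirit to the behaviour widely reported behind the $23 million textbook listing:

```python
# Purely illustrative: two hypothetical repricing bots reacting to each other.
# The multipliers are assumptions chosen to echo the widely reported 2011
# Amazon textbook incident; they are not taken from the article.

price_a = 100.0  # bot A's listing price in dollars
price_b = 100.0  # bot B's listing price in dollars
cycles = 0

while max(price_a, price_b) < 23_000_000:
    price_a = 1.27059 * price_b   # bot A prices ~27% above its rival
    price_b = 0.9983 * price_a    # bot B prices just below bot A
    cycles += 1

print(f"Price passes $23 million after only {cycles} repricing cycles: "
      f"${price_a:,.2f}")
```

The exact numbers don’t matter; the point is that two individually “reasonable” automated rules, interacting without human oversight, compound into an absurd outcome within a few dozen cycles. The flash-escalation worry is that military AI interacting with other AI could produce the same dynamic, with far worse consequences than an overpriced book.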

There are many other legal and ethical issues too, and it may seem strange that war is governed by so many rules. But the Laws of Armed Conflict exist to protect us all, so that war doesn’t become a free-for-all in which terrible, inhumane weapons are used, such as biological or chemical agents; innocent civilians pay for the sins of their politicians; and fighting is so cruel that lasting peace is impossible.

Deliberately breaking those rules means risking the charge of a war crime. It also sets a dangerous precedent that our enemies may follow, putting us all at risk. It undermines the rule of law and erodes the values such laws are meant to safeguard.


Now, it could be that those laws and norms need to evolve with technological realities. This isn’t meant to argue that cyberweapons should never be used. Again, something seems right about firing digital bullets instead of real ones. But, while we wait for the law to align with changing realities, some victims may turn to self-help measures, such as “hacking back” or counter cyberattacks, that could exacerbate international tensions.

Many other questions are now emerging. Recently, a Facebook glitch accidentally revealed personal information about its content moderators, potentially exposing them to retaliation from the terrorist groups they thwart. Under the old rules of war, it would certainly feel wrong that these civilian office workers could be legitimate targets. But if cyberspace is just another battlefield domain, then those content moderators could arguably be “combatants directly participating in hostilities” and therefore liable to attack. If that argument works, and it is untested in law, then anyone who participates in cyber operations against an adversary should be aware of this risk before signing up.

Given the risks and uncertainty, this is a conversation we need to have right now, not after the cyber genie is out of the bottle and has ripped through the laws of war. By that time, it may be too late.


