In the hit TV show Person of Interest, a sentient machine tries to prevent violent crime. In the movie Avengers: Age of Ultron, an AI robot sets out on a mission to kill all humans. Countless other examples attest to the powerful hold on storytellers’ imagination of the way new technologies could impact our individual and collective security. But while the science fiction of the past used to project a vision of potential technological capabilities, the science driving the plots of today’s stories is more likely to be available to download now or next year. How much should we worry about science fiction becoming reality?
At the World Economic Forum’s Annual Meeting in Davos this year, participants discussed how the Fourth Industrial Revolution will dramatically change the character of warfare – and what we should do in response.
Throughout history, technological advances have always created asymmetries that could be exploited in warfare. At the outset, rapid technological advances usually favour the attacker, with defensive counter-measures lagging behind. As the pace of technological change accelerates, regional or global balances of power could be radically transformed by a simple software update. Yet there is little shared awareness of, or technical literacy about, how these transformations will inevitably and rapidly reshape security policies and military doctrine.
The world urgently needs a more informed discussion on how to manage the development and application of innovations that have both civilian and military applications. Approaches should come from across the diplomatic, legal and regulatory fields, and be informed by normative or ethical considerations. Before such a discussion can happen, however, there is a need to demystify the issues and establish commonly understood definitions and shared narratives.
Autonomous weapons, bio-weapons and cyberwar are three areas where greater literacy is urgently needed. While some of the applications described below may seem as fanciful as their portrayals in popular culture, change can happen quickly: not long ago, a situation where everyone has a small, always-connected computer in their pockets would have seemed like science fiction too.
Last year, more than 3,000 scientists wrote an open letter calling for a ban on autonomous weapons: machines which can be programmed to identify a target and decide to open fire without needing to check with a human first. While the technology is not there yet, some 40 countries are estimated to be working on developing autonomous weapons. Experts in Davos stressed the importance of moving fast to come up with international agreements related to this technology.
The attractions are obvious. Autonomous weapons could allow soldiers to be kept out of harm’s way, much as drones have now greatly reduced the need to risk the lives of fighter pilots – a clear plus for politicians keen to minimize casualties on their own side. Some suggest artificial intelligence could also make more coolly rational decisions than humans in the heat of battle.
However, it is not clear that autonomous weapons can be programmed with an understanding of the laws of war, which require the capacity for nuanced moral judgment. When things go wrong, does the legal and moral responsibility for their actions sit within the chain of command, or with the programmers? Where does accountability fall when the technology reacts badly to a situation it was not prepared for? The world lacks a commonly agreed framework to navigate such grey areas. But if we are not careful, control over this will soon be taken out of our hands.
It would currently cost around $35,000 for an individual to construct a basic biotech laboratory that is capable of manufacturing a bio-weapon such as a virus that attacks only members of a specific racial group. Akin to a small brewery, such a lab could be located in a home basement or garage. The equipment is not sophisticated and all the necessary knowledge is available online.
Fortunately, bio-weapons are not easy to get right. To be effective, a bio-weapon would need a synthetic virus that is hard to contain through isolation and does not die out by killing its host too quickly. Making such a weapon at scale depends on culturing yeast, which is freely available, though the process requires a great deal of skill. The spread of knowledge, however, is far more difficult to control than risky resources such as fissile material.
Nonetheless, organized or lone actors will soon be able to create physical viruses with the same ease that hackers construct computer viruses today. Steps to counter this threat, such as programming DNA synthesis machines to email the FBI if they detect suspicious activities, are under way but there is no coordinated global mechanism as yet.
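The kind of automated screening mentioned above can be sketched in a few lines: a synthesis machine checks each ordered sequence against a watchlist of sequences of concern and flags matches for human review. This is a minimal illustration, not a real biosecurity system; the watchlist entries and flagging behaviour are placeholders invented for this sketch.

```python
# Hypothetical sketch of automated screening on a DNA synthesis machine:
# each order is compared against a watchlist of sequences of concern.
# The sequence names and sequences below are placeholders, not real data.

WATCHLIST = {
    "example_toxin_gene": "ATGGCCAAGGTTACCGGA",
    "example_virulence_factor": "TTGACCGGTAACCTTGG",
}

def screen_order(order_sequence: str) -> list[str]:
    """Return the names of watchlisted sequences found in the order."""
    order = order_sequence.upper()
    return [name for name, seq in WATCHLIST.items() if seq in order]

def process_order(order_sequence: str) -> str:
    hits = screen_order(order_sequence)
    if hits:
        # A deployed system would notify the authorities at this point,
        # e.g. the emailed alerts to the FBI described above.
        return "FLAGGED for review: " + ", ".join(hits)
    return "APPROVED"
```

A real screening pipeline would match against curated databases of regulated agents and tolerate near-matches and mutations, which simple substring matching does not.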
As critical infrastructure, transport and communication systems become increasingly connected through the emerging “internet of things”, new vulnerabilities arise for cyberattacks to cause widespread damage in unpredictable ways. Some experts believe it is now possible for a small group of private individuals to replicate Stuxnet, the best-known example of a state-sponsored cyber weapon.
Cyberattacks are generally harder to attribute with confidence than physical attacks. For example, the government of Ireland does not know who was responsible for taking three of its agencies offline for a day last month – or if it does, it prefers not to say. The US’s decision to blame North Korea for the cyberattack on Sony in 2014 came after it initially claimed not to know who was responsible. Attribution is partly a question of cyber-literacy and capability, but also one of politics and strategy. Ambiguity may be preferred to absolute anonymity in cases where the motive for an attack is demonstrating capability in order to create a deterrent effect. Such ambiguity is dangerous because it expands the potential for disputes about responsibility for a cyberattack to lead to escalation of conflict.
Policy-makers struggle to understand issues related to cyberwar. Is a parallel internet now needed for matters with security implications? Is better encryption needed, despite its potential for nefarious use? Or will the availability of free messaging services with strong encryption and advances in quantum computing inevitably settle the case in favour of the advocates of privacy? If so, might a world where we can all communicate secretly actually be a safer place? With technologies evolving quickly, public-private sector collaboration is needed to explore such questions and solutions.
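One reason strong encryption is so hard to legislate away is that its core idea is simple enough to demonstrate in a few lines. The sketch below implements a one-time pad, the textbook scheme in which a truly random key as long as the message makes the ciphertext reveal nothing about the plaintext. This is purely illustrative; real messaging services use very different, more practical protocols.

```python
# Minimal one-time pad sketch (Python stdlib only). With a random key as
# long as the message, the ciphertext is information-theoretically secure.
# Illustrative only: real messaging apps do not work this way.
import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(message))  # fresh random key, same length
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR with the same key recovers the plaintext
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

The impracticality is in key distribution: the pad must be as long as all traffic and never reused, which is why deployed systems rely on public-key and symmetric ciphers instead.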
The way forward
Left to itself, warfare evolves according to a strict competitive logic, heedless of social and political consequences. Malicious actors, less constrained by normative or legal considerations, will be among the early adopters of new technologies. A future where military advantage shifts to individuals or small groups of activists makes the doctrine of “mutually assured destruction” much less relevant. The blurring of the distinction between combatants and non-combatants on an expanded battlefield makes citizens more vulnerable to novel forms of terrorism.
Efforts to update international agreements on rules of engagement face numerous challenges. Negotiating mechanisms in the current international system are cumbersome and dysfunctional. States may be unwilling to share their progress in innovating new offensive capabilities. The cutting-edge knowledge necessary to anticipate imminent innovations with the potential to revolutionize warfare often resides in the private sector.
We need to encourage fresh thinking about the kind of collaboration demanded by new forms of warfare. For example: when critical infrastructure is privately owned and operated, from electricity grids to flight control systems, how can governments and the private sector collaborate most effectively to improve its security? It is important to note that not all sub-state actors are hostile. In 2015 Chicago hosted a discussion on the foreign policy of cities, so how long before they formulate a defence policy to go with it? With the capacity to attack becoming more and more decentralized, what are the most effective ways to decentralize defence to the community or individual levels?
How can we shift incentives from developing offensive capabilities towards improving defence for the common good? The Defcon hacker conference, for example, encourages public-spirited hackers to use their skills to ensure a secure and free internet and share information on structural vulnerabilities such as those affecting critical infrastructure. But because we lack a framework for international cooperation, Defcon faces growing difficulty in encouraging participants to share their insights. The hacking skills they offer for free command increasingly high fees elsewhere, and government policies to purchase and hoard information on vulnerabilities are ramping up prices. Issues related to encryption, data integrity and security could be one example of low-hanging fruit where the interests of many stakeholders are closely enough aligned to allow for new forms of collaboration.
It is no coincidence that the Chemical Weapons Convention, widely considered one of the most successful examples of controlling arms proliferation, was elaborated with the help of industry. As technological developments promise to reshape the nature of warfare at accelerating speed, we need new mechanisms that can bring in a wider range of stakeholders, including the disruptive elements, to inform and be part of the solution.