How can we better handle the threat to international security posed by groups who turn new technologies into weapons? To make a change by 2030, Espen Barth Eide, United Nations Special Adviser on Cyprus and co-chair of the Global Future Council on International Security, says we need international cooperation and experts from a wide range of fields to think through the potential dark side of technological advances.
Why do we need a Global Future Council on International Security?
Issues of security and power relations are always there, under the surface. They may seem less pertinent if you live in Switzerland rather than Syria, but a peaceful and predictable daily life doesn’t mean those issues have gone away. It simply means that they are being managed effectively and kept under control - for now.
What are the key trends that could change international security between now and 2030?
Imagine ruling a country 110 years ago and thinking: “I have a powerful army and navy, I don’t need to worry about airplanes.” You wouldn’t have remained powerful for long. Technological change has always shaped the evolution of international security and threatened to upset the balance of power; that’s not new. What is different is the rapid and accelerating pace of technological change.
Our growing reliance on connected services has made cyberspace an entirely new domain of warfare, just as the invention of the plane created the need to defend against attacks from the air, at a time when battles had previously been fought only on land and at sea.
More generally, we’re seeing an undesired democratisation of the capacity to inflict major damage. Even very small groups of people with innovative ideas and access to new technology can now effectively challenge much larger and more organised collectives. Historically, it’s been easier to defend than attack, but that’s now rapidly changing.
What are some examples of potentially game-changing new capabilities?
Get some yeast and a few other easily accessible ingredients. Then, with the help of information found online, you can set up a home microbrewery that’s capable of manufacturing bacteriological weapons. Another example: combine homemade explosives with an off-the-shelf drone, a smartphone, and commercial face recognition and geolocation software, and you could rig up a device that’s capable of autonomously targeting particular types of individual.
Such weapons may be less powerful than nuclear weapons, but they are far more difficult to control. We needn’t fear bright college students who understand how to make a nuclear bomb, because they can’t buy fissile material on eBay. As it becomes easier to make weapons with more destructive power, there may come a time when we grow nostalgic for the relative simplicity of the task of preventing nuclear weapons from falling into the wrong hands.
Who are the key players in controlling the spread of new weapons?
Think about a company like Lockheed Martin: they’re obviously aware that they make weapons and know they have to try to follow rules and regulations, sell only to state actors, and so on. But what about the people who are developing drones, face recognition software, 3D printers, gene therapies, or myriad other technologies? Generally they are enthused by the potential for civilian public good or commercial development. The idea that what they’re doing might be weaponized couldn’t be further from their minds.
So we need to get private sector innovators thinking about the potential dark side of what they’re doing without stopping beneficial technologies from being developed. We need expertise from politicians, the military, academics and civil society to think through the implications of weaponizing new technologies. In all kinds of areas, from civil liberties to geopolitics, we are identifying questions, but not yet arriving at many answers.
Is non-proliferation of new weapons a task that needs to go beyond the United Nations?
There are pockets of interest in the United Nations in addressing these issues, and the arrival of a new team with the incoming Secretary-General offers some opportunities. But the UN has a strong preference for working through state actors, and member states need to be willing to participate. It can’t be the only place for these discussions – we need platforms that can bring in a wider range of stakeholders, and the Forum is an ideal candidate.
Where could we be by 2030?
In a best-case scenario, we would have more widespread awareness and be better at having these discussions. More academic organisations will think like the Korea Institute for Advanced Study, which explicitly encourages its students to reflect on the ethical implications of the scientific fields they are exploring. More innovators will consider the potential dark side of what they’re doing, and if they have concerns will know how to express them responsibly. We will have functioning mechanisms of international cooperation that effectively minimise the dangers of technological advance, without standing in the way of innovation.
In a worst-case scenario, none of that happens, and the world by 2030 is unstable and unpredictable, with varied threats coming from many sources. It’s a world we need to do our best to avoid.