This article is part of the World Economic Forum's Geostrategy platform

For all of human history, politics has been fundamentally driven by conscious human action and by the collective actions and interactions of humans within networks and organizations. Now, advances in artificial intelligence (AI) hold out the prospect of a fundamental change in this arrangement: the idea of a non-human entity having specific agency could radically change our understanding of politics at the widest levels.

Not least because of the influence of literature, cinema and television, popular thinking about AI can tend towards the fanciful. Fictional, apocalyptic depictions of war between humans and robots have influenced breathless coverage of sometimes relatively minor AI developments.

Periodically, too, leading figures in the fields of science and technology have issued stark warnings that AI may pose an existential threat to human life. Together, these have given rise to a perception among the general public that a new form of intelligence that exceeds human intelligence is just around the corner – or even with us already.

Humans and limited forms of AI already coexist: AI technology helps us to navigate, to translate text and to find cheap flights, to give just a few examples; and – notwithstanding its known flaws and limitations – it looks set to be emblematic of a radically transformed future.

Small steps

But the more extreme ideas of what advances in AI may mean for how humans live, work and interact are far distant from the current reality. The nature of AI in 2018 – and very likely for the foreseeable future – is somewhat mundane. Indeed, the field is seeing relatively minor advancements that bring specific practical benefits in identified areas, rather than AI with general application.

Artificial Intelligence and International Affairs: Disruption Anticipated examines some of the challenges for policymakers that may arise from the advancement and increasing application of AI. It draws together strands of thinking about the impact that AI may have on selected areas of international affairs – from military, human security and economic perspectives – over the next 10–15 years.

The report sets out a broad framework to define and distinguish between the types of roles that artificial intelligence might play in policymaking and international affairs: these roles are identified as analytical, predictive and operational.

In analytical roles, AI systems might allow fewer humans to make higher-level decisions, or to automate repetitive tasks such as monitoring sensors set up to ensure treaty compliance. In these roles, AI may well change – and in some ways it has already changed – the structures through which human decision-makers understand the world. But the ultimate impact of those changes is likely to be attenuated rather than transformative.

Predictive uses of AI could have more acute impacts, though likely on a longer timeframe. Such applications may change how policymakers and states understand the potential outcomes of specific courses of action. If such systems become sufficiently accurate and trusted, this could create a power gap between those actors equipped with them and those without – with notably unpredictable results.

Operational uses of AI are unlikely to fully materialize in the near term. The regulatory, ethical and technological hurdles to fully autonomous vehicles, weapons and other physical-world systems such as robotic personal assistants are very high – although rapid progress towards overcoming these barriers is being made. In the longer term, however, such systems could radically transform not only the way decisions are made but the manner in which they are carried out.

The report makes the following recommendations for governments and international non-governmental organizations, which will have a particularly important role in developing and advocating for new ethical norms:

  • In the medium to long term, AI expertise must not reside in only a small number of countries – or solely within narrow segments of the population. Governments worldwide must invest in developing and retaining home-grown talent and expertise in AI if their countries are not to depend on the AI expertise currently concentrated in the US and China. And they should work to ensure that engineering talent is nurtured across a broad base in order to mitigate inherent bias issues.
  • Corporations, foundations and governments should allocate funding to develop and deploy AI systems with humanitarian goals. The humanitarian sector could derive significant benefit from such systems, which might for example decrease response times in emergencies. Since AI for humanitarian purposes is unlikely to be immediately profitable for the private sector, however, a concerted effort needs to be made to develop such systems on a not-for-profit basis.
  • Understanding of the capacities and limitations of artificially intelligent systems must not be the exclusive preserve of technical experts. Better education and training on what AI is – and, critically, what it is not – should be made as broadly available as possible, while understanding of underlying ethical and policy goals should be a much higher priority for those developing the technologies.
  • Developing strong working relationships, particularly in the defence sector, between public and private AI developers is critical, as much of the innovation is taking place in the commercial sector. Ensuring that intelligent systems charged with critical tasks can carry them out safely and ethically will require openness between different types of institutions.
  • Clear codes of practice are necessary to ensure that the benefits of AI can be shared widely while its concurrent risks are well managed. In developing these codes of practice, policymakers and technologists should understand the ways in which regulating artificially intelligent systems may be fundamentally different from regulating arms or trade flows, while also drawing relevant lessons from those models.
  • Particular attention must be paid by developers and regulators to the question of human–machine interfaces. Artificial and human intelligence are fundamentally different, and interfaces between the two must be designed carefully, and reviewed constantly, in order to avoid misunderstandings that in many applications could have serious consequences.

Artificial Intelligence and International Affairs: Disruption Anticipated, Dr Jacob Parakilas, Mary L. ‘Missy’ Cummings, Dr Heather Roff, Kenn Cukier and Hannah Bryce, Chatham House