Earlier this month, researchers created AI-driven malware that can hack hospital CT scans, generating false cancer images that deceived even skilled doctors. If such malware were introduced into today’s hospital networks, healthy people could be treated with radiation or chemotherapy for non-existent tumours, while early-stage cancer patients could be sent home with false diagnoses. Today’s medical intelligence about the treatment of cancers, blood clots, brain lesions and viruses could be manipulated, corrupted and destroyed. This is just one example of how “data-poisoning” – the manipulation of data to deceive – poses a risk to our most critical infrastructures. Without a common understanding of how AI is converging with other technologies to create new and fast-moving threats, far more than our hospital visits may turn into a nightmare.
Policymakers need to start working with technologists to better understand the security risks emerging from AI’s combination with other dual-use technologies and critical information systems. If not, they must prepare for large-scale economic and social harms inflicted by new forms of automated data-poisoning and cyberattacks. In an era of increasing AI-cyber conflict, our multilateral governance system is needed more than ever.
Data attacks are the nuclear weapon of the 21st century. Far more important than who controls territory, whoever controls data has the capacity to manipulate the hearts and minds of populations. AI-driven algorithms can corrupt data to influence beliefs, attitudes, diagnoses and decision-making, with an increasingly direct impact on our day-to-day lives. Data-poisoning is a new and extremely powerful tool for those who wish to sow deception and mistrust in our systems.
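To make the mechanism concrete, here is a minimal, purely illustrative sketch of one form of data-poisoning: an attacker injects a single mislabelled record into a model’s training data, and a toy nearest-neighbour “diagnostic” model flips its prediction for a targeted input. All names, labels and numbers here are hypothetical and are not drawn from any real system or from the CT-scan research described above.

```python
import math

# A toy 1-nearest-neighbour "diagnostic" model: label 0 = "healthy", 1 = "tumour".
def predict_1nn(train, x):
    nearest = min(train, key=lambda rec: math.dist(rec[0], x))
    return nearest[1]

# Clean training records: (feature vector, label).
clean = [((-2.0, -2.0), 0), ((-2.5, -1.5), 0), ((2.0, 2.0), 1), ((1.5, 2.5), 1)]

scan = (2.1, 2.1)                      # an input that clearly resembles the tumour class
clean_pred = predict_1nn(clean, scan)  # nearest clean record is (2.0, 2.0), labelled 1

# Poisoning: the attacker injects ONE tumour-like record mislabelled "healthy".
poisoned = clean + [((2.15, 2.1), 0)]
poisoned_pred = predict_1nn(poisoned, scan)  # nearest record is now the poison point
```

The point of the sketch is not realism – attacks like the CT-scan malware operate on images and deep networks – but to show how a very small amount of poisoned data can be enough to flip a model’s output for a chosen target.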
The risk is amplified by the convergence of AI with other technologies: data-poisoning may soon infect country-wide genomics databases, and potentially weaponize biological research, nuclear facilities, manufacturing supply chains, financial trading strategies and political discourse. Unfortunately, most of these fields are governed in silos, without a good understanding of how new technologies might, through convergence, create system-wide risks at a global level. In a new report entitled The New Geopolitics of Converging Risks: The UN and Prevention in the Era of AI, I explore these inter-related risks, develop scenarios that illustrate how emerging technologies may play out in the coming period, and offer a way forward for the multilateral system to help prevent large-scale crises triggered by AI convergence.
Converging risks: data-poisoning, drone swarms and automated bio-labs
Here is one plausible scenario:
1. Data-poisoning: Similar to the falsification of hospitals’ CT scans, malicious actors could use machine-learning algorithms to wage data-poisoning attacks on automated biotech supply chains. As bio-experiments are increasingly run by AI software, malware could corrupt engineering instructions, leading to the contamination of vital stocks of antibiotics, vaccines and expensive cell-therapies.
2. Genetic-engineering: Cloud labs let you control up to 50 types of bio-experiments from anywhere in the world while sitting at your computer. Hackers could exploit such automated workflows to modify the genetic makeup of the bacterium E. coli and turn it into a multi-drug-resistant bio-agent.
3. Delivery: As a next step, hackers could harness off-the-shelf drones equipped with aerosol sprayers to spread the multi-drug-resistant bacteria within water systems or on farms. Farmers already use drones to spray insecticides on crops.
4. False narratives: Finally, hackers could inundate social media with warning messages about contaminated antibiotics, sowing fear and confusion among affected populations.
Such a combination of data-poisoning, weaponization of bio-manufacturing and manipulation of strategic information would have drastic economic costs and potentially lethal outcomes for populations. It would also significantly affect societal wellbeing. Yet the most damaging impact would be on citizens’ trust – trust in governing institutions, emergency data-systems, industrial laboratories, food supply chains, hospitals and critical infrastructures.
AI-cyber conflicts: vulnerable states and populations
New forms of covert data-poisoning attacks go far beyond biosafety and biosecurity. The capacity of a range of actors to influence public opinion and destabilize political, financial and critical institutions could have powerful, long-term implications for peace and security.
State or non-state actors can already generate high-quality forgeries targeted at an ethnic or religious group to foment violence and discrimination. In Myanmar, a UN report confirmed that Facebook posts fuelled virulent hate speech directed at Rohingya Muslims. Across India in the summer of 2018, manipulative messages on social media platforms, including Facebook and WhatsApp, painted certain groups as responsible for child abductions. The resulting hysteria led to more than 30 deaths and left many injured. As the lines between reality and deception blur, there is a growing potential for large-scale mobilization of people, resources and weapons around false narratives.
The cyber- and human-security implications of data manipulation are corrosive: the landscape of hybrid threats is expanding, and with it the attack surface. Every country is a potential target, especially those with vulnerable or outdated technological and cyber-infrastructure.
Because vulnerable states are unable to prevent and mitigate data-poisoning attacks, they could become fertile operating grounds for cyber-mercenaries, terrorist groups and other actors, increasingly compromising data integrity and the robustness of our globalized intelligence system.
We could face a new geopolitics of inequality and insecurity, driven by the growing digital and cybersecurity divide. To meet these challenges, we need a common understanding of emerging security risks across the international community, driven by incentives for a shared approach to prevention.
The need for strategic foresight
These dire scenarios point to the need to collectively develop strategic foresight about the kinds of global risks posed by AI convergence. Over the past year, I have worked with the Centre for Policy Research at UN University to begin this work and to help the UN develop better foresight capacities.
Corporate and government leaders should conduct combined foresight analyses across technological domains to anticipate and mitigate emerging threats that could harness data-manipulation and target critical infrastructures. For instance, we already know that data centres (think of medical databases or banks) and cloud environments (such as cloud bio-laboratories) are highly vulnerable to data-poisoning and other types of adversarial cyberattacks. Foresight efforts must also include cooperation with states in the global south.
From data-manipulation about the safety of vaccines or gene-therapies to disinformation campaigns about the health of financial institutions, the attack surface in AI-cyber conflicts is large and complex. Governments must urgently collaborate with the private sector to create more efficient early-warning systems to detect and analyse the sources of data forgeries and targeted propaganda. States will need to continuously map how these new deception tools influence public discourse and opinion. And they will need to foster cybersecurity and (bio)technological literacy among large swaths of the population.
I am convinced that there is no unilateral or bilateral solution to the kinds of pervasive threats posed by these new technologies. Our ability to understand emerging global security risks must be developed collectively – or it, too, risks becoming infected.