Innovation is vital to progress. Advances in science, and the new technologies flowing from them, have propelled economic and societal development throughout history. Emerging technologies that are visible today have the potential to further increase global prosperity and tackle major global challenges.
However, innovation also creates new risks. Understanding the hazards that can stem from new technologies is critical to avoiding potentially catastrophic consequences. The recent wave of cyber-attacks exemplifies how new technologies can be exploited for malicious ends and create new threats to global safety. Risk governance needs to keep pace with advances in scientific innovation.
Error or terror
What is the “next cyber”? Synthetic biology and artificial intelligence are two examples of emerging technologies with the capacity to deliver enormous benefits but which also present significant challenges to government, industry and society at large.
Take synthetic biology: creating new organisms from DNA building blocks offers the potential to fight infectious disease, treat neurological disorders, improve food security and create biofuels. The flipside is that the genetic manipulation of organisms could also cause significant harm, through error or terror. The accidental release of dangerous synthesized organisms, perhaps in the form of deadly viruses or plant mutations, could create massive damage. Bio-terrorism threats could emerge from organized groups or from lone individuals in the growing “biohacker” community who access synthetic biology inventions online.
The double-edged sword of Artificial Intelligence
Artificial intelligence (AI) also presents a double-edged sword. Advances in AI can increase economic productivity, but might also create large-scale structural unemployment leading to serious social upheaval. AI developments also raise new questions about accountability and liability: who is accountable for the decisions made by self-driving cars, when they weigh the choice of harming pedestrians versus passengers? Some have even posited that the achievement of “singularity”, when machine brains surpass human intelligence, presents an existential threat to humanity.
Risk governance for these and other emerging technologies is extremely challenging. Many more institutions, as well as communities, are engaged in R&D, and the pace of innovation is accelerating. National legal and regulatory frameworks are underdeveloped, so topics and techniques that are not explicitly covered escape scrutiny. Institutions that are meant to provide oversight struggle to cope with advances that cross departmental jurisdictions and, short on resources, they are often unable to assess the risks with the rigor that they might wish.
At the international level, weaknesses also exist. For example, the Cartagena Protocol on Biosafety provides guidelines on the handling and transportation of living modified organisms, but not on their development. The UN Convention on Biological Diversity addresses synthetic biology, but the resulting agreement is not legally binding. A live concern is that large-scale international negotiations such as the Transatlantic Trade and Investment Partnership (TTIP) may inhibit new governance proposals and influence global norms in pursuit of open markets and more streamlined regulation.
Six steps to take
So what is the way forward? Realizing the potential benefits of emerging technologies requires a willingness to accept risk, but we also need to manage this risk to avert disasters that might otherwise have been avoidable. Governance and control frameworks need to be reinvigorated, and accountability needs to be clearer. I recommend six actions:
- More energetic dialogue around risk governance priorities between stakeholders – this includes innovators, industry more broadly, civil society, governments and regulators.
- Increasing funding for, and the priority of, research related to risk governance.
- Broadening disclosure standards to allow deeper risk assessment – we need to find the right balance between confidentiality and transparency, but intellectual property rights should not be used to restrict access to information needed for effective risk regulation.
- Filling gaps in national regulation in the areas that present the greatest risk, and changing regulatory design to be more adaptable to new developments.
- Strengthening discussions within international governance bodies to reach beyond principles to more binding protocols.
- Promoting a culture of responsibility around innovation – to encourage more self-policing among innovators, and de-glamorize hackers.
Innovation must be encouraged, but in parallel we need to set a course for rigorous risk governance of emerging technologies. It is much better to confront difficult issues now than endure an incident with disastrous consequences later. As we know all too well, history is littered with risk mitigation measures that proved ineffective because they were put in place too late.
The Global Risks 2015 report is now live.
Author: John Drzik is President of Global Risks and Specialties at Marsh, Marsh & McLennan Companies
Image: A computer market during a power outage. REUTERS/Athar Hussai