Artificial Intelligence

The genie of emerging technology

John P. Drzik
Chairman, Marsh & McLennan Insights

Innovation is vital to progress. Advances in science, and the new technologies flowing from them, have propelled economic and societal development throughout history. Emerging technologies that are visible today have the potential to further increase global prosperity and tackle major global challenges.

However, innovation also creates new risks. Understanding the hazards that can stem from new technologies is critical to avoiding potentially catastrophic consequences. The recent wave of cyber-attacks exemplifies how new technologies can be exploited for malicious ends and create new threats to global safety. Risk governance needs to keep pace with advances in scientific innovation.

Error or terror

What is the “next cyber”? Synthetic biology and artificial intelligence are two examples of emerging technologies with the capacity to deliver enormous benefits, but which also present significant challenges to government, industry and society at large.

Take synthetic biology: creating new organisms from DNA building blocks offers the potential to fight infectious disease, treat neurological disorders, improve food security and create biofuels. The flipside is that the genetic manipulation of organisms could also cause significant harm, through error or terror. The accidental release of dangerous synthesized organisms, perhaps in the form of deadly viruses or plant mutations, could create massive damage. Bio-terrorism threats could emerge from organized groups or from lone individuals in the growing “biohacker” community who access synthetic biology inventions online.

The double-edged sword of Artificial Intelligence

Artificial intelligence (AI) also presents a double-edged sword. Advances in AI can increase economic productivity, but might also create large-scale structural unemployment and, with it, serious social upheaval. AI developments also raise new questions about accountability and liability: who is accountable for the decisions made by a self-driving car when it must weigh harm to pedestrians against harm to passengers? Some have even posited that the achievement of “singularity”, when machine brains surpass human intelligence, presents an existential threat to humanity.

Risk governance for these and other emerging technologies is extremely challenging. Many more institutions and communities are engaged in R&D, and the pace of innovation is accelerating. National legal and regulatory frameworks are underdeveloped, so topics and techniques that are not explicitly specified escape scrutiny. Institutions that are meant to provide oversight struggle to cope with advances that cross departmental jurisdictions and, being short of resources, are often unable to assess risks with the rigor they might wish.

At the international level, weaknesses also exist. For example, the Cartagena Protocol on Biosafety provides guidelines on the handling and transportation of living modified organisms, but not on their development. The UN Convention on Biological Diversity addresses synthetic biology, but the resulting agreement is not legally binding. A live concern is that large-scale international negotiations such as the Transatlantic Trade and Investment Partnership (TTIP) may inhibit new governance proposals and influence global norms in pursuit of open markets and more streamlined regulation.

Six steps to take

So what is the way forward? Realising the potential benefits of emerging technologies requires a willingness to accept risk, but we also need to manage that risk so as to avert avoidable disasters. Governance and control frameworks need to be reinvigorated, and accountability needs to be clearer. I recommend six actions:

  1. More energetic dialogue around risk governance priorities between stakeholders – this includes innovators, industry more broadly, civil society, governments and regulators.
  2. Increasing funding and priority for research related to risk governance.
  3. Broadening disclosure standards to allow deeper risk assessment – we need to find the right balance between confidentiality and transparency, but intellectual property rights should not be used to restrict access to information needed for effective risk regulation.
  4. Filling gaps in national regulation in the areas that present the greatest risk, and changing regulatory design to be more adaptable to new developments.
  5. Strengthening discussions within international governance bodies to reach beyond principles to more binding protocols.
  6. Promoting a culture of responsibility around innovation – to encourage more self-policing among innovators, and de-glamorize hackers.

Innovation must be encouraged, but in parallel we need to set a course for rigorous risk governance of emerging technologies. It is much better to confront difficult issues now than endure an incident with disastrous consequences later. As we know all too well, history is littered with risk mitigation measures that proved ineffective because they were put in place too late.

The Global Risks 2015 report is now live.

Author: John Drzik is President of Global Risks and Specialties at Marsh, Marsh & McLennan Companies

Image: A computer market during a power outage. REUTERS/Athar Hussai


