Now that we know the risks of AI, here's how policy frameworks can mitigate them

The risks of AI are becoming clearer, so it's about time we built risk mitigation tools into policy frameworks.

Miray Salman
Global Shaper, San Francisco Hub; Researcher, UC Berkeley
Theodore Sherbin
Researcher, UC Berkeley
Avalon Bauman
Master of Public Policy Candidate, UC Berkeley; Research Fellow, Accountability Counsel San Francisco

This article is part of: Annual Meeting of the New Champions
  • Even the companies developing new AI systems, including generative AI like ChatGPT, have warned of the risks they pose if not regulated correctly.
  • But implementing regulation isn't simple — we must ensure we constrain the negatives while allowing the positives of this new tech to flourish.
  • Potential risk mitigation tools could be technical, socio-technical or manual. Whatever they are, we must start integrating them now.

While the development of AI systems has largely been led by the private sector, the potential impacts of this technological leap forward are broad — they will touch us all.

Despite that, regulation of AI is lagging behind. The failure to regulate stems less from malicious intent than from an asymmetry in technological capacity and knowledge between the tech sector and policy-makers.

Accounting for potential risks in AI development, including those posed by generative AI, is essential, and any effort must be solution-focused, with a clear pathway to technical implementation.

Relevant stakeholders in responsible AI are not limited to those working directly at the companies developing it, but also include those affected by its externalities, such as the public, interest groups, certain occupations (artists, writers) and children. Who is affected depends on the product itself, but there are fewer and fewer categories of people who won’t in some way find their lives changed by AI.

Why generative AI?

Recently, a push to develop and deploy generative AI, spearheaded by OpenAI’s ChatGPT and DALL-E, has had disruptive effects in the tech sector and beyond.

Generative AI has proven capable of analysing medical images, producing high-resolution weather forecasts, writing code and passing graduate-level university exams. It has even been used to help create new drugs and improve fibrosis treatment.

Generative AI can have detrimental impacts, too. The automated, high-speed generation and submission of comments can be used to attack democratic systems, artificially overwhelming public commenting processes with spam.

Then there is the impact of generative AI on the climate, which has not yet garnered the attention from policy-makers and climate activists that an industry with such a high carbon footprint warrants.

Risk mitigation tools: a basis for policy-making

The May 2023 fact sheet on AI released by the White House pushes for public input into AI governance. OpenAI has similarly highlighted the need for democratic AI governance, calling for innovations in designing, testing and iterating such formats. Democratic AI governance is a chance for more safety, security and equity in the development and deployment of generative AI, and an important field to venture into. The challenges are managerial in nature and concern the actual social and/or technical implementation of the technology.

The managerial aspect comes into play when considering how stakeholders can best be brought together, ensuring that risks are discussed and addressed. Some such frameworks have already been developed, as policy-makers and developers have emphasised public participation.

Once discussed and formulated, the question becomes how to operationalise risk mitigation requirements in policy frameworks to positively influence the development and deployment of generative AI models.

It is a critical time for AI policy regulation. The EU, the US and, more recently, diverse stakeholder groups are already weighing legally binding frameworks. Risk mitigation, and the tools to carry it out, must be part of the conversation.

Technical, socio-technical and manual solutions

Risk mitigation tools that could be used to implement future policies and guidelines can follow technical, socio-technical (framework-based) or human-led (manual) approaches. Among the many risk mitigation techniques for AI, four are particularly suited to generative AI models.

These tools differ and overlap in the type of intervention they provide, the point at which they intervene, the type of risk they address, the stakeholders they empower to intervene and the type of knowledge they require.

Likelihood-free importance weighting is a technical method that mitigates biases in AI-generated results and increases their accuracy by training a probabilistic classifier to perform importance sampling, but it requires technical AI expertise.
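
For readers curious what this looks like in practice, here is a minimal sketch in Python, assuming access to samples from both the real data distribution and the generative model. A classifier trained to tell the two apart yields importance weights that can debias statistics computed over generated outputs. The function name and the use of scikit-learn are illustrative assumptions, not part of any specific deployment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_importance_weights(real_samples, generated_samples):
    """Likelihood-free importance weighting (sketch): train a
    probabilistic classifier to distinguish real from generated
    samples, then convert its probabilities into density-ratio
    estimates w(x) ~ D(x) / (1 - D(x))."""
    X = np.vstack([real_samples, generated_samples])
    y = np.concatenate([np.ones(len(real_samples)),        # 1 = real
                        np.zeros(len(generated_samples))])  # 0 = generated

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Probability that each generated sample looks "real" to the classifier
    p_real = clf.predict_proba(generated_samples)[:, 1]
    eps = 1e-6  # guard against division by zero
    return p_real / np.clip(1.0 - p_real, eps, None)

# A biased statistic over generated samples can then be corrected with,
# e.g., np.average(values, weights=estimate_importance_weights(...)).
```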

Running counter to the culture of openness around code prevalent in tech, behavioral use licensing provides a legal avenue, akin to patenting, that gives developers and other stakeholders (e.g. those creating the data) the power to restrict uses of their technologies on ethical grounds. This socio-technical approach, however, requires legal expertise.

The Contestable AI framework allows for the scrutiny of automated decisions, ensuring accountability and fairness. It requires both explainability and the possibility of human intervention throughout the system's lifecycle, ensuring the transparency of the AI system to the stakeholders involved, though it overlaps with other tools.
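
As an illustration of what contestability might demand of a system's plumbing, here is a hypothetical sketch of a decision record that carries an explanation alongside the outcome and exposes an appeal channel routed to human review. All field names are assumptions for illustration; the framework itself does not prescribe a particular implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContestableDecision:
    """An automated decision packaged so affected people can contest it."""
    subject_id: str
    outcome: str
    explanation: str  # human-readable account of why the model decided this
    appeals: list = field(default_factory=list)

    def contest(self, reason: str):
        """Log an appeal; a human reviewer must resolve it before the
        decision is treated as final."""
        self.appeals.append({
            "reason": reason,
            "filed_at": datetime.now(timezone.utc).isoformat(),
            "status": "pending_human_review",
        })
```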

Evaluating verifiability (VE) combines automated and human assessments to measure how verifiable search engine results are, but has few unique functionalities. VE may, however, be preferred in resource-constrained settings for scrutinizing generative AI models, as it does not require AI expertise.
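
To make the idea concrete, a minimal sketch of such a pipeline might look like the following, assuming a generated answer has already been split into statements together with any citations they offer. An automated pass scores the share of cited statements, and low-scoring answers are routed to a human assessor, who then judges whether each citation genuinely supports its statement. The data structures and the threshold are illustrative assumptions, not drawn from a specific evaluation suite.

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    text: str
    citations: list = field(default_factory=list)  # supporting URLs/snippets

def automated_verifiability(statements: list) -> float:
    """Automated pass: the share of statements that offer any citation."""
    if not statements:
        return 0.0
    return sum(1 for s in statements if s.citations) / len(statements)

def needs_human_review(statements: list, threshold: float = 0.8) -> bool:
    """Route answers below the threshold to a human assessor for the
    manual check of whether each citation supports its statement."""
    return automated_verifiability(statements) < threshold
```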

A combined approach to AI risk mitigation

The truth is that none of these tools alone can sufficiently mitigate the risks of generative AI. Combined approaches of technical and socio-technical tools are needed, varying with the use case, the organization and its resources (know-how, finances) and the product.

The next crucial step is to trial these solutions in practice. Applying these tools is necessary to draw more reliable conclusions and — most importantly — to further develop and iterate them. Different combinations should be explored, along with best practices for implementing risk mitigation tools.

Addressing risk mitigation now is essential, given the rapid adoption of generative AI and its disruptive potential compared with prior AI innovations.

Warnings of the risks of AI have come thick and fast. Even the companies and individuals behind the technology have warned of the potentially catastrophic consequences of the tools they are creating. That’s why risk mitigation is so important — we must mitigate the bad and harness the good of this new, transformative technology.
