How can we best navigate the frontier of AI regulation?

AI regulation requires a firm hand. Image: Getty Images/iStockphoto

Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin
Satwik Mishra
Acting Executive Director, Centre for Trustworthy Technology

  • In March 2023, over 33,000 people in the AI industry signed the Future of Life Institute open letter asking for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • The aim was to bring the huge concerns about generative AI into the mainstream, and it has succeeded.
  • Steps are being taken to ensure that AI is only used as a force for good, but there are concerns about whether the resulting AI regulation will be enough.

In March 2023, over 33,000 individuals involved with the design, development and use of AI signed the Future of Life Institute open letter asking for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This was never expected to happen, but the aim was to bring the huge concerns about generative AI into the mainstream. In July, the White House unveiled a framework of voluntary commitments for regulating AI. Evidently, American policymakers are paying attention. Central to these safeguards are the principles of promoting 'safety, security and trust.' Seven prominent AI companies have signed on: Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.

They have agreed to: internal and external independent security testing of AI systems before public release; sharing of best practices; investing in cybersecurity; watermarking generative AI content; publicly sharing capabilities and limitations; and investing in mitigating societal risks, such as bias and misinformation.

The positive takeaways

This announcement sends a resounding message to the market that AI development shouldn’t harm the social fabric. It follows through on demands from civil society groups, leading AI experts and some AI companies emphasizing the need for regulation. It signals an upcoming executive order and legislation on AI regulation. Finally, it highlights ongoing international-level consultation, both bilaterally with several countries and at the UN, the G7 and the Global Partnership on AI led by India. This paves the way for meaningful outcomes at upcoming international summits, including the G20 summit in India this week and the AI Safety Summit in the UK in November.

However, can we afford to be complacent? The White House announcement demands unwavering follow-through. It shouldn’t be an eloquent proclamation of ideals that fails to drive any significant change in the status quo.

The concerns

These are voluntary safeguards. They don’t hold the companies accountable; they merely request action. There is very little that can be done if a company fails to enforce these safeguards, or enforces them only reluctantly. Further, many of the safeguards listed in the announcement already appear in documents published by these companies. For instance, security testing, or what is called ‘red teaming’, is carried out by OpenAI before it releases its models to the public, and yet we see the problems writ large.

These seven companies do not encompass the entire industry landscape; Apple and IBM, for example, are missing. To ensure a collective and effective approach, mechanisms should hold every actor, especially potentially bad actors, accountable and incentivize broader industry compliance.

Adhering to the voluntary safeguards doesn’t comprehensively address the varied challenges that AI models present. For instance, one of the voluntary safeguards announced by the White House is “investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.” Model weights are the core components that determine a model’s functionality, and access to them is considered a proxy for being able to reconstruct the model, given threshold compute and data. This is just one source of vulnerability, however. Models trained on biased or incorrect data, for instance, can still lead to vulnerabilities and malfunctioning systems when released to the public. Additional safeguards need to be designed and implemented to tackle these intricate issues effectively.
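To make the concern about weight exfiltration concrete, here is a minimal, hypothetical Python sketch (a toy one-layer model of our own invention, standing in for systems with billions of parameters): once the architecture is known, the weight values alone are enough to rebuild a working copy of a model.

```python
import numpy as np

# Toy illustration (our sketch, not any lab's real system): a model's behaviour
# lives entirely in its learned parameters, so anyone who obtains the weight
# file can reproduce the model without the original training data or pipeline.

def predict(weights: np.ndarray, bias: float, x: np.ndarray) -> float:
    """A one-layer linear model; all of its 'knowledge' is in weights and bias."""
    return float(weights @ x + bias)

# The developer's proprietary parameters (stand-ins for billions of real weights).
trained_weights = np.array([0.7, -1.2, 0.4])
trained_bias = 0.1

# An attacker who exfiltrates only these numbers obtains an exact working copy.
x = np.array([1.0, 2.0, 3.0])
print(predict(trained_weights, trained_bias, x))  # same output as the original model
```

This is why the commitment singles out model weights, and also why protecting weights alone does nothing about flaws baked in during training.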

Urging companies to invest in trust and safety is ambiguous. AI safety research at companies pales in comparison to development research. For example, of all the AI articles published up to May 2023, a mere 2% focus on AI safety. Within this limited body of AI safety research, only 11% originates from private companies. In this context, it is difficult to expect that voluntary guidelines alone will alter this pattern.

Finally, AI models are rapidly being developed and deployed globally. Disinformation, misinformation and fraud, among other harms perpetrated by unregulated AI models in foreign countries, have far-reaching repercussions, even within the US. Merely creating a safe haven in the US might not be enough to shield against the harms caused by unregulated AI models from other nations.

Hence, more comprehensive and substantive steps are needed within the US, and in collaboration with global partners, to address the varied risks. Firstly, an agreement on a standard for testing AI model safety before deployment anywhere in the world would be a great start. The G20 summit and the UK's AI Safety Summit are critical forums in this regard.

Secondly, any agreed standards must be made enforceable through national legislation or executive action, as each country deems fit. The AI Act in Europe can be a great model for this endeavour.

Thirdly, we need more than a call to principles and ethics to make these models safe. We need engineering safeguards. Watermarking generative AI content to assure information integrity is a good example of this urgent requirement (a minimal sketch of how such a watermark might be detected follows below). Implementing identity assurance mechanisms on social media platforms and AI services, which can help identify and address the presence of AI bots, could be another formidable venture, enhancing user trust and security.
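As an illustration of one such engineering safeguard, here is a minimal, hypothetical Python sketch of a statistical text watermark of the kind discussed in recent research: generation is nudged towards a pseudo-random 'green list' of tokens, and a detector checks whether the share of green tokens is suspiciously high. The hashing scheme and the 50/50 split below are our own simplifications, not any vendor's production method.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign a token to the 'green list', seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all token pairs count as green

def green_fraction(tokens: list[str]) -> float:
    """Detection statistic: the share of tokens on the green list.

    Ordinary human text should hover near 0.5; text generated while biasing
    sampling towards green tokens will score noticeably higher.
    """
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

if __name__ == "__main__":
    sample = "the model was asked to summarise the quarterly report for the board".split()
    print(f"green fraction: {green_fraction(sample):.2f}")  # expect ~0.5 for unwatermarked text
```

Real deployments would need far more care around robustness to paraphrasing and false positives, which is precisely why binding technical standards matter more than voluntary pledges.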

Finally, national governments must develop strategies to fund, incentivize and encourage AI safety research in the public and private sectors.

The White House's intervention marks a significant first step. It can be the catalyst for responsible AI development and deployment within the US and beyond, provided this announcement is a springboard for more tangible regulatory measures. As the announcement emphasizes, implementing carefully curated "binding obligations" would be crucial for ensuring a safe, secure and trustworthy AI regime.
