
Understanding the US 'AI Bill of Rights' - and how it can help keep AI accountable

A new Blueprint for an AI Bill of Rights, released in the US last week, outlines five key protections for US citizens against AI harms.

Image: Photo by ThisisEngineering RAEng on Unsplash

Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin
Karen Silverman
Founder and Chief Executive Officer, The Cantellus Group
Benjamin Larsen
Project Lead, World Economic Forum

  • While technology brings many benefits, without governance it can bring significant harm.
  • A new 'Blueprint for an AI Bill of Rights', released in the US last week, outlines five key protections.
  • Many feel the document is a critical starting point but wish more checks and balances existed to keep AI accountable.

Last week, the US White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights” along with several related agency actions. The document provides an important framework for how government, technology companies, and citizens can work together to ensure more accountable AI.

Here's what's key to understand about the new guidelines: what they cover, what they don't, and what other work is being done for AI accountability.

Why is an AI Bill of Rights needed?

The need to resolve issues around the responsible use of artificial intelligence (AI) has become increasingly important for countries, citizens and businesses over the last eight years. Approximately 60 countries now have national AI strategies, and many have, or are creating, policies that allow for the responsible use of a technology which can bring huge benefits but, without adequate governance, can do significant harm to individuals and society.

Since 2018, the European Union has been leading steps to advance the design, development and deployment of AI in its region while seeking to protect its citizens from misuse. The EU AI Act, due in 2024, will be the culmination of that work. There has been much discussion about how the US would respond to this new legislative framework. China, likewise, has been developing its own regulatory regime for the use of AI. While much has been written about the AI arms race and technological decoupling between the US and China, the underlying question is whether this is the beginning of path-departing technological regimes and governance mechanisms internationally. The EU-US Trade and Technology Council points to new efforts aimed at ensuring greater alignment across the Atlantic, while the focus of the EU and China on greater regulation of algorithms suggests that the US is lagging behind in terms of rule-setting for the digital economy.

There are also a number of international efforts to set out best practices in the use of AI. For example, the Global Partnership on AI was formed in 2020 and includes both the EU and the US. UNESCO and the OECD have also set principles for the proper use of AI, and at the World Economic Forum we have been creating scalable and accessible frameworks for good governance of AI by governments and business since 2017.

What's in the 'Blueprint for an AI Bill of Rights'?


The Blueprint sets out five principles, each accompanied by a technical companion that provides guidance on responsible implementation. As released, the principles are as follows:

  • Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  • Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  • Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

The intent of the Blueprint is to “help guide the design, use, and deployment of automated systems to protect the American Public.” The principles are non-regulatory and non-binding: a "Blueprint," as advertised, and not yet an enforceable “Bill of Rights” backed by legislative protections.

The Blueprint runs to 76 pages and includes many examples of AI use cases that the White House OSTP considers problematic. Importantly, the document clarifies that the Blueprint should apply only to automated systems that have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services, generally excluding many industrial and/or operational applications of AI. The Blueprint expands on examples of AI use in lending, human resources, surveillance and other areas (which would also find a counterpart in the ‘high-risk’ use case framework of the forthcoming EU AI Act).

How have experts reacted to the 'Blueprint for an AI Bill of Rights'?

The release of the Blueprint met with a mixed reception from the press, industry and academia. Some advocates for government controls believe it does not go far enough and will be largely ineffectual; they wish the document had more of the checks and balances available in the EU AI Act. In addition to concerns from advocates, The Wall Street Journal, for instance, noted fears from some technology executives that regulation could stifle AI innovation.

On the other hand, there is considerable support for not moving to regulation, in order to allow beneficial innovation and competition in the many uses of AI to flourish. Policy experts have also highlighted the important protections this document could offer a range of groups, including Black and Latino Americans. As the head of the nonprofit Center for AI and Digital Policy noted in MIT Technology Review, the Bill of Rights is a key starting point, and an "impressive" one at that.

Where can I read Forum work on AI accountability?

The World Economic Forum has already published work to help businesses, governments and citizens understand how to use AI responsibly in human resources and in law enforcement. Last year, the Forum released a practical toolkit to promote the responsible use of artificial intelligence-based tools in human resources. In addition, a framework developed last fall provides critical guidance for facial recognition use in law enforcement.

What other work is being done for AI accountability?

Though the Blueprint is non-binding, the White House also announced that a number of federal agencies will be rolling out related actions and guidance regarding their use of AI systems, including new policies on procurement. Agency activity on the Blueprint varies widely in maturity, and it is unclear how the new guidance will relate to or complement existing directives on AI (e.g. Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, or the National Institute of Standards and Technology's (NIST) AI Risk Management Framework) and statements by the Federal Trade Commission, EEOC, CFPB and HHS.

Nonetheless, the “Blueprint for an AI Bill of Rights” can be seen as reinforcing prior statements that existing standards and laws already apply, and as adding normative weight to other proposed legal mechanisms such as the Algorithmic Accountability Act, which was reintroduced in the Senate in amended form earlier in 2022. Such legal mechanisms would not only give the AI Bill of Rights more teeth but could also, tentatively, create greater alignment of regulatory best practice between the US and the EU's incoming AI Act.

When assessing the bigger picture of international mechanisms and best practice in the governance of AI, the AI Bill of Rights is a welcome initiative that must be situated in the context of other forthcoming initiatives, both within the US and elsewhere. As mentioned, the EU and China are already pressing ahead with devising and implementing actual regulatory regimes, which will shape global best practice. To ensure that the US does not lose its ability to influence international de jure standards in AI, policymakers should do more to implement new policies and practices that safeguard the interests of US citizens as well as beneficial innovation going forward. Undoubtedly, these developments will have ripple effects on fragile yet emergent international best practices in AI governance.
