How to manage AI procurement in public administration


Ravit Dotan
Researcher, Center for Governance and Markets, University of Pittsburgh
Emmaline Rial
Graduate student, University of Pittsburgh
Ana Maria Dimand
Assistant Professor of Public Policy and Administration, Boise State University
Virginia Dignum
Professor of Responsible Artificial Intelligence, Umeå School of Business, Economics and Statistics

  • Governments are increasingly purchasing AI systems, but responsible procurement practices are needed to account for societal risks and opportunities.
  • Existing resources include general guidelines, targeted recommendations and regulatory initiatives from organizations such as AI Now, the UK Government and the US Office of Management and Budget.
  • More tools and repositories are needed to support responsible AI procurement, including RFP templates, contract clauses and scorecards for evaluating AI ethics maturity.

Governments and public administrators are buying AI at an ever greater scale. For example, in 2022 the US spent 2.5 times more on AI-related contracts than it did in 2017. These purchases can be very helpful, but they include high-risk systems with immense potential for harm. Public administrators who buy AI systems must therefore have the knowledge and resources required to procure AI responsibly, taking societal risks and opportunities into account. Useful resources are already being published, but gaps remain to be filled as AI systems and their applications evolve. This article reviews existing English-language resources and the gaps that remain.

Guidelines and regulation

The most common type of resource available to public administrators procuring AI is guidelines. These fall into three categories: 1) general procurement advice; 2) targeted recommendations, focusing on a sector, a use case, a subject matter or a lens; and 3) regulatory initiatives.


The general guidelines and recommendations include those produced by AI Now, DataEthics, Hickok, Miller and Waters, Rubenstein, UK Government, and the World Economic Forum. In addition, the US has announced that the Office of Management and Budget (OMB) will release AI procurement guidelines.

Among the targeted guidelines, the Prison Policy Initiative provides guidelines for purchasing tablets in prisons. While they don’t touch on AI specifically, these guidelines incorporate tech ethics requirements that could be adapted to AI. Coglianese & Lampmann recommend adding contract clauses about trade secrets, privacy and cybersecurity, auditing, and public participation. Jacobs & Mulligan suggest using a method called "measurement modelling" in procurement processes to understand the expected impacts of AI systems.

Regulatory initiatives also provide guidance. Procurement practitioners can draw on these initiatives even if they do not apply to their jurisdiction and even if they have not passed into law.

In Canada, federal agencies must comply with the Directive on Automated Decision-Making, which includes requirements for AI procurement around impact assessment, transparency, quality assurance, and more. In the US, existing AI procurement regulation includes Executive Order (EO) 13960, which requires that federal agencies design, develop, acquire, and use AI in a manner that fosters public trust and confidence, emphasizing themes such as privacy, civil rights, and civil liberties. The AI Training Act requires budgeting for AI training for the procurement workforce. Lastly, the states of Maryland, Connecticut, and Washington have proposed bills that introduce AI ethics requirements into the procurement process.

These guidelines and regulatory initiatives are a good start, but gaps remain to be filled to equip practitioners with the right tools. In conversations with the authors of this article, practitioners expressed concern that, without proper support, responsible AI procurement practices won't be implemented even if they are required by law, as has happened with similar procurement regulation in the past.

Tools

There are useful and relevant tools publicly available, but more attention needs to be given to their proper implementation, use, improvement, and expansion. The Ford Foundation, Richardson, and the World Economic Forum have created lists of questions for procurement practitioners to ask as they go through the procurement life cycle. The City of Amsterdam created an English-language template for AI procurement contracts, incorporating responsible AI features (see Miller and Waters for context and discussion). The City of San Jose, California has created several AI procurement tools, including a map of the review process for purchasing an AI system and a form to collect information from AI vendors. Lastly, Alameda County in California has published an RFP for an AI system that includes privacy requirements and can be used as a template.

Even though the above represents great effort in the right direction, more work is needed to tackle responsible AI procurement. For example, practitioners express the need for full RFP templates, brief clauses to plug into existing contracts, and scorecards for evaluating the AI ethics maturity of a prospective vendor (a minimal sketch of such a scorecard is given below). In addition, the existing tools don't address challenges that practitioners encounter when trying to procure AI responsibly, such as difficulty getting vendors to disclose AI-ethics-related information or to follow up on AI ethics commitments, even when those commitments are stated in signed contracts.
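To make the scorecard idea concrete, here is a minimal, hypothetical sketch of how a vendor AI-ethics maturity scorecard could be structured. The criteria, weights and 0-5 scoring scale are illustrative assumptions for this article, not drawn from any of the resources cited above.

```python
from dataclasses import dataclass

# Hypothetical criteria and weights; a real scorecard would be set by the
# procuring agency together with legal, privacy and domain experts.
CRITERIA_WEIGHTS = {
    "transparency_documentation": 0.25,  # model cards, data sheets, disclosure
    "impact_assessment": 0.25,           # completed algorithmic impact assessment
    "privacy_and_security": 0.20,        # data handling and cybersecurity practices
    "auditability": 0.15,                # willingness to allow third-party audits
    "ongoing_monitoring": 0.15,          # post-deployment monitoring commitments
}

@dataclass
class VendorAssessment:
    vendor: str
    scores: dict  # each criterion scored 0-5 by the evaluation team

    def maturity_score(self) -> float:
        """Weighted score on a 0-5 scale; higher means more evidence of AI ethics maturity."""
        return sum(CRITERIA_WEIGHTS[c] * self.scores.get(c, 0) for c in CRITERIA_WEIGHTS)

# Example usage with made-up numbers for a fictional vendor.
assessment = VendorAssessment(
    vendor="Example AI Vendor",
    scores={
        "transparency_documentation": 4,
        "impact_assessment": 3,
        "privacy_and_security": 4,
        "auditability": 2,
        "ongoing_monitoring": 3,
    },
)
print(f"{assessment.vendor}: {assessment.maturity_score():.2f} / 5")
```

A real scorecard would also need evidence requirements and qualitative review behind each score, but even a simple weighted structure like this gives procurement teams a consistent way to compare vendors.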

Repositories

We found two kinds of repositories related to the responsible procurement of AI in public administration. Some repositories help with knowledge management. For example, Gutierrez's soft law database contains a list of soft laws related to AI procurement from the years 2001-2019. Another relevant knowledge management repository is the Periodic Table of Acquisition Innovations, created by the US Federal Acquisition Institute (FAI).

Other repositories document information about AI procured by public administration. The Netherlands, the City of Amsterdam, the City of Helsinki, and the City of San Jose (California) have set up registries of the algorithms they use. In addition, Eurocities created a schema for registering algorithms used in cities.
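As an illustration of the kind of information such registries record, below is a minimal, hypothetical sketch of a register entry. The field names and example values are assumptions for illustration; they do not reproduce the Eurocities schema or any city's actual registry.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AlgorithmRegisterEntry:
    # Illustrative fields only; actual registries (e.g. Amsterdam, Helsinki)
    # define their own schemas and publish entries on public websites.
    name: str
    department: str
    purpose: str
    vendor: str
    uses_personal_data: bool
    impact_assessment_completed: bool
    human_oversight: str  # how decisions can be reviewed or appealed
    contact: str

# Hypothetical example entry.
entry = AlgorithmRegisterEntry(
    name="Parking permit triage",
    department="Transportation",
    purpose="Prioritize review of parking permit applications",
    vendor="Example Vendor Ltd.",
    uses_personal_data=True,
    impact_assessment_completed=True,
    human_oversight="A caseworker reviews every flagged application",
    contact="algorithms@example.city.gov",
)

# Publishing entries as structured data (e.g. JSON) makes them easy to search and aggregate.
print(json.dumps(asdict(entry), indent=2))
```

Publishing entries in a common, structured format is what allows registries from different cities to be compared and aggregated, which is the point of a shared schema such as the one Eurocities proposes.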

We did not find repositories that curate resources on the responsible procurement of AI. Without such a repository, practitioners may struggle to find relevant resources and choose the ones best suited to them. Our review required a deep dive that many of those involved in the procurement of AI may not be able to conduct.

Takeaways

The current resources for the responsible procurement of AI in public administration are helpful, but more attention is needed to develop these tools further and build additional resources so that practitioners are fully equipped to tackle the full procurement cycle. In developing new tools, it is critical to include practitioners to a greater degree: many of the resources we found do not mention substantive consultation with practitioners and neglect to address the challenges practitioners face. Closing these gaps is a critical task. Responsible procurement of AI in public administration is key for public safety and for influencing the tech sector to develop AI more responsibly.

This article has benefited greatly from the contributions of Beth Schwanke and Hubert Halopé. Beth Schwanke, PhD is the Executive Director of the University of Pittsburgh Institute for Cyber Law, Policy, and Security. Hubert Halopé is the Artificial Intelligence & Machine Learning Lead at the World Economic Forum.
