AI: Why companies need to build algorithmic governance ahead of the law

Stakeholders must build algorithmic governance through ethical rules.

Mark Esposito
Chief Learning Officer, Nexus FrontierTech, Professor at Hult International Business School
Aurélie Jean
Founder, In Silico Veritas
Terence Tse
Executive Director, Nexus FrontierTech, Professor of Finance, Hult International Business School
Guillaume Sibout
Consultant in Digital Numeracy, In Silico Veritas

  • Algorithmic governance covers the rules and practices for the construction and use of algorithms embedded in AI technologies.
  • It ensures that those algorithms function properly and guards against errors such as technological discrimination or non-compliance with the law.
  • We outline the different models of algorithmic governance available to organisations.

In June 2023, the European Parliament voted on a legal instrument, the AI Act, intended to regulate the design and use of artificial intelligence (AI) according to its level of risk to citizens' fundamental rights. Public and private stakeholders, whether consumers or designers of AI, must develop algorithmic governance to ensure the compliance of their activities and, above all, to avoid causing harm, even to a minority of users, and to minimise the reputational risk associated with algorithmic scandals such as those seen in recent years.

With such governance in place, OpenAI could have rolled out its conversational agent ChatGPT, and the underlying GPT algorithm, gradually, anticipating the risks associated with its mass appropriation, with errors and approximations in its responses, and with users' misunderstanding of the tool. Stakeholders must build algorithmic governance through ethical rules and practices for designing and using these algorithms, following a methodology that is as rigorous as it is pragmatic, so that it can be applied effectively and measurably.

What is algorithmic governance? (and what it is not)

Algorithmic governance covers the rules and practices for the construction and use of algorithms embedded in AI technologies designed for a specific use case. It is not simply a charter or a set of ethical principles: it encompasses all the measures to be taken to ensure that the algorithm in question functions properly and to guard against errors such as technological discrimination or non-compliance with the law. It is comparable to a law, which must be accompanied by a judicial system if it is to be applied and compliance with it assessed at all times.

Algorithmic governance must be multidisciplinary, cutting across fields such as sociology, political science and anthropology. It must link the various stakeholders in a project, including the end user, and take into account their level of understanding of the technology and of algorithmic science in general, as well as their rights, obligations and duties with respect to the algorithm in question.

Algorithmic governance must also enable two simultaneous approaches: the traditional top-down approach, which allows a steering committee or project managers to instil best practices and enforce their application, and the bottom-up approach, which allows every member of a company's staff, as well as the end user, to contribute to the smooth running of the project, whether directly (through tangible, practical involvement) or indirectly (for example, by providing user feedback), from launch and development through to deployment.

Finally, governance must be carried out and evaluated transparently enough to be understood by all stakeholders in a project involving the design, procurement or use of an algorithm. It should be emphasised here that unconditional transparency of the source code in which the algorithm is programmed does not, by itself, exempt anyone from algorithmic governance. In practice, meaningful transparency would have to cover the source code, all the data sets used and the criteria applied in algorithmic training; in some cases, this could impede innovation by exposing a company's intellectual property to the public. Transparency in governance, by contrast, must be imposed unconditionally.

Algorithmic governance models

Few stakeholders have set up or communicated an algorithmic governance model, beyond an ethical charter for trusted AI. Some attempts in the past have been unsuccessful, such as Google's AI ethics committee, which was discontinued in 2019, or Microsoft's first AI charter in 2017, which was never widely adopted. Added to this is the limited number of academic studies setting out ready-to-use algorithmic governance models, in contrast to the abundant work on data governance or, more generally, on the importance of ethics in AI.

And yet, acting upstream of legislative debate is essential for these stakeholders. There are many advantages to being ahead of the law: anticipating the ethical implications of a new technology, preventing costly litigation, avoiding reputational pitfalls, boosting stakeholder confidence, differentiating themselves from competitors by enhancing their attractiveness, and playing an active role with European legislative bodies.

Which model of algorithmic governance should be adopted?

There is not just one model, but a multitude of models that can be adapted to an industry, to a company and its ambitions, and to a particular type of algorithmic project. That said, a general structure should be used to build the foundations of one's own governance. This structure should cover the successive phases of a project to design or procure an algorithm: from the ideation phase, which includes formulating the business problem to be solved, to the use phase, which includes feedback from the end user.

It should include the business and technical specification phases, covering the environmental issues associated with the extraction of raw materials for manufacturing hardware, as well as the computing power, and therefore the energy consumption, involved in training and running the algorithm. It must also include the data collection phases, with any sampling and representativeness tests to be carried out; computer programming and algorithmic training; validation and deployment of the algorithm; and the tests carried out once it is in use, sometimes by millions of individuals. Moreover, it should include assessing and improving stakeholders' level of technical understanding, including that of business staff and end users.
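
As an illustration, such a structure can be encoded as a simple per-phase checklist. The sketch below, in Python, follows the phases named above; the individual checks attached to each phase are illustrative assumptions, not a prescribed list.

```python
from dataclasses import dataclass, field

# A minimal sketch of the general governance structure described above,
# modelled as a per-phase checklist. All checks are illustrative.

@dataclass
class GovernancePhase:
    name: str
    checks: list[str]                                   # questions/tests to answer
    results: dict[str, bool] = field(default_factory=dict)

    def record(self, check: str, passed: bool) -> None:
        self.results[check] = passed

    def is_complete(self) -> bool:
        return all(self.results.get(c, False) for c in self.checks)

# Phases from ideation through to use, as outlined in the text.
PHASES = [
    GovernancePhase("ideation", ["business problem formulated?"]),
    GovernancePhase("specification", ["environmental cost of hardware assessed?",
                                      "training energy consumption estimated?"]),
    GovernancePhase("data collection", ["sampling tested?",
                                        "representativeness tested?"]),
    GovernancePhase("programming & training", ["explainability computed?"]),
    GovernancePhase("validation & deployment", ["in-use tests defined?"]),
    GovernancePhase("use", ["end-user feedback collected?",
                            "stakeholder understanding assessed?"]),
]
```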

A risk score must be calculated as the governance process progresses, in order to assess the scientific, technical, ethical and reputational risks associated with the algorithm. Charters and principles of good practice should be drafted while the structure on which the algorithmic governance is based is being drawn up, so as to capture any questions and points concerning the project that need to be addressed. An internal or external ethics committee can help ensure that the governance process runs smoothly.
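
To make the idea of a running risk score concrete, here is a minimal sketch in Python. The four risk dimensions come from the text; the weights, the 0-to-1 ratings and the rule that the riskiest phase dominates are all illustrative assumptions that a steering or ethics committee would set for itself.

```python
# Illustrative weights per risk dimension; each rating is 0 (none) to 1 (severe).
WEIGHTS = {"scientific": 0.2, "technical": 0.2, "ethical": 0.35, "reputational": 0.25}

def phase_risk(ratings: dict[str, float]) -> float:
    """Weighted risk score for one governance phase."""
    return sum(w * ratings.get(dim, 0.0) for dim, w in WEIGHTS.items())

def project_risk(phase_ratings: list[dict[str, float]]) -> float:
    """Running score as governance progresses: the riskiest phase dominates."""
    return max(phase_risk(r) for r in phase_ratings)

# Example: risk assessed after the ideation and data-collection phases.
score = project_risk([
    {"scientific": 0.1, "technical": 0.2, "ethical": 0.3, "reputational": 0.2},
    {"scientific": 0.2, "technical": 0.4, "ethical": 0.6, "reputational": 0.5},
])
print(f"current project risk: {score:.3f}")  # 0.455
```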

Explainability calculations, statistical methods used to verify or extract an algorithm's operating logic, must be applied systematically before (i.e. on the data sets), during and after automatic training. By reducing the opacity of the algorithm and keeping its responses and their variability under control, these calculations significantly reduce the risk of errors, bugs or algorithmic bias at the root of technological discrimination. This is how the first facial recognition algorithms could have been prevented from being biased with respect to skin colour, or the Goldman Sachs algorithm in the Apple Card application from giving women lines of credit far lower than those given to men.
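
The sketch below, using Python and scikit-learn, shows one such calculation, permutation importance, applied after training, followed by a simple demographic-parity check of the kind that could have flagged the credit-line disparity described above. The synthetic data, the model and the tolerance threshold are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative set-up: synthetic data plus a synthetic protected attribute.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# After training: which features actually drive the model's responses?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(f"feature {i}: importance {imp.importances_mean[i]:.3f}")

# Bias check: positive-prediction rates should not diverge across groups.
pred = model.predict(X_te)
rates = [pred[g_te == g].mean() for g in (0, 1)]
print(f"positive rate by group: {rates[0]:.2f} vs {rates[1]:.2f}")
if abs(rates[0] - rates[1]) > 0.1:  # illustrative tolerance
    print("warning: possible disparate impact, investigate before deployment")
```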

Where to start?

The company must first identify a use case in which algorithmic technology meets one of its precise business needs. Together, the technical and business teams must define the form each phase of algorithmic governance will take, along with the questions, actions and tests to be carried out in each. They must also decide how to calculate the algorithm's risk score and the governance success metrics. This initial governance is then applied to the use case, and subsequently to several other cases, to check that it can be scaled up.
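
As a minimal sketch of what that initial governance might look like in practice, the Python snippet below attaches questions and tests to each phase and computes a simple success metric; the checks, the metric and the use case itself are purely illustrative.

```python
# Illustrative per-phase checklist agreed by the technical and business teams.
CHECKLIST = {
    "ideation": ["business need precisely stated"],
    "data": ["sampling tested", "representativeness tested"],
    "training": ["explainability computed"],
    "deployment": ["risk score below agreed threshold"],
}

def governance_success(results: dict[str, dict[str, bool]]) -> float:
    """Success metric: share of checks passed across all phases."""
    passed = total = 0
    for phase, checks in CHECKLIST.items():
        for check in checks:
            total += 1
            passed += results.get(phase, {}).get(check, False)
    return passed / total

# One hypothetical use case, assessed before scaling to others.
invoice_triage = {
    "ideation": {"business need precisely stated": True},
    "data": {"sampling tested": True, "representativeness tested": False},
    "training": {"explainability computed": True},
    "deployment": {"risk score below agreed threshold": True},
}
print(f"governance success: {governance_success(invoice_triage):.0%}")  # 80%
```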

Finally, using an iterative and agile method, the teams must adjust the company's governance whilst deploying it across the organisation. In this way, the company will design algorithms that are inclusive and respectful of citizens and the environment. By communicating and sharing its governance publicly, it will also contribute to legislative debates and to the drafting of the next relevant and sustainable laws governing algorithms.
