
The European Union’s Artificial Intelligence Act, explained

The proposed Artificial Intelligence Act would classify AI systems by risk and mandate various development and use requirements.


Spencer Feingold
Digital Editor, World Economic Forum

This article was first published in March 2023 and updated in June 2023.

  • The European Union is considering far-reaching legislation on artificial intelligence (AI).
  • The proposed Artificial Intelligence Act would classify AI systems by risk and mandate various development and use requirements.
  • European lawmakers agreed to more stringent amendments in June 2023.
  • But European companies have said the draft legislation could impact Europe's 'competitiveness and technological sovereignty'.

The European Union (EU) is working on a new legal framework that aims to significantly bolster regulations on the development and use of artificial intelligence.

The proposed legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.

“[AI] has been around for decades but has reached new capacities fuelled by computing power,” Thierry Breton, the EU’s Commissioner for Internal Market, said in a statement in 2021, when the legislation was first proposed.

In June 2023, European lawmakers agreed on changes to the draft Artificial Intelligence Act, including a ban on the use of AI in biometric surveillance and a requirement that generative AI systems like ChatGPT disclose AI-generated content.

But in an open letter signed by more than 150 executives, European companies from Renault to Heineken warned of the impact the draft legislation could have on business.

“In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the letter to the European Commission, seen by the Financial Times, said.


What is the EU's Artificial Intelligence Act?

The AI Act aims to “strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use”.

The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited and minimal.

AI systems posing limited or minimal risk, such as spam filters or video games, may be used with few requirements beyond transparency obligations. Systems deemed to pose an unacceptable risk, such as government social scoring and real-time biometric identification systems in public spaces, are prohibited with few exceptions.

On artificial intelligence, trust is a must, not a nice to have.

Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age

High-risk AI systems are permitted, but developers and users must adhere to regulations requiring rigorous testing, thorough documentation of data quality and an accountability framework detailing human oversight. AI systems deemed high risk include autonomous vehicles, medical devices and critical infrastructure machinery, among others.

The proposed legislation also outlines regulations around so-called general purpose AI, which are AI systems that can be used for different purposes with varying degrees of risk. Such technologies include, for example, large language model generative AI systems like ChatGPT.

Safely harnessing AI's full potential

“With this Act, the EU is taking the lead in attempting to make AI systems fit for the future we as humans want,” said Kay Firth-Butterfield, Executive Director of the Centre for Trustworthy Technology, part of the World Economic Forum's Fourth Industrial Revolution Network.

The Forum launched its AI Governance Alliance in June, which aims to unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.

At the Forum's Annual Meeting of the New Champions, in Tianjin, China, the Alliance facilitated the session Generative AI: Friend or Foe?, moderated by Cathy Li, the Forum's Head of AI, Data and Metaverse and Member of the Executive Committee.

She said: “It’s crucial for everybody to understand the enormous potential that we see with this novel technology but also the challenges and responsibilities that come with it.”

The session followed a Forum summit on responsible AI leadership, which convened thought leaders and practitioners and produced the Presidio Recommendations on Responsible Generative AI.


What next for the AI Act?

The Artificial Intelligence Act proposes steep non-compliance penalties. For companies, fines can reach up to €30 million or 6% of global annual turnover. Supplying false or misleading documentation to regulators can also result in fines.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the Executive Vice-President for a Europe Fit for the Digital Age and Competition, added in a statement. “Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

The proposed law also aims to establish a European Artificial Intelligence Board, which would oversee the implementation of the regulation and ensure uniform application across the EU. The body would be tasked with releasing opinions and recommendations on issues that arise as well as providing guidance to national authorities.

“The board should reflect the various interests of the AI eco-system and be composed of representatives of the member states,” the proposed legislation reads.

The Artificial Intelligence Act was originally proposed by the European Commission in April 2021. A so-called general approach position on the legislation was adopted by the European Council in late 2022.

Amendments were adopted on 14 June 2023, and the draft text now serves as the European Parliament's negotiating position in talks with EU member states and the European Commission, a process that can be lengthy.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
