EU sets global standards with first major AI regulations: Here's what you need to know
Image: Pexels/picjumbo.com
Foo Yun Chee, Martin Coulter and Supantha Mukherjee, Reuters
- Europe becomes the first major world power to enact comprehensive AI regulations, covering areas like transparency, use of AI in public spaces, and high-risk systems.
- High-impact models with systemic risks face stricter requirements, including model evaluation, risk mitigation, and incident reporting.
- Governments can use real-time facial recognition in limited cases, excluding cognitive manipulation and social scoring.
- The law takes effect two years after formal ratification, expected in early 2024.
Europe on Friday reached a provisional deal on landmark European Union rules governing the use of artificial intelligence, including governments' use of AI in biometric surveillance and the regulation of AI systems such as ChatGPT.
With the political agreement, the EU moves toward becoming the first major world power to enact laws governing AI. Friday's deal between EU countries and European Parliament members came after nearly 15 hours of negotiations that followed an almost 24-hour debate the previous day.
The two sides are set to hash out details in the coming days, which could change the shape of the final legislation.
"Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter. This is yes, I believe, a historical day," European Commissioner Thierry Breton told a press conference.
The accord requires foundation models such as ChatGPT and general purpose AI systems (GPAI) to comply with transparency obligations before they are put on the market. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
High-impact foundation models with systemic risk will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the European Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.
How is the World Economic Forum creating guardrails for Artificial Intelligence?
GPAIs with systemic risk may rely on codes of practice to comply with the new regulation.
Governments can only use real-time biometric surveillance in public spaces in cases involving victims of certain crimes, the prevention of genuine, present or foreseeable threats such as terrorist attacks, and searches for people suspected of the most serious crimes.
The agreement bans cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or CCTV footage, social scoring and biometric categorisation systems that infer political, religious or philosophical beliefs, sexual orientation and race.
Consumers would have the right to lodge complaints and receive meaningful explanations, while fines for violations would range from 7.5 million euros ($8.1 million) or 1.5% of turnover to 35 million euros or 7% of global turnover.
Business group DigitalEurope criticised the rules as yet another burden for companies, on top of other recent legislation.
"We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head," its Director General Cecilia Bonefeld-Dahl said.
Privacy rights group European Digital Rights was equally critical.
"It’s hard to be excited about a law which has, for the first time in the EU, taken steps to legalise live public facial recognition across the bloc," its senior policy advisor Ella Jakubowska said.
"Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm."
The legislation is expected to enter into force early next year once both sides formally ratify it and should apply two years after that.
Governments around the world are seeking to balance the advantages of the technology, which can engage in human-like conversations, answer questions and write computer code, against the need to put guardrails in place.
Europe's ambitious AI rules come as companies like OpenAI, in which Microsoft (MSFT.O) is an investor, continue to discover new uses for their technology, triggering both plaudits and concerns. Google owner Alphabet (GOOGL.O) on Thursday launched a new AI model, Gemini, to rival OpenAI.
The EU law could become the blueprint for other governments and an alternative to the United States' light-touch approach and China's interim rules.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.