
Global push to regulate artificial intelligence, plus other AI stories to read this month


The Kuwait News Agency has debuted a virtual news presenter generated by artificial intelligence. Image: Kuwait News/Twitter

Cathy Li
Head, AI, Data and Metaverse; Member of the Executive Committee, World Economic Forum

  • This artificial intelligence round-up brings you the key AI stories from the last month.
  • Top stories: Calls for a global summit on AI regulation; Study shows generative AI boosts worker productivity; Kuwait debuts an AI news anchor.

1. Calls for a global summit on AI regulation

The rapid advance of generative AI tools has drawn the attention of regulators around the world. Political bodies and policy-makers are accelerating efforts to put laws in place that control the potential risks of AI and hold developers accountable for the actions of their systems.

A group of EU lawmakers working on AI legislation is calling for a global summit to find ways to control the development of advanced AI systems, according to reports from Reuters. The 12 European Parliament members have urged US President Joe Biden and European Commission President Ursula von der Leyen to convene a meeting of world leaders.

The World Economic Forum Centre for the Fourth Industrial Revolution convened leading industry, academic and government experts to explore generative AI systems' technical, ethical and societal implications at a three-day meeting in San Francisco.

In an article published on Project Syndicate, the Forum's Founder and Executive Chairman Klaus Schwab and I explained how all stakeholders must work together "to devise ways to mitigate negative externalities and deliver safer, more sustainable, and more equitable outcomes."

Research shows that efforts to regulate AI appear to be gathering pace. Stanford University's 2023 AI Index shows 37 AI-related bills were passed into law globally in 2022. The US led the push for regulation, passing nine laws, followed by Spain with five and the Philippines with four.

An analysis of legislative records across 127 countries found that 37 AI-related bills were passed into law in 2022. Image: Stanford University 2023 AI Index

The European Commission proposed draft rules for an AI Act nearly two years ago. The legislation is expected to classify AI tools according to their perceived level of risk, from low to unacceptable. There are also reports that the European Data Protection Board has set up a task force on ChatGPT, a first step towards possible privacy rules for AI.

The US is also acting on potential accountability measures for AI systems as questions loom about the impact on national security and education. The National Telecommunications and Information Administration wants to know if there are measures that could be put in place to provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy", Reuters reports.

OpenAI, the developer of ChatGPT, says it is collaborating with policy-makers "to ensure that AI systems are developed in a trustworthy manner".


2. Workers more productive when paired with AI, says MIT study

Researchers from the Massachusetts Institute of Technology (MIT) and Stanford University have been looking into the productivity of workers using generative AI to help them with their work. The study, which assessed the performance of more than 5,000 customer support agents, showed workers were on average 14% more productive when using generative AI tools.

Customer service agents working with AI were able to handle 14% more issues per hour. Image: Stanford University/MIT/NBER

Pairing workers with an AI assistant proved much more effective with novice and low-skilled employees, according to the research. The technology's impact on highly skilled staff was minimal.

A broader study by the Pew Research Center found that 62% of Americans believe AI will have a major impact on jobs over the next 20 years. Their biggest concern is the use of AI in hiring and firing processes, which 77% of respondents oppose.

Integrating generative AI into the workforce of the future will be a key topic at the World Economic Forum's Growth Summit, taking place in Geneva, Switzerland, on 2-3 May 2023. Sessions on reskilling workforces and how to align education systems with industry demand for AI talent will form a central part of the discussions.


3. News in brief: AI stories from around the world

A news agency in Kuwait has introduced online viewers to an AI newsreader who may become the face of its news bulletins. The Guardian reports that the virtual presenter's first words, spoken in Arabic, were: “I’m Fedha, the first presenter in Kuwait who works with artificial intelligence at Kuwait News. What kind of news do you prefer? Let’s hear your opinions.” Kuwait News published the presenter's first appearance on Twitter. But the use of AI in the news media is likely to prove controversial as there are well-documented cases of AI delivering inaccurate information. There are also concerns about AI being used to spread disinformation.


The health sector is one area where AI is already being used to improve patient care and outcomes, as we have reported previously. Now an Israeli health-tech company is using AI to match patients with the most effective drugs to treat depression, according to a report from the BBC. The process, which combines stem cell technology with AI, helps to cut the risk of side effects and make sure the treatment is as effective as possible.

The US Department of Homeland Security will create a task force to develop its approach to using artificial intelligence, according to Secretary Alejandro Mayorkas. AI could be used in roles from protecting critical infrastructure to screening cargo. Mayorkas said the technology would "drastically alter the threat landscape," adding: "Our department will lead in the responsible use of AI to secure the homeland and in defending against the malicious use of this transformational technology."

The University of California, Berkeley School of Law is among the first educational institutions to adopt a formal policy on student use of generative AI. The policy limits students to using AI to conduct research or correct grammar, and says it may not be used in exams or to compose assignments. The rules also forbid the use of AI in any way that constitutes plagiarism. “The approach of finals made us realize that we had to say something," said Professor Chris Hoofnagle. "We want to make sure we have clear guidelines so that students don’t inadvertently attract an honour code violation.”

4. More on AI from Agenda

This article looks at 4 ways AI is helping humans make informed decisions. It covers applications from predicting where wildfires are likely to break out, to improving business sales and helping to detect disease. You can also learn how AI is helping firefighters to locate people trapped in burning buildings.

Robots could take over around 40% of the time spent on domestic chores within a decade, a new study predicts. Automating these tasks could have significant social and economic consequences.

The arrival of generative AI has raised concerns about students using it to write essays, and even to sit exams on their behalf. But it's clear that AI could also bring a range of benefits to the education sector, from personalized learning to the arrival of intelligent textbooks.
