
Stanford just released its annual AI Index report. Here's what it reveals


It's Stanford's 7th AI Index report. Image: REUTERS/Dado Ruvic/Illustration

James Fell
Senior Writer, Formative
  • Stanford University has released its seventh AI Index report.
  • It covers trends such as technical advancements in AI and public perceptions of the technology.
  • In an effort to alleviate concerns around AI governance, the World Economic Forum has spearheaded the AI Governance Alliance.

Artificial intelligence’s (AI) influence on society has never been more pronounced. Since ChatGPT became a ubiquitous feature on computer desktops in late 2022, the rapid development and deployment of generative AI and large language model (LLM) tools have started to transform industries and show the potential to touch many aspects of modern life.

AI has even surpassed human-level performance on several benchmark tasks and is helping workers become more productive and produce better-quality work. That’s according to Stanford University’s AI Index report.

The report, which is in its seventh edition, covers trends such as technical advancements in AI, public perceptions of the technology and the geopolitical dynamics surrounding its development.

Here are 10 key takeaways.

1. AI is outperforming humans on various tasks

AI exceeds human performance on a selection of intellectual task categories. Image: Stanford University-AI Index

As of 2023, AI is surpassing human performance on some benchmarks, including image classification, visual reasoning and English understanding. However, there are still task categories where AI fails to exceed human ability, most notably complex cognitive tasks such as visual common-sense reasoning and planning, and competition-level mathematics.

2. Industry takes the lead

Until 2014, academia led in the release of machine learning models. That’s no longer the case. In 2023, there were 51 machine learning models produced by industry compared with just 15 from academia. Interestingly, 21 notable models were created in 2023 as a result of industry-academia collaborations, which represents a new high.

What’s behind industry’s surge? Creating cutting-edge AI models now demands substantial amounts of data, computing power and financial resources, which are not typically accessible in academia.


3. Frontier models reach unprecedented costs

As mentioned before, LLMs aren’t cheap to run or train. According to AI Index estimates, the training costs of leading AI models have increased significantly. For example, OpenAI’s GPT-4 training costs were estimated to be $78 million, while Gemini Ultra by Google cost $191 million.

In comparison, back in 2017, the original Transformer model, which is recognized as introducing the architecture that underpins virtually all modern LLMs, cost around $900 to train.

4. The United States is the leading source of top AI models

The US leads other countries when it comes to releasing notable AI models. Image: Stanford University-AI Index

To gain an understanding of the evolving geopolitical landscape of AI development, the AI Index research team analyzed the country of origin of notable models. The results showed that, in 2023, the United States leads, with 61 notable models, outpacing the European Union’s 21 and China’s 15. Since 2003, the US has produced more models than other major geographic regions.

5. Standardized benchmark reporting for responsible AI is lacking

The effectiveness of benchmarks for AI tools largely depends on their standardized application. However, research from the AI Index reveals a significant lack of standardization in responsible AI reporting. For instance, leading developers, including OpenAI, Google and Anthropic, primarily test their models against different responsible AI benchmarks. Because each benchmark measures something different, this practice makes meaningful comparisons between models difficult. Standardized benchmark testing is considered critical to enhance transparency around AI capabilities.

6. Investment in generative AI is sky-high

While overall AI private investment decreased in 2023, funding for generative AI sharply increased. The sector attracted $25.2 billion last year, nearly nine times the investment of 2022 and about 30 times the amount in 2019. Generative AI accounted for over a quarter of all AI-related private investment in 2023.


7. AI is making workers more productive and creating higher-quality work

While using AI without proper oversight can lead to diminished performance, several studies assessing AI’s impact on labour suggest that it enables workers to complete tasks more quickly and improves the quality of their output. The studies also showed AI’s potential to bridge the skills gap between low- and high-skilled workers.


8. AI is playing a growing role in scientific progress

While 2022 saw AI begin to advance scientific discovery, 2023 made further leaps in terms of science-related AI application launches, says the AI Index. Examples include Synbot, an AI-driven robotic chemist for synthesizing organic molecules, and GNoME, which discovers stable crystals for the likes of robotics and semiconductor manufacturing.

9. AI regulations in the US are on the rise

The number of AI-related regulations introduced in the US has increased significantly. Image: Stanford University-AI Index

In 2023, 25 AI-related regulations were enacted in the US, growing the total number by 56.3%. Compare that to 2016, when just one was introduced.

The number of AI-related regulations passed by the EU jumped from 22 in 2022 to 32 in 2023. Despite this growth, approved EU regulations were at their peak in 2021, when 46 were passed.

10. People are more aware of – and nervous about – AI’s impact

The report includes findings from an Ipsos survey showing that, over the past year, the proportion of people who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%.

Unease towards AI products and services rose 13 percentage points from 2022, with 55% of respondents reporting feeling nervous. The report also cites Pew data showing that 52% of Americans feel more concerned than excited about AI, up from 38% in 2022.

In an effort to alleviate concerns about AI governance globally, the World Economic Forum has established a group called the AI Governance Alliance. Consisting of industry leaders, governments, academic institutions and civil society organizations, it aims to promote the creation of transparent and inclusive AI systems globally.



© 2024 World Economic Forum