Emerging Technologies

3 AI predictions for 2023 and beyond, according to an AI expert

Responsibly harnessing the power of AI.

Michael Schmidt
Chief Technology Officer, DataRobot

  • The field of artificial intelligence (AI) has seen huge growth in recent years.
  • Companies seeking to harness AI must overcome key societal concerns.
  • Key predictions outline how to achieve value from responsible AI growth.

If there’s one thing we know for certain when looking at the year ahead, it’s this: the organizations prepared to take on uncertainty – from market conditions to geopolitical unrest and everything in between – will be the ones best positioned to serve their customers, employees, and shareholders.

The artificial intelligence (AI) field has seen incredible growth in the last five years because it offers new ways to mitigate uncertainty: by leveraging data, organizations can respond to changing environments as quickly as new data comes in.

The technology and its benefits are no longer a great unknown to the majority; instead, many have seen first-hand AI’s ability to work quickly and efficiently on many of society’s most pressing challenges. We’ve seen it play a role in the record speed at which COVID-19 vaccines were delivered, help hospitals identify and treat their most at-risk patients, and, more broadly, vastly reduce the number of human errors in data.

As we look to the year ahead, several forces may come to a head: heightened societal awareness of AI, increased regulatory pressure, growing momentum of investment in the space, and AI’s continuing effect on employee productivity. Practical and applied AI concerns will become paramount to sustaining value from AI growth.

1. Heightened awareness and ethical concerns

Algorithmic bias has been a growing subject of discussion and debate in the use of AI. It is a difficult topic to navigate, both due to the potential complexity of mathematically identifying, analysing, and mitigating the presence of bias in the data and because of the social implications of determining what it means to be “fair” in decision-making.

Fairness is situationally dependent, in addition to being a reflection of values, ethics, and legal regulations. That said, there are clear ways to approach questions of AI fairness by using data and models with guardrails in place, as well as suggested steps organizations can take to mitigate issues of uncovered bias.

The largest source of bias in an AI system is the data it was trained on. That data might have historical patterns of bias encoded in its outcomes. Ultimately, machine learning gains knowledge from data, but that data comes from us – our decisions and systems.
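One common way to put numbers on the kind of encoded bias described above is to compare outcome rates across groups in the training data. The following is a minimal, illustrative sketch – the data and the 0.8 threshold (a widely cited rule of thumb for disparate impact) are assumptions for illustration, not figures from this article.

```python
# Illustrative sketch: comparing positive-outcome rates across groups
# in historical data (demographic parity / disparate impact check).

def selection_rates(records):
    """Return the positive-outcome rate for each group."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if outcome else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate.
    A common rule of thumb flags values below 0.8 for review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical historical hiring records: (group, was_hired)
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = selection_rates(data)            # A: 0.75, B: 0.25
ratio = disparate_impact_ratio(rates)    # 0.33 -> flagged for review
```

A model trained naively on data like this would tend to reproduce the historical disparity, which is why audits start with the data rather than the algorithm.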

Because of the expanding use of the technology and society’s heightened awareness of AI, you can expect to see organizations auditing their systems and local governments working to ensure AI bias does not negatively impact their residents. In New York City, for example, a new law will go into effect in 2023 penalizing organizations whose hiring tools exhibit AI bias.

2. Increased regulatory pressure

In the year ahead, I expect companies to face increased regulatory pressure around their AI models. Regulatory changes are likely to include requirements around both explanations for individual predictions as well as detailed records and tracking of the history and lineage of how models were trained.
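The record-keeping side of such requirements can be sketched simply: tie each model to a tamper-evident fingerprint of its training inputs and settings. The field names and example values below are illustrative assumptions, not a regulatory or industry-standard schema.

```python
# Illustrative sketch of a minimal model-lineage record: an auditable
# link between a trained model and the data/parameters that produced it.
import hashlib
import json
from datetime import datetime, timezone

def lineage_record(model_name, training_data, hyperparams):
    """Build a record tying a model to a fingerprint of its inputs."""
    # Canonical JSON (sorted keys) so identical data yields one hash.
    data_bytes = json.dumps(training_data, sort_keys=True).encode()
    return {
        "model": model_name,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "data_fingerprint": hashlib.sha256(data_bytes).hexdigest(),
        "hyperparameters": hyperparams,
    }

# Hypothetical example values:
record = lineage_record(
    "credit-risk-v2",
    [{"income": 50000, "defaulted": 0}],
    {"learning_rate": 0.1, "n_trees": 200},
)
```

Records like this, kept for every training run, are what make it possible to answer later questions about how a given model was built.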

Increased AI regulation will ultimately be welcomed by the industry: in a recent DataRobot survey, 81% of tech leaders said they would like to see increased government regulation of AI. Meanwhile, the recent Blueprint for an AI Bill of Rights, which sets out five principles and associated practices to protect the rights of the American public in the age of AI, has spurred companies into action. More companies are now aware that voluntary guidelines may harden into binding rules in regulated industries, and of the potential cost of reactively achieving compliance in a short period of time.

Because of this, I predict most companies will need to invest in systems with model governance in place. By investing in systems that have the appropriate guardrails, companies can continue to focus on technological innovation with the peace of mind that their systems comply with legal and regulatory obligations.

3. Further investments in the space

In 2023, I expect to see continued momentum in AI investments, particularly among businesses most directly impacted by economic and supply chain disruptions, as well as mature industries generally able to scale AI adoption the most, such as financial services, retail, healthcare, and manufacturing. However, I also predict that, while some investments will progress, some AI technology trends will continue to be experimental.

Looking at financial services, for example, I expect use cases to turn to AI systems that improve the accuracy of fraud detection and speed up laborious reporting processes. With rising expectations and an onslaught of security breaches, financial services firms need to secure a competitive advantage with AI technologies that can help mitigate these detrimental issues. Additionally, AI will help to improve job satisfaction and free up employees to focus on adding customer value.

Looking at technology trends, generative AI is receiving tremendous interest based on newly developed deep learning models (from OpenAI and others). However, I predict these models are still too new to be practical for most enterprises, for a few reasons. First, it is difficult to ensure their behaviour on essential issues like bias and fairness; despite mitigation efforts, current versions can be easy to break. This means businesses will need to place real trust in the providers of these models, since they have little hope of building or training their own.

Adapting these models to desired use cases is also difficult for most to get right. While I expect companies to continue working with generative AI, I believe applications will remain experimental for many enterprises in the coming year, until the business cases and their expected return on investment are better understood.

Overall, however, businesses that focus on building an AI mentality across the organization – by continuing to invest in the space and fully integrating AI into their operations, including assessing new developments – will be better suited to handle market uncertainty and drive long-term success.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
