Here's how investors are navigating the opportunities and pitfalls of the AI era

Amid investors' excitement about AI, concerns about the challenges and risks are also growing.

Chris Gillam
Research Fellow, CPP Investments
Judy Wade
GLT-Managing Director, Head of Strategy Execution and Relationship Management, CPP Investments

  • Investors are grappling with the challenges and risks of the fast-growing genAI market.
  • Leading investors are already using genAI to source opportunities and manage assets.
  • Investors view responsible AI as critical to capturing the technology’s value and mitigating risk.

Generative artificial intelligence (genAI) is transforming the world of investing. By generating novel investment opportunities and enhancing the investing process itself, the technology has already given asset managers a powerful new tool to create value. And with billions in investment still to come, the furious momentum behind it is unlikely to flag any time soon. All told, AI systems as a whole could add up to $25.6 trillion annually to the global economy.

But amid the excitement, concern has also grown about the challenges and risks associated with genAI. How are investors organizing to leverage this technology? Where do they see investment opportunities? And what steps are they taking to ensure AI is adopted responsibly in their operations?

We spoke with a number of our partners to determine how they’re thinking about these and other issues. The following are some key insights drawn from those discussions:

GenAI is being rolled out in internal operations – but at varying speeds

Our partners are using genAI across a wide range of use cases. Some are focused on creating efficiencies and synthesizing information within well-understood domains. Others are pushing the boundaries of what the technology can do, including using OpenAI tools to develop financial models or to predict bond prices, challenging the notion that large language models (LLMs) can't do maths.
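
To make the point about LLMs handling quantitative work concrete, here is a minimal sketch (illustrative only, not any partner's actual workflow) that asks a general-purpose model for a bond price and cross-checks the answer against a standard discounted-cash-flow calculation. The OpenAI Python client call pattern is real; the model name, prompt and parameters are assumptions.

```python
# Minimal sketch: ask an LLM to price a bond, then verify the answer with a
# standard discounted-cash-flow calculation. Illustrative only -- the model
# name and prompt are assumptions, not a partner's actual workflow.
from openai import OpenAI  # pip install openai


def dcf_bond_price(face: float, coupon_rate: float, yield_rate: float, years: int) -> float:
    """Price an annual-pay bond by discounting coupons and principal."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face


def llm_bond_price(face: float, coupon_rate: float, yield_rate: float, years: int) -> str:
    """Ask an LLM for the same price; reads OPENAI_API_KEY from the environment."""
    client = OpenAI()
    prompt = (
        f"Price a {years}-year annual-pay bond with face value {face}, "
        f"coupon rate {coupon_rate:.2%} and yield to maturity {yield_rate:.2%}. "
        "Reply with the price as a single number."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whatever your deployment uses
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    reference = dcf_bond_price(face=1000, coupon_rate=0.05, yield_rate=0.04, years=10)
    print(f"Deterministic DCF price: {reference:.2f}")  # approximately 1081.11
    print("LLM answer:", llm_bond_price(1000, 0.05, 0.04, 10))
```

Cross-checking the model's answer against a deterministic calculation like this is one way to use LLM output for financial modelling without trusting its arithmetic blindly.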

Significant differences also exist among our partners when it comes to the maturity of genAI adoption and the degree to which use cases are being scaled:

  • GenAI leaders in the group are already scaling in meaningful ways. These investors have systematic governance and frameworks to prioritize and scale use cases. Other organizations that have only rolled out horizontal platforms (e.g. enterprise ChatGPT) and remain in the piloting and experimentation phase are at risk of being left behind.
  • Centralized governance is important. For the most part, genAI pilots and rollouts are being centrally managed. Even where experimentation is more flexible and decentralized, pilots are nevertheless being driven by business units within centrally developed guidelines. The larger the financial institution, the more likely it is to have stringent central vetting and management of all use cases and pilots.
  • Data hygiene is the key to unlocking value in proprietary data. The application of genAI to unstructured proprietary data is seen as a big opportunity. However, there is a recognition that many data hygiene challenges must be overcome first (e.g. identifying which data can or can't be ingested by genAI models); a minimal screening sketch follows this list.
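
As an illustration of that screening step, the hypothetical sketch below applies rule-based checks that flag documents containing obvious personal data or restricted classification labels before they reach a genAI model. The patterns and labels are assumptions, not any partner's actual policy.

```python
# Minimal sketch of a pre-ingestion data-hygiene check: flag documents that
# look unsafe to feed into a genAI model. The rules and labels below are
# illustrative assumptions, not any firm's actual policy.
import re
from dataclasses import dataclass

# Naive patterns for obvious PII; real pipelines would use dedicated tooling.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
BLOCKED_LABELS = {"confidential", "mnpi", "restricted"}  # hypothetical tags


@dataclass
class IngestionDecision:
    allowed: bool
    reasons: list


def screen_document(text: str, labels: set) -> IngestionDecision:
    """Return whether a document may be ingested, and why not if blocked."""
    reasons = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            reasons.append(f"possible {name} detected")
    blocked = BLOCKED_LABELS & {label.lower() for label in labels}
    if blocked:
        reasons.append(f"blocked classification labels: {sorted(blocked)}")
    return IngestionDecision(allowed=not reasons, reasons=reasons)


if __name__ == "__main__":
    doc = "Quarterly board pack. Contact: cfo@example.com"
    decision = screen_document(doc, labels={"Confidential"})
    print(decision)  # allowed=False, with reasons for both the email and the label
```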

The best investors are currently using AI for data-driven investment sourcing (e.g. building AI models atop investor data to identify insights and prioritize investment opportunities) and asset management (e.g. scraping portfolio company websites to track the direction of product offerings; a rough scraping sketch follows below). They are also looking at AI across their entire portfolios, which is where we turn our attention next.
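
As a rough illustration of the asset-management use case, the sketch below pulls product names from a hypothetical portfolio company's product page and compares them with the previous snapshot to spot new or discontinued offerings. The URL, page structure and file path are placeholders; requests and BeautifulSoup stand in for whatever tooling a real pipeline would use.

```python
# Rough sketch: track a portfolio company's product offerings by scraping its
# product page and diffing against the last snapshot. The URL and snapshot
# path are placeholders, not a real target.
import json
from pathlib import Path

import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

PRODUCT_PAGE = "https://portfolio-company.example.com/products"  # placeholder
SNAPSHOT = Path("product_snapshot.json")


def fetch_offerings(url: str) -> set:
    """Grab product names from page headings; the selector is an assumption."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return {h.get_text(strip=True) for h in soup.select("h2, h3") if h.get_text(strip=True)}


def diff_against_snapshot(current: set) -> dict:
    """Compare current offerings with the stored snapshot, then update it."""
    previous = set(json.loads(SNAPSHOT.read_text())) if SNAPSHOT.exists() else set()
    SNAPSHOT.write_text(json.dumps(sorted(current)))
    return {"added": sorted(current - previous), "removed": sorted(previous - current)}


if __name__ == "__main__":
    offerings = fetch_offerings(PRODUCT_PAGE)
    changes = diff_against_snapshot(offerings)
    print("New offerings:", changes["added"])
    print("Dropped offerings:", changes["removed"])
```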

AI in the portfolio: Focus broadens to include new opportunities

Discussions with our partners suggest their focus on AI investing is evolving:

  • Application layer draws most attention. At CPP Investments, we’re using the framework below to help determine potential investment opportunities across the tech stack. Picking winners is difficult given we’re still in the early innings of AI. Yet our partners are bullish on the application layer (i.e. use-case applications of AI technology, such as customer service chatbots) of the AI value chain and believe the most value will accrue in this part of the AI tech stack in the long run.
A conceptual framework to research investment opportunities in AI. Image: CPP Investments
  • Investors are also focusing on more specialized AI applications. From 2020 to 2023, investors’ attention transitioned from foundational AI technologies and research to a broader spectrum of practical applications. Indeed, our partners have turned toward both highly specialized AI solutions and GPT-augmented “holistic” enterprise applications that disrupt traditional industries. Critically, these applications can be employed by users with fewer technical skills. This shift mirrors a maturation in the AI field as it moves from general-purpose technologies to sector-specific AI advancements that offer tangible business value.
  • AI is a value-creation opportunity. That opportunity will only grow as AI becomes more powerful and cheaper to deploy. As investment rises and costs decline, expect new applications and AI agents powered by various AI models to emerge. These will pave the way for broader genAI adoption across the consumer and enterprise landscape. Implementing AI across entire portfolios was repeatedly identified as a value-creation and stewardship opportunity for investors; at some point, all companies will be considered AI companies.

Responsible AI is viewed as critical

Our discussions with investors revealed a keen understanding of AI’s risks and the importance of developing, assessing, and deploying it in a safe, trustworthy and ethical way (i.e. responsible AI). This hinges on adhering to principles of validity and reliability, safety, fairness, security and resilience, accountability and transparency, explainability and interpretability, and privacy.

As part of a collaboration with the World Economic Forum, we talked to investors about how to accelerate the adoption of responsible AI in their portfolios:

  • Focus on the real economy. Much of the early discourse on responsible genAI has dealt with the development of AI systems and LLMs. With more policies and procedures established in this area, attention is shifting to how enterprises adopt and apply both traditional AI and genAI technologies. Large capital providers can help enterprises align with responsible AI standards and frameworks at this critical stage.
  • Consider the full spectrum of risk. Important concerns have emerged about discrimination and bias creeping into AI algorithms. For instance, discrimination in credit underwriting or racial bias in recidivism risk scoring can systematically disadvantage certain groups of people (a minimal bias-check sketch follows this list). Investors and other stakeholders should consider both these issues and the wider range of risks associated with AI. Topping that list are potential job displacement, hiring bias, skills gaps and other large-scale effects on the economy and society.
  • Focus on AI governance. By engaging with boards, investors can ensure enterprises are able to capture the full potential of AI while minimizing downside risks. Solid AI governance was highlighted as the core expectation to ensure companies are developing and adopting AI responsibly. Given the rapid evolution of AI technology, governance will need to be dynamic and adaptable.
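
To make the bias concern tangible, here is a minimal sketch of one check an investor or portfolio company might run on a credit-underwriting model: the gap in approval rates across groups (a demographic parity check). The data and tolerance are made up; real fairness reviews draw on fuller toolkits and multiple metrics.

```python
# Minimal sketch of one bias check for a credit-underwriting model:
# the demographic parity gap (difference in approval rates between groups).
# Data and threshold are made up; real reviews use multiple metrics.
from collections import defaultdict


def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}


def demographic_parity_gap(decisions) -> float:
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    # Hypothetical model outputs: (group label, loan approved?)
    sample = [("A", True)] * 70 + [("A", False)] * 30 + [("B", True)] * 50 + [("B", False)] * 50
    gap = demographic_parity_gap(sample)
    print(f"Approval-rate gap between groups: {gap:.2f}")  # 0.20
    if gap > 0.10:  # illustrative tolerance, not a regulatory threshold
        print("Flag for review: approval rates differ materially across groups.")
```

A simple metric like this cannot establish fairness on its own, but it gives boards and investors a concrete number to ask questions about, which is where governance engagement starts.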

With contributions from CPP Investments’ Private Equity (Growth Equity) team including Max Miller (Managing Director), Natalie Deschamps (Principal), Bryton Hewitt (Senior Associate), Evelyn Chow (Senior Associate) and Kanishk Malhotra (Associate), and Active Equity team, Nadeem Janmohamed (Managing Director).
