Why AI needs smart investment pathways to ensure a sustainable impact

- Many AI investments fail due to unclear business value, poor planning and lack of ROI tracking.
- This highlights the need for smarter, structured funding decisions.
- A portfolio-based investment strategy, guided by clear metrics and adoption readiness, ensures scalable, high-impact AI outcomes.
Generative AI alone is projected to generate $7 trillion in value and is expected to boost US labour productivity by 0.5-0.9% annually through 2030. When combined with other automation technologies, it could drive productivity growth to 3-4% per year. Experiments across 18 knowledge-based tasks show these gains come from a 25% increase in speed and a 40% improvement in quality. This has the potential to transform industries and accelerate sustainable development.
However, 30% of enterprise generative AI projects are expected to stall in 2025 due to poor data quality, inadequate risk controls, escalating costs or unclear business value, findings also echoed by Deloitte. RAND research highlights that over 80% of AI projects fail, and Goldman Sachs questions whether the estimated $1 trillion in AI capital expenditures over the coming years will ever deliver a meaningful return. Microsoft’s CEO Satya Nadella recently warned that there may be an overbuild of AI infrastructure, which makes measuring AI’s real impact all the more necessary. That’s because AI isn’t a magic bullet: without the right structures, companies will spend heavily, only to write those investments off when projects collapse. The time of aimless experimentation and spending on AI is over.
From readiness to results: How to turn AI experimentation into returns
The question is no longer whether to invest in AI but how to make strategic decisions on funding the right projects — and knowing when to divest from those that fail to deliver results. As companies navigate AI adoption, two key approaches emerge:
The first approach is building AI readiness across the organization. Successful AI adoption requires more than just technology: it demands a foundation of accessible data, transparent governance, appropriate tools, and a workforce equipped with AI-relevant skills. By investing in employee training, establishing the right structures, and ensuring access to data and AI tools, companies can tap into internal expertise and proximity to day-to-day operations to drive AI innovation from within. This approach creates fertile ground across functions and relies on employees to play and experiment with AI, figuring out where its most promising uses lie and leaving plenty of room for discovery.
The second approach is a use-case-driven AI strategy, which focuses on identifying and implementing high-value AI applications that directly impact business performance. This approach tends to be more narrowly focused on well-articulated, existing frictions in the organization, the need to cut costs or the pursuit of competitive advantage. It rests on the notion that not everybody needs a seat license, just as not everybody needs data access rights in an organization. It allows for greater control and a clearer sense of pacing for expenditures.
But both approaches come with advantages and disadvantages. The first creates the greatest optionality for the most fruitful use cases to emerge, but it is the less defined path at the outset. The second is less risky if implemented well, but also more of a lock-in at an early stage, when AI is still more of a general-purpose toolset and less specialised. In either case, however, it is imperative that organizations evaluate the benefit against the effort expended. Being experimentation-minded is no excuse not to monitor the experiments. Conversely, focused experiments are no guarantee of efficacy, much less of scalability across an organization, as focus can increase requirements and slow productivity. This, we believe, is why we're seeing the statistics featured above: too many functions and too many organizations are not applying a thoughtful enough design to either approach. Saying "it's early days, so be patient" seems like an easy way out, but not one that CFOs or shareholders will appreciate, given the associated expenditures.
Measuring AI’s impact: The AI RoI Framework
One of the biggest gaps in AI investment models is the lack of a structured way to measure whether AI projects deliver the expected impact, or signal the need for early divestment. Without a clear assessment framework, companies risk pouring resources into initiatives that fail to scale or generate meaningful returns. The AI RoI Framework addresses this challenge by offering a structured, metric-driven approach to track AI projects from initial exploration to function-wide, business-unit-wide or enterprise-wide adoption.
This framework evaluates AI initiatives along two critical dimensions: technical feasibility — how mature the AI solution is and whether it can be implemented at scale — and adoption readiness, which assesses whether the business is equipped to integrate and operationalize AI effectively. By defining clear transition points, companies can determine when projects are ready to progress toward full deployment or whether they are at risk of stalling.

At its core, the framework maps AI projects through three key stages: their current state, a realistic near-term target (3-6 months), and an ideal state within 6-12 months. This structured progression helps businesses quantify the expected impact of AI investments and focus on initiatives that deliver measurable ROI. The size of each stage's "bubble" represents the projected benefits, whether faster time-to-market, strategic differentiation or deeper technical capabilities. This is an important element: it requires early hypotheses, but also frequent adaptation and iteration. The insights generated by experiments may mean that the benefits-framing needs to evolve rather than being fixed and normative on day one. A dashboard of metrics and KPIs under observation may need to include different meters and dials, ranging from the less quantifiable, such as learning, satisfaction or stimulated creativity, to the highly quantifiable, such as marketing lead generation, customer churn reduction, time savings and process efficacy.
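To make those moving parts concrete, here is a minimal sketch, in Python, of how such an assessment could be recorded and gated. The class names, the 0-1 score scales and the 0.6 transition thresholds are our own illustrative assumptions, not part of the framework itself.

```python
# Illustrative only: a toy model of an AI initiative assessed on the two dimensions
# and three stages described above. Names, scales and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class StageAssessment:
    """One roadmap stage: current, near-term (3-6 months) or ideal (6-12 months)."""
    stage: str                    # "current", "near_term" or "ideal"
    technical_feasibility: float  # 0-1: how mature and scalable the solution is
    adoption_readiness: float     # 0-1: how prepared the business is to operationalize it
    projected_benefit: float      # the "bubble size": expected annual benefit, in currency units


@dataclass
class AIInitiative:
    name: str
    stages: list  # ordered StageAssessment objects, from current state to ideal state

    def ready_to_progress(self, feasibility_gate: float = 0.6, adoption_gate: float = 0.6) -> bool:
        """A simple transition rule: progress only when both dimensions clear their gates."""
        current = self.stages[0]
        return (current.technical_feasibility >= feasibility_gate
                and current.adoption_readiness >= adoption_gate)


# Hypothetical example: a customer-support copilot mapped across the three stages.
copilot = AIInitiative(
    name="Support copilot (hypothetical)",
    stages=[
        StageAssessment("current", 0.7, 0.4, 150_000),
        StageAssessment("near_term", 0.8, 0.6, 400_000),
        StageAssessment("ideal", 0.9, 0.8, 1_000_000),
    ],
)
print(copilot.ready_to_progress())  # False: adoption readiness (0.4) is below the 0.6 gate
```

In practice, the gates themselves would be among the things iterated on as experiments generate new insights.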
Often it takes some deliberation between the people leading experiments and their finance managers to define how direct impact metrics flow up to top-line revenue or bottom-line cost, i.e. profitability. But charting that flow is nonetheless imperative, especially with an eye toward justifying increased compute expenditures and employee time on top of existing workloads.
Here are some examples of how immediate impact metrics flow into top- and bottom-line results (a minimal illustrative sketch of one such flow follows the list):
1. Increased Employee Productivity
↑ Revenue: Higher output
↓ Costs: Reduced labour hours
2. Improved Customer Satisfaction
↑ Revenue: Retention & referrals
↓ Costs: Less support needed
3. Enhanced Technical Automation
↑ Revenue: Faster delivery
↓ Costs: Process streamlining
4. Better Lead Generation
↑ Revenue: More potential customers
↓ Costs: Efficient marketing
5. Streamlined Customer Acquisition
↑ Revenue: Higher conversion rates
↓ Costs: Lower acquisition costs
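As one way to chart such a flow, the hedged sketch below converts two immediate metrics, hours saved and extra qualified leads, into monthly cost savings and revenue uplift, then divides by the pilot's running costs. Every figure and conversion assumption in it is a hypothetical placeholder to be replaced with an organization's own numbers, agreed with finance.

```python
# Illustrative only: mapping immediate impact metrics to a simple monthly ROI multiple.
# All parameters and the 4.33 weeks-per-month conversion are assumptions to be replaced.

def pilot_roi(hours_saved_per_week: float,
              loaded_hourly_cost: float,
              extra_qualified_leads_per_month: float,
              lead_conversion_rate: float,
              avg_deal_value: float,
              monthly_compute_and_licence_cost: float) -> float:
    """Return (cost savings + revenue uplift) / running cost for one month."""
    # Bottom line: labour hours freed up, valued at the loaded hourly cost.
    monthly_cost_savings = hours_saved_per_week * 4.33 * loaded_hourly_cost
    # Top line: additional qualified leads flowing through an assumed conversion rate.
    monthly_revenue_uplift = (extra_qualified_leads_per_month
                              * lead_conversion_rate * avg_deal_value)
    return (monthly_cost_savings + monthly_revenue_uplift) / monthly_compute_and_licence_cost


# Hypothetical pilot: 40 hours saved per week, 20 extra leads a month, $8,000 monthly running cost.
print(round(pilot_roi(40, 60, 20, 0.1, 5_000, 8_000), 2))  # roughly a 2.5x monthly return
```

The value of such a chart lies less in the precise number than in making the assumptions explicit enough for the people leading experiments and their finance managers to debate them.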
When applied within AI Sandboxes, this framework ensures that AI experimentation is not an end in itself but a systematic pathway to execution. It enables organizations to test, refine, and accelerate AI adoption while keeping investments aligned with business value. By leveraging this structured approach, companies can turn AI from an experimental cost center into a scalable, high-impact driver of competitive advantage.
A portfolio approach to AI: No single bets
AI investments, like financial or venturing portfolios, require diversification to balance short-term efficiency gains with long-term transformation. Betting on a single initiative is risky—some projects will drive value, while others may stall. A portfolio approach spreads investment across core, adjacent, and transformational AI initiatives, allowing companies to capture value while managing uncertainty.
Success depends not just on what to invest in but on how to manage AI over time. A structured portfolio approach ensures that AI investments are distributed across different time horizons. In the short term, AI enhances or streamlines existing operations, improving efficiency and reducing costs. Over the medium term, it enables new business models and differentiation, while long-term investments focus on transformational AI innovations that create new market opportunities, especially as agentic AI, digital twins, physical AI and other technologies combine to allow new types of offerings, for instance in smart cities, smart supply chain management, home-based automated healthcare or the space economy.
As time progresses, risk increases due to market and technological uncertainty. Therefore, companies must continuously rebalance their AI portfolios — doubling down on high-impact projects, pivoting where needed, and divesting from underperforming initiatives. To ensure these investments deliver measurable impact, the AI RoI Framework provides a structured way to assess feasibility, adoption readiness, and ROI potential. By defining clear transition points, it helps companies scale successful projects, refine those in progress, and exit failing initiatives early, preventing wasted resources.
Rather than committing large sums upfront, companies can start small, adjust based on results, and scale what works. This structured, data-driven approach ensures AI moves beyond experimentation to become a sustained driver of competitive advantage.
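For illustration only, the sketch below applies one possible rebalancing rule to a toy portfolio: a 70/20/10 budget split across core, adjacent and transformational initiatives, divesting anything whose observed ROI multiple has fallen below 1.0 and doubling down on the rest in proportion to results. The split, the threshold and the initiative names are assumptions, not recommendations.

```python
# Illustrative only: a toy rebalancing rule for an AI portfolio across three horizons.
# The 70/20/10 split and the 1.0 divestment threshold are assumptions, not prescriptions.

def rebalance(portfolio: dict, total_budget: float) -> dict:
    """portfolio maps initiative name -> (horizon, observed ROI multiple).
    Returns the next budget allocation per surviving initiative."""
    horizon_share = {"core": 0.70, "adjacent": 0.20, "transformational": 0.10}
    allocation = {}
    for horizon, share in horizon_share.items():
        candidates = {name: roi for name, (h, roi) in portfolio.items() if h == horizon}
        # Divest: drop initiatives whose observed ROI multiple is below 1.0.
        survivors = {name: roi for name, roi in candidates.items() if roi >= 1.0}
        if not survivors:
            continue
        # Double down: split this horizon's budget in proportion to observed ROI.
        total_roi = sum(survivors.values())
        for name, roi in survivors.items():
            allocation[name] = round(share * total_budget * roi / total_roi, 2)
    return allocation


# Hypothetical portfolio of four initiatives after one review cycle.
portfolio = {
    "invoice triage": ("core", 2.4),
    "support copilot": ("core", 0.6),        # underperforming: divested this cycle
    "dynamic pricing": ("adjacent", 1.8),
    "digital twin pilot": ("transformational", 1.1),
}
print(rebalance(portfolio, 1_000_000))
```

A real review cycle would also weigh strategic value and learning, not just the observed ROI multiple, but the discipline of explicit thresholds and regular rebalancing is the point.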
We're advocating for a deliberate design approach that leaves room for discovery but no excuses for wasting shareholder money, employee time, or expensive compute power and its commensurate energy consumption. AI is not to blame for wastage. Lack of thinking about pathways is.