Artificial Intelligence

Enterprise-wide AI can unleash the technology's potential: Here's how you get there


To be truly innovative, AI must influence entire workflows, not just single tasks. Image: Getty Images/iStockphoto

Jasmeet Singh
Executive Vice President, Global Head of Manufacturing and Chair Infosys Public Services and Infosys Automotive GmbH, Infosys
Inderpreet Sawhney
Chief Legal Officer and Chief Compliance Officer, Infosys
This article is part of: World Economic Forum Annual Meeting
  • Enterprise-wide artificial intelligence (AI) can unlock more of the technology's potential and has been shown to reduce AI incident costs by up to 8%, yet only 2% of leaders embed it.
  • For enterprise-wide scaling, you need AI-ready data, fit-for-purpose AI models, upskilled talent and responsible AI governance; doing so in tandem ensures productivity without compromising standards.
  • The World Economic Forum provides an impartial platform for business to help make sense of new technologies and drive their responsible adoption and application, including via the AI Governance Alliance.

Artificial intelligence (AI) is helping humans work better, increasing productivity by as much as 40%. But there’s a significant difference between introducing AI in a few select domains, such as software engineering and marketing, and scaling it enterprise-wide.

To be truly innovative, AI must influence entire workflows, not just single tasks, while unlocking consistent, real-time decision-making.

There are several prerequisites to scale AI enterprise-wide: AI-ready data; fit-for-purpose AI models; upskilled talent; and responsible AI governance.

In combination, these ensure that AI systems deliver business impact in production without veering off course. For example, applying AI for predictive maintenance to reduce downtime needs to be done while protecting worker safety.


Responsible innovation is a growth driver

Getting these aspects right is vital for organizations. However, responsible innovation with AI is anything but easy. According to Infosys research, only 2% of organizations were ready for enterprise AI at the beginning of 2025 across the five pillars of strategy, data, technology, governance and talent.

Further, recent research found that only 2% of leaders embed the requisite responsible AI practices as they operationalize AI at scale (15% are followers), with the weakest capabilities being risk mitigation and trust in AI solutions.

For those who did achieve responsible AI success, the benefits were significant. Responsible AI leaders in the research group reduced AI incident costs and severity.


Meanwhile, responsible AI best practices, such as explainability and reliability techniques, along with model validation, engineering processes and intellectual property infringement protections, reduced overall AI spend by as much as 8%.

Further, a more focused, responsible AI approach enabled greater AI project throughput. No wonder then that 78% of executives in the research said that responsible AI is a key growth driver for their business.

The importance of a platform, foundry and factory

AI is evolving rapidly and it is critical to adopt the best, most appropriate models and cloud infrastructure based on target use cases. This “poly AI,” “poly cloud” platform approach (that is, the strategic use of multiple AI models and cloud platforms), along with the requisite governance frameworks, facilitates responsible AI innovation at scale.

It also avoids locking organizations into multi-year AI investments.
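A poly-AI, poly-cloud approach can be pictured as a routing table that maps each use case to a fit-for-purpose model and hosting target, so no single vendor is hard-wired in. The sketch below is purely illustrative; all model, cloud and use-case names are hypothetical.

```python
# Illustrative "poly AI" / "poly cloud" routing policy. Every name here is a
# hypothetical placeholder, not a real product or deployment.

from dataclasses import dataclass


@dataclass(frozen=True)
class Route:
    model: str   # model family chosen for this use case
    cloud: str   # cloud or on-premises platform hosting it


# One approved (model, cloud) pair per use case; swapping a vendor means
# editing this table, not rewriting applications.
ROUTES = {
    "code-generation": Route(model="open-weights-code-llm", cloud="cloud-a"),
    "customer-support": Route(model="small-finetuned-llm", cloud="cloud-b"),
    "predictive-maintenance": Route(model="tabular-ml-model", cloud="on-prem"),
}


def pick_route(use_case: str) -> Route:
    """Return the approved model/cloud pair, or fail loudly for unplanned work."""
    try:
        return ROUTES[use_case]
    except KeyError:
        raise ValueError(f"no approved route for use case: {use_case!r}")
```

Failing loudly on an unknown use case is deliberate: it forces new workloads through the governance process before they consume any model or cloud capacity.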

This platform approach also enables agentic AI, a technology now applied across industries, where goal-seeking software bots complete tasks with little to no human intervention.

The platform uses specific communication standards, the Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol, to allow AI agents to interact safely with sensitive enterprise systems such as SAP and Salesforce.

These protocols are designed to constrain agent behaviour, reducing the risk that an agent performs unintended or harmful tasks.
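One way such a protocol layer constrains agents is deny-by-default tool dispatch: every call an agent makes is checked against an explicit allowlist before it reaches a backend system. The sketch below illustrates the idea only; it is not the MCP or A2A SDKs, and the tool names are hypothetical.

```python
# Illustrative deny-by-default gate for agent tool calls (not the real MCP/A2A
# implementations). Tool names are hypothetical placeholders.

# Only explicitly approved, read-only operations may run.
ALLOWED_TOOLS = {
    "crm.read_account",    # hypothetical read-only Salesforce-style call
    "erp.read_inventory",  # hypothetical read-only SAP-style call
}


def dispatch(tool: str, payload: dict) -> dict:
    """Execute a tool call only if it is explicitly allowlisted."""
    if tool not in ALLOWED_TOOLS:
        # Unknown or write-capable tools are refused, never forwarded.
        return {"status": "denied", "tool": tool}
    # In a real deployment this branch would invoke the enterprise system's API
    # with the agent's scoped credentials; here we simply echo the request.
    return {"status": "ok", "tool": tool, "echo": payload}
```

The key property is that the agent never holds direct credentials to SAP or Salesforce; it can only request operations the gate already knows and permits.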

A platform approach enables privacy-compliant processes while accelerating the creation of AI solutions. Recent research and client experience suggest a two-stage approach: First, create an AI foundry to experiment with new models and solutions, and then operationalize those learnings using an AI factory. This approach manages AI-related risks while scaling adoption.

The open-source inflection point

Equally critical to scaling AI responsibly is selecting the underlying AI models. In 2026, many organizations will turn to open-source solutions, building on the 63% of organizations that already leverage open-source AI tools today.

For system integrators such as Infosys, open-source democratizes AI by reducing dependence on a few dominant providers and enabling the development of scalable, cost-efficient solutions.

The primary reasons for choosing open-source over proprietary models are lower implementation and maintenance costs, along with the flexibility and breadth of model choices offered by ecosystems such as Hugging Face, a centralized AI community for developers.

A clear trend is emerging: clients increasingly favour open-source in innovation discussions. Enterprises are drawn to the enhanced transparency and flexibility that these models offer, along with their ability to be fine-tuned for industry-specific domains, where model accuracy and contextual understanding are critical.

However, there is a flip side: questions remain about just how “open” these models truly are. Many still withhold access to training datasets, pre-training processes and evaluation code. This can undermine the models’ accountability and complicate the distribution of liability across the AI value chain.

The need for centralized governance

In addition to building a production-grade AI platform and adopting open-source models, the most effective way to operationalize responsible innovation is to centralize governance. A centralized registry of AI models and agents supports scalable deployment while maintaining security and adherence to operational standards.

Centralized capabilities also enable cost tracking, performance monitoring and ongoing innovation, ensuring AI deployments remain responsible, reliable and efficient.
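In practice, a centralized registry ties each model or agent to ownership, licence and approval metadata, and accumulates usage and cost counters per entry. The sketch below shows one minimal shape such a registry could take; every field name is an assumption for illustration.

```python
# Minimal sketch of a centralized AI model/agent registry with governance
# sign-off and cost tracking. Field names and workflow are assumptions.

from dataclasses import dataclass


@dataclass
class RegistryEntry:
    name: str
    version: str
    owner: str          # accountable team
    licence: str        # e.g. "apache-2.0"; needed for open-source compliance
    approved: bool = False  # governance sign-off before production use
    calls: int = 0          # usage counter for monitoring
    cost_usd: float = 0.0   # running cost attribution


class AIRegistry:
    def __init__(self) -> None:
        self._entries: dict[str, RegistryEntry] = {}

    def register(self, entry: RegistryEntry) -> None:
        self._entries[f"{entry.name}:{entry.version}"] = entry

    def record_call(self, name: str, version: str, cost_usd: float) -> None:
        """Meter one production call; unapproved entries may not serve traffic."""
        entry = self._entries[f"{name}:{version}"]
        if not entry.approved:
            raise PermissionError(f"{name}:{version} is not approved for production")
        entry.calls += 1
        entry.cost_usd += cost_usd
```

Routing every production call through the registry is what makes cost tracking and performance monitoring a by-product of governance rather than a separate effort.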

Infosys has launched several initiatives to embed responsibility and transparency into its AI ecosystem:

  • AI management system: A framework for the continuous validation of AI usage across development and deployment cycles.
  • Software component certification: A verifiable system ensuring that all software components are tracked and properly licensed, that vulnerabilities are documented, and that supply chain risks and license violations are mitigated.
  • Responsible AI guardrails: Infosys has developed a responsible AI toolkit with built-in mechanisms for bias detection, explainability and hallucination control, ensuring AI systems operate within ethical and operational boundaries. This toolkit is also open-sourced by Infosys for wider adoption by industry.
  • Dataset governance: Use of datasheets and model provenance validation, aligned with the European Union’s AI Act and intellectual property covenants, to ensure responsible data use and traceability.
  • ISO 42001 certification: Achieving certification against this emerging AI management system standard enhances trust in AI systems among both employees and customers.
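Guardrails of the kind listed above are often implemented as a chain of checks that every model response must pass before release. The sketch below is a generic illustration of that pattern, not the Infosys toolkit's actual API; the two checks shown are deliberately simplified.

```python
# Generic output-guardrail pattern (not the Infosys Responsible AI toolkit's
# real API): a response is released only if every check passes.

import re


def no_email_pii(text: str) -> bool:
    """Reject responses containing email addresses (a simplified PII check)."""
    return re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text) is None


def within_length(text: str, limit: int = 2000) -> bool:
    """Crude reliability check: unusually long answers are held for review."""
    return len(text) <= limit


GUARDRAILS = [no_email_pii, within_length]


def release(text: str) -> tuple[bool, list[str]]:
    """Run all guardrails; return (passed, names of failed checks)."""
    failed = [check.__name__ for check in GUARDRAILS if not check(text)]
    return (len(failed) == 0, failed)
```

Returning the names of failed checks, rather than a bare yes/no, is what makes the guardrail layer auditable: every blocked response carries its reason.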

Together, these innovations help Infosys operationalize secure, responsible AI solutions at scale. This capability supports innovation while safeguarding trust and brand reputation – all of this is critical in a business environment where AI-related incidents often result in significant reputational damage.

Responsible AI innovation through people

Amid all these factors, the talent dimension may be the most critical. According to Infosys’ AI Business Value Radar, organizations that actively prepare and engage their workforce achieve the highest returns – consistently outperforming those that implement AI without fully supporting their people.

Indeed, innovation culture often outweighs strategy, and equipping teams to become builders of advanced AI solutions, as Infosys has done, is a decisive step toward gaining competitive advantage in the emerging era of agentic AI.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.



© 2026 World Economic Forum