How to make AI work in your enterprise through integration, not silos

Enterprise AI can be tricky, but forward-deployed engineers can sit alongside teams for better results.
- The use of artificial intelligence (AI) has risen rapidly, as model performance has improved dramatically. Consumers believe it delivers meaningful results and "mostly right" is often good enough, while the cost of error is low.
- Enterprise AI is more challenging because it relies on clean data and on delivering a high degree of accuracy with higher stakes at play. AI must be integrated into workflows and governance embedded from the start.
- Forward-deployed engineers are now embedded within the teams that deliver outcomes, rather than building AI systems in isolation.
Artificial intelligence (AI) isn’t just changing how we work. It’s compressing decades of change into months.
That acceleration is not theoretical. The Stanford AI Index shows that the compute used to train AI systems has doubled roughly every six months. Model performance has also improved dramatically.
Researchers introduced new benchmarks in 2023:
- MMMU: to evaluate multimodal models on massive multi-disciplinary tasks that demand college-level subject knowledge.
- GPQA: to test advanced reasoning in AI.
- SWE-bench: to evaluate large language models on real-world software issues collected from GitHub.
A year later, performance increased sharply: scores rose by 18.8, 48.9 and 67.3 points on those benchmarks, respectively.
On the consumer side, adoption has been explosive. A global KPMG study of over 48,000 people found that 66% use AI regularly and 83% believe it will deliver meaningful benefits. For many, AI feels intuitive and embedded in daily life, helping people move faster, make better decisions and reduce everyday friction.
AI models have reached a level of capability where they are broadly useful, no longer just experimental. Models can research, reason, generate, summarize, classify and assist at a level that would have felt impossible just a few years ago.
In consumer contexts, this capability translates directly into value because environments are unconstrained and forgiving. “Mostly right” is often good enough and the cost of error is low.
The challenge, then, is not whether AI works. It clearly can. The gap between what AI can do and what it does emerges when these same models are introduced into enterprise environments, where expectations, constraints and organizational change management are fundamentally different.
What challenges does enterprise AI face?
Inside the enterprise, the adoption story has been more complex. Despite investment, most organizations struggle to translate AI potential into operational change and measurable business impact. An MIT study found that only 5% of custom AI projects reach production.
KPMG research showed that 66% of employees rely on AI output without validating accuracy and 56% report making mistakes in their work because of it.
This gap is not about model capability. It is about how AI is safely and securely deployed into complex workflows that demand precision, nuance and trust.
The foundation has to be clean and harmonized data. Imagine your customer relationship management (CRM) and enterprise resource planning (ERP) systems both contain the same contact. In one system, they are a customer; in the other, a supplier. The email addresses match but one record includes a middle initial and the other doesn’t.
Which record is correct? Which system is the source of truth? And which version does your AI act on?
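One common answer is to merge the conflicting records under explicit "survivorship" rules that say which system owns which field. The sketch below is a minimal illustration of that idea; the records, field names and the source-of-truth policy are all hypothetical assumptions, not a standard.

```python
# A minimal sketch of record reconciliation between a CRM and an ERP.
# The records and the survivorship policy below are illustrative assumptions.

CRM = {"name": "Jane Q. Doe", "email": "jane.doe@example.com", "role": "customer"}
ERP = {"name": "Jane Doe",    "email": "jane.doe@example.com", "role": "supplier"}

# Assumed policy, agreed with the business: the CRM owns identity fields,
# the ERP owns the commercial relationship.
SOURCE_OF_TRUTH = {"name": "CRM", "email": "CRM", "role": "ERP"}

def harmonize(crm: dict, erp: dict) -> dict:
    """Merge two records that match on email, field by field."""
    assert crm["email"].lower() == erp["email"].lower(), "not the same entity"
    merged = {}
    for field, owner in SOURCE_OF_TRUTH.items():
        merged[field] = crm[field] if owner == "CRM" else erp[field]
    return merged

golden = harmonize(CRM, ERP)
print(golden)
# {'name': 'Jane Q. Doe', 'email': 'jane.doe@example.com', 'role': 'supplier'}
```

The point is not the code but the decision it encodes: until someone has answered "which system owns this field?", no model, however capable, can act on the data safely.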
This is what enterprise AI looks like in practice. Unlike consumer tools, enterprise workflows operate under strict constraints. They require accuracy, measurability and consistency, while operating within regulatory frameworks, customer expectations and real financial risk. These environments demand extreme precision; “mostly right” is not good enough.
This is why enterprise AI has yet to deliver on its promise and return on investment. Enterprises must pull from fragmented and diverse data sources, rely on legacy systems that were not designed to work together and attempt to automate workflows full of exceptions, manual overrides and undocumented rules.
In enterprise environments, success depends less on model sophistication and more on how well AI is integrated into real workflows. The organizations that see the best results from AI embed it directly into day-to-day operations, put line owners in charge of design and deployment and pair engineers closely with domain experts.
Governance is built into production from the start. Traditional SaaS approaches struggle in this context: centralized teams, long feedback loops and fixed requirements do not work for AI systems that must continuously adapt to the work they support.
To move from experimentation to execution, organizations must rethink how they build, deploy and govern AI.
How will forward-deployed engineers deliver better AI?
As the cost of building software continues to decline, effectiveness is no longer defined by “out-of-the-box” workflows. Increasingly, it is defined by customizability, service quality, iterative feedback and governance in production.
Rather than having central tech teams build AI in isolation and hand it off to users, leading organizations embed engineers directly alongside the teams responsible for outcomes. This evolution has given rise to the forward-deployed engineer.
This shift is already reshaping how organizations deliver AI. In 2025, job postings for forward-deployed engineers increased by more than 800%, signalling a broader change in where enterprises believe value is created.
Forward-deployed engineers operate at the intersection of engineering, operations and execution. They work directly with domain experts to design the evaluation criteria of an AI system before it is ever built.
They use that rubric to design, deploy and continuously refine AI-powered workflows in real-world environments. They don’t just ship features for test environments; they apply scientific rigour to ensure safe and secure outcomes.
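The "rubric first" approach can be sketched in a few lines. Everything below is a hypothetical illustration: the golden cases, the pass bar and the stand-in classifier are invented for the example, not drawn from any real deployment.

```python
# Domain experts agree the test cases and the pass bar *before* any system
# is built. All names, cases and thresholds here are illustrative.

GOLDEN_CASES = [
    {"input": "invoice #123, net 30", "expected": "payment_terms"},
    {"input": "ship to warehouse B",  "expected": "logistics"},
]
PASS_BAR = 0.95  # assumed: the workflow tolerates at most 5% misclassification

def evaluate(classify, cases, pass_bar):
    """Score a candidate system against the rubric and gate deployment on it."""
    correct = sum(classify(c["input"]) == c["expected"] for c in cases)
    score = correct / len(cases)
    return score, score >= pass_bar

# A stand-in "model" so the example runs end to end:
def naive_classifier(text):
    return "payment_terms" if "invoice" in text else "logistics"

score, deployable = evaluate(naive_classifier, GOLDEN_CASES, PASS_BAR)
print(f"score={score:.2f} deployable={deployable}")
```

Because the rubric exists independently of any one model, the same gate can be re-run every time the system changes, which is what turns "it seems to work" into a measurable deployment decision.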
By sitting close to the work, they also shorten feedback loops, improve reliability and ensure AI systems adapt to production realities rather than idealized assumptions. This proximity enables effective governance, with human oversight and verification designed directly into the workflow.
The value of this model becomes clear in real-world deployments. In professional sports, the Charlotte Hornets partnered with forward-deployed engineers to transform raw video footage into actionable basketball intelligence, addressing a major data gap and enabling more informed scouting and performance decisions.
As with other successful deployments, impact depended on close collaboration with operators, continuous feedback and governance embedded from the start.
Governance is not a policy exercise; it is a key operational capability. It defines success, determines who verifies outputs and when human judgment is in the loop, and governs how performance is monitored and how systems improve safely over time.
When governance is treated as an afterthought, it slows adoption and erodes trust. When it is designed into workflows and supported by embedded forward-deployed engineering, it enables responsible deployment.
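Designing governance into the workflow can be as concrete as a routing rule: act automatically only above an agreed confidence bar, and escalate everything else to a human reviewer. The threshold, queue and example outputs below are assumptions for illustration.

```python
# A sketch of governance as code: low-confidence outputs go to a human
# review queue instead of being acted on. The threshold is an assumed
# policy that would be set with the line owners of the workflow.

AUTO_APPROVE_CONFIDENCE = 0.90

def route(output: str, confidence: float, review_queue: list) -> str:
    """Act automatically only when confidence clears the bar; else escalate."""
    if confidence >= AUTO_APPROVE_CONFIDENCE:
        return "auto-approved"
    review_queue.append(output)  # human judgment stays in the loop
    return "escalated"

queue = []
print(route("refund $40", 0.97, queue))     # auto-approved
print(route("refund $4,000", 0.55, queue))  # escalated
print(queue)                                # ['refund $4,000']
```

The design choice is that escalation is the default and automation is the exception that must be earned, which is the inverse of treating human review as an afterthought.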
The future of work in a generative AI world will not be defined by who adopts AI first. It will be defined by who reimagines how work gets done. The organizations that succeed will redesign workflows around AI, embed engineering where execution happens and treat governance as foundational.
AI models are ready. The question is whether our operating models are prepared to meet them.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.