Opinion
Why scaling AI is about company culture, not winners and losers

Building an AI culture at work – and beyond – must involve a human element.
- Integrating new technology like artificial intelligence (AI) is about more than simply adopting new tools.
- Companies and nations need to integrate AI into their culture, building a shared understanding of what new technology tools can and should achieve.
- Creating an AI culture can help companies and nations gain the agility needed to face the future.
For as long as I can remember, conversations about technology in business have been structured around the same set of warnings: “disrupt or be disrupted”, “winners and losers”, “don't get left behind”. These are the phrases heard at keynotes given by executives at conferences and peppered throughout investor decks.
They are not entirely wrong. Disruption is real, markets do shift and organizations that fail to adapt can fall behind. But this framing suffers from a significant blind spot, and after more than two decades in the telecom and technology space, I am convinced it is costing us more than we realize.
The blind spot in this narrative is its almost singular focus on what technology can do, without any regard for who will use it and how. In this framing, adoption is a binary outcome: you either have the technology or you don't.
But the harder, more consequential variable is something far more human: culture. Culture is a shared understanding, among a group of people, of what they are trying to achieve with the tool in their hands.
Building an AI culture
I have watched organizations invest heavily in very capable platforms and systems, only to find that, 18 months later, they have little to show for it. Not because the technology failed, but because the people from different departments had no common purpose or shared language for how to use the tool.
This is the real challenge ahead for scaling artificial intelligence (AI) inside an enterprise or across a nation. And I believe the answer lies not in better algorithms, but in weaving an “AI culture” into the fabric of an organization, and even a nation.
To many people, AI looks like a silver bullet: a solution you deploy with the expectation that it will solve problems on its own. But AI is not a plug-and-play answer to complex organizational challenges. It is a capability and, like all capabilities, its value depends entirely on the people using it and the shared intent behind it.
Building an AI culture means ensuring that everyone across your organization – from the technology group, to the HR function and the finance department, to the customer-facing teams – speaks the same language about what AI is for and how to get value out of it. Without that shared language, there is no scale, only pockets of fragmented experiments and siloed data, with no agreement on who has access or how the data can serve people.
Trusting, verifying and building on AI
We often talk about AI as if its power comes from the machine, but I think about it differently. The real power of AI is how it scales the human intuition of gifted people in an organization: those who know from experience what works and what doesn't, those who can sense a pattern before they can explain it. AI gives these people something they never had before: the ability to understand why their intuition is right, and to apply that understanding at a scale no individual can reach alone.
This is not a theoretical argument. Every year, during the Hajj season, as Muslims from around the world make a pilgrimage to Mecca in Saudi Arabia, Mobily manages network operations for one of the largest and most concentrated human gatherings on earth. We use AI to anticipate demand and crowd movement, to allocate capacity and maintain telecommunications and internet service quality across millions of simultaneous connections in a compressed geography.
But the insights driving the annual improvements of this system do not come from the model. These insights come from the people who examine what the model did, understand the reasoning behind it and use that understanding to refine their approach for the following year. The question is not whether AI can manage complexity, but whether people can understand what AI is doing well enough to trust it, verify it and build on it.
That requires investment: data scientists who architect traceability into how decisions are made, and training that goes beyond technical certification to address organizational habits.
The future AI transition
This brings me back to the narrative I want to retire.
The framing of winners and losers, of disruption as an inevitable sorting mechanism, assumes technology is the primary variable. But, in my experience, the primary variable is always people and how well they understand a shared purpose, how clearly they can connect their day-to-day decisions to an intended outcome and how much they trust the tools and the institutions they work with and within.
An AI culture does not leave people behind. It has no losers by design because its foundation is a shared language, a common purpose and the human capability to understand and guide the machines we work alongside.
The transition ahead is significant. But the organizations – and the nations – that navigate it most successfully will not necessarily be those with the most advanced technology, but those that built the right culture before the technology arrived. That culture will give them the agility to keep iterating and to face whatever comes next.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.