AI and the next innovation frontier: why trust will define the $20 trillion opportunity

Securing AI means securing the very language that it relies on. Image: Getty Images
- A new frontier of risk is rapidly emerging, rooted in machines’ fragile understanding of meaning and context of human language.
- Addressing this new threat landscape begins with a principle that’s simple yet transformative: zero trust. Trust nothing. Verify everything.
- This aligns with a central theme for Davos 2026: realizing AI’s economic potential depends on responsible deployment and verifiable trust.
AI is reshaping how industries operate, compete and create value at unprecedented speed.
By 2030, it is projected to add nearly $20 trillion to global GDP, cementing its place as one of the most powerful economic forces of our century. Yet as AI becomes more deeply embedded in business operations, a new frontier of risk is rapidly emerging, rooted in machines’ fragile understanding of meaning and context of human language.
Gen AI and large language models (LLM) are moving into the core of critical workflows across every sector. Financial institutions deploy them to analyze markets and anticipate volatility. Manufacturers integrate them to orchestrate complex supply chains. Healthcare organizations apply them to triage information and accelerate research.
But as the reliance on GenAI systems accelerates, so does a new class of threats that exploit communication, not code. They target what AI understands rather than how it executes. And they are emerging faster than most organizations are prepared to detect and defend against.
LLMs and the emerging threat landscape
Cybersecurity has traditionally focused on hardening infrastructure: locking down networks, patching vulnerabilities and enforcing identity controls. But today’s threat landscape is shifting toward something more subtle and even harder to detect. Cyber criminals no longer need to exploit software flaws or breach a network to cause harm. They can manipulate how AI system interprets language, turning semantics to attack surface.
Hidden malicious instructions can hide in plain sight – in public data, training materials, customer inputs or open-source documentation. These manipulations can redirect a model’s reasoning, distort its outputs or compromise the insights it provides to decision makers. Because these attacks occur in natural language, traditional security tools rarely identify them. The model is poisoned at the source, long before anyone realizes something is wrong. For organizations that lack adequate preparation and protection, this represents a serious and often unseen risk.
This is not a hypothetical threat. As more organizations adopt autonomous and semi-autonomous AI systems, the incentive for adversaries to target the language layer is only growing. The cost of entry for attackers is low and the potential damage is massive.
The silent insider threat
When an AI model is compromised, it behaves like an insider threat. It can quietly leak intellectual property, alter strategic recommendations or generate outputs that benefit a third party. The challenge lies in its invisibility: it acts without raising the alarm. The system still answers questions, summarizes documents, processes data and assists employees. It simply does all of these things in a subtly misaligned way.
What we’re now seeing is a shift in enterprise risk from protecting data to protecting knowledge. The key question for security leaders is no longer just about access rights, but about what their models have absorbed, and from where.
The governance gap
Despite the scale of the threat, many organizations remain focused on who is using AI rather than on what their AI systems ingest. This gap is growing wider as AI adoption accelerates and as autonomy increases. Building trusted and resilient AI ecosystems requires enterprises to verify the integrity and authenticity of every dataset, instruction and content source that feeds their models.
This aligns closely with a central theme emerging for Davos 2026: realizing AI’s vast economic potential depends on responsible deployment and verifiable trust. AI cannot remain a black box, nor can it passively consume uncontrolled data. The systems that deliver the greatest economic and societal value will be those designed with traceability, transparency and accountability at their core.
Building trust at the core of AI
Addressing this new threat landscape begins with a principle that is simple yet transformative: zero trust. Trust nothing. Verify everything, continuously.
While zero trust is not a new security concept, its scope must extend beyond access controls, and include the data and instructions that train and guide AI systems. This requires constant monitoring of how models evolve, tracing the origins of their knowledge and embedding accountability throughout the AI lifecycle. AI literacy, data provenance and digital trust must now sit alongside ESG and cybersecurity as board-level priorities because the integrity of enterprise intelligence increasingly depends on them.
Global awareness of these risks is growing. The OECD AI Risk and Safety Framework released in 2025 and similar international initiatives acknowledge data manipulation and AI misuse as critical areas that demand shared standards and oversight. For enterprises, aligning governance with these frameworks strengthens operational resilience and reinforces public confidence.
How the Forum helps leaders understand cyber risk and strengthen digital resilience
Securing AI by securing the language it understands
To realize AI’s full potential, cyber leaders must embrace the idea that secure intelligence is sustainable intelligence. The next era of cybersecurity will be defined not by defending systems, but by defending semantics. The integrity of how machines reason, interpret and communicate is becoming a strategic asset.
Securing AI means securing the very language it relies on. Trust will define the next frontier of innovation. Organizations and nations who lead this space will treat trust as both a competitive differentiator and as a shared global responsibility.
Don't miss any update on this topic
Create a free account and access your personalized content collection with our latest publications and analyses.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.
Stay up to date:
Artificial Intelligence
Related topics:
Forum Stories newsletter
Bringing you weekly curated insights and analysis on the global issues that matter.
More on CybersecuritySee all
Bryan Palma
January 14, 2026





