Why public uncertainty is a defining challenge of the AI era
- Public anxiety about AI is rising, stemming from a lack of transparency, fears of job loss, and concerns over bias and data privacy.
- This unease is a critical challenge because it threatens to undermine the legitimacy and beneficial adoption of AI within society.
- Instead of seeing these reservations as technophobia, we must treat anxiety as vital feedback, demanding more governance, inclusion and transparency in AI systems.
Today, artificial intelligence (AI) is no longer a novelty but an essential part of daily life, powering recommendation systems, smart assistants and automated decision-making. And yet public anxiety about it is growing. Far from mere technophobia, this unease signals something deeper: a mismatch between the pace of innovation and our collective ability to understand, influence and live with AI. Addressing this anxiety matters now more than ever: how society responds will shape the legitimacy, direction and benefits of AI for years to come.
The roots of the unease
There are multiple dimensions to this anxiety: a lack of transparency into the technology’s mechanisms, job and economy fears, privacy and bias concerns, and a sense that AI is moving faster than people can meaningfully engage with.
The “black box experience”
One source of anxiety is AI’s lack of transparency. We usually don’t know how a model reached its answer: whether it drew on biased or outdated information, or was built through dubious training processes. This “black box experience”, and the feeling that its outputs can be neither questioned nor controlled, creates a deep mistrust that holds back wider AI adoption.
Privacy, surveillance and bias concerns
With the expanding use of AI-powered systems, surveys have found that many people worry about their personal data being collected without consent, about being unable to distinguish real from synthetic content, and about AI bias in hiring, policing and credit decisions. In workplaces, “many employees worry about AI monitoring their work patterns, communications and even bathroom breaks,” Forbes has reported. Without transparency about what is being monitored, how data is used or how decisions are made, trust quickly erodes.
The speed of change and the feeling of being outpaced
Perhaps most fundamentally, many feel that AI is moving at a speed that outpaces both regulation and public deliberation. The OECD warned that while AI capabilities have advanced exponentially, policy responses remain “fragmented and reactive”. The perception of being outpaced creates a sense of helplessness that feeds public unease.
Why it matters now
AI is not merely a new gadget; it is a fundamental change in how things get done. In the public sector, OECD data show that governments are increasingly adopting AI to improve the efficiency and accessibility of services, and the World Economic Forum estimates that AI will create 69 million new jobs globally by 2028.
These anxieties cannot go unaddressed. As AI systems become embedded ever more deeply in everyday life, the urgency of getting it right grows with the stakes. The mismatch between the pace of innovation and the pace of governance is more pronounced than ever, and rapid adoption combined with limited transparency means the risk of a backlash looms. Finally, anxiety can become self-reinforcing: if members of the public and organizations resist or disengage from AI, its integration may stall or be distorted, harming both the technology’s promise and the people it is meant to serve.
Reframing anxiety as feedback
Instead of dismissing public anxiety about AI as “just fear of technology”, it is more helpful to treat it as a vital feedback mechanism — an indication we must step back, open up the conversation and build bridges. Three imperatives stand out:
Transparency and explanation: Systems should not only work, but also be understandable in a meaningful way. This is a signal for better education, clearer communication, and transparent and explainable design. The less mysterious a system, the less room for fear to fester.
Inclusion and dialogue: Anxiety often reflects the feeling of being left out or unheard. Trust in how AI is used depends not just on technical robustness but on public participation — on inclusive decision-making, on having a say in design, deployment and oversight. For example, 73% of experts say the views of White adults are taken into account in AI design, but only 27% say the same about Black adults. This gap is a clear signal of the need for more inclusivity.
Governance and accountability: People don’t just fear AI itself — they fear the institutions behind it. “Don’t trust tech executives to self-regulate” appears repeatedly in surveys. If we are to build public trust, regulatory frameworks must not only exist — they must be visible, responsive and legitimate.
A call to conversation
The anxiety around AI is a sign that we must accelerate our collective work on the social infrastructure of AI. Technical advances alone are not enough; what matters is how individuals, communities and organizations experience AI: do they feel in control? Do they trust the systems that affect their lives?
Technology will continue to move fast. But the human side — education, dialogue, governance — must keep up. If not, we risk building powerful systems into societies that have not yet reached consensus about their role, limitations or values. In short: at a moment when AI is embedding itself ever more deeply into daily life while public understanding and trust remain fragile, building public trust in AI isn’t optional; it is foundational to ensuring that these technologies serve society rather than alienate it.
When anxiety about AI is treated as a signal to act, we open the possibility of a more inclusive, transparent and deliberate AI era. That is why this moment matters — and why addressing public uncertainty is a defining challenge, and opportunity, of the AI era.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.