
Using AI? Here’s how you can preserve agency in an age of superhuman persuasion

Technology continues to outpace our current systems of AI governance.


Allison Shapira
Adjunct Lecturer in Public Policy, Harvard Kennedy School of Government
This article is part of: World Economic Forum Annual Meeting
  • Technology like artificial intelligence (AI) can be used to help people connect more effectively with one another.
  • But AI can also be exploited by bad actors to spread mis- and disinformation.
  • Global leaders need to both understand the technology's potential and learn how to harness it to foster trust and collaboration.

Imagine you are delivering a speech to an audience of global leaders with simultaneous interpretation. Only, the interpreter is not translating into different foreign languages. Instead, it translates your speech into the different moral frames of each audience member.

As you propose a new peace initiative, or a new corporate policy, your interpreter translates that argument into distinct messaging designed to persuade each individual member of the audience. It also beta tests this messaging in real time, to see which framing makes each listener's heart rate rise.

Just as algorithms test which videos make you watch longer, these AI interpreters would determine which messages make you more engaged. And while this is a fictional scenario right now, it could be possible in the future.

Then, the question will be: Who controls the interpreter?

Having advised global leaders on how to communicate with impact and influence for over 20 years – and having used AI in this process for more than five years – I am optimistic that technology can help us to connect more effectively with one another. However, I can also clearly see the opportunities for its misuse by bad actors.

Leaders need to both understand this technology's potential and learn how to harness it to foster trust and collaboration through effective governance.


Teaching persuasion

When I teach persuasion and influence to graduate students at the Harvard Kennedy School, we learn time-honoured frameworks such as Aristotle's modes of persuasion: Credibility (ethos), argumentation (logos) and appeals to emotion (pathos) are powerful levers we can adjust in order to win the hearts and minds of our audience.

When I teach these frameworks, I caution my students that no one knows what makes an argument persuasive to everyone because what one person finds persuasive depends on their background, beliefs and experiences. "And thank goodness we don't know," I say. "Because that person would wield a power that no single human should have."

But that power now exists. I call it "Aristotle's algorithm" – the application of ancient persuasive principles, executed by AI at superhuman scale. For 2,300 years, Aristotle's modes of persuasion were constrained by human limitations: one speaker, one message. AI has begun to overcome many of those constraints, posing profound ethical questions.

When Aristotle's algorithm runs at scale, every individual inhabits a separate persuasive reality.

AI's capabilities

Misinformation and disinformation rank as the top short-term global risk identified by the World Economic Forum’s Global Risks Report 2025, and the weaponization of generative AI is a key driver of that risk. A 2024 study of OpenAI's GPT-4 found that, when the model was fed background information on the people it was trying to persuade, it was 82% more persuasive than humans. In other words, AI dramatically exceeds human persuasiveness when given access to personal data.

"LLM-based persuasion poses profound ethical and societal risks, including the spread of misinformation, the magnification of biases and the invasion of privacy," Ghent University researchers warned in a November 2024 survey. "These risks underscore the urgent need for ethical guidelines and updated regulatory frameworks."

One of the most revealing findings about AI persuasion is that it becomes significantly less effective when people know they are interacting with AI rather than a human. This suggests that deception is a key component of the manipulation experts are concerned about when it comes to the spread of mis- and disinformation.

So when an AI system has access to your voting history and shopping habits, and can build a psychological profile from your social media presence, it is capable of a level of targeted persuasion unlike anything we have ever seen before, and at exponential scale.

From one perspective, the use of Aristotle's algorithm could save millions of lives. Imagine the World Health Organization trying to disseminate disease prevention guidance to millions of people. One message could instantly be “translated” into different arguments based on each person’s profile.

On the other hand, what prevents a person or institution from using that power to manipulate you into taking action that serves their interests, not yours?

Today's leaders have access to superhuman persuasive tools, with little transparency into how those tools were developed and a lack of agreement on how to disclose when and how they are being used. These AI systems will interpret the world for us, affecting the beliefs of billions of people.

What kind of consensus should we create on how these AI systems are deployed? And what protections are in place through AI governance to prevent one rogue employee from manually changing the values of the model?


Responsibility for AI governance

I consider myself a human-centred optimist when it comes to technology. The determining factor is how we use these tools. The decisions made about AI in the next two years – transparency into how the models are trained and clarity around how our data is used – will determine the future trajectory of human history. These themes are central to my book, AI for the Authentic Leader: How to Communicate More Effectively Without Losing Your Humanity, which explores how leaders can use generative AI to enhance authenticity, trust and connection across cultures.

There are many possible solutions, including putting “nutrition labels” on AI systems to show how data is used, requiring disclosures such as those on political advertisements in the US or giving individuals control over their own data so they can decide when and how it is used by others. Any solution requires understanding how the technology works as well as developing deep sensitivity to its sociological impacts.

As technology continues to outpace our current systems of AI governance, time is of the essence. The World Economic Forum will convene leaders from around the world for its Annual Meeting in January 2026. These leaders must discuss how to create boundaries around AI that will protect human agency. They could start a new conversation on AI and influence, defining AI governance principles that ensure this technology serves truth, transparency and human agency – not manipulation or control.

If the next era of leadership is one of superhuman persuasion, then it must also be one of superhuman responsibility. We have a rare opportunity to determine the future of persuasion so that every voice, in every language, can be heard on its own terms.

