Why labelling AI an ‘actor’ creates real-world risks and how to avoid them

Should we call an AI actor an "actor" at all without human agency? Image: Unsplash/Jakob Owens

Duncan Crabtree-Ireland
National Executive Director and Chief Negotiator, Screen Actors Guild-American Federation of Television and Radio Artists
This article is part of: World Economic Forum Annual Meeting
  • Calling artificial intelligence (AI) an “actor” collapses a critical distinction between human agency and automated systems, making it harder for the public, regulators and courts to attribute responsibility for harm.
  • Synthetic systems cannot bring lived experience, judgment, or emotional truth to storytelling – equating them with humans risks flattening culture, lowering creative standards, and eroding the value of human work.
  • The risks are not inherent or inevitable. Clear disclosure, accurate labelling, and explicit human responsibility can allow synthetic tools to support creativity without misleading the public or undermining performers.

In the 1960s, early computer scientists observed something curious about how people interact with software. Even simple programs could prompt users to attribute intention or understanding to text generated through mechanical pattern matching.

This tendency became known as the ELIZA effect. For decades, it has served as a reminder of how readily humans project agency onto systems that communicate in familiar ways.

Generative AI (GenAI) has turned that instinct into a business model. Today’s systems are designed to make you feel there is a mind behind the screen. They adopt conversational styles, borrow familiar turns of phrase and increasingly appear with photoreal faces and expressive voices.

That dynamic has important consequences when synthetic systems are described using human professional categories, particularly when they are called “actors.”

Acting is not simply a task that can be executed. It is a craft shaped by memory, vulnerability, curiosity and choice. To label a synthetic figure as an actor is to erase the distinction between human expression and automated simulation.

This tendency sets the stage for the recent controversy over Tilly Norwood, a synthetic creation billed by production studio Particle6 as the first AI actor.

The risks of using an AI actor

1. Cultural: Flattening human storytelling

Storytelling has always reflected lived experience. Performances resonate because they are created by people who have known joy, loss, uncertainty, and growth. The emotional power of narrative often comes from insight rooted in real human lives.

Synthetic systems do not possess experience. They recombine patterns that suggest emotion. The results can appear polished and competent, yet often lack genuine creative origin. If audiences become saturated with such material, culture risks becoming flatter – more repetitive, less surprising and less emotionally grounded.

As low-cost synthetic content proliferates, there is also a danger of lowered expectations. Viewers may come to mistake abundance for originality and technical fluency for meaning. That would be a loss not only for creative workers but for anyone who relies on stories to make sense of the world.

2. Practical: Accountability without an accountable being

Just as significant is the issue of responsibility.

When audiences encounter an actor, they understand what that role entails: a human being making creative choices, entering into contracts, negotiating terms, and bearing legal and professional accountability.

A synthetic system cannot do any of this. Yet when presented in human language and imagery, the public may reasonably assume that the figure on the screen is the responsible party.

This distinction matters when harm occurs. If a synthetic news host spreads misinformation or defames a private citizen, responsibility does not rest with the software. It rests with the humans who designed, trained, deployed and marketed it.

However, giving these systems human characteristics can obscure that reality, shielding the people responsible from scrutiny.

3 principles that can preserve integrity

None of these concerns calls for abandoning digital tools. Fiction has always played with the boundary between person and creation. Audiences understand that characters within stories can express thoughts or emotions that exist only within the narrative.

Problems arise when fictional framing crosses into commercial and real-world contexts. To protect creative work, public trust and legal accountability, industries using synthetic media should commit to three guiding principles.

1. Keep narrative and commercial reality clearly separate

Within a story, a synthetic character may express any emotion the plot requires. In the marketplace, accuracy must be absolute. Software is not hired as an actor. It is licensed as a synthetic performer, a digital replica or another clearly labelled tool.

Contracts and public communications should reflect that reality. Transparent language preserves the meaning of human roles, protects labour categories in law and ensures the public is not misled about who is actually doing the work.

2. Ensure a human is explicitly responsible for synthetic output

The more a system resembles a human, the more essential it becomes to identify the humans accountable for it. No synthetic system should function as a shield for harmful outcomes.

Black-box explanations cannot substitute for responsibility. Corporate leaders must be responsible for the outputs of the systems they deploy.

3. Tell the truth at every user interface

A customer service bot that imitates a human voice without disclosing its artificial nature engages in deception. A synthetic news presenter that expresses opinions without identifying the responsible parties undermines trust.

Any synthetic system used outside a clearly fictional context should be identified, with the organization behind it clearly disclosed.

Labels, watermarks and interface design choices can all serve one basic requirement: tell the truth at the point of contact.

A necessary commitment ahead of Davos 2026

Every global leader who shapes the development or deployment of GenAI should adopt these principles and anchor their ethics and responsibility programmes in them.

The choices we make now will determine whether this technology strengthens human creativity and democratic trust or weakens the foundations we all depend upon.

We can use technology to create extraordinary stories but we cannot confuse simulation with humanity. A synthetic creation is a tool. A performer is a person. If we fail to protect that distinction, we risk losing not only accountability but the human voice at the centre of culture itself.
