Educating people about generative AI starts now: Here's where to begin

'With the right sort of collective engagement with the public, generative AI can be better understood – and utilized.'


Victor Riparbelli
Co-Founder and Chief Executive Officer, Synthesia

This article is part of: World Economic Forum Annual Meeting
  • As the technology is more widely adopted, the public must be educated about the benefits and risks of generative AI.
  • Regulating AI is important, but the industry must also practice proactive collective engagement.
  • Consent issues and public awareness campaigns emphasizing positive use cases are key areas to consider.

We’ve all heard at least one of the stories about generative AI being used to nefarious ends, from the lawyer who turned in a brief riddled with errors to the scammers spoofing voices and stealing money.

These stories, and so many others, have ricocheted through the media. So it’s no surprise that people are wary of generative AI at best and hostile to it at worst.

It’s also no surprise that serious discussions about regulating AI are ramping up, with the European Union recently agreeing on a landmark deal that will establish the first rules for AI in the world.

As the founder of Synthesia, one of the first startups to build an AI video platform, I read these stories with dismay. But I am hopeful that with the right sort of collective engagement with the public, generative AI can be better understood – and utilized – for the transformative tool that it is. Let me explain.

Regulating high-risk AI is undoubtedly important. Certain protections need to be guaranteed by law, and we see proposals for regulation increasing worldwide, focusing on issues such as social scoring, predictive policing, behavioural manipulation and other misuses that impact people’s fundamental rights.

Laws are also being introduced to ensure transparency and accountability in AI decision-making processes and address concerns about intellectual property rights in AI-generated content. Additionally, some states are implementing ethical guidelines and safety standards, particularly for AI technologies that interact with the physical world or are used in government sectors. These developments show the progress we are making to integrate AI safely into our society.

However, these regulations will only go so far in safeguarding the public against nefarious uses of AI. Much as we have educated the public about other new technologies – not everyone was thrilled when homes were electrified, for example – we have a responsibility to educate people on how generative AI is used, what can be gained, and what to look out for when producing and consuming generative AI content.


As investors, inventors and users of this technology, we are responsible for doing this. But how do we go about such a momentous task? Here are a few strategies to get us started in ensuring that generative AI advances society responsibly:

1. Consent

It is vital that the public understands, and that companies enforce, consent rules that protect users' likenesses. People must understand what it means for their voice, body and thoughts to be used to create AI-generated content, and be comfortable with the rules about how that content may be used. Different companies may secure this consent in different ways depending on how their products work; at Synthesia, we ask anyone creating a custom avatar to provide verbal consent on camera before we begin capturing the video and audio recordings needed.

2. Watermarking

Watermarking synthetic media will help individuals understand, from the moment of consumption, that a video is synthetic, and perhaps apply a different lens when analyzing what they are viewing. Similar efforts are underway across the industry, and I believe watermarking will eventually become a standard expectation that consumers learn to look out for.

3. Public awareness campaigns

We know AI is here to stay. We know that many billions of dollars will be invested in it and that it will change rapidly as it improves. It behoves us, the players in this industry, to earmark a significant amount of that funding toward building public awareness of these tools and teaching students of all ages how to use and consume AI. These campaigns should include free tutorials and educational content so consumers understand how AI works and how it can be used for good or ill. By creating such campaigns and educational content, the companies building these products will also realize benefits of their own, such as greater public trust, credibility and new users.

4. Positive examples

Related to this, it's essential that we, the companies providing AI technology, educate through exposure to positive use cases of AI as quickly as possible. It's critical to show the public how AI can be used for good so that they are not scared of its power but can instead leverage it in the ways that serve them best. One positive example of generative AI is making content accessible to different audiences, such as dubbing films and TV shows into other languages. With users eager to access AI technology, we must build tools that are easy for people to use. A population that can not only spot when AI is used for good or ill but also understands how it is made will be our best safeguard.



The opportunities for generative AI are endless. I feel lucky to develop this new and exciting technology every single day. But we also know that AI is a novel and sometimes confusing technology for most people. It’s our responsibility to make it more accessible, transparent, and safer to realize the technology’s full potential. I hope the rest of the industry will join me in educating the public – and I look forward to the day when there are more good AI news stories than bad ones.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.


