
Generative AI: This is how you can use ChatGPT safely


It is important to make sure that AI like ChatGPT is being used safely. Image: Jonathan Kemper/Unsplash

Ewan Thomson
Senior Writer, Forum Agenda

  • ChatGPT and other large language models (LLMs) can provide opportunities at home and at work for efficiencies and improvements – but these opportunities come with risks.
  • Generative AI can suffer from accuracy and bias issues, and care needs to be taken around how personal and sensitive information is utilized.
  • Here’s how to use ChatGPT safely and navigate increasing cyber risks, a key issue in the World Economic Forum’s Global Cybersecurity Outlook.

Your new AI assistant will write poetry for you, generate ideas for your business, check your emails for mistakes and maybe even pass your MBA exam, albeit with only a B- average.

But would you trust a well-meaning AI assistant that can sometimes offer plausible yet inaccurate answers? And does anyone else see the queries that you submit, and if so, who? In other words, how safe is ChatGPT?

ChatGPT and other large language models (LLMs) can provide opportunities at home and at work for efficiencies and improvements – but alongside these opportunities come risks around not just accuracy and bias, but the handling of personal information and security too.

Here’s how to use ChatGPT safely for all your needs.

Taking care with AI inaccuracy and bias

ChatGPT is trained on vast data sets, learning to make predictions as it reviews the data. It is then tested on previously unseen data to establish how well those predictions hold up.

As such, its ability to supply quality information is only as good as the information that was given to it, and the current version of ChatGPT is only trained on data up to 2021. The data that feeds LLMs is likely to contain conflicting evidence, opinions and unreliable conclusions, and so bias issues are as inevitable as accuracy issues.
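
To see why output quality tracks training-data quality, here is a minimal, purely illustrative sketch of the train-then-test-on-unseen-data pattern described above. It is a toy next-word counter, not OpenAI's actual pipeline, and the function names and sentences are invented for the example:

```python
# Illustrative only: a toy next-word predictor fit on one text and evaluated
# on held-out text, showing that predictions can only reflect what the
# training data contained.
from collections import Counter, defaultdict

def fit(corpus_words):
    """Count which word most often follows each word in the training text."""
    following = defaultdict(Counter)
    for current, nxt in zip(corpus_words, corpus_words[1:]):
        following[current][nxt] += 1
    # For each word, keep only its single most common successor.
    return {word: counts.most_common(1)[0][0] for word, counts in following.items()}

def evaluate(model, heldout_words):
    """Fraction of held-out word transitions the model predicts correctly."""
    pairs = list(zip(heldout_words, heldout_words[1:]))
    correct = sum(model.get(current) == nxt for current, nxt in pairs)
    return correct / len(pairs) if pairs else 0.0

training_text = "the model learns patterns from the data it is given".split()
heldout_text = "the model repeats patterns from the data".split()

model = fit(training_text)
print(f"held-out accuracy: {evaluate(model, heldout_text):.2f}")
```

However the held-out text is phrased, the toy model can only reproduce transitions it saw during training – the same limitation, at a vastly smaller scale, that makes the quality of an LLM's training data so important.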

For personal and business use, the key to navigating accuracy and bias issues with ChatGPT is vigilance. Fact-checking statistics, consulting multiple sources, and engaging critically with AI-generated content, remembering that generative AI may not grasp complex or nuanced topics, can all mitigate accuracy issues.


Because LLMs can only work with the data that is given to them, and as long as that data contains inaccuracies and bias, humans will need to be involved in the process to identify and remove them, says the Harvard Business Review.

Vigilance with personal or sensitive information

Even though website visits are down for the third consecutive month, ChatGPT still had 180.5 million unique users in August 2023, and students returning to school in September may lead to a bump in traffic, as the education sector cautiously embraces AI.

Many free apps and websites make their money by collecting and selling data, and ChatGPT operates in a similar way. Keeping your personal data, your children's data and your employer's data safe is an important part of online security best practice.

ChatGPT does not remember you between sessions, and the model itself only draws on the immediate conversation it is having. But OpenAI does save all of the conversations you have with it, store them, and use this data to improve its language model. So the human trainers working behind the scenes at OpenAI will be able to see conversations and queries.

To keep your personal information secure, avoid sharing private details with ChatGPT and consider disabling chat history and model training. But be aware that even if you disable the history, your chats will still be stored on OpenAI's servers for 30 days so that staff can monitor for abuse.

And if you feel like this is not enough, you can request to delete your entire account and its data by emailing dsar@openai.com.

Lastly, take care when installing extensions and apps on your computer or phone. Some apps and browser extensions that claim to bring ChatGPT functionality to your browser could be harvesting your data.
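
One practical habit, whatever tool or interface you use, is to strip obvious personal details out of text before pasting it into a chatbot. The sketch below is hypothetical: the patterns and the redact function are invented for illustration, and they catch only simple cases such as email addresses and phone-number-like digit runs.

```python
# Hypothetical sketch: mask obvious personal identifiers before text is pasted
# into or sent to a chatbot. The regexes are illustrative only and will not
# catch every form of sensitive data.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),       # phone-like digit runs
]

def redact(text: str) -> str:
    """Return text with simple personal identifiers masked."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Please rewrite this email to jane.doe@example.com, or call +1 (555) 010-7788."
print(redact(prompt))
```

A simple pass like this is no substitute for judgement, but it lowers the chance of sensitive identifiers ending up in stored conversations.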

Chart: Artificial intelligence (AI) market size worldwide in 2021, with a forecast to 2030. The global AI market is expected to reach $2 trillion by 2030. Image: Statista

How organizations can use AI safely

For organizations using generative AI such as ChatGPT as part of their day-to-day operations, making sure both their staff and their customers are clear on the boundaries of use is a good first step, and many organizations provide clear and transparent AI policies to ensure best practices.

For organizations looking to build their own LLM AI policies, the Corporate Governance Institute provides some guidance for areas to cover.

“Incorporating cyber-resilience governance into [a company’s] business strategy is one of the most impactful principles when it comes to cyber resilience,” notes the World Economic Forum’s latest Global Cybersecurity Outlook report.
