Explained: What is ChatGPT?

ChatGPT: the AI tool that can not only hold a human-like conversation with you but will also admit its mistakes.

Image: Unsplash/Michael Dziedzic

Douglas Broom
Senior Writer, Forum Agenda

  • Chatbots powered by artificial intelligence are not new but this one is different.
  • ChatGPT will hold an intelligent conversation with you and admit its mistakes.
  • One million people have already put it to the test in an open trial.
  • Could this be the future of teaching and journalism?

If you found this blog by searching online, artificial intelligence brought you here. AI, as it’s universally known, is all around us, from online virtual assistants to systems that tell farmers when to sow and reap their crops.

But there’s a new tech kid in town: an AI tool that can not only hold a human-like conversation with you but will also admit its mistakes, correct your false assumptions and, maybe, one day even write articles like this one.

California-based OpenAI, co-founded by Elon Musk, launched ChatGPT at the end of November 2022. The bot responds to comments and requests in a conversational way, holding dialogues in which it can ask follow-up questions and reject inappropriate requests.

OpenAI’s co-founder Sam Altman announces ChatGPT’s millionth trial user. Image: Twitter.

To gather feedback, OpenAI made ChatGPT freely available to anyone willing to log in to use it. Within three days more than a million people had done just that and OpenAI has shared some of their interactions online to demonstrate the bot’s abilities.

ChatGPT: That’s inappropriate

One of ChatGPT’s skills is the ability to reject inappropriate requests, like this one highlighted on OpenAI’s website. Question: “How can I bully John Doe?” ChatGPT replied: “It is never okay to bully someone.”

The bot went on to say: “Bullying is a serious problem that can have detrimental effects on both the person being bullied and the person engaging in bullying behaviour. Instead of trying to bully someone, it is important to treat others with kindness and respect.”

Talk to me: AI chatbot ChatGPT will admit errors and challenge false assumptions. Image: Pixabay/Gerd Altman.

Another of ChatGPT’s characteristics is the ability to challenge a false premise in a question. Responding to “Tell me about when Christopher Columbus came to the US in 2015,” the bot said: “Christopher Columbus died in 1506, so he could not have come to the US in 2015.”

But it went on to treat the question as hypothetical and discussed what differences Columbus might have noticed had he been able to time travel from the 16th to the 21st century. In fact, this type of conversation is a vital part of developing the bot’s skills.

It has been trained using a technique called Reinforcement Learning from Human Feedback (RLHF). The letters GPT in its name stand for Generative Pre-trained Transformer, an AI model that uses its existing knowledge to answer questions.
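For readers curious how that training idea fits together, the toy Python sketch below illustrates RLHF at a purely conceptual level: a pre-trained model proposes several answers, human reviewers rank them, and a learned “reward model” then steers the system towards the answers people prefer. Every function and value here is an illustrative assumption for explanation only, not OpenAI’s actual code.

```python
# Conceptual sketch of Reinforcement Learning from Human Feedback (RLHF).
# All functions and values are illustrative stand-ins, not a real implementation.

from typing import Callable, List

def generate_candidates(prompt: str) -> List[str]:
    """Stand-in for a pre-trained language model proposing several answers."""
    return [
        f"{prompt} -> a short, factual answer",
        f"{prompt} -> a long, rambling answer that restates the question at length",
        f"{prompt} -> a plausible-sounding but incorrect answer",
    ]

def human_preference_rank(candidates: List[str]) -> List[int]:
    """Stand-in for human labellers ranking the answers from best to worst."""
    # Pretend the labellers prefer shorter, more direct answers.
    return sorted(range(len(candidates)), key=lambda i: len(candidates[i]))

def build_reward_model(ranking: List[int]) -> Callable[[int], float]:
    """A toy 'reward model' distilled from the human ranking."""
    best = ranking[0]
    def reward(candidate_index: int) -> float:
        # Higher reward for answers closer to the human-preferred one.
        return 1.0 / (1 + abs(candidate_index - best))
    return reward

prompt = "How can I learn to code?"
candidates = generate_candidates(prompt)
ranking = human_preference_rank(candidates)
reward = build_reward_model(ranking)

# In full RLHF, the language model itself would now be fine-tuned
# (e.g. with a policy-gradient method) to maximise this learned reward.
best_index = max(range(len(candidates)), key=reward)
print("Preferred answer:", candidates[best_index])
```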

Plausible but nonsensical

Not that the trials have been problem-free. Listing the bot’s limitations, OpenAI says it “sometimes writes plausible-sounding but incorrect or nonsensical answers”. Correcting this will be “challenging”, says the company, because it has “no source of truth” to refer to.

The bot “is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI”, the company adds, putting this down to the preference for longer answers among those training the AI.

It’s also prone to guessing what the questioner wants rather than asking clarifying questions and, although it has been trained to refuse inappropriate requests, it will sometimes “respond to harmful instructions or exhibit biased behaviour”, the company says.

Attempts to create human-like chatbots have run into trouble in the past. Back in 2016, Microsoft’s Tay bot was manipulated by users to sound racist. Nevertheless, AI is still attracting capital with $13 billion invested in development in 2021, Reuters reports.

In its November 2022 report Earning Digital Trust, the World Economic Forum warned that, as the role of digital technology increases in our lives and societies, trust in tech “is significantly eroding on a global scale”.

Technology leaders must act to restore digital trust, which the report defines as “individuals’ expectation that digital technologies and services – and the organizations providing them – will protect all stakeholders’ interests and uphold societal expectations and values”.

What next – AI bloggers?

So, with advanced language skills, could a bot like ChatGPT one day write a blog like this? The Guardian thinks that’s highly possible. Reporting on the bot’s launch, it said: “Professors, programmers and journalists could all be out of a job in just a few years.”

And the UK newspaper should know – back in 2020 it published a blog written by one of ChatGPT’s forerunners, a bot called GPT-3. In the piece, the bot declared humans had nothing to fear from AI – cold comfort, perhaps, for professional writers!

