Artificial Intelligence

What is artificial intelligence—and what is it not?

An artificial intelligence robot at a mall in Japan. 'Artificial intelligence is not intelligence—it is prediction.' Image: Lukas on Unsplash

Spencer Feingold
Digital Editor, World Economic Forum

  • Artificial intelligence (AI) is set to transform many aspects of day-to-day life.
  • There are, however, many misconceptions about AI and its potential uses.
  • “The exaggerations about AI’s potential largely stem from misunderstandings about what AI can actually do,” said Kay Firth-Butterfield, the Head of Artificial Intelligence and Machine Learning at the World Economic Forum.

Broadly speaking, artificial intelligence (AI) is a field of study and a class of technologies characterised by the development and use of machines capable of performing tasks that would usually require human intelligence.

AI has already transformed many industries and aspects of society, ranging from the introduction of customer service chatbots to enhanced GPS and mapping applications. However, there are several misconceptions about AI and its potential uses.

In the following Q&A, Kay Firth-Butterfield, the Head of Artificial Intelligence and Machine Learning at the World Economic Forum, details the different types of AI, important developments and applications in the field of machine learning and—perhaps most importantly—discusses common misunderstandings about AI.

What are the different types of AI?

“AI encompasses several different machine learning approaches. These include, but are not limited to, reinforcement learning, supervised and unsupervised learning, and deep learning, as well as fields such as computer vision and natural language processing.

“All of these approaches produce statistical predictions, but they differ in how they use and interpret data. ChatGPT, for example, is an AI-powered chatbot that predicts the most likely next word in a sentence. By stringing together many reasonably accurate predictions, ChatGPT is able to produce coherent paragraphs.”
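To illustrate the "predict the next word, append it, repeat" loop described above, here is a minimal Python sketch using a toy bigram model. ChatGPT itself relies on a far larger neural network trained on vastly more text; the tiny corpus, the function names and the greedy word selection below are simplifications chosen purely for illustration.

```python
# Toy illustration of next-word prediction: count word pairs in a tiny corpus,
# then repeatedly pick the most likely next word. ChatGPT uses a much larger
# neural network, but the generation loop follows the same principle.
from collections import Counter, defaultdict

corpus = (
    "ai systems learn statistical patterns from data . "
    "ai systems predict the next word from data . "
    "ai systems predict the next word in a sentence ."
)

# Build a bigram table: for each word, count which words tend to follow it.
followers = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def generate(start: str, max_words: int = 10) -> str:
    """Greedily append the most likely next word until a full stop or the limit."""
    output = [start]
    while len(output) < max_words:
        candidates = followers.get(output[-1])
        if not candidates:
            break
        next_word = candidates.most_common(1)[0][0]  # most likely next word
        output.append(next_word)
        if next_word == ".":
            break
    return " ".join(output)

print(generate("ai"))  # e.g. "ai systems predict the next word from data ."
```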

What do most people misunderstand about AI?

“AI is not intelligence—it is prediction. With large language models, we’ve seen an increase in the machine’s ability to accurately predict and execute a desired outcome. But it would be a mistake to equate this to human intelligence.

“This is clear when examining machine learning systems that, for the most part, can still only do one task well at a time. That is not common sense, and it is not equivalent to human thinking, which moves between tasks with ease. Humans can take information from one source and use it in many different ways. In other words, our intelligence is transferable—the ‘intelligence’ of machines is not.”


Where do you see AI's greatest potential?

“AI has enormous potential to do good in various sectors, including education, healthcare and the fight against climate change. FireAId, for instance, is an AI-powered computer system that uses wildfire risk maps to predict the likelihood of forest fires based on seasonal variables. It also analyzes wildfire risk and severity to help determine resource allocation.

“Meanwhile, in healthcare, AI is being used to improve patient care through more personal and effective prevention, diagnosis and treatment. Improved efficiencies are also lowering healthcare costs. Moreover, AI is set to dramatically change—and ideally improve—care for the elderly.”
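As a rough, hypothetical illustration of the kind of wildfire risk prediction described above (not FireAId's actual model), the sketch below trains a simple classifier on synthetic seasonal variables and estimates a fire probability for a hot, dry spell. The features, coefficients and data are all made up for the example.

```python
# Hypothetical illustration of wildfire risk prediction from seasonal variables.
# The data is synthetic; this is not how FireAId itself is built.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical seasonal features: temperature (°C), humidity (%), days since rain.
n = 500
temperature = rng.uniform(5, 45, n)
humidity = rng.uniform(10, 90, n)
days_since_rain = rng.integers(0, 60, n)
X = np.column_stack([temperature, humidity, days_since_rain])

# Synthetic label: fires are more likely when it is hot, dry and rain is distant.
risk_score = 0.08 * temperature - 0.05 * humidity + 0.04 * days_since_rain
fire_occurred = (risk_score + rng.normal(0, 0.5, n) > 1.0).astype(int)

# Fit a simple classifier that maps seasonal variables to fire probability.
model = LogisticRegression(max_iter=1000).fit(X, fire_occurred)

# Estimate fire probability for a hot, dry spell: 40°C, 15% humidity, 30 dry days.
print(model.predict_proba([[40, 15, 30]])[0, 1])
```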

Where do you think AI's potential impact has been exaggerated?

“The exaggerations about AI’s potential largely stem from misunderstandings about what AI can actually do. We still see many AI-powered systems that hallucinate, meaning they confidently generate false or fabricated information. So the idea that this type of AI will replace human intelligence is unlikely.

“Another hindrance to AI’s adoption is the fact that AI systems draw their data from unrepresentative sources. The vast majority of data is produced by a section of the population in North America and Europe, leading AI systems to reflect that worldview. ChatGPT, for instance, largely pulls the written word from those regions. Meanwhile, nearly 3 billion people still do not have regular access to the internet and have not created any data themselves.”

What are the biggest risks associated with AI?

“AI systems are incredibly new. Therefore, companies and the general public need to be careful before using them. Users should always check that an AI system has been designed and developed responsibly—and has been well tested. Think about other products; a car manufacturer would never release a new vehicle without rigorous testing beforehand.

“The risk of using untested and poorly developed AI systems not only threatens brand value and reputation, but also opens users up to litigation. In the United States, for example, government regulations have made clear that businesses will be held accountable for the use of AI-powered hiring tools that discriminate.

“There are also major sustainability concerns surrounding AI and advanced computer systems, which require a tremendous amount of power to develop and operate. Already, the carbon footprint of the entire information and communications technology ecosystem equals the aviation industry’s fuel emissions.”

What steps can be taken to ensure AI is developed responsibly?

“First and foremost, people should think about whether or not AI is the best tool for solving a problem or improving a system. If AI is appropriate, the system should be developed with care and well-tested before it is released to the public.

“Users should also be aware of legal regulations—and the public and private sector should work together to develop adequate guardrails for the applications of AI.

“Lastly, users should take advantage of the various tools and resources that have been developed to help usher in responsible AI.”


