Emerging Technologies

3 lessons for living with generative AI

We need to find ways to manage generative AI. Image: Getty Images.

Karen Silverman
Founder and Chief Executive Officer, The Cantellus Group

  • Generative AI is used to produce a variety of content, including text and images.
  • Getting the most out of these advanced technologies requires human cognitive skills.
  • These tips outline generative AI's role in our lives and how best to use it.

ChatGPT and Bard represent the beginning of a genuine change in how we interact with advanced technologies and what we will come to expect of each other.

Unlike preceding technologies, generative artificial intelligence (AI) demands human input and curiosity; it demands an appreciation of our strengths and limitations; and it demands that we unmoor from the familiar and passive “user” mentality.

It may feel counterintuitive, but the best strategy for getting the most out of generative AI is to lean even more aggressively into our most human and active cognitive abilities.

What are the human skills needed in an AI-assisted world? What will separate success from failure? In short, the ability to ask great, productive questions and skillfully interrogate questionable answers.

What do we need to know about generative AI?

First, AI applications, including generative AI, are still best suited to one of two general purposes: aiding or supplanting routine human tasks, where we are confident of the outcomes we want because past performance clearly satisfies future needs and/or human limits; or detecting new insights and predictions, or uncovering latent connections, across datasets that are too massive, or within time frames that are too short, for humans to manage. This does not describe all tasks – AI is also being developed for many other types of use cases. But right now, it works best where we want it to “rinse and repeat” or “scan and suggest”.

Second, as noted, generative AI is not a good candidate for a “user” or “consumer” mindset. On the contrary, it demands an active approach: one of inquiry, co-creation and collaboration. Our language should change to reflect this. Generative AI tools fundamentally differ from the digital products and devices we are accustomed to; they are neither a device nor a product. Generative AI is incomplete without us, and we should not simply consume the content it offers.

Third, no one knows exactly what these technologies will eventually enable us to do or how to define what we shouldn’t do, even if we technically can do it. These tools will get better at well-defined, repetitive tasks and at assimilating huge amounts of data. They will get pretty good at summarizing vast quantities of text (or other media) and expressing first drafts of semi-complex ideas.

But they are statistical modeling tools, not reasoning tools: their results are predictions based on complex pattern-matching, and they “learn” only in this sense. They more or less assume the validity of the questions they are asked, whether or not that assumption is warranted – for example, “tell me the best arguments for why the earth is flat”. They will continue to suffer from the “garbage in, garbage out” problem: errors in the underlying data show up as errors in the results. And they share our human flaws of imprecision and of not always knowing (or admitting to not knowing) something.
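
To make the “statistical modeling, not reasoning” point concrete, here is a deliberately tiny sketch: a frequency-counting next-word “model” trained on a text that contains a factual error, which it then faithfully reproduces. The corpus and the model are toys constructed for illustration, not any real system.

```python
from collections import Counter, defaultdict

# Toy next-word "model": pure frequency counting, no reasoning.
# The training text deliberately contains a factual error ("garbage in").
corpus = "the earth is flat . the earth is flat . the earth is round .".split()

# Count which word follows each two-word context.
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

def predict(a, b):
    """Return the statistically most likely next word - right or wrong."""
    return following[(a, b)].most_common(1)[0][0]

print(predict("earth", "is"))  # -> flat: the error dominates the training data
```

The model does exactly what pattern-matching does at scale: it returns whatever its data makes most likely, with no sense of whether that answer is true (“garbage out”).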


How can we get the most from generative AI?

Whether you are a CEO or a student, a nurse or a lawyer, a straightforward framework can help you understand the role generative AI can play in your life and how best to use it. Here are three key considerations for everyone.

1. Ask questions

Curiosity is critical when it comes to these advanced technologies. Ask questions such as: what are they good for, how are they built and what are their limits? More specifically, generative AI tools will demand that we learn how to ask thoughtful, accurate questions and give instructions that produce helpful answers – answers that are fit for purpose and robust, and that reduce error, bias and privacy risks.

Not all questions are the same. How we ask questions and instruct the tools directly impacts what results we get back and how valuable, defensible and safe those results are. For instance, asking a cancer-cell-detecting model to identify all atypical cells in a sample image differs from asking the model to exclude all the typical ones. The false positive and false negative rates will differ in each case, and when a model’s output supports a professional diagnosis of an individual’s cancer status, that difference is consequential. Instructing a model to recommend a pair of pumps might raise the same issue, but without nearly the same consequence.
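
The cell-screening example can be sketched with toy numbers: the same model scores, read under two framings of the question (and hence two operating thresholds), trade false positives against false negatives differently. All scores and thresholds below are hypothetical, chosen purely to illustrate the point.

```python
# Hypothetical model scores: probability that each cell is atypical,
# paired with ground truth (True = actually atypical).
cells = [
    (0.95, True), (0.80, True), (0.60, True), (0.40, True),
    (0.30, False), (0.25, False), (0.10, False), (0.05, False),
]

def error_rates(flag_threshold):
    """Flag a cell as atypical when its score exceeds the threshold."""
    false_pos = sum(1 for p, atypical in cells if p > flag_threshold and not atypical)
    false_neg = sum(1 for p, atypical in cells if p <= flag_threshold and atypical)
    return false_pos, false_neg

# "Identify all atypical cells", read as: flag only confident cases.
print(error_rates(0.5))  # -> (0, 1): no false alarms, one missed atypical cell

# "Exclude all typical cells", read as: rule out only clearly typical
# cases and flag everything else.
print(error_rates(0.2))  # -> (2, 0): no misses, two false alarms
```

Which trade-off is acceptable depends entirely on the stakes – which is exactly why the phrasing of the question to the model matters.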

Fashioning useful instructions is not as easy as it may seem – it is highly contextual, culturally specific work. We assume a lot in how we ask about things. Often, we are imprecise (e.g. when giving instructions on how to make a sandwich), use language-specific idioms (“a piece of cake”) or assume too much (this worked before, so it will again). Each of these problems can actually increase with levels of expertise and experience.

2. Question answers

Similarly, generative AI will demand that we learn how to evaluate all model outputs with a critical eye. We will have to make time and space to permit humans to challenge technology-driven recommendations. This is one place the “garbage in, garbage out” problem arises (on the part of both technology and humans), as well as the opportunity to look carefully at how we might improve our historical practices in some respects while also improving our work with AI-enabled tools.

Generative AI can produce a range of outputs. Which of those outputs we implement to produce outcomes is up to us, and will generally be very case-specific. Where AI is supplanting repeatable, well-understood tasks, it may be easier to recognize sensible or appropriate outputs, because we have a lot of experience evaluating them; we have far less in the case of unique or novel tasks or insights. It will be especially important to interrogate the expected outputs as well as the surprising ones. Whichever you get, all you really know is that you are missing a whole range of other possible results that could be produced from the same question.

Not all answers are correct answers. Indeed, many answers will be outright wrong, and some will be OK but not as good as they could be, or good enough for some conditions but not others. And we will consistently learn from the tools and from each other. Likewise, just because we can do something does not mean we should. So, we need to exercise humility and discretion as we use these tools in live settings, staying ready to accept that we – or a colleague, student or vendor – may have good reason to question model outputs before they become outcomes. Some students are already practising this, and hopefully, more will soon.

3. Use responsibly

These are new tools, imperfect, and designed to work with people (who also are imperfect), not on them. No end of mischief, assumptions and shoddy work is possible. Risk practices and governance controls are starting to account for all those possibilities. And importantly, many of these governance practices are not new. Even in the world of generative AI, there is a lot we all can do to create respectful, humane and inclusive norms in our day-to-day work, starting immediately with the tools we have. We should encourage our kids and colleagues to learn how to use generative AI tools while affirming the rules of civil society still apply. Perhaps they apply all the more.

With that in mind, here are nine things everyone can do to use generative AI more effectively:

  • Encourage, reward and dedicate time for curiosity and question-asking in yourself and others.
  • Run the same question multiple times.
  • Ask the same question multiple ways.
  • Identify when work has been assisted by an AI tool.
  • Don’t accept or act on first results.
  • Seek to discover the questions that produce the most surprising results, as well as the most expected ones.
  • Learn to explain why you accept or prefer the outputs you do (versus other options).
  • Don’t accept rude, abusive or exploitative results.
  • Stay curious and careful.
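
Several of these tips – running the same question multiple times, asking it multiple ways, not accepting first results – can be folded into a small habit. The sketch below assumes a hypothetical ask(prompt) callable standing in for a real model API; the toy_model shown is random on purpose, to mimic non-deterministic sampling.

```python
import random
from collections import Counter

def gather_answers(ask, prompts, runs=3):
    """Run each phrasing of a question several times and tally the
    distinct answers, so disagreement between runs becomes visible."""
    tally = Counter()
    for prompt in prompts:
        for _ in range(runs):
            tally[ask(prompt)] += 1
    return tally

# Hypothetical stand-in for a real generative AI call; replace with your
# provider's client. The randomness mimics sampled decoding.
def toy_model(prompt):
    return random.choice(["Canberra", "Canberra", "Sydney"])

tally = gather_answers(toy_model, [
    "What is the capital of Australia?",
    "Which Australian city hosts the federal parliament?",
])
print(tally)  # if more than one answer appears, don't accept first results
```

Seeing the spread of answers, rather than a single confident-looking reply, is what makes it possible to explain why you accept the output you do.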

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

© 2024 World Economic Forum