How to get better at making warnings

Pandemic warnings did not specify how lethal the disease would be. Image: REUTERS/David W Cerny

Florence Gaub
Deputy Director, EU Institute for Security Studies (EUISS)

  • Pandemic warnings did not specify how lethal a pandemic would be, how transmissible, or roughly when it would happen.
  • For a reader to truly understand the threat, they need to be able to visualize it in as much detail as possible.
  • Solutions should be suggested at the same time as giving a warning.

March 2020 was a Pyrrhic victory for some experts: they had warned of a pandemic for years – and yet policymakers had not taken the necessary precautions. In a tsunami of articles, all kind of psychological mechanisms were identified to explain why politicians find it hard to listen to and act on uncomfortable truths, laying the blame squarely on the recipients of these warnings.

The fact that the sender is also part of the problem is mostly overlooked. Most experts are not very good at warning for one main reason: they are not specific enough, in either content or style.

Pandemic warnings, for instance, did not say what shape a pandemic would take, how lethal or transmissible it would be, or roughly when it would happen. They made few recommendations on what should be done, other than that the threat should be taken seriously.

To make matters even more confusing for a reader seeking a concrete idea of the threat, they generally used a language of vagueness designed to cover all eventualities. This language had the unfortunate side effect of cancelling out the threat in the reader's mind: hedging with words like "could" and "possibly"; turning verbs into nouns; quantifying risk and time horizons in relative rather than concrete terms ("increased", "soon"); and using the passive voice ("if nothing is done").

To compare with the field I work in, security policy, this is akin to warning of a war without specifying where it will be fought, roughly when it will start, which weapons will be used, how many people could die, or what measures could prevent it.

[Chart: Daily new confirmed COVID-19 deaths]

To be clear, it is not just pandemic warnings that suffer from this lack of precision: whether it concerns how robotics and AI will change the labour market, how climate change will change conflict, or how an ageing population will change societies, precision is not widespread in the profession of warning. And this contributes to these warnings going unheard.

That is because for a reader to truly understand a threat, they need to be able to visualize it in as much detail as possible. In a well-known study by Daniel Kahneman and Amos Tversky, respondents judged an earthquake in California causing a flood to be more likely than a flood somewhere in North America – simply because "California" was more specific than "North America". This phenomenon, known as the conjunction fallacy, is the result of how the human mind models the future: the more information we have about a particular future, the more our mind focuses on it (which is why visualization techniques are not just quackery).

Experts wishing for their warnings to be taken seriously will have no other choice but to make their warnings as specific as possible. But how can this be achieved considering that the future is eternally unknown and therefore hard to grasp?

Some options are available.

Quantify the risk as precisely as possible. This includes details on the effects, be they financial, economic or environmental costs, human and animal casualties, or examples of destruction. But it also has to include possible time horizons, precise localities (the more local the better) and, where possible, the names of people affected. There can never be too many details in explaining a risk. Avoid generalizations.

If the risk is on a very large scale (e.g. climate change), break it down into its smaller components to make the magnitude more digestible. Although this seems counterintuitive, breaking a risk down does not diminish but actually heightens risk perception.

Where possible, use sensory tools rather than just words. Superflux, a strategic foresight company, developed the scent of a future UAE in which no measures against pollution were taken. AR smartglasses also show promise for visualizing the future in 360 degrees. Even static visuals work better than text alone.

Even if you have only text available, make it a scenario. Stories are far superior at communicating a threat than a factual text that cannot elicit emotions. In addition, scenarios offer the opportunity to think very concretely through different options for handling the risk.

Assign probabilities. As the Good Judgment Project and others have shown, putting a number on a risk helps readers reflect on its likelihood and thereby work around the conjunction fallacy. Even though humans are not very good at dealing with numbers, probabilities make the risk more quantifiable. But do not stop there: probabilities need to be updated as new information comes in – and they have to be set in direct proportion to the impact of the unmitigated risk.
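The updating step above can be sketched numerically. A minimal illustration, using Bayes' rule with entirely invented numbers (the 10% prior and the likelihoods are assumptions for the example, not figures from any forecasting body):

```python
# Hypothetical sketch: revising a warning's probability as new evidence arrives.
# All numbers below are illustrative, not real forecasts.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E): the updated probability of hypothesis H after evidence E."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# Start: an expert assigns a 10% chance of a pandemic within five years.
p = 0.10

# New evidence: a cluster of unexplained pneumonia cases is reported.
# Assume such a report is four times as likely if a pandemic is brewing.
p = bayes_update(p, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(p, 3))  # 0.308
```

The point is not the specific numbers but the discipline: stating a prior forces the warner to commit, and each piece of evidence moves the number in a transparent, checkable way.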

Use the language of the factual rather than the possible. Words that should be banned from any warning expert's vocabulary are “could”, “might”, “maybe”, “possibly” and all other vague language. Do not be afraid of your own judgment: use your language to take a stance – but do not fall into catastrophizing, which leads to an almost immediate cognitive shut-off.

If you keep warning of the same risk, change the narrative to avoid the so-called "grey rhino" effect, whereby people get used to the threat. Find a new angle from which to pitch the problem.

Acknowledge the risk overload policymakers face. Warnings tend to be egoistic, ignoring that many other risks are competing for the decision-maker's attention.

Most importantly: propose a solution at the same time as the problem. Decision-makers will be far more willing to act if you give them concrete ideas on what can be done. Here, too, small, concrete proposals will be more successful than a single grand one.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
