- Pandemic warnings did not specify how lethal a pandemic would be, how transmissible it would be, or roughly when it would happen.
- For a reader to truly understand the threat, they need to be able to visualize it in as much detail as possible.
- Solutions should be suggested at the same time as giving a warning.
March 2020 was a Pyrrhic victory for some experts: they had warned of a pandemic for years – and yet policymakers had not taken the necessary precautions. In a tsunami of articles, all kind of psychological mechanisms were identified to explain why politicians find it hard to listen to and act on uncomfortable truths, laying the blame squarely on the recipients of these warnings.
The fact that the sender is also part of the problem is mostly overlooked. The problem is that most experts are not very good at warning, for one main reason: they are not very specific, in either content or style.
Pandemic warnings, for instance, did not say what the pandemic’s outline would be, how lethal it would be, how transmissible it would be or when this would roughly happen. They made few recommendations on what should be done, other than that it should be taken seriously.
To make matters even more confusing for a reader seeking a concrete idea of the threat, they generally used a language of vagueness designed to cover all eventualities. This language had the unfortunate side effect of cancelling out the threat in the reader's mind: using words like “could” and “possibly”; turning verbs into nouns; using relative rather than concrete quantification of risk or time horizons (“increased” and “soon”); and using the passive voice (“if nothing is done”).
Compared with the field I work in, security policy, this is akin to warning of a war without saying where it might break out, roughly when it might start, what weapons would be used to fight it, how many people could die, or what measures could prevent it.
To be clear, it is not just pandemic warnings that suffer from this lack of precision: whether the subject is how robotics and AI will change the labour market, how climate change will change conflict, or how an ageing population will change societies, precision is not widespread in the profession of warning. And this contributes to these warnings going unheard.
That is because for a reader to truly understand a threat, they need to be able to visualize it in as much detail as possible. In a study by Daniel Kahneman and Amos Tversky, respondents judged an earthquake in California causing a flood to be more likely than a flood somewhere in North America – simply because the California scenario was more specific. This phenomenon, known as the conjunction fallacy, is the result of how the human mind models the future. The more information we have available on a certain future, the more our mind focuses on it (which is why visualizations and the Law of Attraction are actually not just quackery).
Experts wishing for their warnings to be taken seriously will have no other choice but to make their warnings as specific as possible. But how can this be achieved considering that the future is eternally unknown and therefore hard to grasp?
Some options are available.
Quantify the risk as precisely as possible. This includes details on the effects – financial, economic and environmental costs, human and animal casualties, or examples of destruction. But it also has to include possible time horizons, precise localities (the more local the better) and, where possible, the names of people affected. There can never be too many details in explaining a risk. Avoid generalizations.
If the risk is on a very large scale (e.g. climate change), break it down into smaller components to make the magnitude more digestible. Although this seems counterintuitive, it does not decrease but actually increases risk perception.
Where possible, use sensorial tools rather than just words. Superflux, a strategic foresight company, developed the scent of a future UAE if no measures against pollution were taken. The use of AR smartglasses, too, is promising to visualize the future in a 360 degree way. Even visuals work better than just text.
Even if you have only text available, make it a scenario. Stories are far better at communicating a threat than a factual text that cannot elicit emotions. In addition, scenarios offer the opportunity to think very concretely through different options for handling the risk.
Assign probabilities. As the Good Judgment Project and others have shown, putting a number on a risk helps readers reflect on its likelihood and thereby work around the conjunction fallacy. Even though humans are not very good at dealing with numbers, probabilities make the risk more quantifiable. But do not stop there: probabilities need to be updated as new information comes in – and they have to be set in direct proportion to the impact of the unmitigated risk.
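The two moves described above – revising a probability as evidence arrives, and weighting it by impact – can be sketched in a few lines. This is a minimal illustration, not a forecasting tool; the function names and the example numbers are invented for the sketch.

```python
def update_probability(prior, likelihood_if_risk, likelihood_if_no_risk):
    """Bayes' rule: revise a risk estimate when new evidence comes in.

    prior: probability assigned to the risk before the evidence.
    likelihood_if_risk: how likely the evidence is if the risk is real.
    likelihood_if_no_risk: how likely the evidence is otherwise.
    """
    numerator = prior * likelihood_if_risk
    denominator = numerator + (1 - prior) * likelihood_if_no_risk
    return numerator / denominator


def expected_impact(probability, unmitigated_cost):
    """Set the probability in proportion to the impact: a 20% chance of a
    $1m loss carries an expected impact of $200,000."""
    return probability * unmitigated_cost


# Hypothetical example: a 10% prior, then evidence that is much more
# likely if the risk is real (0.8) than if it is not (0.3).
posterior = update_probability(0.10, 0.8, 0.3)
print(round(posterior, 3))                      # revised probability
print(expected_impact(posterior, 1_000_000))    # probability-weighted cost
```

The point of the sketch is the discipline, not the arithmetic: a stated number forces the forecaster to say what new evidence would move it, and by how much.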
Use the language of the factual rather than the possible. Words that should be banned from any warning expert's vocabulary are “could”, “might”, “maybe”, “possibly” and all other vague language. Do not be afraid of your own judgment: use your language to take a stance – but do not fall into catastrophizing, which leads to an almost immediate cognitive shut-off.
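The banned-word advice above is mechanical enough to automate as a drafting check. A minimal sketch follows; the word list is only the examples named in this article, and any real checker would need a fuller list plus handling of passive voice.

```python
import re

# Illustrative list drawn from this article's examples of vague language.
VAGUE_WORDS = {"could", "might", "maybe", "possibly", "soon", "increased"}

def flag_vague_language(text):
    """Return the vague words found in a draft warning, sorted."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sorted(set(tokens) & VAGUE_WORDS)

print(flag_vague_language("A pandemic could possibly emerge soon."))
# ['could', 'possibly', 'soon'] — rewrite before publishing
```

A draft that comes back with an empty list at least uses the language of the factual, even if judgment is still needed on whether the stance it takes is warranted.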
If you keep warning of the same risk, change the narrative to avoid the so-called Gray Rhino effect whereby people get used to the threat. Find a new angle to pitch the problem.
Acknowledge the risk overload policymakers face. Warnings tend to be egocentric, ignoring that many other risks are competing for the decision-maker's attention.
Most importantly: propose a solution at the same time as the problem. Decision-makers will be happy to act if you give them concrete ideas on what can be done. Here, too, small, concrete ideas will be more successful than a grand one.