Quantifying cyber risk was a challenging problem taken up by the World Economic Forum working group, and after a great deal of discussion and sharing of ideas among various partner companies, a robust structure to address the challenge was launched at Davos in 2015. The value-at-risk model for cyber risk quantification aims to shift the conversation around cybersecurity from a technical issue to a key business risk in a standard enterprise risk register. While the concept is highly relevant and should change the nature of boardroom conversations and catalyze senior management deliberations, it is important to appreciate its nuances so that it is used meaningfully.

  • Probability, not Certainty: Before delving deeper, it is useful to understand what value-at-risk (VaR) really means. The concept is drawn from the world of finance and indicates the maximum amount that could be lost over a given period with a particular degree of confidence. The phrase ‘degree of confidence’ is important: in an uncertain world, there are always rare, random or unexpected events that cannot be fully factored in or modelled. Hence, we accept a 1% or 0.1% risk of such events and try to ensure that losses stay below a defined threshold in all other scenarios (i.e. the remaining 99% or 99.9%). As one might expect, the risk exposure value increases disproportionately as we move towards higher levels of confidence.
  • Directional, not Definitive: The intent of the quantification is to provide a directional view of the level of risk, rather than a highly accurate measure. Given the assumptions involved and the lack of actuarial data on the frequency of attacks, the extent of impacts and the magnitude of losses (both direct and indirect), it would be presumptuous to model for high levels of accuracy.
  • Frequency: Given the evolving nature of threats, there is a view that this exercise should be conducted frequently. However, given its directional nature and the management effort involved in building a view and acting on the recommendations, it is sufficient to perform it annually, or in specific cases, every six months.
  • Handling Black Swan events: Any model assumes that past events are a reasonable predictor of the future, with specific assumptions about the nature of volatility. Rare, high-impact events (e.g. zero-day exploits, unexpected systemic risk) do not lend themselves to the quantification exercise. However, the fact that most ‘known’ risks can be modelled allows the fog to be lifted, and focus to be directed on the residual ‘unknown unknowns’.
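The VaR idea in the first bullet can be illustrated with a minimal Monte Carlo sketch. This is not the Forum's model; the attack rate, median loss and severity spread below are purely illustrative assumptions. It simulates many hypothetical years of cyber losses (incident counts from a Poisson process, incident severities drawn from a lognormal distribution) and reads off the annual loss exceeded only 1% or 0.1% of the time:

```python
import math
import random

def simulate_annual_loss(rng, attack_rate=4.0, median_loss=100_000.0, sigma=1.2):
    """One simulated year of cyber losses (all parameters are illustrative).

    Incidents arrive as a Poisson process (exponential inter-arrival times,
    attack_rate per year); each incident's loss is lognormal with the given
    median and dispersion sigma.
    """
    t = rng.expovariate(attack_rate)
    total = 0.0
    while t < 1.0:
        total += rng.lognormvariate(math.log(median_loss), sigma)
        t += rng.expovariate(attack_rate)
    return total

def value_at_risk(losses, confidence=0.99):
    """Empirical VaR: the annual loss exceeded in only (1 - confidence)
    of the simulated years."""
    ordered = sorted(losses)
    index = int(confidence * len(ordered))
    return ordered[min(index, len(ordered) - 1)]

rng = random.Random(42)
years = [simulate_annual_loss(rng) for _ in range(10_000)]
var_99 = value_at_risk(years, confidence=0.99)
var_999 = value_at_risk(years, confidence=0.999)
# Moving from 99% to 99.9% confidence pushes the VaR figure sharply higher,
# reflecting the disproportionate growth of exposure noted above.
```

Note how the 99.9% figure sits well above the 99% figure: tightening the confidence level forces the model out into the thin, heavy tail of the loss distribution, which is exactly why rare events dominate the exercise.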

How does the quantification approach help?

We believe that the cyber quantification approach enables a mature conversation around the various approaches to risk management: taking mitigation steps (improved controls, technology upgrades), transferring risk through insurance, or making a realistic appraisal of the exposure and a willingness to bear it. As risk markets mature in this area, there will be a greater pool of data for validation, along with increased research and more sophisticated control measures.

In summary, the risk quantification approach can change the dialogue around cyber risk management if used appropriately. Given the level of research and collaboration underway between the various stakeholders, the quality of data available and the sophistication of models can only improve over time, delivering more useful outcomes.


Author: Guha Ramasubramanian heads Corporate Business Development at Wipro and is a member of the World Economic Forum’s Partnership for Cyber Resilience initiative.

Image: An illustration picture shows a projection of binary code on a man holding a laptop in an office. REUTERS/Kacper Pempel.