On May 27th, 75,000 people were left stranded because a contractor inadvertently switched off the power supply at a British Airways data centre. The cost has thus far been estimated at $100 million. The week before, a large cyber-attack using the WannaCrypt ransomware infected more than 230,000 Windows PCs in 150 countries. The attack affected schools and universities, railways, and government computers across the globe. Most notably, it wreaked havoc across the National Health Service (NHS) in the UK, causing operations and medical appointments to be cancelled. Europol, the EU's law enforcement agency, has called the cyber-attack the "largest ransomware attack observed in history."
Technological failures, whether the result of unintentional error or malicious intent, are becoming more commonplace. In February, we saw the Amazon cloud go down for reasons similar to the British Airways outage, as an employee's typo inadvertently shut down more servers than intended. As a result, the service went down for several hours, costing close to $150 million. Last October, DDoS attacks brought down the internet across the US, fortunately only for a couple of hours. Just last weekend, British MPs were targeted by a cyber-attack.
Some argue that cyber "catastrophes" should be characterized as "black swan" events - Nassim Taleb's term for events that are highly unexpected, carry large consequences, and are subject to ex-post rationalization. Taleb argued that black swan events are impossible to predict yet have catastrophic ramifications. However, by characterizing technological catastrophes as black swans, we risk weakening our ability to handle them. Each event should not be seen in a silo but rather as part of the interweaving risks of the Fourth Industrial Revolution, and we must plan accordingly.
The WannaCrypt attack raises the question of how we, as a society, as governments, and as global consumers, can help prevent, mitigate, respond to, and recover from these potentially catastrophic events.
First and foremost, we must accept that these events are no longer anomalies or stand-alone incidents. They will continue to happen, with more frequency and possibly more severity than we have seen in the past. Second, we must not work in silos. Security by design - ensuring that security is built into technology from the beginning - is a critical first step.
However, security conversations need to include not only technology developers but insurers, governments, and even consumers. Best practices and minimum standards need to be recognized and implemented not only across the supply chain but globally. Our networks are more interconnected than ever before, and this is a perfect illustration that a system is only as strong as its weakest link. In the case of the Amazon cloud failure, a minor maintenance procedure was able to bring down the entire system, "highlight[ing] that AWS, and the cloud computing industry in general, still have some maturing to do", said Ed Anderson, an analyst at Gartner Inc.
A similar comment could be made about BA's data centre, where a global IT failure took out all British Airways flights out of London on Saturday May 27th. Further, the DDoS attack that brought down much of America's internet last October was carried out by the Mirai botnet, which targeted the servers of Dyn, "a company that controls much of the internet's domain name system (DNS) infrastructure." The botnet's success relied on hundreds of thousands of internet-connected devices such as cameras, baby monitors and home routers that were infected without their owners' knowledge, allowing hackers to commandeer them and flood Dyn's servers with traffic. In many cases, minimum security standards, protocols, and education may have prevented such events.
However, as WannaCrypt showed, security by design will not be enough. Although globally recognized standards may not exist, Microsoft has been a leader in securing its products. Yet entry points can be found even in the most secure networks. According to Microsoft, the exploits used in the WannaCrypt attack were stolen from the National Security Agency (NSA) in the United States. Not only was the vulnerability publicly reported, but Microsoft had released a security update on March 14th to patch it. However, there was no mechanism in place to force this update. In this case, a clear lack of roles and responsibilities prevented the necessary corrective action from being taken. This example highlights the need for a defined accountability framework and a set of defined actions that could help prevent and mitigate attacks.
We must understand that not every attack can be prevented, and that learning from successful attacks is critical to preventing future ones. Today, companies, cities, nations and supranational organizations work in silos when it comes to prevention and response.
The private sector should not treat its security expertise solely as a competitive advantage. Governments, likewise, should work with each other and with the private sector so that those with malicious intent are not enabled by accessing information shared among a select few. The public and private sectors must coordinate on a global platform to learn from one another and develop best practices for building resilience against this new catastrophe - failures of interconnected technologies.
The World Economic Forum's "Mitigating Risks in the Innovation Economy" project has brought together leaders from government, technology and insurance across the globe to assess preparedness and build best-practice incident response plans, in order to build resilience against technology-enabled catastrophes. This will be high on the agenda at our Annual Meeting of the New Champions in Dalian, China this week.