- Tracking technologies could help monitor the spread of COVID-19.
- Yet they also raise questions about privacy and data management.
- Here are four ways to ensure the responsible use of these technologies, including independent oversight.
In the fight against COVID-19, tech governance is at a clear crossroads. Either it becomes part of the recovery debate, leading to the emergence of public-private global cooperation for the trustworthy use of technology, or we witness a dramatic surrender of public liberties in the name of deploying surveillance technologies in the continuing battle against coronavirus and potential novel viruses.
Many tech companies and at least 30 governments have started proposing or deploying tracking tools. Privacy International has been cataloguing these initiatives, showing the wide range of approaches in terms of privacy and respect for freedoms. The lack of a shared governance framework is manifest.
Tracking technologies using smartphones could help monitor the evolution of the virus among the population and quickly prevent new clusters of confirmed cases from building up, helping countries emerge from lockdowns. But they also bring daunting privacy risks associated with collecting health data from citizens and tracking their movements. These tools may lead to an unprecedented infringement of our freedoms, unless governments and tech companies turn to robust governance frameworks and rely on true cooperation.
“There may be no turning back if Pandora’s box is opened,” noted Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics for the United Nations Interregional Crime and Justice Research Institute (UNICRI), in a recent article.
How can we navigate the gray zone of ending lockdowns while avoiding any infringement of freedom?
The World Economic Forum’s Centre for the Fourth Industrial Revolution recently launched a pilot project in France to test a governance framework for the responsible use of facial recognition. This work suggests four lessons to ensure the responsible use of technologies in a post-COVID-19 world.
1. Lean on a multistakeholder approach
COVID-19 allows tech companies to take a new lead on health and security infrastructures, domains historically occupied by governments. A multistakeholder approach could help balance and coordinate the roles of the public and private sectors and put the rights of the citizen first – at the local, regional and global levels.
2. Clearly identify potential limitations
It is critical to agree on what the objectives and limits of tracking technologies should be. For example, the Berkman Klein Center at Harvard University has built a tool listing many of the initiatives to ensure ethical approaches to AI. Policymakers and tech companies must examine nine main principles, asking questions that will help inform the essential requirements for any tool.
- Bias and discrimination: Are there any biases or risks of discrimination in the tool, and if so, how can they be mitigated?
- Proportionality: What trade-offs does the tool introduce?
- Privacy: Does the tool respect “privacy by design” rules that make privacy a priority?
- Accountability: What kind of accountability model is in place for external audits?
- Risk prevention: What is the risk-mitigation process in case something goes wrong?
- Performance: What is the performance and accuracy of the system and how is it assessed?
- Right to information: Are users well informed about data sharing?
- Consent: How do you ensure that users provide informed, explicit and freely given consent?
- Accessibility: Is the tool accessible to anyone, including people with disabilities?
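To make the idea concrete, the nine principles above could be captured in a simple internal review structure that tracks which questions a product team has actually answered. The following is a hypothetical sketch only; the names, structure and example data are assumptions for illustration, not any organization’s actual assessment tool:

```python
from dataclasses import dataclass, field

# The nine principles listed above, as machine-readable review categories.
PRINCIPLES = [
    "bias_and_discrimination",
    "proportionality",
    "privacy",
    "accountability",
    "risk_prevention",
    "performance",
    "right_to_information",
    "consent",
    "accessibility",
]

@dataclass
class PrincipleReview:
    """One answered (or pending) question for a single principle."""
    principle: str
    question: str
    answered: bool = False
    notes: str = ""

@dataclass
class ToolAssessment:
    """Running internal assessment of one tracking tool."""
    tool_name: str
    reviews: list = field(default_factory=list)

    def add_review(self, principle, question, answered=False, notes=""):
        assert principle in PRINCIPLES, f"unknown principle: {principle}"
        self.reviews.append(PrincipleReview(principle, question, answered, notes))

    def open_questions(self):
        """Principles not yet covered by an answered review."""
        covered = {r.principle for r in self.reviews if r.answered}
        return [p for p in PRINCIPLES if p not in covered]

# Example: a hypothetical tracing app that has so far only documented
# its privacy review; the other eight principles remain open.
assessment = ToolAssessment("contact-tracing-app")
assessment.add_review(
    "privacy",
    "Does the tool follow privacy-by-design rules?",
    answered=True,
    notes="Data minimization and on-device storage documented.",
)
print(assessment.open_questions())
```

Structuring the checklist this way makes the gap visible: any principle without an answered review shows up as an open question, which is exactly the kind of record an internal questionnaire (see below) or an external auditor would want to inspect.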
3. Ensure continuous internal evaluation
Once the principles are established, the next step is to ensure that the system meets the requirements. One way to do so is to draft a list of questions to guide the product team to internally check its compliance. This questionnaire helps bridge the gap between political principles and technical requirements.
For example, IBM Research has a framework called “FactSheets” to increase trust in AI services, Google has recently proposed a framework for internal algorithmic auditing, and Microsoft has published a list of AI Principles along with tools to apply them internally. These promising projects could serve as best practices for other tech companies.
4. Identify an independent third party for oversight
In addition to the internal questionnaire, there must also be external oversight of systems through an independent third party. Trust depends on the ability of this independent body to have access to the system to conduct verifications.
Recent history shows that the need for audits or audit reforms surges in the aftermath of crises. For example, the 2002 Sarbanes-Oxley Act, which created new regulations to combat financial fraud, emerged after the collapse of Enron. In the aerospace industry, the recent Boeing 737 MAX accidents may lead to a new series of stronger controls.
As of now, tech companies have largely avoided external audits, even when failures to protect citizens have been reported and have led to scandals. Consider the case of Clearview AI, a facial recognition tool built on an unprecedented database of more than three billion images scraped from the internet, including social media applications and millions of other websites. This startup has not only raised privacy concerns but has also reportedly suffered a recent data breach. What is true for traditional industries should be true for the tech sector as well.
We know that tech governance is key to mitigating the potential risks brought by the Fourth Industrial Revolution. The COVID-19 crisis is its litmus test. We must act now to build frameworks that ensure any tracking tools not only aid in the recovery but also protect the rights of users.