Pathways to Digital Justice
This white paper, produced in collaboration with an advisory committee of experts from around the world, is intended to guide policy efforts to combat data-driven harms.
Led by the Global Future Council on Data Policy, in collaboration with the Global Future Council on Media, Entertainment and Sport and the Global Future Council on AI for Humanity
From doing business to staying informed, we rely on access to information around the clock. But what happens when that information is incorrect or, worse, targets us unjustly with real consequences? Disinformation is often discussed as a political phenomenon, but its harms are increasingly personal and direct. Whether it takes the form of fake news, deep-fake videos, defamatory content or simple inaccuracies, and whether it spreads as deliberate disinformation, unwitting misinformation or administrative error, it can have harmful consequences. It is also difficult to remedy, since there are limited avenues for seeking justice in a globally digitized world. Justice is hard to come by for a range of reasons, including technical architectures, confusion over jurisdictions and market interests, to name but a few. If someone does need to seek recourse, where do they start? Fixing the problem, if that is even possible, is a daunting and often expensive process, let alone seeking redress. Henry Kissinger famously asked: “Whom do I call when I want to call Europe?” Are we now asking the same question when we need to call companies to remove or correct inaccurate information? Is a one-size-fits-all solution possible for what has become a systemic issue?
A significant and meaningful opportunity exists to design technical, governance and market solutions that restore trust, integrity and justice in digital service providers. There is an established history, albeit imperfect, of building systems that realize the responsibilities of different parties; it offers frameworks to consider, both as design requirements for top-down approaches and as applicable tests of user access and success. Essentially, the larger and more fundamental the issue affecting people’s rights, the higher the bar for validating the approach and accepting liability for its impacts. Further, to ensure that such systems maintain their integrity and independence, accessible dispute-resolution and professional stewardship infrastructure is needed for those who cannot represent their own interests. Duty of care is another key legal construct that helps conceptualize the criticality of effective recourse measures and digital due-process rights. Duty of care is the common-law term for the responsibilities that we as people owe one another; while the nature of business and the market has evolved in the digital era, those underlying duties have changed little. The concept of duties remains a useful tool, prompting questions that allow us to define responsible governance in digital ecosystems.
To learn more, please reach out to Evîn Cheikosman at firstname.lastname@example.org