We now live in a world of algorithmic sorting and decision-making: social media platforms that harvest our data and selectively present or obscure content in our feeds; recommendation algorithms used for targeted advertising; search algorithms that show tailored results; and predictive algorithms that may influence our chances of getting a loan or shape how much we pay for health insurance.
What lurks inside these ‘black boxes’? How can they be understood, governed and made accountable? These are challenging societal questions, in need of urgent public debate. Below we present twelve potential scenarios for algorithmic accountability and governance.
The first six scenarios focus on individual empowerment and agency over algorithms and platforms. They are arranged along a continuum, from end-users as disengaged bystanders to individuals negotiating with companies on an equal footing. Moving from scenario to scenario pushes algorithmic accountability from opacity to transparency; from unawareness to agency; from no control over how data is used to ownership of that data and a share of the profits it generates; and, ultimately, to a global public agreement on the values algorithms should enshrine.
1. Overwhelming the unaware
Here citizens are unaware of what happens with their data, or of how they are sorted and ranked. It is unclear how the information they interact with might be biased in different ways. These information asymmetries are maintained through complex legal jargon in Terms of Service agreements, which are opaque about how data is used. All risks and responsibilities reside with the individual user, who essentially has no agency.
2. Users beware
Here Terms of Service involve disclosure around third-party sharing, advertising strategies and access to private data. Platforms make an effort to explain the reach of posts: the percentage of the user’s audience likely to see one, or how what users see is filtered based on preferences, location and activity history. Although the focus is on narrowing the information gap, risks and responsibilities still lie with the individual, who has no agency.
3. Dashboard interface
Here users can access their information and take control of it, customising their engagement with the platform. They can agree to share their information with third-party applications in exchange for certain rewards. This opt-in/opt-out experience is made user-friendly, allowing users to tailor settings that suit their specific individual needs.
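As an illustration only, the kind of opt-in/opt-out control such a dashboard might expose can be sketched in a few lines of Python. The app names and reward values below are hypothetical, not drawn from any real platform:

```python
class ConsentDashboard:
    """Toy sketch of a user-facing data-sharing dashboard.

    Apps and reward values are illustrative assumptions, not a real API.
    """

    def __init__(self):
        self.settings = {}  # app name -> whether the user has opted in
        self.rewards = 0    # credits earned for agreeing to share data

    def opt_in(self, app: str, reward: int = 0):
        """User agrees to share data with an app, possibly for a reward."""
        self.settings[app] = True
        self.rewards += reward

    def opt_out(self, app: str):
        """User withdraws sharing permission from an app."""
        self.settings[app] = False

    def shared_with(self):
        """List the apps the user currently shares data with."""
        return [app for app, opted_in in self.settings.items() if opted_in]


dash = ConsentDashboard()
dash.opt_in("fitness-tracker", reward=10)
dash.opt_in("ad-network", reward=5)
dash.opt_out("ad-network")
print(dash.shared_with())  # ['fitness-tracker']
```

The point of the sketch is that every sharing decision is an explicit, reversible setting owned by the user, rather than a blanket clause buried in the Terms of Service.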
4. Smart contract
Here user and platform enter into a ‘smart contract’, collectively agreeing on rights and responsibilities. Such contracts rely on a blockchain and execute through an automated trigger mechanism built into the contract. All opt-in/opt-out decisions are specified, and the contract can include certain provisions in the event of a breach. Individual users thus have agency while also relying on built-in protection mechanisms. If a protection mechanism is triggered, a public record is created detailing the company’s behaviour, and users are compensated for the breach.
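The trigger-and-compensate logic such a contract might encode can be sketched in Python. This is a toy model, not real blockchain code; the parties, data uses and compensation amounts are illustrative assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class SmartDataContract:
    """Toy model of a user-platform data contract with an automated breach trigger.

    Names and amounts are hypothetical; a real deployment would run on-chain.
    """

    user: str
    platform: str
    permitted_uses: set      # uses the user has opted in to, e.g. {"ads"}
    compensation: float      # payout owed to the user per breach
    public_record: list = field(default_factory=list)  # append-only breach log
    balance_owed: float = 0.0

    def record_use(self, use: str) -> bool:
        """Platform reports a data use; non-permitted uses trigger the breach clause."""
        if use in self.permitted_uses:
            return True
        # Breach: automatically log it publicly and compensate the user.
        self.public_record.append(
            f"{self.platform} breached contract with {self.user}: unauthorised use '{use}'"
        )
        self.balance_owed += self.compensation
        return False


contract = SmartDataContract(user="alice", platform="examplebook",
                             permitted_uses={"ads"}, compensation=5.0)
contract.record_use("ads")     # permitted: no breach
contract.record_use("resale")  # breach: logged publicly, user compensated
print(contract.balance_owed)   # 5.0
print(contract.public_record)
```

The key property the scenario describes is that neither the public record nor the compensation depends on the user noticing the breach: both follow automatically from the contract’s built-in trigger.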
5. Shared Profits
This scenario envisages a business model where users are included as financial beneficiaries of the profits their data generates. This scenario can work in tandem with others described above, but essentially focuses on a more equitable, user-centric and inclusive approach to monetization. Companies continue to profit, but these are shared with the users that generate them.
6. Public agreement of values
This scenario sets out a set of agreed values, generated following public discussion about what is acceptable behaviour on the part of platforms, including how data is algorithmically sorted and presented to users. This public agreement of values thus covers what is considered ethical and fair, as well as what is legal.
The next six scenarios focus on possible arrangements for governance and accountability, ranging along a continuum from private governance to a multi-stakeholder supranational approach. The scenario that prevails will have important consequences for the regulatory system and the future of many data-driven industries.
7. Private governance
Companies decide to move towards scenarios #2-#6 as part of their strategy, without being compelled to do so. Platforms decide that they need to provide more comprehensible Terms of Service agreements, or a mechanism such as a dashboard that allows users to opt in or out of individual applications’ access requirements.
8. Accountable to a government national body
Because of their potentially far-reaching societal implications, this scenario holds that algorithms (for example, those underpinning social media platforms) cannot be left as black boxes. A national government body is put in charge of enforcing governance conditions. Its capabilities include fines for non-compliance and sanctions, and it can ultimately resort to the use of force. However, the territorial limits of national legislation are likely to leave companies room to circumvent governance agreements.
9. International governmental body
Given the likelihood that technology companies will circumvent national governance agreements, an international body composed of national government representatives is formed to enforce algorithmic governance agreements. An effort is made to compile and promote the harmonisation of national legislation that affects these governance mechanisms.
10. Supranational multi-stakeholder body
In the event of both national and international government bodies being unprepared to audit algorithms, an independent supranational, multi-stakeholder body is created. It is comprised of individual members collectively tasked with auditing the ethics and social impact of algorithms, and includes representatives of a wide variety of parties: the media, academia, internet governance bodies and hackers, who audit where access is granted and find ways to reverse engineer where it is not.
11. Algorithms as a public utility
When certain platforms have de facto become the internet (because of the sheer volume of users who access the internet through that platform alone), they can no longer simply perform in the service of shareholders, nor be allowed to run with maximum profit as the key goal. Instead, as public utilities, they are part-financed by taxes levied internationally, reducing or fully eliminating the need to monetise user data.
12. Journalists as accountability agents
This last scenario puts investigative journalists at the centre of monitoring algorithmic accountability. Platforms centrally involved in meeting public information needs are continuously monitored and scrutinised: what services they provide and how these might differ across users, thereby exposing potentially problematic information politics.
These scenarios are not a roadmap; they are instead aimed at encouraging long-range thinking and generating further public debate. Ultimately we ask: what efforts can and should be made to address these issues?
The Summit on the Global Agenda 2015 takes place in Abu Dhabi from 25-27 October
Authors: Pia Mancini, Director, Net Democracy, and Farida Vis, Director, Visual Social Media Lab, Faculty Research Fellow, Information School, University of Sheffield. Both are members of the World Economic Forum’s Global Agenda Council on Social Media.
Image: A man talks on the phone as he surfs the internet on his laptop at a local coffee shop in downtown Shanghai November 28, 2013. REUTERS/Carlos Barria