AI is convicting criminals and determining jail time, but is it fair?

Prison terms in the US are based in part on opaque algorithmic predictions. Image: REUTERS/Shannon Stapleton

Vyacheslav Polonski
Alumni, Global Shapers Community, Google

When Netflix gets a movie recommendation wrong, you’d probably think that it’s not a big deal. Likewise, when your favourite sneakers don’t make it into Amazon’s list of recommended products, it’s probably not the end of the world. But when an algorithm assigns you a threat score from 1 to 500 that is used to rule on jail time, you might have some concerns about this use of predictive analytics.

Artificial intelligence (AI) has now permeated almost every aspect of our lives. Naturally, machine predictions cannot always be 100% accurate, but the cost of error increases dramatically when AI is deployed in high-stakes settings: in medicine, to recommend new cancer treatments, or in criminal justice, to help judges assess a defendant’s likelihood of reoffending. In fact, one of the most controversial uses of AI in recent years has been predictive policing.

To the general audience, predictive policing methods are probably best known from the 2002 science fiction movie Minority Report, starring Tom Cruise. Based on a short story by Philip K. Dick, the movie presents a vision of the future in which crimes can be predicted and prevented. This may sound like far-fetched science fiction, but predictive justice already exists today. A wave of new companies now provides courts with predictive services built on advanced machine learning systems, for example risk-assessment algorithms that estimate a defendant’s likelihood of recidivism.

Can machines identify future criminals?

After his arrest in 2013, Eric Loomis was sentenced to six years in prison based in part on an opaque algorithmic prediction that he would commit more crimes. Equivant (formerly Northpointe), the company behind the proprietary software used in Eric Loomis’ case, claims its tool offers a 360-degree view of the defendant in order to deliver detailed algorithmic assistance in judicial decision-making.

This company is one of many players in the predictive justice field in the US. A recent report by the Electronic Privacy Information Center finds that algorithms are increasingly used in court to “set bail, determine sentences, and even contribute to determinations about guilt or innocence”. This shift towards more machine intelligence in courts, allowing AI to augment human judgement, could be extremely beneficial for the judicial system as a whole.

However, an investigative report by ProPublica found that these algorithms tend to reinforce the racial bias already present in law enforcement data. Algorithmic assessments falsely flag black defendants as future criminals at almost twice the rate of white defendants. What is more, the judges who relied on these risk assessments typically did not understand how the scores were computed.
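
The disparity ProPublica documented can be made concrete with a simple per-group error analysis. The sketch below is a minimal illustration on entirely synthetic data (not the real COMPAS records): it compares false positive rates, that is, the share of people who did not reoffend but were still flagged as high risk, across two hypothetical groups.

```python
import numpy as np
import pandas as pd

# Entirely synthetic, illustrative data: one row per defendant, with a
# demographic group, the tool's binary "high risk" flag and the observed outcome.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=10_000),
    "flagged_high_risk": rng.integers(0, 2, size=10_000),
    "reoffended": rng.integers(0, 2, size=10_000),
})

# False positive rate per group: the share of people who did NOT reoffend
# but were nevertheless flagged as high risk.
did_not_reoffend = df[df["reoffended"] == 0]
fpr_by_group = did_not_reoffend.groupby("group")["flagged_high_risk"].mean()
print(fpr_by_group)  # a large gap between groups is the kind of disparity ProPublica reported
```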

This is problematic, because machine learning models are only as reliable as the data they’re trained on. If the underlying data is biased in any form, there is a risk that structural inequalities and unfair biases are not just replicated, but also amplified. In this regard, AI engineers must be especially wary of their blind spots and implicit assumptions; it is not just the choice of machine learning techniques that matters, but also all the small decisions about finding, organising and labelling training data for AI models.

Biased data feeds biased algorithms

Even small irregularities and biases can produce a measurable difference in the final risk-assessment. The critical issue is that problems like racial bias and structural discrimination are baked into the world around us.

For instance, there is evidence that, despite similar rates of drug use, black Americans are arrested at four times the rate of white Americans on drug-related charges. Even if engineers were to faithfully collect this data and train a machine learning model with it, the AI would still pick up the embedded bias as part of the model.
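
To make this concrete, here is a minimal synthetic sketch with invented numbers rather than real arrest data: even though the underlying behaviour is identical in both simulated groups, a model trained on arrest records learns the enforcement skew and scores one group as far "riskier".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, size=n)            # two demographic groups, 0 and 1
drug_use = rng.binomial(1, 0.12, size=n)      # identical underlying rate in both groups

# Arrests are the recorded label, but enforcement falls four times
# more heavily on group 1 (mirroring the disparity described above).
arrest_prob = np.where(group == 1, 0.40, 0.10) * drug_use
arrested = rng.binomial(1, arrest_prob)

# A model trained to predict "criminality" from arrest records therefore
# learns the enforcement skew, not the (equal) underlying behaviour.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
print(model.predict_proba([[0], [1]])[:, 1])  # group 1 receives a far higher score
```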

Systematic patterns of inequality are everywhere. Look at the top-grossing movies of 2014 and 2015, for example, and you can see that female characters are vastly underrepresented in terms of both screen time and speaking time. New machine learning models can quantify these inequalities, but there are many open questions about how engineers can proactively mitigate them.

Google’s recent “Quick, Draw!” experiment vividly demonstrates why addressing bias matters. The experiment invited internet users worldwide to take part in a drawing game: in every round, players had 20 seconds to draw an object, and the AI system then tried to guess what the drawing depicted. More than 20 million people from 100 nations took part, producing over 2 billion drawings of all sorts of objects, including cats, chairs, postcards, butterflies and skylines.

But when the researchers examined the drawings of shoes in the dataset, they realised that they were dealing with a strong cultural bias. A large number of early users drew shoes that looked like Converse sneakers, which led the model to treat the typical visual attributes of sneakers as the prototype of what a “shoe” should look like. Consequently, shoes that did not look like sneakers, such as high heels, ballet flats or clogs, were not recognised as shoes.

Recent studies show that, if left unchecked, machine learning models will learn outdated gender stereotypes, such as “doctors” being male and “receptionists” being female. In a similar fashion, AI models trained on images of past US presidents have been shown to predict exclusively male candidates as the likely winner of the presidential race.
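
One way to observe such stereotypes directly is to probe a pretrained word embedding with analogy arithmetic. The sketch below is only an illustration and rests on assumptions: it uses gensim’s downloader with the publicly available "glove-wiki-gigaword-100" vectors, and the exact nearest neighbours returned will depend on the corpus those vectors were trained on.

```python
# Probing a pretrained embedding for occupational gender stereotypes.
# Assumes gensim is installed and can download the "glove-wiki-gigaword-100" vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# Classic analogy arithmetic: vector("doctor") - vector("man") + vector("woman") ≈ ?
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))

# Embeddings trained on biased text frequently answer with "nurse" or similar,
# echoing the stereotype described above; a debiased embedding should not.
```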

Designing for fairness in AI

In October 2018, the International Conference of Data Protection and Privacy Commissioners released the Declaration on Ethics and Data Protection in Artificial Intelligence, one of the first steps towards a set of international governance principles for AI. The declaration states that “unlawful biases or discriminations that may result from the use of data in artificial intelligence should be reduced and mitigated”. Inherent to this notion is the assertion that AI needs to be evaluated against a broader set of ethical and legal criteria, not just classification accuracy and confusion matrices. Expanding on this argument, I propose the following principles of AI fairness for the purposes of predictive justice:

1. Representation

In order to guard against unfair bias, all subjects should have an equal chance of being represented in the data. Sometimes this means that underrepresented populations need to be thoughtfully added to training datasets; sometimes it means that a biased machine learning model needs to be substantially retrained on more diverse data sources. In the case of Google’s Quick, Draw! experiment, the engineering team had to intentionally seek out additional training examples of other shoe types, such as high heels and Crocs, to compensate for gaps in representation. What is more, recent research offers new algorithmic techniques to measure misrepresentation and help mitigate unwanted bias in machine learning.
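
What such rebalancing can look like is sketched below on an invented toy label distribution rather than the real Quick, Draw! data. In practice Google’s team collected new drawings; this sketch simply measures the skew and then oversamples the rare classes as a crude substitute.

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical training set of shoe drawings, dominated by sneakers.
df = pd.DataFrame({"shoe_type": ["sneaker"] * 900 + ["high_heel"] * 70 + ["clog"] * 30})

# 1) Measure representation: how skewed is the label distribution?
print(df["shoe_type"].value_counts(normalize=True))

# 2) Naively rebalance by oversampling the underrepresented classes
#    until each class matches the largest one.
target = int(df["shoe_type"].value_counts().max())
balanced = pd.concat([
    resample(grp, replace=True, n_samples=target, random_state=0)
    for _, grp in df.groupby("shoe_type")
])
print(balanced["shoe_type"].value_counts())
```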

2. Protection

Machine learning systems need to avoid unjust effects on individuals, especially harms related to social and physical vulnerabilities and other sensitive attributes. These could include race, ethnicity, gender, nationality, sexual orientation, religion and political beliefs. The overall fairness of an algorithm must be judged by how it impacts the most vulnerable people affected by it.

However, simply omitting sensitive variables from machine learning models does not solve the problem, because many confounding factors may be correlated with them. With regard to criminal justice, research shows that omitting race from a dataset of criminal histories still results in racially disparate predictions. Instead, there is early evidence that racial disparities and other sensitive information can be removed from datasets using a supplementary machine learning algorithm. The hope is that, in the future, this approach could help engineers build a “race-neutral” AI system for recidivism prediction.
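
The confounding problem can be demonstrated on synthetic data. In the sketch below (entirely invented, not a real criminal-history dataset), race is deliberately left out of the model, yet a correlated proxy feature, standing in for something like neighbourhood, carries the disparity straight back into the predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000
race = rng.integers(0, 2, size=n)
# A proxy feature (think neighbourhood or postcode) strongly correlated with race.
neighbourhood = (race + rng.binomial(1, 0.1, size=n)) % 2
# Historical labels that are already skewed against one group.
label = rng.binomial(1, np.where(race == 1, 0.5, 0.2))

# Train with race deliberately omitted -- only the proxy is available.
model = LogisticRegression().fit(neighbourhood.reshape(-1, 1), label)
scores = model.predict_proba(neighbourhood.reshape(-1, 1))[:, 1]

# The predictions remain disparate across the omitted attribute.
print("mean risk score by race:", scores[race == 0].mean(), scores[race == 1].mean())
```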

3. Stewardship

Algorithmic fairness means much more than the absence of injustice; it represents an active responsibility to continuously strive for fairness in the design of machine learning systems. In this regard, the spirit of stewardship is best carried by a diverse team whose members challenge each other’s implicit assumptions.

Regular exercises to counter unconscious bias, for example, can help teams develop an appreciation for the diversity of perspectives. Several NGOs, including ProPublica and Privacy International, have also begun advocating for diversity stewardship in companies that build large-scale AI models. Only by creating a culture of inclusiveness can companies establish the right conditions for teams to address unfair bias in machine learning.

4. Authenticity

The final principle refers not just to the authenticity of training data, but also to the authenticity of AI predictions as they are used to inform human decision-making. For instance, despite continued efforts to limit potentially harmful or abusive applications, machine learning has regrettably been used to distort reality through deepfakes. Pervasive misuse of AI could help malicious actors generate fake videos of people saying things they never said, or fake images of situations that never happened. Taken to the extreme, this could lead to a world in which judges can no longer determine whether any depicted media or evidence corresponds to the truth. This has led some media pundits to conclude that the “biggest casualty to AI won't be jobs, but the final and complete eradication of trust in anything you see or hear.” Fortunately, AI researchers are already working on effective and scalable counter-measures to detect various forms of manipulated media.

Machines against machine bias

These four principles can help to start a conversation about AI fairness, especially where AI is used for predictive justice. Fairness is never the default in machine learning, so engineers need to take proactive steps to change that default. If we do not actively design for AI fairness, we risk perpetuating harmful biases and stereotypes.

One of the most impressive things about AI, however, is that algorithms can also be effectively used to measure and mitigate unfair bias. Going forward, there’s hope that engineers will extend these techniques to meaningfully assist human decision-makers with predictions that will be free from prejudice.
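
As a closing sketch of what that could look like, the toy example below uses synthetic risk scores, an invented cut-off of 0.6 and an arbitrary target false positive rate of 5%. It first measures the gap in false positive rates produced by a single threshold, then applies a simple post-processing fix: a separate threshold per group, chosen so that both groups end up at the same false positive rate.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
group = rng.integers(0, 2, size=n)
reoffended = rng.binomial(1, 0.3, size=n)
# Hypothetical risk scores that are systematically inflated for group 1.
score = rng.normal(0.4 + 0.2 * group + 0.2 * reoffended, 0.1)

def fpr(flags, outcome, mask):
    """False positive rate within a subgroup: flagged among those who did not reoffend."""
    negatives = (outcome == 0) & mask
    return flags[negatives].mean()

# A one-size-fits-all threshold yields unequal false positive rates...
flags = score > 0.6
print([round(fpr(flags, reoffended, group == g), 3) for g in (0, 1)])

# ...so a simple post-processing fix picks a per-group threshold that
# brings each group's false positive rate down to the same target.
target_fpr = 0.05
thresholds = {
    g: np.quantile(score[(group == g) & (reoffended == 0)], 1 - target_fpr)
    for g in (0, 1)
}
adjusted = score > np.vectorize(thresholds.get)(group)
print([round(fpr(adjusted, reoffended, group == g), 3) for g in (0, 1)])
```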
