Emerging Technologies

How to make sure the future of AI is ethical


Facial recognition provides a window into the ethics of artificial intelligence, writes Mike Loukides. Image: REUTERS/Bobby Yip

Mike Loukides
Vice President of Content Strategy, O'Reilly Media, Inc.


A few weeks ago, I wrote a post on the ethics of artificial intelligence. Since then, we've been presented with an excellent example to reflect on: the use of face recognition to identify people likely to commit crimes. (There have been a number of articles about this research; I'll only link to this one.)

In my post, I said that we need to discuss what kind of society we want to build. I'm reasonably confident that, even under the worst societal conditions, we don't want a society where you can be imprisoned because your eyes are set too closely together. The article in New Scientist shows that researchers are raising the right objections: the training data for criminals and non-criminals was taken from two different sources; ethnicity issues may be at play; and we're in danger of making AI into "21st century phrenology," or "mathwashing."

AI Landscape
Image: CB Insights

I also said that an AI developer can choose what projects to work on, but that it's important that research not go behind closed doors, becoming opaque to the public and leaving everyone outside those doors vulnerable to whatever happens inside. That leads me to suggest going a few steps further. While researchers and developers can certainly choose not to participate in projects they object to, there are useful ways to go beyond non-involvement:

Some researchers have worked on ways to use hair styles, coloring, and other cosmetics to defeat face recognition. That's certainly a constantly escalating battle: what works now probably won't work a year from now. But more importantly, it requires understanding what face recognition is doing and how it works, and making that public knowledge.

Abe Gong's work on COMPAS and Cathy O'Neil's work on data-driven teacher evaluation expose the machinery by which math-driven bias works. Gong's distinction between the statistical and human definitions of "bias" is particularly important: it's easy to be statistically unbiased while humanly unfair. O'Neil points out that it's easy to create systems in which you can only win by gaming them, and that people who try to play fair inevitably lose. We need many more researchers doing work like this: we need to understand how machine learning and AI are used and what their consequences are, and make that public knowledge.
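Gong's distinction between the two senses of "bias" can be made concrete with a small sketch. The data and groups below are entirely synthetic and hypothetical; the only point is that a predictor's errors can cancel out on average, making it "statistically unbiased," while being systematically skewed against each group it scores.

```python
# Synthetic sketch: statistically unbiased overall, humanly unfair per group.
# Groups "A" and "B" are hypothetical and identical in ground truth.

true_risk = {"A": [0.2] * 50, "B": [0.2] * 50}
# Hypothetical model: over-predicts risk for group A, under-predicts for B.
predicted = {"A": [0.3] * 50, "B": [0.1] * 50}

def mean_error(groups):
    """Average of (predicted - true) over the given groups."""
    errs = [p - t for g in groups for p, t in zip(predicted[g], true_risk[g])]
    return sum(errs) / len(errs)

print(mean_error(["A", "B"]))  # approx. 0.0: "unbiased" in the statistical sense
print(mean_error(["A"]))       # approx. +0.1: group A is consistently over-scored
print(mean_error(["B"]))       # approx. -0.1: group B is consistently under-scored
```

The overall error averages out to roughly zero, so an aggregate audit would call the model unbiased, even though every individual in group A is scored as riskier than they are.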

So, researchers who opt out can also choose to actively subvert the system, or they can work to expose the flaws built into the system. Both functions are necessary.

As New Scientist points out, "the majority of U.S. police departments using face recognition do little to ensure that the software is accurate." Police departments have neither the expertise nor the inclination to critically evaluate software that claims to make their jobs easier. "This is magic that will make your job easier" is a tempting sales pitch for people who are already doing a hard job. It's way too easy for an uninformed official to fantasize about AI systems that will detect terrorists. It takes someone who isn't ignorant about AI to point out the problems with such a proposal, not the least of which is that the number of terrorists is so small that it would be impossible to build a good data set for training. And even with good training data, it's very hard to imagine a system with fewer than 5% false positives (roughly 16 million Americans, roughly 370 million people worldwide)—and such an error-prone system would be worse than useless.
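The arithmetic behind those false-positive figures is worth spelling out, because the base-rate problem is what makes such a system worse than useless. The numbers below are rough, illustrative assumptions (approximate population sizes and a generously small count of real targets), not measurements:

```python
# Back-of-envelope base-rate arithmetic for a hypothetical screening system.
# All inputs are illustrative assumptions.

us_population = 330_000_000        # approximate U.S. population
world_population = 7_400_000_000   # approximate world population
false_positive_rate = 0.05         # 5% of innocent people wrongly flagged
actual_targets = 10_000            # generously assumed number of real targets

# Nearly everyone screened is innocent, so false positives dominate.
fp_us = (us_population - actual_targets) * false_positive_rate
fp_world = (world_population - actual_targets) * false_positive_rate

print(f"False positives in the U.S.: {fp_us:,.0f}")   # roughly 16 million
print(f"False positives worldwide:  {fp_world:,.0f}") # roughly 370 million

# Even if the system caught every real target, each true hit would be
# buried under more than a thousand false alarms.
print(f"False alarms per real target (U.S.): {fp_us / actual_targets:,.0f}")
```

Even with a false-positive rate well below 5%, the tiny base rate of real targets means almost every flag is a false alarm.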

Staying away from problem topics is never an answer; more than ever, we need AI researchers who are committed to building the future we want, rather than the future we're likely to get. That includes researchers who are actively trying to defeat AI systems as well as researchers who are exposing their inadequacies. Neither group can work from a position of ignorance. Doing so guarantees that we will be the victims, rather than the beneficiaries, of AI.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.




© 2024 World Economic Forum