Emerging Technologies

Hackers, AI and the risk from deliberate bias


Injecting deliberate bias into algorithmic decision-making could be devastatingly simple and effective Image: REUTERS/Bobby Yip


This article is part of the World Economic Forum's Geostrategy platform

The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies developed mainly by using photos of white men that do a poor job of identifying women and people with darker skin.

But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn. This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.

According to a US government study on big data and privacy (PDF), biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent. Commercial data brokers collect and hold onto all kinds of information, such as online browsing or shopping habits, that could be used in this way.

Data as bait

Biased data could also serve as bait. Corporations could release biased data with the hope competitors would use it to train artificial intelligence algorithms, causing competitors to diminish the quality of their own products and consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.

Finally, foreign actors posing national security threats could use deliberate bias attacks to destabilize societies by undermining government legitimacy or sharpening public polarization. This would fit naturally with tactics that reportedly seek to exploit ideological divides by creating social media posts and buying online ads designed to inflame racial tensions.

Injecting deliberate bias into algorithmic decision-making could be devastatingly simple and effective. This might involve replicating or accelerating pre-existing factors that produce bias. Many algorithms are already fed biased data. Attackers could continue to use such data sets to train algorithms, with foreknowledge of the bias they contained. The plausible deniability this would enable is what makes these attacks so insidious and potentially effective. Attackers would surf the waves of attention trained on bias in the tech industry, exacerbating polarization around issues of diversity and inclusion.
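To make the mechanism concrete, here is a minimal, hypothetical sketch of how flipping labels for one group in a training set skews what a system "learns". The lending scenario, group names and data are illustrative assumptions, not drawn from any real incident or system:

```python
# Hypothetical sketch: an attacker flips favorable outcomes to unfavorable
# ones for one group in the training data, so any model trained on it
# inherits the bias. All names and data below are illustrative.

def train_approval_rates(records):
    """Learn per-group approval rates from (group, approved) training records."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def poison(records, target_group, flip_fraction=1.0):
    """Attacker flips some fraction of the target group's approvals to denials."""
    budget = int(flip_fraction * sum(1 for g, a in records if g == target_group and a))
    poisoned, flipped = [], 0
    for group, approved in records:
        if group == target_group and approved and flipped < budget:
            poisoned.append((group, False))
            flipped += 1
        else:
            poisoned.append((group, approved))
    return poisoned

# Clean data: both groups historically approved at the same 50% rate.
clean = [("A", True), ("A", False), ("B", True), ("B", False)] * 50

print(train_approval_rates(clean))                 # {'A': 0.5, 'B': 0.5}
print(train_approval_rates(poison(clean, "B")))    # {'A': 0.5, 'B': 0.0}
```

Because the poisoned data set looks like ordinary historical records, a model trained on it reproduces the injected disparity while the attacker retains plausible deniability.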

Poisoned algorithms

The idea of “poisoning” algorithms by tampering with training data is not wholly novel. Top US intelligence officials have warned (PDF) that cyber attackers may stealthily access and then alter data to compromise its integrity. Proving malicious intent would be difficult, which makes such attacks hard to address and therefore hard to deter.
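One basic defense against the stealthy alteration the officials describe is to fingerprint training data so tampering is at least detectable. The sketch below uses a content hash for this; it is an illustration of the integrity concern, not a complete defense (real pipelines would combine it with signed manifests and versioned data stores):

```python
# Hedged sketch: detecting dataset tampering with a content hash.
# The record fields are hypothetical; only the hashing idea matters.

import hashlib
import json

def dataset_fingerprint(records):
    """Deterministic SHA-256 over a canonical JSON serialization of the data."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

records = [{"id": 1, "label": "approve"}, {"id": 2, "label": "deny"}]
baseline = dataset_fingerprint(records)

# A stealthy attacker flips one label in place...
records[0]["label"] = "deny"

# ...and the fingerprint no longer matches the recorded baseline.
tampered = dataset_fingerprint(records) != baseline
print(tampered)  # True
```

A hash only reveals that the data changed, not who changed it or why, which is precisely why attribution and intent remain hard problems.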

But motivation may be beside the point. Any bias is a concern, a structural flaw in the integrity of society's infrastructure. Governments, corporations and individuals are increasingly collecting and using data in diverse ways that may introduce bias.

What this suggests is that bias is a systemic challenge—one requiring holistic solutions. Proposed fixes to unintentional bias in artificial intelligence seek to advance workforce diversity, expand access to diversified training data, and build in algorithmic transparency (the ability to see how algorithms produce results).
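Algorithmic transparency can be made operational with simple outcome audits. The following sketch checks one common fairness criterion, demographic parity (comparing positive-outcome rates across groups); the decision data and the notion that a large gap should trigger review are illustrative assumptions, not an established standard:

```python
# Illustrative bias audit: compare positive-outcome rates across groups.
# Groups and decisions are hypothetical example data.

def positive_rate(outcomes, group):
    """Fraction of positive outcomes for one group."""
    members = [o for g, o in outcomes if g == group]
    return sum(members) / len(members)

def parity_gap(outcomes, groups=("A", "B")):
    """Largest difference in positive-outcome rates between groups."""
    rates = [positive_rate(outcomes, g) for g in groups]
    return max(rates) - min(rates)

# Model decisions as (group, got_positive_outcome) pairs.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

print(round(parity_gap(decisions), 2))  # 0.5 — a gap this large flags the model for review
```

Routine audits like this would surface both unintentional bias and the deliberate attacks described above, since the injected disparity shows up in the model's outputs regardless of its cause.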

There has been some movement to implement these ideas. Academics and industry observers have called for legislative oversight that addresses technological bias. Tech companies have pledged to combat unconscious bias in their products by diversifying their workforces and providing unconscious bias training.

As with technological advances throughout history, we must continue to examine how we implement algorithms in society and what outcomes they produce. Identifying and addressing bias in those who develop algorithms, and the data used to train them, will go a long way to ensuring that artificial intelligence systems benefit us all, not just those who would exploit them.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
