Future of Work

Is artificial intelligence ready for the great rehiring?

Image: A taxi 'for hire' sign (Clem Onojeghuo for Unsplash). Businesses are turning to AI technology to help them recruit in the wake of COVID, but algorithms can multiply inequalities.

Keith E. Sonderling
Commissioner, United States Equal Employment Opportunity Commission

  • Companies looking to hire rapidly and in large numbers are turning to AI technology such as resume-screening programs.
  • Unless used carefully, algorithms could multiply inequalities based on gender, age and race, which have already been exacerbated by the pandemic.
  • Employers who use AI to make hiring decisions must understand exactly what they are buying and how it might affect those decisions.

After a year that witnessed unemployment reach levels unseen since the Great Depression, the Great Rehiring is upon us – and AI is likely to play a significant role in it. Employers, especially those who need to hire rapidly and in large numbers, are turning to AI-driven technologies such as resume-screening programs, automated interviews, and mobile hiring apps to rebuild their workforces. To the millions of employees who were displaced by the COVID-19 pandemic, these technologies can mean a fast track back into the workplace. And to the businesses whose doors were shuttered by the pandemic, these technologies are an efficient path back to profitability.

However, ensuring that hiring technologies are designed and deployed in ways that are bias-free is a challenge under normal economic conditions, and the COVID economy is far from normal. Unless employers use these technologies with care, the demographic distortions wrought by the pandemic on the workforce could be here to stay.

Numerous studies, including the WEF’s 2020 Future of Jobs Report, have documented the ways that the pandemic has exacerbated inequalities in the global workplace. For example, in the early stages of the pandemic, women around the world were unemployed at higher rates than men; and young people have consistently been unemployed at higher rates than older workers. In the United States, Department of Labor studies indicate that people of color have been disproportionately affected by the pandemic. Indeed, unemployment rates for African-Americans and Latinos have consistently outpaced those for Whites by several percentage points, and that trend persists even as Americans begin re-entering the workforce in record numbers.

Image: US unemployment rates by racial group (Congressional Research Service). Unemployment rates for African-Americans and Latinos have consistently outpaced those for Whites by several percentage points.

While AI has the potential to eliminate bias in hiring, it can also multiply inequalities exponentially if it is used carelessly. This is because of the way that AI works: algorithms correlate information from a finite data set in order to make predictions about job applicants. An algorithm’s predictions are only as sound as the data set on which it relies. If an algorithm’s training data consists of the employer’s current workforce, it may simply replicate the status quo. This can be problematic if the current workforce is made up predominantly of employees of one race, gender, or age group, because a hiring algorithm may automatically screen out applicants who do not share those same characteristics.
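
To make the mechanism concrete, the following is a minimal, hypothetical sketch in Python (scikit-learn, with entirely synthetic data invented for illustration; none of the names or numbers come from a real hiring system). A model fitted to a skewed hiring history simply learns that history and scores equally qualified candidates differently by group:

    # Hypothetical illustration: a screening model trained on a skewed
    # hiring history reproduces that history.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5_000

    # Synthetic historical applicants: a qualification score plus a group
    # label (0 or 1). In this invented history, group 1 was hired far less
    # often at the same qualification level.
    qualification = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)
    hired = (qualification + rng.normal(scale=0.5, size=n) - 1.5 * group) > 0

    X = np.column_stack([qualification, group])
    model = LogisticRegression().fit(X, hired)

    # Two new candidates with identical qualifications, differing only in
    # group membership: the model rates the group-0 candidate far higher.
    candidates = np.array([[1.0, 0], [1.0, 1]])
    print(model.predict_proba(candidates)[:, 1])

In a real system the protected characteristic is rarely an explicit input; it tends to leak in through proxies such as the wording of a resume, which is exactly the dynamic in the example below.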

Well-publicized examples of algorithmic hiring gone awry illustrate how AI can both reflect and amplify statistical bias. For example, several years ago Amazon tested a resume-screening program whose training data consisted of resumes belonging to the company’s current employees, along with resumes that had been submitted to the company in the prior ten years. Using machine learning, the program was able to identify patterns in the data set and then use those patterns to rate new applicants based on their resumes. However, because the majority of resumes in the data set belonged to men, the program began automatically downgrading resumes with word combinations indicating that the applicant was a woman.

Put simply, skewed inputs make for skewed outputs, and this may be especially problematic as employers try to build a post-pandemic workforce. In the United States, the COVID workforce has included fewer African-Americans, fewer Latinos, and fewer Asians. So, if a hiring program were to treat race the way that our example treated gender, minorities could be excluded from employment opportunities for which they are otherwise qualified. This could, in turn, spell legal liability for U.S. employers under the laws that my agency, the Equal Employment Opportunity Commission, enforces.

Under U.S. law, an employer need not intentionally discriminate in order to engage in a prohibited hiring practice. A hiring policy that is neutral on its face may violate federal law if it has a disparate impact on certain groups. Thus, it is essential that employers who use AI to make hiring decisions understand the way the technology works. Employers should ask vendors about the robustness of the training data, the mitigation of bias, and the potential for disparate impact. In short, they should understand exactly what they are buying and exactly how it works. A “set-it-and-forget-it” approach simply will not do.
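
As a rough sketch of what such monitoring might look like (illustration only, not legal guidance, with all figures invented), an employer could compare selection rates across groups coming out of an automated screen. The "four-fifths rule", a commonly cited rule of thumb in U.S. disparate-impact analysis, flags any group whose selection rate falls below 80% of the highest group's rate:

    # Rough sketch: compare selection rates after an automated screen.
    # The four-fifths rule of thumb flags any group whose selection rate
    # is below 80% of the highest group's rate. All counts are invented.
    applicants = {"group_a": 400, "group_b": 300}
    advanced = {"group_a": 200, "group_b": 90}  # candidates who passed the screen

    rates = {g: advanced[g] / applicants[g] for g in applicants}
    highest = max(rates.values())

    for g, rate in rates.items():
        ratio = rate / highest
        status = "review for possible disparate impact" if ratio < 0.8 else "ok"
        print(f"{g}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")

A check like this is only a starting point; it does not replace validating the tool or questioning the vendor, but it shows why employers need visibility into what an automated screen is actually doing.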

Historically, recessions have accelerated the rate at which companies have invested in automation. The COVID recession is no exception. The pandemic has transformed not only the ways people work, but the ways companies hire. AI-enabled hiring programs have the potential to be a valuable part of the global recovery from COVID. As an official who is duty-bound to protect the civil rights of all workers in America, I am committed to ensuring that AI does not fall short of that potential.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
