A toolkit for HR professionals to promote positive and ethical human-centred use of AI, for organizations, workers and society.
There has been an explosion in recent years of artificial intelligence-based tools for human resources applications. These tools are designed to take on key HR functions, including hiring, talent retention, training, benefits administration and employee engagement. They have the potential to boost employee productivity, save HR departments time and money, and improve fairness and diversity outcomes.
At the same time, articles warning about the negative consequences of AI almost inevitably point to its use in human resources as a key risk area. There are good reasons for these worries. Employment decisions have high stakes with critical consequences for individuals, organizations and society. Concerns about AI algorithms encoding bias and discrimination are particularly heightened, further complicated by labour and anti-discrimination laws. Errors in the adoption of AI-based HR products can also undermine employee trust, leading to lower productivity and job satisfaction. Finally, unique aspects of the human resources setting, including small datasets, complex social interactions and data privacy concerns, pose challenges to developing effective algorithms.
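To make the bias concern concrete: one widely used screening check for adverse impact in selection outcomes is the "four-fifths rule" from the US EEOC Uniform Guidelines, under which the selection rate for any group should be at least 80% of the rate for the most-selected group. The sketch below is illustrative only (it is not part of the toolkit, and the function names and applicant figures are hypothetical), but it shows how simple the first-pass arithmetic behind such an audit can be.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, total_applicants).

    Returns the selection rate (selected / total) for each group.
    """
    return {group: selected / total for group, (selected, total) in outcomes.items()}


def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 fail the EEOC four-fifths rule of thumb and
    warrant further review of the selection procedure.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())


# Hypothetical applicant data: group -> (hired, applied)
data = {"group_a": (48, 100), "group_b": (30, 100)}

ratio = adverse_impact_ratio(data)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold - warrants further review.")
```

A check like this is only a starting point: passing the four-fifths rule does not establish that an algorithm is fair, and failing it does not by itself establish unlawful discrimination, which is one reason the toolkit pairs technical questions with legal and organizational ones.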
AI is a relatively new player in human resources, and few HR professionals have technical knowledge of how AI systems work. They often face pressure to adopt AI-based tools without the resources to fully assess the consequences of these decisions. The aim of the Human-Centred AI in HR toolkit is to give HR professionals a framework for decisions that are sound for both their organization and society.
The project draws on a multi-stakeholder community to bring together technical knowledge, an understanding of organizational contexts, legal and ethical expertise, and lessons from past experiences. Project community members include HR professionals in private and public settings, vendors of AI tools for HR, People Analytics experts, AI ethicists, employment law experts and academics.
The first component of the resulting toolkit is a short booklet providing an overview of how artificial intelligence systems work and key concerns around their use in the HR context. The remainder of the toolkit is a procurement guide with questions to ask both vendors and one’s own organization. These questions cover trade-offs in algorithm design, the quality and the storage of the data being used, the value proposition and expected level of accuracy, as well as aspects of the organizational context, including the attitudes and buy-in of decision-makers, HR professionals, and employees.
The success of the toolkit depends on the input of all community members to ensure that it addresses the needs of HR professionals, reflects a nuanced understanding of AI algorithms, and leads to decisions that benefit all stakeholders including workers and society as a whole.
How to engage
Pilot: Implement the governance framework in your organization and provide continued feedback.
Fellow: Nominate an individual from your company to work at the Centre to play an integral role in shaping this initiative.
For more information, contact Kay Firth-Butterfield, Head of AI and Machine Learning, at Kay.Firth-Butterfield@weforum.org.