Machine learning applications are already being used to make life-changing decisions, such as who qualifies for a loan and whether someone is released from prison. A new model is needed to govern how those developing and deploying machine learning address the human rights implications of their products. This paper offers comprehensive recommendations on ways to integrate principles of non-discrimination and empathy into machine learning systems.
This White Paper was written as part of the ongoing work of the Global Future Council on Human Rights, a group of leading academic, civil society and industry experts providing thought leadership on the most critical issues shaping the future of human rights.
Further reading

- A glimpse into the future of human rights: As we approach the 70th anniversary of the ratification of the Universal Declaration of Human Rights, where has progress been made, and what challenges remain?
- Here's how we teach machines to be fair: Evidence has shown that AI and machine learning can lead to systematic bias. As its creators, we have a responsibility to put controls in place.