- US activists have raised concerns over racial bias in AI, with wrongful arrests attributed to faulty facial recognition.
- According to the digital advocacy group Algorithmic Justice League, one of the reasons why AI systems are not inclusive is the predominantly white male composition of developer teams.
- As tech becomes an increasingly important part of our society, removing any bias is vital, says Black Lives Matter (BLM) co-founder Ayo Tometi.
As concerns grow over racial bias in artificial intelligence, Black Lives Matter (BLM) co-founder Ayo Tometi urged the tech sector to act fast against perpetuating racism in systems such as facial recognition.
Artificial intelligence is transforming the world and can be applied in diverse sectors, from improving the early detection of diseases to sorting data and solving complex problems, but it also raises concerns.
"A lot of the algorithms, a lot of the data is racist," U.S. activist Tometi, who co-founded BLM in 2013, told Reuters on the sidelines of Lisbon's Web Summit.
"We need tech to truly understand every way it (racism) shows up in the technologies they are developing," she said.
The tech industry has faced a reckoning over the past few years over the ethics of AI technologies, with critics saying such systems could compromise privacy, target marginalised groups and normalise intrusive surveillance.
Some tech companies have acknowledged that some AI-driven facial recognition systems, which are popular among retailers and hospitals for security purposes, could be flawed.
On Wednesday, Facebook announced it was shutting down its facial recognition system citing concerns about its use. Microsoft said last year it would await federal regulation before selling facial recognition technology to police.
Police in the United States and Britain use facial recognition to identify suspects. But a study by the U.S. National Institute of Standards and Technology found the technology is less accurate at identifying African-American and Asian faces than Caucasian faces.
Last year, the first known wrongful arrest based on an incorrect facial recognition match occurred in the United States. The United Nations has cited the case, attributed to the fact that the tool had mostly been trained on white faces, as an example of the dangers posed by a lack of diversity in the tech sector.
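Disparities like the one the NIST study measured are typically surfaced by a per-group accuracy audit: evaluate the same model on test sets labelled by demographic group and compare error rates. The sketch below illustrates the idea with invented numbers; it is not the NIST methodology, and the group names and figures are hypothetical.

```python
# Minimal sketch of a per-demographic accuracy audit.
# All data below is invented for illustration only.
from collections import defaultdict

def group_accuracy(records):
    """records: iterable of (group, predicted_id, true_id) tuples.
    Returns {group: fraction of correct identifications}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy evaluation set (hypothetical): the model errs 5x more
# often on group_b than on group_a.
records = (
    [("group_a", "id1", "id1")] * 98 + [("group_a", "id2", "id1")] * 2 +
    [("group_b", "id1", "id1")] * 90 + [("group_b", "id2", "id1")] * 10
)
rates = group_accuracy(records)
# A gap between groups is the audit's red flag.
print(rates)  # {'group_a': 0.98, 'group_b': 0.9}
```

When the gap traces back to under-representation in the training data, the usual remedies are rebalancing the dataset or reweighting the loss, which is why critics point at who is (and is not) represented in the data.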
How is the World Economic Forum ensuring that artificial intelligence is developed to benefit all stakeholders?
Artificial intelligence (AI) is impacting all aspects of society — homes, businesses, schools and even public spaces. But as the technology rapidly advances, multistakeholder collaboration is required to optimize accountability, transparency, privacy and impartiality.
The World Economic Forum's Platform for Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning is bringing together diverse perspectives to drive innovation and create trust.
- One area of work that is well-positioned to take advantage of AI is Human Resources — including hiring, retaining talent, training, benefits and employee satisfaction. The Forum has created a toolkit Human-Centred Artificial Intelligence for Human Resources to promote positive and ethical human-centred use of AI for organizations, workers and society.
- Children and young people today grow up in an increasingly digital age in which technology pervades every aspect of their lives. From robotic toys and social media to the classroom and home, AI is part of life. By developing AI standards for children, the Forum is working with a range of stakeholders to create actionable guidelines to educate, empower and protect children and youth in the age of AI.
- The potential dangers of AI could also impact wider society. To mitigate the risks, the Forum is bringing together over 100 companies, governments, civil society organizations and academic institutions in the Global AI Action Alliance to accelerate the adoption of responsible AI in the global public interest.
- AI is one of the most important technologies for business. To ensure C-suite executives understand its possibilities and risks, the Forum created the Empowering AI Leadership: AI C-Suite Toolkit, which provides practical tools to help them comprehend AI’s impact on their roles and make informed decisions on AI strategy, projects and implementations.
- Shaping the way AI is integrated into procurement processes in the public sector will help define best practice which can be applied throughout the private sector. The Forum has created a set of recommendations designed to encourage wide adoption, which will evolve with insights from a range of trials.
- The Centre for the Fourth Industrial Revolution Rwanda worked with the Ministry of Information, Communication Technology and Innovation to promote the adoption of new technologies in the country, driving innovation on data policy and AI – particularly in healthcare.
'Solution for the future'
"They (tech companies) have to be very careful because technology has the ability to expedite values that otherwise would come about more slowly," Tometi said. "But technology speeds everything up so the impact will be worse, faster."
Urging software developers to "pay attention to all details", she said they should listen more to Black people.
"Unfortunately I feel like tech companies have a long way to go to build a bridge with the community," she said.
According to the digital advocacy group Algorithmic Justice League, one of the reasons why AI systems are not inclusive is the predominantly white male composition of developer teams.
One of the hundreds of AI-driven startups that attended the Web Summit, Europe's largest tech event, was Brazil's NeuralMind, which specialises in product development.
CEO Patricia Tavares echoed Tometi's concerns, saying that although AI brings benefits to society, there was a need for "legislation to make sure companies use it in a responsible and ethical way".
Not far from NeuralMind's stand, the CEO of health tracking platform Revolab, Kalinas Ovidijus, said his startup's market was the Nordic and Baltic nations and that most of the data it had access to, provided by local hospitals and health centres, was on white people.
He was unsure whether the platform would be able to meet the health needs of people of colour.
"We need solutions for the future, for future challenges, but those solutions need to be very inclusive," Tometi said. "They need to protect marginalised and vulnerable communities - that's their duty." (Reporting by Catarina Demony; Additional reporting by Miguel Pereira and Pedro Nunes; Editing by Andrei Khalip, Alison Willliams and Gareth Jones)