How can AI support diversity, equity and inclusion?
AI must be unbiased and inclusive. Image: Freepik.
- Black History Month takes place during the month of February in the US, Canada and the UK.
- The civil rights movement in the US and emerging technology are closely intertwined, especially from a justice-oriented perspective.
- AI must be ethical and equitable in its approach to ensure it empowers communities and benefits society.
Bias in algorithms is a costly human oversight, costly because of its immense impact on marginalized and underrepresented communities. As artificial intelligence (AI) continues to scale across industries and functions, it can be riddled with unconscious biases that do more harm than good.
While AI ethicists and responsible AI practitioners often speak about the need for more transparency and accountability in the AI lifecycle, Black History Month is often when organizations take stock of the tremendous work that has been done and the work that remains to be done.
In AI We Trust, a podcast co-hosted by Miriam Vogel of EqualAI and Kay Firth-Butterfield of the World Economic Forum, explores the opportunities and challenges of scaling responsible AI. For this year’s Black History Month episode, How AI Does (& Should) Impact Our BHM Celebration, they were joined by Renee Cummings, an AI ethicist, criminologist, Columbia University community scholar, and founder of Urban AI.
The episode focuses on the importance of equity and inclusion in AI, but also on AI's links to justice and civic engagement.
“So much of AI and data science is about civil rights. And when we think about Black History Month, we think about legacy, an American legacy that changed the world. As we think about AI, it’s that an algorithm can create a legacy.”
History of technology in Black communities
Within communities of colour in the US, there is a history of distrust in technology: law enforcement, social services, housing, and healthcare have all displayed disparity and inequity, especially during the COVID-19 pandemic. Cummings explains that even in recent history, many communities were used as “guinea pigs” in research, so levels of distrust are generational and trauma-based. Deploying new technologies such as AI requires rebuilding that trust and restoring justice.
As AI is meant to assist and replicate human interactions, the last thing technology should do is uphold old human biases and perpetuate harmful and inaccurate stereotypes. A system that continues to undermine the futures of individuals, specifically Black, Indigenous, and People of Colour (BIPOC), is not a tool that should be considered for deployment.
Because AI has begun to assist in decisions about who gets scholarships, mortgages, and opportunities to build economic capital, it becomes apparent why AI must, without question, be unbiased and inclusive. When replicating human interactions, it ought to be the best that humanity has to offer, rather than the worst.
AI&You, a nonprofit on a mission to engage and educate marginalized communities about AI, describes bias:
“The AI isn’t aware of the things we are showing it, but it tries to understand them nonetheless, and so if the things we are showing the AI are themselves biased, be it in a racist or xenophobic way, for example, the AI will inevitably reproduce that behaviour.”
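This dynamic is easy to demonstrate. The sketch below is a hypothetical, synthetic example (not drawn from AI&You's materials): a simple scikit-learn classifier is trained on historical labels that were partly decided by group membership, and it then scores two otherwise-identical candidates differently.

```python
# Minimal sketch with synthetic data: a model trained on skewed labels
# reproduces that skew. All names and numbers here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature (e.g. a qualification score) and one protected
# attribute (group 0 or 1) that should be irrelevant to the outcome.
score = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)

# Historical labels: past decisions favoured group 1 independently of the
# score -- this is the bias being "shown" to the model.
label = (score + 1.5 * group + rng.normal(0.0, 0.5, n)) > 0.75

model = LogisticRegression().fit(np.column_stack([score, group]), label)

# Two candidates with identical scores who differ only by group membership.
candidates = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(candidates)[:, 1])
# The group-1 candidate receives a markedly higher approval probability:
# the model has faithfully reproduced the historical bias.
```

Note that simply removing the protected attribute from the inputs is rarely enough in practice, since other features often act as proxies for it.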
A critical and recent example of the weaponization of technology came during the Black Lives Matter protests of 2020, when “surveillance technology was used to track and trace and terrorize many journalists, protestors, and activists – we were seeing many of the challenges when it came to the lack of an equitable approach or the lack of a diverse approach,” describes Cummings.
If AI-powered technology that enables facial and voice recognition and tracing is inherently biased because of its data and design, targeted BIPOC communities are put in harm’s way without even realizing it. If AI is to be scalable, it must be ethical and equitable in its approach.
Public trust technology and civic engagement
Much of AI development happens without the public knowing and, therefore, without the public being involved in the process. Renee Cummings is a strong proponent of civic and community engagement in designing and deploying new and emerging technologies like AI. To build trust, the public needs to be involved in “public trust technology.”
As custodians of the public good, Cummings says, everyone has the collective responsibility to ensure AI is responsible and trustworthy. And since communities are ultimately the producers of the data and knowledge that allow AI systems to run, it makes sense to include their voices in the design phases.
Further, creating a sustainable future requires education and diverse voices for more inclusive innovation, especially as it pertains to public security and safety. If a community, for example, has a standing distrust of law enforcement, then any technology law enforcement deploys, even a beneficial one, will be deemed untrustworthy.
Deploying a technology should never warrant or create a crisis in any community. Rather, a new tool (amongst all the existing tools) that is purposeful and intentional in its inclusivity must empower communities and present a benefit to all of society.
Four ways to ensure AI is inclusive
- Diversity: required throughout the entire AI lifecycle, from ideation, design, and development to deployment and post-launch monitoring (a minimal monitoring sketch follows this list). Appen’s Chief Executive Officer Mark Brayan wrote for the World Economic Forum that “creating AI that’s inclusive requires a full shift in mindset throughout the entirety of the development process.”
- Transparency: not just in the ideation or design phase, but also when choosing the right investments and capital for projects. Being open about what is being designed and, most importantly, for whom and with what impacts is necessary for any new technology.
- Education: teaching and equipping underrepresented communities with the tools and skills to understand (and work) in the AI space. Dr. Brandeis Marshall, Founder of DataEdX, Stanford PACS Practitioner Fellow, and Partner Research Fellow at Siegel Family Endowment, shared in a community conversation that reaching BIPOC communities requires representation: “If you don’t see it, you won’t be it – and that is so vital in order to bring more people into this discipline.”
- Advocacy: supporting and following the work of organizations and individuals in the space such as Black in AI, whose programmes have removed barriers faced by Black people around the world in the field of AI, and the Global AI Action Alliance (GAIA).
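As one concrete illustration of the post-launch monitoring mentioned in the first point above, a team might track a simple fairness indicator such as the ratio of selection rates between groups (the informal “four-fifths” heuristic flags ratios below 0.8). This is a minimal sketch with hypothetical predictions and group labels, not a complete fairness audit:

```python
# Minimal monitoring sketch (hypothetical data): compare selection rates
# across groups. A ratio far below 1.0 flags possible disparate impact
# (the informal "four-fifths" heuristic uses 0.8 as a warning threshold).
import numpy as np

def selection_rate_ratio(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

# Hypothetical model outputs (1 = approved) and group membership.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
grps = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = selection_rate_ratio(preds, grps)
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible disparate impact; investigate before relying on this model.")
```

A single metric like this cannot certify a system as fair, but a ratio that drifts downward after launch is a clear signal to pause and investigate.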