- Marginalized groups are not well represented in the design and implementation of new technology.
- The dominant narrative suggests that technology can be "fixed" to address systemic issues prevalent in society.
- What we need instead is inclusive participation that centres the concerns of these groups.
When discussing the impact of technology on marginalized groups, the focus all too often falls on “fixing technology” to address the harms these groups suffer. This narrative is fundamentally flawed: it rests on the premise that technology is imposed on marginalized groups by some third party that is at best unaware of, and at worst indifferent to, how the technology will affect them.
It also disregards the fact that deploying even a perfectly functioning piece of technology can harm certain groups because of the context in which it is rolled out. It is time we considered the needs of marginalized groups when designing and implementing technology, which means engaging, consulting and encouraging the participation of these groups throughout the process, instead of continuing to feed a false narrative about technological "fixes" for structural power dynamics.
The idea persists that technology that negatively impacts marginalized and racialized groups in our society can be set straight through "fixing". This idea is reinforced even when policymakers seek to stop technology from reproducing society's power structures, as evidenced by the European Commission's recent draft regulation on Artificial Intelligence.
The Commission’s White Paper on Artificial Intelligence, which preceded the draft regulation published in April 2021, stated that, “The use of algorithms can perpetuate or even stimulate racial bias if data to train algorithms does not reflect the diversity of EU society,” and the draft regulation follows that line by seeking to address "bias" in AI technology.
This position, for self-evident reasons, is also very popular with big tech companies. IBM, for example, stated that the shortcomings of facial recognition technology (which research by Joy Buolamwini, then a researcher at the MIT Media Lab, showed works accurately only for white, male faces) were "not with the AI technology itself, per se, but with how the AI-powered facial recognition systems are trained".
Marginalized and racialized groups disproportionately bear the brunt of the increased use of technology in our societies, and this needs to be addressed urgently. However, a focus on technological fixes is not the conversation we should be having.
What's the problem and how can we fix it?
It's true that we have a "brogrammer" problem: a mostly white, male, privileged pool of designers and engineers ingrains its choices, and its perception of the world, into the technology it designs. And, yes, there are serious problems with the datasets used to train this technology, which reproduce the power structures in our societies that keep systems of oppression in place for racialized and marginalized groups. But focusing on these aspects alone feeds a false narrative: that as long as we can get the technology right, as long as we can "fix" it, using it will be fine. This is a misconception: we are not going to undo centuries of structural oppression by tweaking a few datasets.
As EDRi (European Digital Rights), Europe's biggest membership organization for digital rights, said in response to the EU Commission's draft regulation: "There is no ‘quick fix’, no risk assessment sophisticated enough, to undo centuries of systemic racism and discrimination. The problem is not just baked into the technology, but into the systems in which we live."
The deployment of technology can, in and of itself, constitute violence against certain groups. We need only look at the use of technology in the policing of migration and borders, in law enforcement, or in the provision of essential government services to find myriad examples of how it exacerbates hardship, further entrenches power structures, and hampers, sometimes even blocks, access to justice.
To truly address this, we need a fundamental shift in the debate. Instead of accepting and reinforcing the notion of a power dynamic in which technology is designed by a privileged few to then be imposed on marginalized groups, we should focus on ensuring that technology is designed and built around the needs of, and in participation with, individuals and communities.
We already have examples of community participation in, and ownership of, technology that we can build on: an app to safely record and store incidents of police violence, created by a collective led by family members who lost a loved one to police brutality in France (a similar, more institutional app has been developed by the ACLU in the US); a decolonial approach to smart cities in Togo; and feminist infrastructure projects that, instead of focusing on the mechanics of technology, look at how technological practices can be emancipatory tools for all of us.
These practices show that there is a different avenue open to us. It is true, as Laurence Meyer recently wrote, that "technology is not made out of thin air ... it is the product of extraction, of exploitation, of displacement, of violence." But this current reality is human-made, and it can be changed by us. The point is to start making changes now, while we can still create a more just future for everyone in our society. Those who set the terms of policy debates on technology and human rights, regulators, those investing in technology, and those buying and implementing it should not waste any time.