
Why we can’t leave AI to the machines

We must become more aware of the risks of AI if we are to mitigate them. Image: Unsplash/charlesdeluvio

Michael O’Flaherty
Director, EU Agency for Fundamental Rights

  • Most people are still unaware of the extent to which machines are used to make decisions about their lives.
  • All too often, those machines draw on biased information, and so make biased decisions.
  • If we become more aware of the risks, we can better mitigate them and ensure that AI brings us the benefits we were promised it would.

“I would prefer my documents to be checked by a machine. It will not discriminate against me.” That was the answer most people gave when the EU Agency for Fundamental Rights (FRA) asked for their views on automated border controls.

It was 2015 and the use of artificial intelligence (AI) was just picking up.

Fast forward to 2023 and AI is everywhere. It tells us what to watch on Netflix. It helps us develop vaccines. It shortlists the job candidates with the best CVs.

But one thing has not changed: most people are still unaware of the extent to which machines are used to make decisions about their lives – and unaware that those machines can discriminate, not just in theory but in practice.

Social welfare

In the most outrageous demonstration of this, thousands of innocent families in the Netherlands were falsely accused of fraud and forced to return social benefits. Many of these families belonged to ethnic minorities; they were targeted by an algorithm simply because of their background. Pushed into poverty, some of them lost their homes and could no longer take care of their children.

In its report Xenophobic machines, Amnesty International documented how easily the use of technology led to discrimination in this case. The United Nations, for its part, has urged governments around the globe to avoid stumbling, zombie-like, into a digital welfare dystopia.

Policing

But it is not just about digital welfare. The use of AI raises concerns in many other areas, such as healthcare, education, well-being and policing.

For example, authorities in several European countries have developed systems for forecasting burglaries based on historical data. With the help of algorithms, these tools calculate when and where similar crimes are likely to occur in the future. Based on these forecasts, more police patrols can be deployed.

The problem with such AI-powered tools is that they rely on historical crime data that may be skewed. Research shows, for instance, that police in Europe stop young Black men more often than anyone else. More stops generate more records. If those records are then fed into a machine, the machine will keep sending police to the same neighbourhoods – whether or not that reflects actual crime levels.
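To make that feedback loop concrete, here is a minimal simulation sketch in Python. All the numbers are illustrative assumptions, not real crime or policing data: two neighbourhoods have exactly the same true crime rate, but one starts with more records, and patrols follow the records.

```python
import random

random.seed(42)

TRUE_CRIME_RATE = 0.05          # identical in both neighbourhoods (assumption)
recorded = {"A": 60, "B": 40}   # A starts with more records from past over-policing
TOTAL_PATROLS = 100

for year in range(1, 6):
    total = sum(recorded.values())
    # Patrols are allocated in proportion to historical records.
    patrols = {n: round(TOTAL_PATROLS * c / total) for n, c in recorded.items()}
    for n, p in patrols.items():
        # More patrols produce more recorded incidents, even at equal true rates.
        recorded[n] += sum(random.random() < TRUE_CRIME_RATE for _ in range(p))
    share_a = recorded["A"] / sum(recorded.values())
    print(f"year {year}: patrols={patrols}, share of records in A={share_a:.0%}")
```

Neighbourhood A keeps receiving roughly 60% of the patrols year after year, even though the underlying crime rates are identical: the historical skew never washes out.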

Offensive speech detection

Similarly, algorithms can make serious mistakes when detecting offensive speech online. When the Fundamental Rights Agency tested automated hate speech detection models, it quickly found that those algorithms are completely unreliable. Harmless phrases such as ‘I am Jewish’ or ‘I am Muslim’ may get flagged as offensive, and yet genuinely offensive content may easily slip through. This is because the algorithms learn from existing datasets that are not neutral.
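This kind of bias is easy to surface with template testing. The sketch below is a toy Python example, not FRA's actual test setup: it probes a deliberately naive classifier with sentences that differ only in the identity term, mimicking a model whose training data over-represents certain groups in abusive contexts.

```python
TEMPLATE = "I am {}."
IDENTITY_TERMS = ["Jewish", "Muslim", "Christian", "gay", "a woman"]
THRESHOLD = 0.5  # illustrative flagging threshold

def toy_biased_model(text: str) -> float:
    """Naive stand-in for a real detector: the mere mention of certain groups
    raises the score, as it would for a model trained on skewed data."""
    loaded = {"jewish", "muslim", "gay"}  # over-represented in abusive training examples
    words = {w.strip(".,").lower() for w in text.split()}
    return 0.9 if words & loaded else 0.1

def probe(score_fn):
    """Show which harmless self-descriptions the classifier would flag."""
    for term in IDENTITY_TERMS:
        sentence = TEMPLATE.format(term)
        score = score_fn(sentence)
        print("FLAGGED" if score >= THRESHOLD else "ok     ", f"{score:.2f}", sentence)

probe(toy_biased_model)
```

‘I am Jewish.’ gets flagged while ‘I am Christian.’ passes – not because either sentence is offensive, but because of which words happened to dominate the toy model’s training signal.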

This only underlines that even with the best of intentions, it is far too easy for bias to be baked into the algorithm from the start.

Towards a solution?

Would you put your life and the lives of your children in the hands of a technology that you do not understand, whose direction you cannot predict? I would not. That is a dystopian future, and we do not have to go there.

This does not mean that we need to stop using AI. It simply means that we need to understand much better how algorithms work and how they can become biased. We cannot settle for the explanation that AI is a black box that we should just let run at its own astonishing speed.

Instead, we need to insist on transparency. Humans have to stay closely involved in monitoring AI and must always test applications in the context of their use. AI can affect every imaginable human right, from freedom of expression and freedom of assembly to freedom of movement and privacy – and we simply need to know when those rights are in danger.

If we become more aware of the risks, we can better mitigate them and ensure that AI brings us the benefits we were promised it would.

To this end, the agency that I lead has issued several recommendations for technology developers, government agencies, regulators, and legislators, as follows:

1. Make sure that AI respects all human rights: AI can affect many human rights, not just privacy or data protection. Any future AI legislation needs to consider this and create effective safeguards.

2. Assess the impact of AI and test for bias: Organizations should carry out further research and assessments of how AI could harm human rights and enable discrimination. In addition, they need to test for bias, as algorithms can be biased from the outset or develop bias over time. Such biases can be wide-ranging, so tests must consider all grounds for discrimination, including sex, religion and ethnic origin (a minimal example of such a test follows this list).

3. Provide guidance on sensitive data: To assess potential discrimination, data on protected characteristics such as ethnicity or gender may be needed. This requires guidance on when such data collection is allowed. It has to be justified, necessary and with effective safeguards.

4. Create an effective oversight system: A joined-up system is needed to hold businesses and public administrations accountable when using AI. Oversight bodies need to have adequate resources and skills to do the job. Crucially, transparency and effective oversight require improved access to the data and data infrastructures for identifying and combating the risk of bias.

5. Guarantee that people can challenge decisions taken by AI: People need to know when AI is used and how it is used, as well as how and where to complain. Organizations using AI need to be able to explain how their systems take decisions.
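On the second recommendation, a first-pass bias test can be as simple as comparing a system's decision rates across groups. The Python sketch below runs a demographic-parity-style check over a hypothetical decision log; the data, the group labels and the 80% (‘four-fifths’) threshold are illustrative assumptions, not FRA methodology.

```python
from collections import defaultdict

# Hypothetical decision log: (group, approved) pairs standing in for the
# output of any automated system, e.g. CV screening or fraud checks.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def approval_rates(log):
    """Approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in log:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())

print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}
# Illustrative rule of thumb: flag when the worst-off group's approval
# rate falls below 80% of the best-off group's.
print("possible bias – investigate" if ratio < 0.8 else "within threshold")
```

A low disparity ratio does not prove discrimination on its own, but it tells a human reviewer exactly where to look – which is the point of keeping people in the loop.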

It is high time to dispel the myth that human rights block progress. We do not have to choose between human rights and innovation; it is not a zero-sum game.

More respect for human rights means more trustworthy technology. More trustworthy technology is more attractive technology. In the long run, it will be the more successful technology.

If we get this right, we can look forward to an astonishing future. A future that brings us cures for diseases. A future in which public services are delivered with far greater efficiency and quality than today.

All this is possible if we steer AI in the right direction. And if we do not leave the decisions solely to the machines.

