How the responsible use of AI can create safer online spaces

AI algorithms left unchecked can produce digital discrimination.

Steve Durbin
Chief Executive, Information Security Forum

  • AI algorithms have massive implications for human life and the wider society.
  • Ethical dilemmas surrounding AI include digital disparities and its weaponization.
  • Autonomy should be balanced with human oversight, and the responsible use of AI should be elevated so that it can be leveraged to tackle discrimination.

Artificial intelligence (AI) has become an everyday reality and business tool, spurred by advances in computing, data science and the availability of huge data sets. Big tech companies such as Google, Amazon and Meta are now developing AI-based systems. The technology can mimic human speech, detect cancer, predict criminal activity, draft legal contracts, solve accessibility problems and accomplish some tasks better than humans can. For businesses, AI promises to predict outcomes, improve processes and deliver efficiencies at substantial cost savings.

Still, there are growing concerns about AI.

AI algorithms have become so powerful that some experts have even labelled AI sentient. Any corruption, tampering, bias or discrimination can have massive implications for organizations, human life and society.

AI algorithms and digital discrimination

AI decisions increasingly influence and impact people’s lives at scale. Used irresponsibly, AI can exacerbate existing human biases and enable discriminatory practices such as racial profiling, behavioural prediction or sexual orientation identification. This inbuilt prejudice occurs because AI is only as good as the data it is trained on, and that data can be susceptible to human biases.

Biases can also occur when machine learning algorithms are trained and tested on data that under-represent certain subpopulations, such as women, people of colour or people in certain age demographics. For example, studies show that people of colour are particularly vulnerable to algorithmic bias in facial recognition technology.
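
One practical check is to measure a model’s performance separately for each subgroup in the test data; a large gap between groups is a warning sign of under-representation. Below is a minimal sketch of such a check in Python, assuming a classifier’s predictions, true labels and group labels are already at hand (the function name and data are illustrative, not drawn from any real system).

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute classification accuracy separately for each subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Illustrative data: true labels, model predictions and a demographic attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["a", "a", "b", "b", "a", "a", "b", "b"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'a': 1.0, 'b': 0.5} -- a gap this large warrants investigation
```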

Biases can also occur in usage. For example, an AI algorithm designed for a particular application may be applied to purposes it was not built for, resulting in misinterpretation of its outputs.

Validating AI algorithm performance

AI-led discrimination can be abstract, unintuitive, subtle, intangible and difficult to detect. The source code may be restricted from the public, or auditors may not know how an algorithm is deployed. The complexity of getting inside an AI algorithm to see how it has been written and how it responds cannot be overestimated.

Current privacy laws rely on notice and choice, producing a barrage of notifications that ask consumers to agree to lengthy privacy policies they seldom read. If the same notice-and-choice approach were applied to AI, it would have serious consequences for the security and privacy of consumers and society.

AI as a weapon

While true AI-powered malware may not yet exist, it is not far-fetched to assume that artificially intelligent malware will amplify attackers’ capabilities. The possibilities are endless: malware that learns from its environment to identify and exploit new vulnerabilities, tools that test attacks against AI-based security, or malware that poisons AI systems with false information.

Digital content manipulated by AI is already being used to create hyper-realistic, synthetic copies of individuals in real time, known as deepfakes. Attackers will leverage deepfakes to mount highly targeted social engineering attacks, cause financial damage, manipulate public opinion or gain competitive advantage.

Mitigating AI algorithm-related risks

Because AI decisions increasingly influence and impact people’s lives at scale, enterprises have a moral, social and fiduciary responsibility to manage AI adoption ethically. They can do this in several ways.

1. Translate ethics into metrics

Ethical AI adheres to well-defined ethical guidelines and fundamental values, such as individual rights, privacy, non-discrimination and, importantly, non-manipulation. Organizations must establish clear principles for identifying, measuring, evaluating and mitigating AI-led risks, then translate those principles into practical, measurable metrics and embed them in everyday processes.
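
One way to make a value such as non-discrimination measurable is a metric like demographic parity difference: the gap between groups in the rate of favourable decisions. The sketch below is a minimal, illustrative implementation; the 0.10 tolerance is an assumed policy choice, not an industry standard.

```python
def demographic_parity_difference(decisions, groups):
    """Gap between the highest and lowest favourable-decision (1) rates across groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Illustrative loan decisions (1 = approved) for applicants in two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.10:  # assumed tolerance set by the organization's ethics policy
    print("Gap exceeds policy tolerance -- escalate for review.")
```

Metrics like this can then be tracked alongside conventional performance measures in everyday review processes.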

2. Understand sources of bias in AI algorithms

Having the right tools to investigate sources of bias and understand how fairness affects decision-making is critical to developing ethical AI. Identify the systems that use machine learning, determine how critical they are to the business, and implement processes, controls and countermeasures against AI-induced biases.

3. Balance autonomy with human oversight

Organizations should set up a cross-domain ethics committee that oversees the ongoing management and monitoring of risk introduced by AI systems within the organization and the supply chain. The committee must comprise people from diverse backgrounds to ensure sensitivity towards the full spectrum of ethical issues.

Algorithms must be designed with expert input, situational knowledge and awareness of historical biases. Human authorization processes must be mandated in critical areas, such as financial transactions, to prevent them from being compromised by malicious actors.
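
In practice, mandated human authorization often takes the form of a gate that blocks automated execution above a risk threshold until a reviewer signs off. The following sketch is hypothetical; the threshold, field names and scores are assumptions for illustration only.

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # assumed: transactions above this need a human

@dataclass
class Transaction:
    amount: float
    payee: str
    model_risk_score: float  # produced upstream by an AI model

def execute(tx: Transaction, human_approved: bool = False) -> str:
    # Autonomy below the thresholds; mandatory human oversight above them.
    if (tx.amount > APPROVAL_THRESHOLD or tx.model_risk_score > 0.8) and not human_approved:
        return "BLOCKED: awaiting human authorization"
    return f"EXECUTED: {tx.amount} to {tx.payee}"

print(execute(Transaction(500, "vendor-a", 0.1)))     # runs autonomously
print(execute(Transaction(50_000, "vendor-b", 0.3)))  # blocked for review
print(execute(Transaction(50_000, "vendor-b", 0.3), human_approved=True))
```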

4. Empower employees and elevate responsible AI

Nurture a culture that empowers individuals to raise concerns over AI systems without stifling innovation. Build internal trust and confidence in AI by addressing roles, expectations and accountabilities transparently. Recognize the need for new roles and actively upskill, reskill or hire.

Users can also be empowered through better controls and access to recourse when needed. Strong leadership is pivotal to empowering employees and elevating responsible AI as a business imperative.

5. Leverage AI to tackle discrimination

Procedural checks, the traditional method of assessing human fairness, can benefit from AI: algorithms can be run alongside human decision processes, their results compared and the reasoning behind machine-led decisions explained. Another example is MIT’s research initiative on combating systemic racism, which develops and uses computational tools to advance racial equity across many sectors of society.
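
As a sketch of what running algorithms alongside human decision processes can look like, the snippet below compares machine and human decisions case by case and flags disagreements for audit. The case data and field names are purely illustrative.

```python
def compare_decisions(cases):
    """Report the model-human agreement rate and the cases where they diverge."""
    disagreements = [c for c in cases if c["model"] != c["human"]]
    agreement = 1 - len(disagreements) / len(cases)
    return agreement, disagreements

# Illustrative parallel run: each case decided by both the model and a human.
cases = [
    {"id": 1, "model": "approve", "human": "approve"},
    {"id": 2, "model": "deny", "human": "approve"},
    {"id": 3, "model": "approve", "human": "approve"},
    {"id": 4, "model": "deny", "human": "deny"},
]

agreement, disagreements = compare_decisions(cases)
print(f"Agreement rate: {agreement:.0%}")
for case in disagreements:
    print(f"Case {case['id']}: model={case['model']}, human={case['human']} -- audit")
```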

To summarize, AI models must be trustworthy, fair and explainable by design. As AI becomes more democratized and new governance models take shape, more AI-enabled innovations are on the horizon.
