Emerging Technologies

Why we need to care about responsible AI in the age of the algorithm

Businesses need to take responsible AI seriously to remain competitive and avoid liability.

Image: Unsplash/Christopher Burns

Ayesha Gulley
Public Policy and Governance Associate, Holistic AI

  • Artificial intelligence (AI) is one of the world's most discussed technology trends and is forecast to increase global GDP by $15.7 trillion by 2030.
  • But the transformative benefits of the nascent technology are accompanied by risks, and we need to commit to principles of responsible AI.
  • Companies will soon need to comply with global AI regulations and will need to take a responsible approach to remain competitive and avoid liability.

Artificial Intelligence (AI) is one of the most discussed technology trends enabling business growth today. By 2030, AI is estimated to increase global GDP by $15.7 trillion – more than the current output of China and India combined.

But with great power comes great responsibility. As the transformative benefits of AI become apparent, so too do the risks. Algorithms can introduce bias, preventable errors and poor decision-making, causing mistrust among the very people they are intended to assist. Out of concern for the unprecedented pace of AI development, many organizations have begun to commit to principles of responsible AI.

Responsible AI is an emerging area of AI governance covering the ethical, moral and legal values involved in the development and deployment of beneficial AI. As a governance framework, responsible AI documents how a specific organization addresses the challenges of AI in the service of good for individuals and society.

In the past several years, nearly every organization connected to technology policy has proposed a set of guiding principles for the use of AI, including Google, IBM and the Organisation for Economic Co-operation and Development.

Increasing concerns about the risks of AI

However, the growing interest in AI has been accompanied by concern over unintended consequences, with risks affecting both the technical workings of a system and the governance practices around it.

Image: Holistic AI

Study after study has shown that AI-driven decision-making can lead to biased outcomes, from racial profiling in predictive policing algorithms to sexist hiring decisions. As such, recent years have seen governments worldwide tightening regulations targeting AI, spurring the adoption of responsible, ethical or trustworthy AI initiatives.

At the European level, the European Commission has established a High-Level Expert Group on Artificial Intelligence, tasked with developing an integrative framework for responsible and trustworthy AI, and has also proposed the AI Liability Directive.

The directive aims to make it easier to sue companies for harm caused by AI, as part of a wider push to prevent companies from developing and deploying harmful systems. It adds an extra layer to the proposed EU AI Act, which will require extra checks for “high-risk” uses of AI, such as in policing, recruitment or healthcare.

However, Europe is not alone in its efforts; the White House Office of Science and Technology Policy has recently published a Blueprint for an AI Bill of Rights, which outlines the US government’s vision for AI governance to prevent harm, and China has proposed a suite of legislation to regulate different applications of AI. As these regulatory regimes come into effect, businesses will need to alter how they operate on a global scale.

Responsible AI lagging behind breakthroughs

Many see the appeal of making AI more responsible, but few are getting it right.

The rapid pace of AI development shows no sign of slowing. Breakthroughs come fast, quickly outpacing the speed of regulation. In the past year alone, we have seen a range of developments, from deep learning models that generate images from text to large language models capable of answering almost any question you can think of. Although the progress is impressive, keeping pace with the potential harms of each new breakthrough is a relentless challenge.

The trouble is that many companies cannot even see that they have a problem to begin with, according to a report released by MIT Sloan Management Review and Boston Consulting Group.

AI was a top strategic priority for 42% of the report’s respondents, but only 19% said their organization had implemented a responsible AI programme. This gap increases the possibility of failure and exposes companies to regulatory, financial and reputational risks.

Responsible AI is more than a check box exercise or the development of an add-on feature. Organizations will need to make substantial structural changes in anticipation of AI implementation to ensure that their automated systems operate within legal, internal and ethical boundaries.

Customers, employees and shareholders expect organizations to use AI responsibly, and governments are demanding it. This is especially critical now, as more and more stakeholders voice concerns about brand reputation and the way companies use AI.

Increasingly, we are seeing companies make social and ethical responsibility a key strategic priority. The major challenge is how to responsibly maximize AI's upside while safeguarding against its dangers.

Auditing to ensure responsible AI

Minimizing or avoiding harmful and unintended consequences over the lifespan of an AI project requires a comprehensive understanding of how responsible AI principles apply during the design, implementation and maintenance of AI applications.

AI auditing is the research and practice of assessing, mitigating and assuring an algorithm’s safety, legality and ethics. The purpose of AI auditing is to assess a system by mapping out its risks in both its technical functionality and its governance structure, and recommending measures that can be taken to mitigate these risks.

When assessing a system, it is important to consider the following five factors:

  • Efficacy: whether a system does what it is meant to do and performs as expected.
  • Robustness or reliability: systems should be reliable, safe and secure, and not vulnerable to tampering or to compromise of the data they are trained on.
  • Bias: systems should avoid the unfair treatment of individuals or groups (a simple check is sketched below).
  • Explainability: systems should produce decisions or suggestions that can be understood by their users, developers and regulators.
  • Privacy: systems should be trained following data-minimization principles and should adopt privacy-enhancing techniques to mitigate leakage of personal or critical data.
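
To make the bias factor concrete, here is a minimal sketch, in Python, of one narrow check an audit might run: comparing favourable-outcome rates across groups and applying the "four-fifths rule", a common first screen for adverse impact in hiring contexts. The helper functions and sample data are hypothetical illustrations, not any organization's actual audit methodology.

```python
# Minimal, illustrative bias check for a binary decision system.
# Hypothetical data and thresholds; not a complete audit.
from collections import defaultdict

def selection_rates(records):
    """Return the favourable-outcome rate for each group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        favourable[group] += decision  # decision is 1 (favourable) or 0
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 fail the 'four-fifths rule' often used as a
    first screen for adverse impact."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: (group, decision) pairs.
sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = selection_rates(sample)
ratio = disparate_impact(rates)
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f} "
      f"({'fails' if ratio < 0.8 else 'passes'} the four-fifths rule)")
```

In practice, a metric like this would be only one datapoint among many, computed on real production data and weighed alongside the other four factors above.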
While everyone wants responsible AI, few seem to have it. Yet the importance of ensuring responsibility in AI cannot be overstated: it helps assure that an AI system will be efficient, operate according to ethical standards and avoid reputational and financial damage down the road.

Beyond that, businesses will soon need to comply with global AI regulations and will need to take a more responsible approach to remain competitive and avoid liability. The best way forward is to start early and collaborate with a broad set of key stakeholders for a more holistic approach to responsible AI.
