AI can turbocharge profits. But it shouldn't be at the expense of ethics

AI promises great benefits – but we must reckon with the challenges.


Raj Verma
Chief Executive Officer, SingleStore
This article is part of: World Economic Forum Annual Meeting
  • In the rush to harness AI for profit, business shouldn't neglect the possible negative impact of the technology.
  • Misinformation, privacy and bias are all areas requiring attention.
  • Technological advancements are key to confronting global challenges – this requires innovation and guardrails.

As the adoption of generative AI rapidly expands across all corners of society, businesses of all kinds are poised to become quicker, more creative and smarter. We're seeing this everywhere: Casinos use AI to better predict customers' gambling habits and lure them with irresistible promotions. AI guides product designers as they choose better and more efficient materials. Many firms are even beginning to use the technology to predict payments based on scanned invoices. According to Goldman Sachs, the widespread adoption of AI could lift annual productivity growth by 1.5 percentage points over a 10-year period.


However, this rapid expansion should also be met with caution: Businesses must be careful not to expand their adoption of AI purely for profit. They must realize that, like many fast-emerging technologies before it, unbridled use of AI could have dangerous consequences. Generative AI has the potential to turbocharge the spread of disinformation, worsen social media addiction among teenagers, and perpetuate existing social biases. These consequences are not only harmful for society at large, but bad for businesses that work tirelessly to generate trust among customers – only to misstep and suffer reputational damage from which it can be impossible to recover. As firms try to adapt to the quickly evolving AI landscape, how can businesses use this groundbreaking technology ethically?

1. Prioritizing data privacy

First, firms must prioritize protecting data – their own, their clients' and their customers'. To do so, they must understand the risks of using public large language models (LLMs). LLMs are the backbone of generative AI: algorithms that feed on large amounts of data to make predictions and generate content. Public LLMs are trained on generic, publicly available datasets and are accessible to anyone. If prompted cleverly, they may reveal or leak sensitive data used in their training – or reproduce biases in it. And because LLMs lack a delete button, they cannot unlearn data, which makes the risk of leakage permanent. Regulated industries, such as financial services, should be particularly wary of using public LLMs. A leak that exposes financial information, such as bank account numbers and transaction details, could result in client identity theft and even fraud, not to mention hefty legal fees for banks.
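One practical safeguard, sketched below in Python, is to scrub obvious personally identifiable information from prompts before they ever reach a public LLM. This is a minimal illustration: the regex patterns and placeholder labels are assumptions for the sketch, not a complete PII taxonomy or a substitute for a proper data-loss-prevention tool.

```python
import re

# Illustrative patterns only: real PII detection needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ACCOUNT": re.compile(r"\b\d{8,17}\b"),  # bank-account-like digit runs
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders before the
    prompt leaves the company's perimeter."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Refund jane.doe@example.com from account 123456789"))
# -> Refund [EMAIL] from account [ACCOUNT]
```

Because an LLM cannot unlearn what it has seen, filtering at the prompt boundary is cheap insurance: data that never reaches the model can never leak from it.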

To mitigate this risk, companies can use private LLMs, which are trained on a company’s specific, private corpus of data and can only be accessed by authorized stakeholders. With private LLMs, firms can reap the benefits of generative AI – for example, via chatbots developed on customers’ own data — without the risk of sending the data to third parties. And because these LLMs are trained on specific information and allow more control over update cycles, they are less likely to “hallucinate”, or provide “irrelevant, nonsensical, or factually incorrect” responses.

2. Mitigating AI bias

At the core of AI is data. Without it, AI is useless – but with the wrong data, the technology can also be dangerous. Popular generative AI systems like ChatGPT rely on large, publicly available data sources, some of which reflect historical and social biases. AI systems trained on these datasets end up replicating those biases. Consider our earlier example of a bank: Algorithms trained on historical data generated by discriminatory practices (for example, redlining in 1930s Chicago) can lead banks to deny loans to marginalized communities. Similarly, insurance companies can end up charging certain groups higher premiums, and credit bureaus can misrepresent their credit scores.

The best way to combat AI bias is to incorporate humans in LLM training processes. This human-AI relationship works in two directions: Humans can monitor AI systems, providing input, feedback and corrections that enhance their performance, and the trained AI can in turn help humans detect bias in their own behaviour. For example, as humans supply AI with the right data and teach it to phase out bias through corrections, AI can be trained to alert hiring managers to hidden discriminatory practices in their companies' hiring decisions.
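Monitoring of this kind can start very simply. The sketch below computes a demographic-parity ratio over a system's decisions – one common way to surface the loan-denial disparity described above. The group names, counts and the 0.8 "four-fifths" threshold are illustrative assumptions, not a production fairness audit:

```python
# Minimal demographic-parity check on a system's approve/deny decisions.
# `decisions` maps a group label to (approved, total) counts.

def parity_ratio(decisions: dict) -> float:
    """Ratio of lowest to highest group approval rate (1.0 = parity)."""
    rates = [approved / total for approved, total in decisions.values()]
    return min(rates) / max(rates)

decisions = {"group_a": (80, 100), "group_b": (40, 100)}
ratio = parity_ratio(decisions)
print(f"parity ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
print("flag for human review" if ratio < 0.8 else "within threshold")
```

A check like this does not explain *why* the disparity exists – that is where the human in the loop comes in – but it tells reviewers where to look.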


3. Implementing a framework for transparency

Firms must ensure that their use of AI complies with regulatory frameworks, including data protection, cybersecurity and corporate governance laws. But how can firms comply with oversight mechanisms that have yet to be designed? The answer lies in transparency. Transparency is key to generating trust and to overcoming the fear that AI could manipulate, or even dictate, our lives. The EU's High-Level Expert Group on Artificial Intelligence (AI HLEG) has developed an assessment list for trustworthy AI that firms can use as a guide. It includes these three tenets:

  • Traceability: Is the process that developed the AI system accurately and properly documented? For example, can you trace back which data was used by the AI system to generate decisions?
  • Explainability: Can the reasoning behind the AI system’s solutions or decisions be explained? More importantly, can humans understand the decisions made by an AI system?
  • Communication: Have the system's potential and limitations been communicated to users? For example, should you tell users that they are communicating with an AI bot and not a human?
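
A minimal sketch of what the first and third tenets could look like in practice: logging, for every AI-generated decision, the model version, training-data snapshot and inputs behind it, plus whether the user was told an AI was involved. The field names and values below are hypothetical, not a standard AI HLEG schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str     # traceability: which model produced the decision
    dataset_id: str        # traceability: which data snapshot trained it
    inputs: dict           # the features the decision was based on
    decision: str          # the outcome communicated to the user
    disclosed_as_ai: bool  # communication: was the user told an AI decided?
    timestamp: str

record = DecisionRecord(
    model_version="credit-scorer-v3",
    dataset_id="loans-2023-q4",
    inputs={"income": 52000, "history_months": 36},
    decision="approved",
    disclosed_as_ai=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(asdict(record)["decision"])
```

Records like this are what make the explainability question answerable after the fact: an auditor, a regulator or a customer can ask which data and which model stood behind a given decision.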

AI is one of the most promising technological tools ever developed – not because it can help us boost profits and productivity (though it certainly will), but because of its enormous potential to help us become better humans. We must never lose sight of the fact that humans, including those in the private sector, are at the wheel of AI's development. It is our responsibility to develop it responsibly – that is good for society and good for business.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
