It’s time to stop talking about ethics in AI and start doing it

The race to build the first fully autonomous vehicle has brought ethical questions front and centre. Image: REUTERS/Amir Cohen

Dharmesh Syal
Chief Technology Officer, BCG Digital Ventures
This article is part of: World Economic Forum Annual Meeting

Everyone from Stephen Hawking to Bill Gates and Elon Musk has discussed the philosophy of AI. Now that companies around the world are creating AI products at an incredible rate, it’s increasingly urgent that we stop talking about how to build ethical safeguards into AI and start doing it.

The race to build the first fully autonomous vehicle (AV) has brought this issue front and centre. The death of a pedestrian struck by a self-driving test vehicle in March 2018 raised concerns not only about the safety of AVs but also about their ethical implications. How do you teach a machine to “think” ethically? And who decides who lives and who dies? While this is the most visible (and pressing) example, ethical questions about AI are all around us.

Why are ethics so important?

The areas where AI stands to benefit us the most also have the most potential to harm us. Take healthcare, an industry where decisions are not always black and white. AI is far from being able to make complex diagnoses or replicate the “gut feelings” of a human. Even if it could, would AI doctors be ethical? Could AI be trained to increase profits at the patient’s expense? And in the case of malpractice, who would the patient sue? The robot?

AI has been projected to manage $1 trillion in assets by 2020. As in healthcare, not all financial decisions can be made on logic alone. The variables that go into managing a portfolio are complex, and one false move could lead to millions in losses. Could AI be used to exploit customer behaviour and data? What about hacking? Would you trust a machine to manage your money?

AI in warfare raises the gravest ethical concerns of all. Fully autonomous “mobile intelligent entities” are coming, and they promise to change warfare as we know it. What happens when an AI-guided missile makes a mistake? How many errors are “acceptable”?

These are the questions that keep me up at night. The good news is, it’s not too late; we’ve only seen a glimpse of what AI is capable of. The only way to make sure we don’t create a monster that could turn against us is to incorporate ethical safeguards into the architecture of the AI we’re creating today.

Here are three strategies anyone currently building AI should consider:

1. Bring in a human in sensitive scenarios

In all the scenarios above, the question remains: when and to what extent do we bring in a human? While there’s no definitive answer, AI that employs a “human-in-the-loop” (HITL) system, where machines perform the work and humans assist only when there is uncertainty, yields more accurate algorithms. Without that check, a machine that encounters a misleading set of metadata could learn the wrong lessons, ones a reasonable human would know to avoid.
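To make this concrete, here is a minimal Python sketch of uncertainty-based deferral: the machine decides on its own when it is confident and hands the case to a human reviewer when it is not. The predict and ask_human callables, the feature dictionary and the 0.9 threshold are all illustrative assumptions, not any particular product’s interface.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "machine" or "human"

def hitl_decide(
    predict: Callable[[dict], Tuple[str, float]],  # hypothetical model: features -> (label, confidence)
    ask_human: Callable[[dict], str],              # hypothetical escalation path to a reviewer
    features: dict,
    threshold: float = 0.9,                        # below this confidence, the machine defers
) -> Decision:
    label, confidence = predict(features)
    if confidence >= threshold:
        # Confident case: the machine decides unaided.
        return Decision(label, confidence, decided_by="machine")
    # Uncertain case: defer to a human; their answer can later be
    # fed back into training so the model improves over time.
    return Decision(ask_human(features), confidence, decided_by="human")

# Stand-in model and reviewer, just to show the flow:
result = hitl_decide(
    predict=lambda f: ("approve", 0.72),
    ask_human=lambda f: "deny",
    features={"amount": 1200},
)
print(result)  # Decision(label='deny', confidence=0.72, decided_by='human')
```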

Establishing ethical practices around metadata will give structure to the HITL scenario and potentially automate the “human factor” over time. Human conscience and moral code must also be codified as part of the AI metadata that drives interactions and sometimes decisions.

2. Put safeguards in place so machines can self-correct

We’ve all read about Facebook’s fake news problem, but the tech giant recently came under fire once again, prompting it to remove more than 5,000 targeting options from its ad platform that could be used to discriminate against certain ethnicities and religious groups. These kinds of ethical features should ideally be integrated as the product is being built, but it’s better late than never.

I had the opportunity to do this firsthand when BCG, BCG Digital Ventures and a Fortune 100 company partnered to build Formation, an AI platform for personalized experiences. During the product build, we implemented safeguards at three checkpoints to ensure we did not breach users’ trust.
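Formation’s internal design is not public, but the general pattern of checkpoint-style safeguards can be sketched. A hypothetical Python version with one guard at data ingestion, one before a personalization decision and one for auditing afterwards might look like this; every name and rule below is an assumption for illustration, not the platform’s actual code.

```python
PROTECTED_ATTRIBUTES = {"ethnicity", "religion", "health_status"}  # illustrative list

def checkpoint_ingest(record: dict) -> dict:
    """Checkpoint 1: strip direct identifiers before data enters the pipeline."""
    direct_identifiers = {"name", "email", "phone"}
    return {k: v for k, v in record.items() if k not in direct_identifiers}

def checkpoint_features(features: dict) -> dict:
    """Checkpoint 2: refuse to personalize on protected attributes."""
    leaked = PROTECTED_ATTRIBUTES & features.keys()
    if leaked:
        raise ValueError(f"protected attributes in model input: {sorted(leaked)}")
    return features

def checkpoint_audit(decision: str, features: dict, audit_log: list) -> str:
    """Checkpoint 3: log every automated decision so humans can review it later."""
    audit_log.append({"decision": decision, "inputs": sorted(features)})
    return decision
```

The point is less the specific rules than where they sit: each checkpoint runs automatically, so a breach of users’ trust has to get past three separate gates rather than one.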

3. Create an ethics code

This may seem obvious, but you’d be surprised how few companies are actually doing this. Whether it’s about data privacy, personalization or deep learning, every organization should have a set of standards it operates by. According to Apple CEO Tim Cook, "the best regulation is self-regulation”. For Apple, this means carefully examining every app on its platform to make sure it isn’t violating users’ privacy.

This is not a one-size-fits-all solution; the ethics code you enact must be dictated by the way you’re using AI. If your company breaks (or comes close to breaking) a standard, employees should be encouraged to raise the flag, and you, as its leader, are responsible for taking those concerns seriously.

Here are some recommendations for creating an ethics code (a code sketch of the first pledge follows the list):

⦁ When personal data is at stake, we pledge to aggregate and anonymize it to the best of our ability, treating consumers’ data as we would our own.

⦁ We pledge to enact safeguards at multiple intervals in the process to ensure the machine isn’t making harmful decisions.

⦁ We pledge to retrain all employees who have been displaced by AI in a related role.
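As a minimal sketch of the first pledge, identifiers might be pseudonymized with a salted one-way hash, and results reported only for groups above a minimum size, a simple k-anonymity-style threshold. The salt handling and the cut-off of five are illustrative assumptions.

```python
import hashlib
from collections import Counter

SALT = b"replace-with-a-secret-from-a-vault"  # illustrative; never hard-code a real salt
MIN_GROUP_SIZE = 5  # groups smaller than this are suppressed, not reported

def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()[:16]

def aggregate_counts(records: list, group_key: str) -> dict:
    """Report per-group counts, suppressing groups small enough to identify individuals."""
    counts = Counter(r[group_key] for r in records)
    return {group: n for group, n in counts.items() if n >= MIN_GROUP_SIZE}
```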

As the architects of the future, we have a responsibility to build technologies that enhance human lives, not harm them. We have an opportunity now to take a step back and really understand how these product decisions affect people. By doing so, we can collectively become stewards of an ethical future.
