If Hollywood has taught us anything, it's that robots need ethics

It's time to create an ethical framework for artificial intelligence, argues Susan Leigh Anderson. Image: REUTERS/Sergei Karpukhin

Susan Leigh Anderson
Professor Emerita of Philosophy, University of Connecticut

The prospect of artificial intelligence (AI) has long been a source of knotty ethical questions. But the focus has often been on how we, the creators, can and should use advanced robots. What is missing from the discussion is the need to develop a set of ethics for the machines themselves, together with a means for machines to resolve ethical dilemmas as they arise. Only then can intelligent machines function autonomously, making ethical choices as they fulfill their tasks, without human intervention.

AI landscape: global quarterly financing history. Image: CB Insights

There are many activities that we would like to be able to turn over entirely to autonomously functioning machines. Robots can do jobs that are highly dangerous or exceedingly unpleasant. They can fill gaps in the labor market. And they can perform extremely repetitive or detail-oriented tasks – work better suited to robots than to humans.

But no one would be comfortable with machines acting independently, with no ethical framework to guide them. (Hollywood has done a pretty good job of highlighting those risks over the years.) That is why we need to train robots to identify and weigh a given situation’s ethically relevant features (for example, those that indicate potential benefits or harm to a person). And we need to instill in them the duty to act appropriately (to maximize benefits and minimize harm).

Of course, in a real-life situation, there may be several ethically relevant features and corresponding duties – and they may conflict with one another. So, for the robot, each duty would have to be relativized and considered in context: important, but not absolute. A duty that is prima facie vital could, in particular circumstances, be superseded by another duty.

The key to making these judgment calls would be overriding ethical principles instilled in the machine before it went to work. Armed with that critical perspective, machines could handle unanticipated situations correctly, and even justify their decisions.
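
To see how such context-sensitive duties might be operationalized, consider a minimal sketch in Python. Everything in it is hypothetical – the duty names, the weights, the scoring rule, and the eldercare-style scenario – and it is meant only to illustrate the structure of the judgment, not any deployed system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Duty:
    """A prima facie duty with a context-dependent weight: important, not absolute."""
    name: str
    weight: float

def choose_action(actions):
    """Pick the action with the best weighted duty balance and explain the choice."""
    def score(action):
        # Each candidate action records how strongly it satisfies (+)
        # or violates (-) each duty; the weighted sum relativizes them,
        # so no single duty is treated as absolute.
        return sum(duty.weight * effect for duty, effect in action["effects"])

    best = max(actions, key=score)
    justification = "; ".join(
        f"{duty.name} {'upheld' if effect > 0 else 'overridden'}"
        for duty, effect in best["effects"]
    )
    return best["name"], justification

# Hypothetical eldercare scenario: the patient has refused a needed medication.
# The duties, weights, and effect values below are illustrative only.
prevent_harm = Duty("prevent harm", weight=2.0)
respect_autonomy = Duty("respect autonomy", weight=1.0)

candidates = [
    {"name": "notify the caregiver",
     "effects": [(prevent_harm, 1.0), (respect_autonomy, -0.5)]},
    {"name": "accept the refusal",
     "effects": [(prevent_harm, -1.0), (respect_autonomy, 1.0)]},
]

action, why = choose_action(candidates)
print(action, "->", why)
# notify the caregiver -> prevent harm upheld; respect autonomy overridden
```

The weighted sum is only one possible way to relativize duties; the essential point is that the machine's choice is driven by explicit, inspectable principles that also yield a justification it can report.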

Which principles a machine requires would depend, to some extent, on how it is deployed. For example, a search and rescue robot, in fulfilling its duty of saving the most lives possible, would need to understand how to prioritize, based on questions like how many victims might be located in a particular area or how likely they are to survive. These concerns don’t apply to an eldercare robot with one person to look after. Such a machine would instead have to be equipped to respect the autonomy of its charge, among other things.
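
As a toy illustration of that kind of prioritization, a search and rescue planner might rank areas by expected lives saved. This is a deliberately simplified sketch; the zones, victim counts, and survival probabilities are invented.

```python
# Simplified triage sketch for a search and rescue robot: rank areas by
# expected survivors = estimated victims x probability of survival.
# All zones, counts, and probabilities here are hypothetical.
areas = [
    {"zone": "collapsed mall",  "victims": 12, "survival_prob": 0.3},
    {"zone": "flooded street",  "victims": 5,  "survival_prob": 0.9},
    {"zone": "burning offices", "victims": 6,  "survival_prob": 0.5},
]

for area in areas:
    area["expected_saved"] = area["victims"] * area["survival_prob"]

# Fulfilling the duty to save the most lives means searching the
# highest expected-yield zones first.
for area in sorted(areas, key=lambda a: a["expected_saved"], reverse=True):
    print(f"{area['zone']}: expected lives saved {area['expected_saved']:.1f}")
```

A real system would weigh many more ethically relevant features – responder risk, accessibility, time pressure – but the duty would still be fulfilled through the same kind of explicit prioritization.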

We should permit machines to function autonomously only in areas where there is agreement among ethicists about what constitutes acceptable behavior. Otherwise, we risk a backlash against allowing any machine to function autonomously.

But ethicists would not be working alone. On the contrary, developing machine ethics will require research that is interdisciplinary in nature, based on a dialogue between ethicists and AI specialists. To be successful, both sides must appreciate the expertise – and the needs – of the other.

AI researchers must recognize that ethics is a long-studied field within philosophy; it goes far beyond laypersons’ intuitions. Ethical behavior involves not only refraining from doing certain things, but also doing certain things to bring about ideal states of affairs. So far, however, efforts to identify and mitigate ethical concerns about machine behavior have largely emphasized the “refraining” part – preventing machines from engaging in ethically unacceptable behavior – often at the cost of unnecessarily constraining what they may do and where they may be deployed.

For their part, ethicists must recognize that programming a machine requires the utmost precision, which will require them to sharpen their approach to ethical discussions, perhaps to an unfamiliar extent. They must also engage more with the real-world applications of their theoretical work, which may have the added benefit of advancing the field of ethics.

More broadly, attempting to formulate an ethics for machines would give us a fresh start at determining the principles we should use to resolve ethical dilemmas. Because we are concerned with machine behavior, we can be more objective in examining ethics than we would be in discussing human behavior, even though what we come up with should be applicable to humans as well.

For one thing, we will not be inclined to incorporate into machines certain evolved human tendencies, such as favoring oneself and one’s group. Rather, we will require that they treat all people with respect. As a result, it is likely that the machines will behave more ethically than most human beings, and serve as positive role models for us all.

Ethical machines would pose no threat to humanity. On the contrary, they would help us considerably, not just by working for us, but also by showing us how we need to behave if we are to survive as a species.
