The moral dilemmas of the Fourth Industrial Revolution

Do we need a new moral code for the age of drones and gene-editing? Image: REUTERS/Steve Marcus

Vinayak Dalmia
Investor, Amber
Kavi Sharma
Chief Operating Officer, President Electronics USA

Should your driverless car value your life over a pedestrian's? Should your Fitbit activity be used against you in a court case? Should we allow drones to become the new paparazzi? Can one patent a human gene?

Scientists are already struggling with such dilemmas. As we enter the new machine age, we need a new set of codified morals to become the global norm. We should put as much emphasis on ethics as we put on fashionable terms like disruption.

This is starting to happen. Last year, America's Carnegie Mellon University announced a new centre studying the Ethics of Artificial Intelligence; under President Obama, the White House published a paper on the same topic; and tech giants including Facebook and Google have announced a partnership to draw up an ethical framework for AI. Both the risks and the opportunities are vast: Stephen Hawking, Elon Musk and other experts signed an open letter calling for efforts to ensure AI is beneficial to society:

"The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

Who to save? A simulation of a driverless car's dilemma Image: MIT's Moral Machine

These are big names and grand ideals. However, many efforts lack global cooperation. Moreover, the implications of the Fourth Industrial Revolution go beyond the internet and AI.

Professor Klaus Schwab, Founder of the World Economic Forum, believes this phase will be built around “cyber-physical systems” that blur the physical, digital and biological. As we embrace this machine age, we will be confronted by new ethical challenges that call for new laws. In some cases the entire moral code may need to be rebooted. Such is the nature of technological breakthroughs. We believe that humanity will soon be on the cusp of rethinking morals – an Ethics 2.0.

The origins of ethics

Ethics derived from philosophy or religion do not easily fit into the world of technology. Everything from Aristotle to the Ten Commandments gives us moral navigation – but any established set of rules tends to run into dilemmas. The world of science also has its share of attempts, from Asimov’s Three Laws of Robotics to Nick Bostrom’s work on machine ethics. However, humans find it hard enough to develop virtues for their own conduct, let alone to build them into new technologies.

The ethical implications range from the immediate (how are the algorithms behind Facebook and Google influencing everything from our emotions to our elections?) to the future (what will happen if self-driving vehicles mean there are no more jobs for truck drivers?). The following is a sample, by no means exhaustive, of the ethical decisions that will face us:

Life Sciences. Should it be legal to use gene editing to manipulate the human genome and create “designer babies”? Cancer researcher Siddhartha Mukherjee, in his critically acclaimed book The Gene, highlighted the deep ethical questions that advances in genome science will pose. The list of ethical questions is long: what if a prenatal test predicted your child would have an IQ of 80 points, well below average, unless you undertook a little editing? What if these technologies were available only to the wealthy?

AI, machine learning and data. Over time, artificial intelligence will help us make all kinds of decisions. But how do we ensure these algorithms are fairly designed? How do we iron out biases from such systems, which will eventually be used to determine job promotions, college admissions and even our choice of a life partner? (One simple form of bias audit is sketched after this list.)

Should the local police use facial recognition software? Should predictive policing based on algorithms be legal? What impact will this have on our privacy? Will cutting-edge technology in the hands of local law enforcement usher in the era of the surveillance state?

Social media and gadgets. What if our Kindles were embedded with facial recognition software and biometric sensors, so that the device could tell how every sentence influenced our heart rate and blood pressure?

Bots and machines. How do we decide what driverless cars can decide? How do we decide what robots can decide? Will there be a need for the robot equivalent of a Bill of Rights? What about the rights of humans to marry robots, and of robots to own property? Should a highly advanced cyborg be allowed to run for political office?
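
To make the bias question above more concrete, here is a minimal sketch of one kind of audit a regulator or employer could run on an automated decision system: comparing selection rates across demographic groups. The data, group labels and the 0.8 threshold (loosely inspired by the “four-fifths rule” used in US employment guidance) are illustrative assumptions, not a method proposed by the authors.

```python
# Minimal, illustrative bias audit: compare selection rates across groups
# and flag any group whose rate falls well below a reference group's rate.
# All data here is hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs, where selected is True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's rate."""
    rates = selection_rates(decisions)
    return {g: rates[g] / rates[reference_group] for g in rates}

# Hypothetical outcomes from an automated hiring screen.
outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40 +
            [("group_b", True)] * 30 + [("group_b", False)] * 70)

for group, ratio in impact_ratios(outcomes, "group_a").items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Even such a simple audit raises its own questions – which groups to compare, which metric to use, what threshold counts as fair – which is precisely why these rules cannot be left to individual engineering teams alone.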

The path ahead

In the past, free markets have typically decided the fate of new innovations, with governments intervening later (Uber is banned in Japan but operational in India). In this case, however, such an approach could be disastrous.

We are not in favour of government getting in the way of innovation: we are calling for a coherent global dialogue around ethics in the 21st century. The dialogue needs to move beyond academic journals and opinion articles to include government committees and international bodies such as the UN.

So far we have taken a siloed approach – from worldwide bans on human cloning to partial restrictions on GM foods. Different regions have also taken disparate views and failed to orchestrate a unified response: the EU’s approach to managing the societal impact of new technologies is markedly different from that of the US. China, on the other hand, has always taken the long view. Technology is like water – it will find its open spaces. In an interconnected world, local decisions are only effective when enabled by international consensus.

There is a need for a structured international forum to draw up a list of technologies that need governance, evaluate each one and release a blueprint for its code of conduct. For example, an international governmental body could lay down specific rules, such as making it mandatory to release the logic behind certain AI algorithms.

The world of science boasts some successful examples of international cooperation. The Montreal Protocol of 1987 (to tackle ozone depletion) and the Asilomar Conference of 1975 (to regulate recombinant DNA technology) come to mind.

Conclusion

Humanity will face questions it hasn’t had to answer yet. We need to start having the conversation now.

If we do not prepare in advance, we face several risks. We risk losing tremendous power to machines. We risk altering the course of humanity without fully understanding the consequences. We risk creating massive inequality between the “techno super-rich” and a large underclass.

Anyone who has seen even a single episode of the award-winning TV series Black Mirror should be worried about the dystopian future that could lie ahead of us – if we don't tackle the hard philosophical and legal questions now.

This is not the first technological revolution. The concerns are not new; they have been around for more than 200 years, since the Industrial Revolution. But as the historian and philosopher Yuval Harari puts it, it is a matter of the boy who cried wolf eventually being right.

Traditionally, technological progress outpaces the political process: we already missed drafting the moral charter for the internet, and we continue to play catch-up to this day. We cannot afford to be blind-sided by the next frontiers, whether in biotechnology or AI. Our future is increasingly being scripted by engineers and entrepreneurs who are not necessarily being held to account.

Society is good at adapting to change – from the steam engine to the iPhone to markedly increased lifespans. As Bill Gates put it, “technology is amoral”. It is up to us to decide how to use it and where to draw the line.
