Artificial Intelligence

The age of robots could be a new Renaissance. This is why 

A man touches the head of Pepper, SoftBank's humanoid robot, at Pepper World 2016 Summer in Tokyo, Japan, July 20, 2016. Intelligent machines reflect our human qualities - the good and the bad. Image: REUTERS/Kim Kyung-Hoon

Leonardo Quattrucci
Global Shaper, Brussels Hub, Amazon Web Services Institute

Why have humans created machines? Technology is born from the human desire to save time in order to invest it in fantasy. Technology generates time and space for the activities that make us quintessentially human: creativity and entrepreneurship, inventiveness, and empathy. It serves us by alleviating suffering and creating solutions to grow and fulfil our aspirations – from curing cancer to landing on Mars.

But humankind’s most common mistake is to forget its ends, as Nietzsche warned. This is possibly why, in the face of automation, many of us are overwhelmed by fears of a Matrix-like apocalypse. At the opposite extreme, there are those who celebrate the promises of artificial intelligence (AI) with perhaps too much enthusiasm, often overlooking the risks of delegating decisions to entities whose learning and reasoning processes are ever harder to decode.

Today, the possibilities of AI depend on the limitations designed by its creators. Therefore, as AI becomes more pervasive in our daily life, it is imperative to focus on which roles and responsibilities we humans must retain in the age of robots.

A social contract for AI

AlphaGo – the AI developed by Google DeepMind – performed unexpected and novel moves to beat the best human players at Go, the highly complex Asian board game. However, the question is: were AlphaGo’s moves genuine inventions, or were they combinations so sophisticated that other players had not yet envisioned them?

Machines are already superior to humans in terms of computation and memory. At the same time, the applications of AI are a human choice – a political and social one. AI is better than humans at calculating traffic, anticipating the incidence of an epidemic or managing energy efficiency, but the questions it analyses and the biases it manifests in trying to solve them are inherited from its designers: us.

Image: Future of Jobs Report, World Economic Forum

The evolution and diffusion of AI are also a question of social license. We developed norms and laws to regulate social and economic discrimination and the use of drugs and arms, or – more simply – to set speed limits. In the same way, we should set rules of conduct for autonomous vehicles.

What criteria should guide our choice of which decisions to delegate to machines? For every AI that becomes more refined, there needs to be a community of citizens that assesses its social purpose and application.

New standards for a new era

'Trust your instinct' is a recommendation that has its limits: the probability that it will yield effective decisions is more or less that of a coin toss. In other words, there is often a gap between our actions and our understanding of how we decided to perform them. Trying to explain it is at best a simplification of our thought – or un-thought – process. We should not be surprised, then, if learning processes that are cryptic in the human brain prove equally cryptic in an AI.

This is neither to say that we should surrender to such evidence, nor that we should refrain from innovation or limit the entrepreneurial risk-taking that is necessary to advance it. Rather, it requires us to develop standards and instruments to manage technological development. A Commission for Artificial Intelligence, for instance, could have a mandate to assess how explicable and transparent the decision-making processes of machines are. Or why not host regular competitions among machines to reveal their biases and fallibility – and to test their capacities – in the same way we undertake stress tests for banks?

Humans fit for technology

Giuliano Toraldo di Francia, an Italian physicist, once said: "We need to create technology fit for humans; we also need to create humans fit for technology." We choose to create machines so that we can specialize in our uniquely human advantages. Reminding ourselves of that should be our first step. The next step is to prepare for the moment when we will have to engage critically in a daily dialogue with AI – some of which we will wear, if not have implanted.

Sounds like science fiction? Smartphones today are already a sort of extension of our minds, and a quasi-permanent extension of our bodies. We need a 'philosophy of technology' that equips us with a behavioural and moral compass in technology-rich environments. As AI becomes an integral part of our public and private lives, we need to establish rights and responsibilities for 'human-machine citizenship'.

Could the age of robots be a new Renaissance instead?

This article was originally published by LINC Magazine.

The views expressed in this article are those of the author alone and not necessarily of the European Commission.
