
Technophobe or technophile? We need more conversation about digital transformation


Geir Christian Karlsen
CEO & Founder, AppsCo Inc

You don’t need to live in Copenhagen, Tokyo, Singapore or any of the other smart cities adopting the Internet of Things (IoT) to have heard the buzz phrase 'digital transformation'. Exciting opportunities are afoot, and digital enthusiasts, public bodies, corporations and start-ups are all rushing to transform and take their position in the technology race. Automation tools, artificial intelligence, IoT, smart solutions, robotics and bioengineering promise a better tomorrow, and a more seamless and efficient today.

However, alongside this wave of technophilia, these ambitious technological visions and initiatives come hand in hand with intimidating threats. The digital systems designed to control and coordinate our power supplies, traffic flows, and personal and health data will be among the most vulnerable points of our future, open to misuse and hacking. There is a growing sense that the rapid pace of technology adoption is not matched by the necessary security measures and large-scale standardization of processes.

On one hand, the recently launched General Data Protection Regulation (GDPR) showed us that large organizations face huge problems in transforming their business processes if they want to provide users with maximum safety in the digital space. On the other hand, the Cambridge Analytica scandal reminded us that an insatiable desire for profit leads unscrupulous corporations to violate the law and abuse users’ personal data.

Powerful software programmes can and will be made to target users based on information available through various digital platforms. The scandal also showed us that current justice systems, even in highly developed countries, cannot meet these challenges, due to their low level of understanding of the technologies in question and how they work.

As forecast in dystopian novels of the past, we are living the reality of alternative facts, face recognition, video manipulation and privacy invasion. The flipside of progressive technology is the fears and phobias it generates, despite being designed to serve higher purposes. The problem is not how and why digital tools and platforms were created - the problem is what they have become.

Moreover, these two conflicting sides of technological progress point to a huge gap between those who are digitally savvy and those who are digitally illiterate, unaware of the threats hiding in the digital world. Putting the burden of digitally responsible behaviour and protection on the individual would not only be unfair, but also unsustainable. We want to live in a world where people have more control of what is happening around them in digital space, but the resources needed to reach this goal must come from all stakeholders.

Taming the technology beasts

The GDPR initiative was a welcome idea for user protection, but in practice its implementation exposed the complex problems facing companies and institutions. These issues underline the need for more open debate, in order to define further regulations and to reverse the whole process, so that protection is built in from the outset. Security issues should not be addressed only when problems occur - they should be anticipated and prevented well in advance.

High-profile leaders recognize these problems. Elon Musk has repeatedly pointed out the potential threats of artificial intelligence, comparing its power to that of nuclear weapons. The creator of the World Wide Web, Tim Berners-Lee, openly expresses his disappointment with the current tech giants.

“I am disappointed with the current state of the web,” he said. “We have lost the feeling of individual empowerment, and to a certain extent also, I think the optimism has cracked.”

Berners-Lee is now trying to offer an alternative to the current form of social media platforms, to provide users with more access to, and control over, how their data is handled.


Whether we take an optimistic or pessimistic stance on these issues, we all have a responsibility to address them before it’s too late. Taming the tech beasts will take collective effort. We all need conversation, education, regulation and standardization around the technologies being implemented. 'We' means individuals, small and medium-sized companies, corporations and, most importantly, governments - every stakeholder needs to join this ride.

The process has to be reversed: all necessary security layers need to be part of the very foundation of digital tools and platforms before they are launched. To give every individual power over their own data, we must deploy access control solutions, tools for decentralization, blockchain systems and AI security layers, from both a privacy and a security standpoint.

The main players need to lead by example. Governments and corporations have an enormous responsibility to put a strong focus on security and data protection, to prevent consequences that affect both individuals and society in general. Their readiness to learn and implement necessary tools and regulations will be of crucial importance in the future.

Every stakeholder must do the same. They must open discussions and put pressure on public bodies. We must not be outsmarted by the very smart technologies we are so willingly embracing.

AppsCo Inc is a member of the World Economic Forum’s Tech for Integrity Community.

This article is part of a two-part World Anti-Corruption Day series curated by the World Economic Forum’s Partnering Against Corruption Initiative (PACI).


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

