We need to talk about Artificial Intelligence

Facebook Chairman and CEO Mark Zuckerberg testifies at a House Financial Services Committee hearing in Washington, October 23, 2019. There is a huge gap in understanding between policymakers and tech companies. Image: REUTERS/Erin Scott

Adriana Bora
AI Policy Researcher and Project Manager, The Future Society
David Alexandru Timis
Global Communications Manager, Generation

  • Dialogue is hampered by an information gap between creators of AI technology and policymakers trying to regulate it.
  • Knowledge building is critical to set a framework of ethics and norms in which AI can innovate safely.
  • Principles are valuable only if they are agreed upon and if they are actually implemented.

As consensus forms around the impact AI will have on humankind, civil society and the public and private sectors alike are increasingly demanding accountability and trust-building. Ethical failures such as AI bias (by race, gender or other criteria) and a lack of algorithmic transparency (clarity on the rules and methods by which machines make decisions) have already harmed society through the technologies we use daily.
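To make "AI bias" concrete, the minimal sketch below (illustrative only, with made-up numbers) computes one common fairness check, the demographic parity difference: the gap between the rates at which an automated system gives favourable decisions to two demographic groups.

```python
# Illustrative only: a minimal check for one kind of AI bias.
# We compare the rate of favourable decisions (e.g. loan approvals)
# an automated system gives to two demographic groups.
# The decision data below is made up for the example.

def positive_rate(decisions: list[int]) -> float:
    """Share of favourable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs: 1 = approved, 0 = rejected.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 37.5% approved

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Demographic parity difference: {gap:.0%}")
# A gap near 0% suggests parity; a large gap (here 38%) flags a
# potential bias that regulators and auditors would question.
```

Demographic parity is only one of several fairness metrics, and the right choice depends on context; the point is that such checks can be run, and disclosed, before a system is deployed.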

The integration of AI into industry and society, and its impact on human lives, calls for ethical and legal frameworks that ensure its effective governance, advancing AI's social opportunities and mitigating its risks. Sound mechanisms are needed to generate a comprehensive, collectively shared understanding of AI's development and deployment cycle. At its core, this governance must therefore be designed through continuous dialogue, drawing on multi-stakeholder and interdisciplinary methodologies and skills.

Yet this dialogue is hampered by an information gap: the creators of AI technology hold most of the information and understanding of the subject, while the policymakers trying to regulate it often have very little. On the one hand, only a limited number of policy experts truly understand the full cycle of AI technology. On the other, technology providers lack clarity, and at times interest, in shaping AI policy with integrity by implementing ethics in their technological designs (through, for example, ethically aligned design).

Policymakers – lack of clarity on how AI functions

Just as previous generations needed to adapt to the steam engine, electricity or the internet, this generation will have to become familiar with the underlying techniques, principles and fundamental impacts of AI-based systems. But while understanding AI will take time for the general public, policymakers who are responsible for regulating its use will need to be fast-tracked.

The congressional hearings in the US, in which the executives of big tech companies testified, offered the American public the opportunity to observe the worrying digital literacy gap between the companies producing the technologies that shape our lives and the legislators responsible for regulating them. The US Congress is not alone in this; governments around the world face the same challenge.

AI ethics principles emphasize human rights. Image: Harvard University Berkman Klein Center

Policymakers will not have all the answers or expertise needed to make the best decisions about regulating AI, but asking better questions is an important step forward. Without a basic understanding of how AI technologies work, policymakers risk regulating either too forcefully or, on the contrary, not enough to keep us safe, for instance by allowing AI-based systems to be deployed for mass surveillance. What is needed, therefore, is a renewed emphasis on AI education among policymakers and regulators, and increased funding and recruitment of technical talent in government, so that the people making decisions about AI programmes, funding and adoption are informed about current developments in the technology.

Only by familiarizing themselves with AI and its potential benefits and risks can policymakers draft sensible regulation that keeps the development of AI within legal and ethical boundaries while leveraging its tremendous potential. Being literate in AI will also allow policymakers to become active users, as the technology can help them meet their policy goals, advance the SDG agenda and make government more efficient.

Knowledge building is critical both for developing smarter AI regulation and for enabling policymakers to engage in dialogue with technology companies on an equal footing, so that together they can set a framework of ethics and norms within which AI can innovate safely. Public-private dialogue is therefore key to the development of 'trustworthy AI'.

Technology providers – lack of clarity on AI ethics

More than other areas of corporate social responsibility, AI is accelerating the need for technology companies to advance conversations about ethics and trust, because AI echoes societal behaviour. With AI, corporations risk amplifying the biases that already exist in society, at an irreversible scale and rate.

Technology companies therefore need both ethics literacy and a commitment to multidisciplinary research to build a sound understanding and adoption of ethics. Yet through their training and during their careers, the technical teams behind AI developments are not systematically educated about the complexity of human social systems, the ways their products could harm society, or how to embed ethics in their designs.

The process of understanding and acknowledging the social and cultural context in which AI technologies are deployed, sometimes with high stakes for humanity, requires patience and time. This is in tension with the status quo of business models, which are built on speed and scale for a quick profit. As the German sociologist Ulrich Beck once stated, ethics nowadays “plays the role of a bicycle brake on an intercontinental airplane”.

With increased investment in the development and deployment of AI, technology companies are encouraged to identify the ethical considerations relevant to their products and to implement solutions transparently before deployment. This gives their business a sound risk-mitigation strategy, while ensuring and demonstrating that their financial gains do not come at the expense of society's social and economic wellbeing. It also reduces the risk of reputational damage from their use of AI.

In light of the COVID-19 pandemic and the start of the next industrial revolution, companies with a long-term, sustainable vision acknowledge the business case for ethical AI, which can help them avoid crises and better serve their stakeholders. Technology companies thus have an opportunity to increase ethical literacy among their staff and to strengthen dialogue and collaboration with policymakers. This will ensure they have a say in shaping the frameworks within which ethical-by-design AI solutions can be deployed and scaled successfully and safely.

AI principles are only valuable when properly implemented

Policymakers and industry leaders need to break out of their silos, particularly in exceptional contexts such as the one created by the COVID-19 pandemic. This will allow more constant and substantive dialogue, ensuring that AI governance and legislation are not toothless in the face of economic and political priorities.

Alongside governments and international organizations, some technology companies have already started to release high-level ethical principles for AI development and deployment (e.g. the Google AI Principles and Microsoft AI Principles). However, to ensure the effective governance of AI, there must be consistent dialogue between businesses and policymakers to agree on a common set of principles and on concrete methodologies for translating them into practice.

Put simply, those principles are valuable only if they are agreed upon and actually implemented. For that, the two sides need to communicate constantly: companies need policymakers to provide clear ethical frameworks and pathways for implementation, while policymakers need industry to ensure that those frameworks become reality and are embedded in the AI technologies we all use daily, often without realizing it.
