Artificial Intelligence

'Self-restraint and regulation' - how the tech companies transforming the world view responsible AI 

Responsible AI is giving philosophers plenty to think about. Image: Unsplash

Robin Pomeroy
Podcast Editor, World Economic Forum
Simon Torkington
Senior Writer, Forum Agenda
  • Two companies developing generative AI tools join the Radio Davos podcast to share their insights on how we ensure the technology is safe and ethical.
  • Microsoft's Chief Responsible AI Officer Natasha Crampton says companies must exercise self-restraint but also need regulation.
  • Thomas Wolf, Co-founder of start-up Hugging Face, shares what it's like to launch a generative AI chatbot.
  • Subscribe to get the whole podcast series; Episode page and transcript: https://www.weforum.org/podcasts/radio-davos/episodes/ai-episode-2-microsoft-hugging-face
Microsoft is one of the key players in the artificial intelligence revolution. So how does the software giant, which has been integrating ChatGPT technology into the products most of us use, go about making its AI products responsible and ethical?

In the second episode of the World Economic Forum's podcast series on generative AI, Microsoft's Chief Responsible AI Officer Natasha Crampton talks responsibility and regulation.

And we hear from the Co-founder of Silicon Valley start-up Hugging Face on how it feels to launch an AI chatbot to the world.

Here are some of the key quotes:

Developing responsible AI

“My job is to put into practice across the company the six AI principles that we've adopted at Microsoft,” says Natasha Crampton. “Our six principles that form our north star are fairness, privacy and security, reliability and safety, inclusiveness, accountability and transparency.”

Microsoft's AI framework.
Microsoft’s framework for developing, assessing and deploying responsible AI. Image: Microsoft

“Microsoft has long taken the view that we need both responsible organizations like ourselves to exercise self-restraint and put in place the best practices that we can to make sure that AI systems are safe, trustworthy and reliable,” she adds.

“We also recognize that we need regulation. There's no question that we will need new laws and norms and standards to build confidence in this new technology and also to give everyone the protection under the law.”

“While we would love it to be the case that all companies decide to adopt the most responsible practices they can, that is not a realistic assumption,” she says. “We think it's important there is this baseline protection for everyone and that will help to build trust in the technology.”

Transparency is key to building trust in AI

Thomas Wolf, Co-founder of Hugging Face, says his company's open-source approach is one important way to achieve responsible AI.

“Having an open model is super important,” says Wolf. “People need to understand where AI might fail or trust in where it will work. They need to be aware of biases that you may have in these tools, or how they could be misused.”

Wolf says allowing everyone access to the source code of his platform is the ultimate expression of transparency. “If the model is open, you don't have to believe just the people who made it. You can audit the model yourself, read how it was made, dive into training data and raise issues,” he adds. “We have a lot of discussion pages on our models where people can flag models and raise questions.”

Should we pause the development of AI until we know it is safe?

A recent open letter signed by thousands of AI engineers and business leaders called for a six-month pause in the development of AI models more powerful than ChatGPT. The idea was to give regulators time to catch up with the pace of development and for tech companies to gain more understanding of the potential risks. Crampton recognizes the concerns but says pausing development would be counter-productive.

“Rather than pausing important research and development work underway right now, including into the safety of these models, I think we should focus on a plan of action,” she says. “We should be bringing our best ideas to the table about the practices that are effective today to make sure we're identifying and measuring and mitigating risks. And we should also be bringing our best ideas about the new laws and norms and standards that we need in this space.”

Sector-specific AI regulation

Wolf believes the pace of AI development will only accelerate and that no blanket regulatory framework will work. He prefers sector-specific regulation, akin to the governance of commercial aviation or the nuclear industry.

“If it's in specific fields that would make sense. When you are talking about airlines, it's a very specific field, it's commercial airlines. You don't have the same for other types of aviation,” Wolf says.

“I think once you have nailed down a specific sector where there is a specific danger we want to prevent, then it makes sense. I think regulation at this level would be super positive and super interesting. But something that just generally covers AI feels to me like it would just be too wide-ranging to be effective.”

AI and the workplace

The impact of AI on the workplace and the jobs we all do has been central to the discussion on responsible AI. Questions have been raised about its use in recruitment to shortlist, and even select, candidates. It's recognized that AI can introduce bias into selection processes as a result of the data it has been trained on.

Perhaps the biggest question of all is who will do the work – human or AI? The Forum’s Future of Jobs Report 2023 has identified a trend where machines are increasingly performing tasks once done by people.

Infographic showing the proportion of tasks completed by humans vs machines.
The balance of work tasks performed by humans and machines is changing rapidly. Image: World Economic Forum

This trend is likely to accelerate in the years to come, but Crampton doesn’t see AI replacing humans, even though she’s been impressed with what ChatGPT can do.

“I prompted an early version of GPT-4 to produce a bill that could regulate AI based on an impact assessment methodology. I got an output that was a very decent first draft.

“Of course, it's very, very important to be judicious about it and as a trained lawyer I certainly picked up some errors. It's important to understand where the technology is good and to recognize its flaws.

“We need to strike a balance that combines the best of AI with the best of humans. This technology is essentially a co-pilot for doing these tasks and enhancing human ability.”

You can listen to the full Radio Davos podcast on the episode page: https://www.weforum.org/podcasts/radio-davos/episodes/ai-episode-2-microsoft-hugging-face