How can the aviation industry make AI safer?

Lawmakers and AI developers can look to the skies and learn from the aviation industry.

Arathi Sethumadhavan
User Research Scientist, Technology and Society, Google
Joe Garvin
Writer, Ethics & Society, Microsoft

  • AI needs regulation, but views differ widely on what that means in practice.
  • The aviation industry, with its tight regulations and safety-critical procedures, offers valuable inspiration.
  • If the tech industry adopted similar policies and created a culture of learning, it could avert future AI-related catastrophes.

Recent stories about wrongful arrests based on facial recognition and sexual harassment in virtual reality underscore the need to do more to protect people from the harmful impacts of AI and other emerging technologies. Most people today agree that the AI industry needs more regulation, but we’re far from consensus on what that regulation should look like. As tech companies, lobbyists, and government officials continue to push their disparate visions for AI regulation, let’s look for inspiration in an existing well-regulated, safety-critical industry: aviation.

It’s time we treated AI as a safety-critical industry. What changes would protect people from harm while allowing AI innovation to stabilize and flourish? Lawmakers and AI developers can look to the skies and learn from aviation.

Learning from the aviation industry: AI reforms

Over the decades, the aviation industry has developed regulations, standard operating procedures, licenses and certifications, and training programmes that improve communication and teamwork in high-stress scenarios. Many of these innovations, unfortunately, arrived as responses to catastrophic accidents and human tragedies. The aviation industry has also addressed its high risk by fostering a culture of learning from errors, asking for help, and listening to feedback.

We can make AI safer by implementing similar reforms and developing a similar culture: a responsible AI culture where it’s the norm to share useful information about failures and mistakes. Instead of waiting for new AI disasters to take us by surprise, we have the opportunity now to learn from disasters of the past by implementing these three reforms: comprehensive training programmes, regulation and standardization, and a culture of sharing safety information.

1. Comprehensive training programmes

On a 1978 commercial flight out of Denver, communication broke down in the cockpit: the flight crew failed to convey to the captain how low the fuel was running. The plane ultimately ran out of fuel and crashed in a Portland suburb, killing 10 people. At the time, the airline industry had no established procedures to ensure every crew member had a strong enough voice in an emergency.

This disaster and the communication breakdown behind it inspired a wide range of training procedures known as crew resource management (CRM) training. CRM was built on the discovery that most problems for flight crews involve not the technical aspects of operating a cockpit but rather situational awareness, group decision-making, and leadership. Accordingly, CRM targets the cognitive and interpersonal skills people need to use all available resources safely and efficiently.

Crucially, CRM also accounts for key differences in national cultures in areas such as collectivism, uncertainty avoidance, and power distance. Power distance (PD), for example, measures how readily subordinates accept an unequal distribution of power between themselves and their superiors. In high-PD cultures, subordinates avoid approaching their bosses directly and hesitate to disagree with them. Factors such as these shape the social dynamics in the cockpit, and CRM helps counterbalance them. Today, the importance of CRM is recognized worldwide, and it’s a requirement for all flight crew members at various stages of their careers.

Like flight crew members, AI professionals often work in safety-critical scenarios alongside people from diverse cultural backgrounds. Let’s not wait for a tragedy to spur a focus on communication and teamwork among AI teams. The aviation industry’s CRM offers inspiration for developing standard, comprehensive training procedures of our own.

2. Regulation and standardization

While the aviation industry addresses interpersonal skills with CRM training, it addresses the technical and operational aspects of the trade with a host of regulatory measures and standardized processes. Officially recognized FAA licenses and certificates, for example, are required for many roles across aviation, with different licenses depending on whether you build and maintain aviation technology (technicians need an aviation mechanic certificate) or operate it (pilots require regular recertification).

The aviation industry also relies on shared processes such as standard operating procedures and official checklists. These guidelines minimize errors by promoting consistency and reliability.

Applying analogous regulatory requirements and standardized processes throughout the AI industry would improve safety in a similar way. Let’s consider new AI certifications and licenses, mandated and recognized industry-wide, with different licenses required depending on whether you build and maintain AI technology (researchers and engineers crafting models and algorithms) or operate it (government officials and corporate leaders using these AI tools).

Let’s also consider procedures in AI today that can be standardized and broadly adopted. Some efforts are already underway: Microsoft’s Responsible AI resources include templates, checklists, and policies, while IBM’s AI Fairness 360 toolkit and Google’s What-If Tool help developers detect and assess bias in their models. As we work toward more consistently adopted principles and processes in AI, aviation offers guidance for allowing the AI industry to flourish safely.
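To make this concrete, here is a minimal, illustrative sketch of the kind of check such toolkits enable, using AI Fairness 360 to compute two common group-fairness metrics. The toy hiring data and column names are invented for the example, not drawn from any real system:

```python
# A minimal sketch of a fairness check with IBM's AI Fairness 360.
# The toy data and column names ("gender", "hired") are illustrative only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring outcomes; "gender" is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "gender": [1, 1, 1, 1, 0, 0, 0, 0],
    "hired":  [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact: ratio of favorable-outcome rates between groups
# (1.0 means parity; values below ~0.8 are a common red flag).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: unprivileged rate minus privileged rate.
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A disparate impact well below 1.0, as in this toy data, would flag the outcome rates for review; in practice such checks run against real evaluation data as a standard step in a model’s release process.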

3. A culture of sharing safety information

Lastly, in addition to developing training programmes and regulations, the aviation industry has responded to high risk by cultivating a culture of sharing relevant safety information and listening to feedback. The imperative to keep people safe leaves no room for withholding information, concealing mistakes, or penalizing honest errors. Out of a shared responsibility to protect the public, aviation organizations and professionals instead share information about mistakes and failures that can help prevent future accidents.

For instance, aviation personnel can voluntarily report safety incidents and hazardous situations to a confidential database such as NASA’s Aviation Safety Reporting System. The AI industry could implement comparable databases for reporting issues with human-AI interaction. A confidential database for AI professionals would serve as a safe space for raising ethical concerns, and a separate database could collect feedback from users and others affected by AI technology.
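No standard schema for such an AI incident database exists yet. Purely as a hypothetical sketch, a confidential report record might capture fields like the following; every name here is illustrative, loosely inspired by the kinds of information aviation’s voluntary reporting systems collect:

```python
# Hypothetical sketch of a confidential AI incident report record.
# All field names are illustrative; no standard schema exists today.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class AIIncidentReport:
    system_description: str          # e.g. "facial recognition in retail security"
    incident_summary: str            # what happened, in the reporter's own words
    harm_observed: str               # e.g. "customer wrongfully flagged as a suspect"
    contributing_factors: List[str]  # e.g. ["training data gap", "no human review step"]
    suggested_mitigation: Optional[str] = None
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    # Deliberately no reporter-identity field: confidentiality encourages
    # candid reporting, just as it does in aviation's voluntary systems.
```

The design choice that matters most mirrors aviation’s: the record captures the failure and its context, not who reported it.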

"When your technology changes the world,” says Microsoft President Brad Smith, “you bear a responsibility to help address the world you have helped create." The aviation industry has borne this weighty responsibility with hard-learned lessons, lessons the AI industry can learn from as it takes its turn reshaping the world.
