Why open-source is crucial for responsible AI development

Rahul Roy-Chowdhury
Chief Executive Officer, Grammarly

This article is part of: World Economic Forum Annual Meeting
  • At the heart of regulating AI is the need for a unified approach to responsible AI development and deployment.
  • A wide range of technologists must be involved in regulatory conversations to ensure that regulation is developed without bias.
  • Without varied viewpoints, resulting regulations may overlook open-source models as a cornerstone of responsible AI.

Artificial intelligence (AI) is the most powerful technological force shaping our world today. It has enormous potential to improve lives, from optimizing global supply chains to speeding up the drug discovery process. When not deployed thoughtfully, however, AI has the potential to perpetuate bias, upend industry and even diminish human creativity and connection, causing real harm in our daily lives.

With dramatically different outcomes at stake, it’s critical we have a considered conversation about how we use AI to benefit society. Naturally, regulation is and should be a part of this conversation.

At the heart of regulation is the need for a unified approach to responsible AI development and deployment. Just as AI models must be trained on diverse datasets to ensure fairness, a wide range of technologists must be involved in regulatory conversations to ensure that regulation is developed without bias. Without varied viewpoints, resulting regulations may overlook open-source models as a cornerstone of responsible AI.

Figure: Nearly all countries' AI regulatory approach seems to follow a process of understand, then grow, then shape. Image: Deloitte Insights

What current regulatory conversations are missing

In the US, President Biden’s ‘Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence’ addresses several critical elements of responsible AI that we prioritize at Grammarly, including developing standards for trustworthiness and helping educators safely deploy AI tools.

One area where Grammarly's position differs from the executive order is our belief in open-source models as a core aspect of responsible AI development.

Section 4.6 of Biden's executive order suggests a preference for closed-source models, possibly reflecting the viewpoints of the tech giants consulted in its drafting. The administration is evaluating open-source models differently from closed-source ones and plans to develop recommendations specifically for open-source models.

I believe such restrictions on open-source development can hurt the tech industry and, ultimately, the end user. I’m not alone: similar claims have been made about the Artificial Intelligence Act proposed by the European Union.

For context, closed-source models are maintained by an organization, and their code and weights are not made publicly available for use or audit. Examples of closed-source large language models (LLMs) are PaLM from Google, the family of GPT models from OpenAI and Claude from Anthropic. While third parties can take advantage of some closed-source models through an application programming interface (API), the model itself cannot be inspected or modified.
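
To make the distinction concrete, here is a minimal sketch in Python of how a third party typically consumes a closed-source model: requests go to the provider's hosted API, and the model's code and weights never leave the provider's servers. It uses OpenAI's published Python client; the model name and prompt are purely illustrative.

    # Minimal sketch: consuming a closed-source LLM through its provider's API.
    # The model runs on the provider's servers; we only see inputs and outputs.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": "Summarize the case for open-source AI."}],
    )
    print(response.choices[0].message.content)

    # There is no way to download, inspect or modify the model itself:
    # the API surface is the entire interface.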

Open-source models, such as Falcon LLM by the Technology Innovation Institute and Llama 2 from Meta, are free for the public to use and modify in the name of innovation. As the executive order implies, some people may have concerns that bad actors can intentionally change the code of an open-source model, leading to security issues.
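
By contrast, anyone can download an open-source model's weights and run, inspect or fine-tune it locally. Below is a minimal sketch using the Hugging Face transformers library; the model identifier is illustrative, and some checkpoints (such as Llama 2's) additionally require accepting the publisher's license before download.

    # Minimal sketch: running, and being free to modify, an open-source LLM locally.
    # Requires: pip install transformers torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "tiiuae/falcon-7b"  # illustrative open-weights checkpoint

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer("Open-source models can be", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    # Because the weights live on your machine, they can be audited or fine-tuned,
    # the same openness that drives the misuse concerns described above.
    for name, param in list(model.named_parameters())[:3]:
        print(name, tuple(param.shape))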

These vulnerabilities aren't dissimilar from the cybersecurity risks with which all software has to contend, and both closed- and open-source development have been accepted for decades. As a letter to President Biden from Martín Casado, general partner of venture capital firm Andreessen Horowitz, notes: the open-source operating system Linux is widely used in cloud computing across the US government.

The use of open-source models in business is actually on the rise. Red Hat's 2022 report, 'The State of Enterprise Open Source', suggests that 80% of IT leaders expect to increase their use of enterprise open-source software and 89% believe open source is as secure as, or more secure than, proprietary software.

Additionally, an industry that fosters open-source development can lead to:

  • Enhanced creativity, innovation and competition: The availability of open-source AI models has significantly reduced the time and resources required to develop new applications and has made AI accessible to a broader range of developers, fostering competition beyond just the largest tech companies.
  • Safer AI: When models are publicly available, they don't just help developers build new applications; they also enable them to make products safer. Seismograph, for instance, is an open-source model that can be used to help AI writing assistants interact with sensitive content responsibly.
  • Increased transparency: The datasets and code of open-source models can be audited and verified by third parties, which helps to ensure their quality and reliability.

This last point is worth underscoring. Biden's executive order focuses on many of the most important aspects of responsible AI (safety, security and trustworthiness), but it doesn't give much real estate to a crucial element: transparency.

Without transparency, it’s difficult for people to develop opinions about whether an AI tool is safe, secure and trustworthy. While regulation should explore many ways to improve transparency – from sharing data collection practices to implementing continuous feedback loops – open-source models are a beacon of transparency in their very existence.
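
As a small illustration of that built-in transparency, anyone can programmatically inspect an open model's published artifacts (its file listing, configuration and license metadata) before deciding whether to trust it. Here is a brief sketch using the huggingface_hub library; the repository ID is again illustrative.

    # Minimal sketch: auditing an open-source model's published artifacts.
    # Requires: pip install huggingface_hub
    from huggingface_hub import hf_hub_download, model_info

    repo_id = "tiiuae/falcon-7b"  # illustrative repository ID

    # List every file the model publishes: weights, tokenizer and configuration.
    info = model_info(repo_id)
    for sibling in info.siblings:
        print(sibling.rfilename)

    # Download and read the architecture configuration directly.
    config_path = hf_hub_download(repo_id, filename="config.json")
    print(open(config_path).read())

    # No comparable inspection is possible for a model served only behind an API.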

Figure: Acting as a regulator is one of the three roles government can play. Image: Deloitte Insights

Open-source models aren't the only way forward; keeping models closed-source is a common way for businesses to monetize AI products. As the CEO of a tech company, I fully appreciate the need for closed-source models. My fear is a future in which all AI development is closed-source, diminishing the innovation, responsibility and transparency that open-source development brings to the industry.

What we can do differently for responsible AI

In a perfect world, tech leaders could look to regulation as confirmation that they're getting responsible AI right. In reality, regulation will always trail the pace of AI development. That's why we must proactively share our expertise on a global stage to ensure regulations are developed holistically.

Here are three ways technologists can do that:

  • Contribute to open-source: Grammarly’s open-source models cover everything from gender-inclusive grammatical error correction to delicate text interaction, helping foster more responsible AI writing assistance. By creating open-source models, we can support a wide range of developers and industries in building the best products.
  • Create and follow a responsible AI framework: At Grammarly, we have the TRUE framework. When intentional frameworks like this one are used industry-wide, many risks of the latest AI technologies will be mitigated long before regulation steps in.
  • Work together, not against each other: As AI leaders committed to the betterment of society, our goal should be to improve the use of AI for society at large. Team up with your peers to prioritize responsible AI over egos or competition.

A rising tide lifts all boats, and creating a stronger, more responsible AI industry helps us ensure the successful deployment of AI. This includes protecting the right to share open-source technology that can make AI safer, more transparent and more useful.

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
