
Step one in alignment on GenAI best practices: Transparency

Cross-sector collaboration and communication are key to scaling GenAI, responsibly.


Reena Jana
Head of Content & Partnership Enablement, Responsible Innovation, Google

This article is part of: World Economic Forum Annual Meeting
  • Generative AI (GenAI) has the potential to enhance productivity and creativity across industries and sectors.
  • Industry leaders, governments and academics must weigh the technology’s opportunities and new challenges to establish best practices for building safe, secure and trustworthy GenAI applications.
  • Cross-sector collaboration and communication are key to scaling responsibly.

Promoting alignment on industry best practices is imperative for building advanced artificial intelligence (AI) applications that deliver social benefits, avoid unfair bias, are built and tested for safety and privacy, and are accountable to people. The dawn of generative AI (GenAI) offers an opportunity to guide the development of an unprecedented technology using principled practices and common approaches to transparency.

That common approach can start with even the most basic shared GenAI definitions, which facilitate thoughtful and practical sharing of lessons learned. For example, the World Economic Forum’s AI Governance Alliance working group on responsible applications and transformation helped define a shared vocabulary for responsible GenAI development with other Forum community members.


As described in the newly released paper, Unlocking value from Generative AI: Guidance for responsible transformation, we co-defined:

  • Responsible transformation: the organizational effort and orientation to leverage the opportunities and benefits of GenAI while mitigating the risks to individuals, organizations and society. It is a strategic undertaking across organizations’ governance, operations, talent and communication pillars.
  • Responsible adoption: the adoption of individual use cases and opportunities within an organization. It is more tactical and requires thorough evaluation to ensure that value can be realized.

Accountability and AI governance

To hold ourselves accountable for sharing lessons learned, Google has published an annual report since 2019 on how we put our AI Principles into practice. The AI Principles are our ethical charter for building safe, secure and trustworthy AI applications, and GenAI is no exception. The principles guide business and technical decisions throughout product research and development. This year’s report offers concrete examples of responsible transformation and responsible adoption as defined in the AI Governance Alliance paper.

For instance, we are increasingly integrating our AI Principles into our holistic enterprise risk management frameworks to ensure the quality of our offerings. This evolution helps us scale our work and integrate it into existing governance, company-wide infrastructure and accountability processes.

Google’s enterprise risk frameworks, tools and systems of record provide a foundation for first-line reviews of AI-related issues and help address compliance with evolving legal, regulatory and standards benchmarks such as the United States White House Executive Order on AI, the Group of Seven’s International Guiding Principles for Organizations Developing Advanced AI Systems and the AI Act in the European Union.

AI governance and trust and safety teams collaborate closely with teams and subject matter experts across machine learning research, product policy, user-experience research and design, public policy, law, human rights and the social sciences, among many other disciplines.

AI pre-launch assessments are part of a larger, end-to-end pre-launch process that includes technical safety testing and standard privacy and security reviews.

Common GenAI risks and interventions

Drawing from insights garnered over hundreds of GenAI launches in 2023, we have refined emerging best practices to mitigate these risks, which we share below. They range from technical tools – such as SynthID or About this image – that can help identify misinformation and disinformation when GenAI tools are used by malicious actors, to explainability techniques, such as increasing explanatory information throughout the AI product, not just at the moment of decision.

These practices also include adversarial testing and red teaming, or “ethical hacking”: systematically evaluating a GenAI model to learn how it behaves when provided with malicious or inadvertently harmful input.
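To make this concrete, below is a minimal sketch of what such an adversarial testing harness might look like. Everything in it is illustrative: the RedTeamCase structure, the classify_harm placeholder and the example markers are hypothetical stand-ins, not Google’s internal red-teaming tooling.

```python
# Minimal red-teaming harness sketch (illustrative; not Google's internal tooling).
# It feeds adversarial prompts to a model under test and flags concerning output.
from dataclasses import dataclass


@dataclass
class RedTeamCase:
    category: str  # e.g. "prompt injection" or "harmful instructions"
    prompt: str    # the malicious or inadvertently harmful input


def classify_harm(response: str) -> bool:
    """Placeholder safety check. In practice this would be a trained policy
    classifier or human review, not a simple keyword match."""
    markers = ["here is how to", "step-by-step instructions for"]
    return any(marker in response.lower() for marker in markers)


def run_red_team(model_fn, cases):
    """model_fn: any callable mapping a prompt string to a response string."""
    findings = []
    for case in cases:
        response = model_fn(case.prompt)
        findings.append({
            "category": case.category,
            "prompt": case.prompt,
            "flagged": classify_harm(response),
        })
    return findings
```

In a real evaluation, flagged cases would feed back into policy refinement and the pre-launch reviews described above.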

[Table: common GenAI interventions, from the 2023 Google AI Principles Progress Update.]

Investing in research on GenAI risks

GenAI is a rapidly evolving technology with rapidly evolving risks, so conducting foundational research to gain additional insight into these risks is important. For example, we recently worked with Gallup, Inc. to survey perceptions and attitudes around technology, to understand how anthropomorphism influences people’s use of GenAI chatbots and other tools.

Such insights help us understand the potential benefits and dangers of humanizing technology, and they inform the development of new interventions, mitigations and guardrails to help people use AI appropriately.

Our researchers also explore GenAI through the lens of human-centred topics. An exploratory study with five designers examined how people with no machine learning programming experience or training can use prompt programming to quickly prototype functional user-interface mock-ups. We used this technique internally and then launched it externally as Google AI Studio for GenAI developers in 179 countries and territories.
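As a rough illustration of the prompt-programming pattern the study describes, the sketch below wraps a generic text-generation callable. The UI_PROMPT_TEMPLATE and the generate_text parameter are hypothetical stand-ins, not the Google AI Studio API.

```python
# Sketch of prompt programming for UI prototyping (illustrative; the template
# and generate_text callable are stand-ins, not any specific product's API).
UI_PROMPT_TEMPLATE = """You are a UI prototyping assistant.
Produce one self-contained HTML mock-up for the screen described below.
Use only inline CSS, no JavaScript, and placeholder text where needed.

Screen description: {description}"""


def prototype_ui(generate_text, description: str) -> str:
    """generate_text: any callable mapping a prompt string to model output."""
    return generate_text(UI_PROMPT_TEMPLATE.format(description=description))

# A designer with no ML training iterates by editing only the description:
# mockup = prototype_ui(my_model, "a settings page with a dark-mode toggle")
```

The design point is that the model-specific machinery stays fixed while non-programmers iterate purely on the natural-language description.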

Multi-disciplinary AI research can help address society-level, shared challenges, from forecasting hunger to predicting diseases to improving productivity. Our recent research with Boston Consulting Group also found that AI has the potential to mitigate 5-10% of global greenhouse gas emissions by 2030.

To help promote diverse perspectives in society-centred AI research, we announced that 70 professors were selected for the 2023 Award for Inclusion Research Programme, which supports academic research that addresses the needs of historically marginalized groups globally.

Collaborative transparency is key

Building AI responsibly must be a collective effort. It’s necessary to involve academics and research labs proactively, as well as social scientists, industry-specific experts, policymakers, creators, publishers and people using AI in their daily lives.

A growing number of collaborative forums inform and complement the AI Governance Alliance, each focused on specific areas, such as MLCommons’ multi-stakeholder development of standard AI safety benchmarks. The White House sponsored a red-teaming event at DEF CON, which drew over 2,000 people to test industry-leading large language models from Google, NVIDIA, Anthropic, Hugging Face and others.

The Partnership on AI facilitates collaborative efforts on a synthetic media framework, data enrichment sourcing guidelines and guidance for safe model deployment. And looking to the future, we co-established, with industry partners, the Frontier Model Forum to develop standards and benchmarks for emerging safety and security issues of frontier models.

Safely sustaining and scaling this collaborative approach over time requires a multi-stakeholder approach to governance. Across industries and nations, we can learn from the experience of the internet’s growth over decades to develop common standards, shared best practices and appropriate risk-based regulation. This effort will take not only partnership but also transparency.

As a concrete call to action, the industry needs to develop and agree on shared standards for transparency documentation, such as technical reports or model and data cards, that make essential information public based on internal documentation of safety and other model evaluation details.

These transparency artefacts are more than communication vehicles. They can offer shared guidance for AI researchers, deployers and downstream developers on the responsible use of advanced AI, helping the pursuit of responsible GenAI applications and transformation, together.
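To illustrate what a machine-readable version of such an artefact could look like, here is a small sketch of a model card as a data structure. The field names follow the general “model cards” pattern from the research literature; they are illustrative, not a mandated standard.

```python
# Sketch of a machine-readable model card (illustrative; field names follow the
# general "model cards" pattern from the literature, not a mandated standard).
import json
from dataclasses import dataclass, field, asdict


@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data_summary: str = ""
    safety_evaluations: dict = field(default_factory=dict)  # eval name -> summary
    known_limitations: list = field(default_factory=list)


card = ModelCard(
    model_name="example-genai-model",
    version="1.0",
    intended_use="Drafting and summarizing text for internal review.",
    out_of_scope_uses=["medical or legal advice"],
    safety_evaluations={"red_team_2023": "no critical findings"},
    known_limitations=["may produce incorrect citations"],
)

print(json.dumps(asdict(card), indent=2))  # the publishable transparency artefact
```

A common, machine-readable schema like this would let researchers, deployers and downstream developers compare disclosures across models rather than parsing each vendor’s bespoke documentation.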


