Fourth Industrial Revolution

The playbook for responsible generative AI development and use

Generative AI is increasingly being used by companies across the world.

Generative AI is on the rise, but we need to ensure its responsible development and use.

Image: Getty Images/iStockphoto

Genevieve Smith
Founding Director, Responsible AI Initiative, Berkeley Artificial Intelligence Research
Share:
Stay up to date:

Generative Artificial Intelligence

  • Generative AI's use is growing fast but its full potential is yet to be unlocked.
  • A research team led by UC Berkeley explores what responsible development and use of GenAI looks like within organizations.
  • UC Berkeley's Responsible Use of Generative AI playbook outlines how to responsibly use GenAI in day-to-day work and in new products.

Generative AI (GenAI) use continues to grow rapidly across companies globally. In fact, research shows that more than a third (39.4%) of US adults (18-64 years old) reported using GenAI last year, with 24% of workers using it at least once a week and 11% daily, across a range of occupations and tasks.

Adoption has increased rapidly. A 2024 McKinsey study found a near doubling of GenAI use across all regions in the previous year with ChatGPT boasting 200 million weekly active users as of August 2024 – double the number from 2023.

The pace of adoption for GenAI is quicker than the adoption of personal computers and the internet. The hype is not just continuing but amplifying. It is being urged on by innovators, companies and investors, as well as countries and governments eager to lead and capitalize on the technology.

Indeed, a recent survey of professionals globally by Thomson Reuters showed that 95% of respondents believe AI will be central to their organization’s workflow within the next five years.

Responsible development and use of generative AI needed

To unlock the technology’s full potential, organizations are increasingly recognizing the importance of proactively designing beneficial GenAI applications guided by principles, addressing potential risks and transparently sharing lessons learned to help establish good practices for how GenAI is built, tested and used responsibly.

Research shows the GenAI high performers who are capturing the most value are paying more attention to, and addressing, known risks – and working to identify and prevent new ones.

Identifying and defining what responsible development and use of new GenAI products and services looks like, particularly at the product management level, can be challenging at such an early stage.

0 seconds of 0 secondsVolume 90%
Press shift question mark to access a list of keyboard shortcuts
00:00
00:00
00:00
 

By listening to the needs of those making both business and technical decisions about how GenAI products are brought to market and used, organizations can better understand the factors that might constrain or support responsible development and use.

Recognizing this gap and opportunity, a research team led by UC Berkeley (Berkeley AI Research Lab & Haas School of Business) along with researchers from Stanford University and the Oxford Internet Institute, with support from Google, is exploring what responsible development and use of GenAI looks like.

The team conducted interviews with 25 US-based product managers and a survey of more than 300 global product managers across a variety of companies that offer generative AI products – and use them internally – to better understand current challenges faced while implementing responsible AI practices in both product development and product usage.

From uncertainty to action in responsible use of generative AI

At a high level, our research finds that there remains a widespread need among product managers – critical decision-making gatekeepers – for an industry-wide standard vocabulary and practical guides to help inform companies’ approaches to operationalizing responsible development and deployment of advanced GenAI models and products.

The study, Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices, showed that product managers share a sense of uncertainty, which stems from the fact that GenAI is a nascent technology and has moved rapidly from labs into products. Its use across industries is still in its early days.

This uncertainty is linked to the lack of standardized definitions for terms and concepts within the AI ecosystem, as well as a clearly expressed need for practical organizational guidance for responsible use. In particular, there is an urgent need for specific standards on model transparency – for example explainability, reporting on how a model was built and tested for safety, intellectual property protections, model security and more.

Transparency (or lack of transparency) is a critical topic. The study highlights how perceived lack of transparency not only feeds uncertainty but holds back product manager adoption of, and trust in, GenAI models. This link between transparency and adoption has been illustrated elsewhere; for example, in an analysis of model use on machine learning platform Hugging Face, researchers found that models with transparency artefacts are downloaded significantly more than those without.

Finally, incentives matter. Only 19% of our survey respondents report having incentives for responsible use of GenAI, with incentives often tied to traditional incentives of shipping products and moving fast.

On the positive side, at organizations with AI principles and leadership support, product managers are nearly 3 times more likely to work with responsible AI or trust and safety teams and 2.4 times more likely to implement safeguards and standards, like assessing for bias.

Furthermore, even amid uncertain environments, PMs reported taking responsibility micro-moments, which are actions such as individual or team-wide reviews and safeguarding standards for customer data, and finding ways to align actions with company values.

In recognition of the need for practical guides to inform responsible development and use of GenAI and to support the space broadly, our research team created a playbook building on the research. It was tested with product managers at Google and across industry.

Key ways to use generative AI responsibly

The Responsible Use of Generative playbook outlines 10 plays – five for business leaders and five for product managers – that demonstrate how to responsibly use GenAI in day-to-day work and in new products, with accompanying resources and tooling.

Five plays for business leaders:

  • Ensure leadership recognizes the value of responsible GenAI use, develops responsible AI principles and communicates the organization’s commitment to responsibility to all employees.
  • Implement policies and accompanying standards to ensure responsible use of generative AI.
  • Build a comprehensive responsible AI governance framework that defines key roles, establishes organizational structures and fosters a culture of shared accountability.
  • Update incentives to align performance, product development and metrics with responsibility.
  • Implement tailored training to address gaps and support responsible use of GenAI.
The Responsible Use of Generative AI playbooks offers five plays for business leaders and another five for product managers.
The Responsible Use of Generative AI playbooks offers five plays for business leaders and another five for product managers. Image: Responsible AI Initiative/Berkeley Haas

Five plays for product managers:

  • Conduct “gut checks” to evaluate responsibility risks in work use cases and product development.
  • Choose a model for GenAI products by assessing needs and potential risks. Ensure transparency by documenting the model, fine-tuning data and highlighting key considerations.
  • Conduct risk assessments and audits for GenAI products, involving cross-functional teams, expert oversight and tools aligned with organizational principles and core risks.
  • Implement red-teaming and adversarial testing to uncover vulnerabilities, while capturing and responding to user feedback over time.
  • Track your responsibility micro-moments—simple, impactful actions that demonstrate responsible decision-making—and showcase them in performance reviews.

Playbook supports transformation in the generative AI age

The Responsible Use of Generative AI playbook serves as a practical complement to the World Economic Forum’s Centre for the Fourth Industrial Revolution’s cross-industry initiative under the AI Governance Alliance’s ongoing work – namely the recent Industries in the Intelligent Age white paper series.

Discover

How is the World Economic Forum creating guardrails for Artificial Intelligence?

This initiative asks: How can industry leaders drive growth and transformation in the age of AI and GenAI responsibly, benefiting both business and society?

The playbook provides structured guidance on how business leaders and product managers can systematically integrate emerging responsible GenAI practices into daily workflows and product development, through actionable steps aligned to strategic priorities.

The deep dive emphasizes the need for organizations to move from isolated AI use cases to a visionary, yet practical, enterprise-wide transformation through GenAI.

Accept our marketing cookies to access this content.

These cookies are currently disabled in your browser.

Don't miss any update on this topic

Create a free account and access your personalized content collection with our latest publications and analyses.

Sign up for free

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

Share:
World Economic Forum logo

Forum Stories newsletter

Bringing you weekly curated insights and analysis on the global issues that matter.