Why philanthropy needs to prepare itself for a world powered by AI

We need to ensure that artificial intelligence is created and used ethically within society. Image: Unsplash/Chris Ried

Vilas Dhar
President and Trustee, Patrick J. McGovern Foundation
Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin

Originally published by Philanthropy on 30 March 2021

  • Public trust that A.I. is built and used ethically is essential to realizing its potential.
  • A group of 20 senior philanthropic leaders came together at a World Economic Forum convening to explore these challenges and ways to prevent the misuse of A.I.
  • Those conversations contributed to the launch of the Global AI Action Alliance, a platform for philanthropic and technology leaders to engage in the development of ethical A.I. practices.
  • They also led to the creation of a four-point action plan for philanthropy.

Artificial intelligence presents itself in both grand and mundane ways. It accelerates the scientific process, leading most recently to the development of COVID-19 vaccines at record speed. It runs self-driving cars, allowing them to smoothly navigate downtown streets. And it manages our emails and online calendars, improving our productivity and well-being.

But A.I.’s potential for transforming human learning and experience also sparks unease and raises fundamental questions. Who should control the creation and use of these tools? Are we comfortable handing a small group of technologists the keys to our social and economic development engine? And what role should philanthropy play in protecting the most vulnerable and ensuring that A.I. benefits the greater good?

Controversies over facial recognition, automated decision making, and COVID-19 tracking have shown that realizing A.I.’s potential requires strong buy-in from citizens and governments, based on their trust that the technology is built and used ethically.

To explore these challenges, we recently brought together a group of 20 senior philanthropic leaders representing institutions including the Schmidt Family Foundation, the Mastercard Center for Inclusive Growth, and the Berggruen Institute at a virtual convening of the World Economic Forum. Our conversation reflected philanthropists’ profound interest in both the positive potential for A.I. and the need to more deeply understand how to harness, steer, and govern these tools to prevent misuse and ensure they are deployed for social good.

Those conversations contributed to the launch of a new Global AI Action Alliance, a platform for philanthropic and technology leaders to engage in the development of ethical A.I. practices and tools. They also led to the creation of an action plan that can help pave the way for deeper philanthropic participation in the effective, safe, and equitable use of A.I. to address societal needs. The plan encompasses four key areas:

A commitment to learning. While some foundations are tech-savvy, philanthropy as a field is not at the forefront of digital transformation. But we shouldn’t leave philanthropy’s response to A.I.’s challenges and potential to a handful of foundations focused on technological innovation. A broad swath of philanthropic organizations, regardless of their focus, needs to invest in learning about A.I., sharing their perspectives across the field and with grantees, and adapting traditional strategies to incorporate these technologies.

We need to be honest about our organizational blind spots and commit to building internal capacity where needed. That means learning from and hiring data scientists and A.I. practitioners. The Rockefeller Foundation has led the way in this area, hiring a chief data officer early on and convening working groups on the design and implementation of responsible A.I.

And today, the Patrick J. McGovern Foundation, which one of us (Vilas Dhar) heads, deepened its own knowledge base by announcing plans to merge with the Silicon Valley-based Cloudera Foundation to provide greater A.I. resources and expertise to grantees. Cloudera’s $9 million endowment and $3 million in existing grants, along with its staff and CEO, Claudia Juech, will form a new Data and Society program within the Patrick J. McGovern Foundation.

Integration of A.I. into key grant-making areas. Rather than relegating topics involving A.I. and data to the IT team, foundation leaders should consider how these technologies affect their key focus areas. Educational outcomes, for example, can be addressed through A.I. technologies that provide better language translation, increased access to online learning platforms, and interactive teaching tools. A.I. can also play an integral role in addressing issues such as food insecurity. For example, a nonprofit called the Common Market uses A.I. to improve its food supply chains between farmers, growing networks, and food banks across Texas, the Southeast, and the Mid-Atlantic.

At each stage of the decision making and programming process, philanthropic leaders should be asking, “What is the potential application of A.I., and what are the benefits and risks?”

Investment in safe data sharing. Philanthropic institutions have the advantage of looking across a wide range of organizations in a particular field or region and are well positioned to support the aggregation and sharing of data and technical knowledge. The fact that they rarely do so is a missed opportunity. A.I. tools rely on massive amounts of data to learn and pinpoint patterns on issues such as policing, homelessness, and public health. But for many nonprofits, it is challenging to amass data in meaningful quantities or to securely store and analyze the data they gather, especially since funding for such internal operations is typically scarce.

Philanthropic organizations should play a central role in supporting efforts to make data more accessible to grantees through vehicles such as data cooperatives and data trusts. These entities link data held by otherwise separate groups, providing even small nonprofits with robust data and analysis capabilities. Unlike many commercial data-gathering sources, they also address privacy concerns by ensuring that data is held confidentially and applied only for its intended use.

The Himalayan Cataract Project, for example, which seeks to cure blindness around the world through simple and inexpensive cataract surgery, is building a shared framework for how patient data is gathered, distributed, and used among ophthalmologic health organizations. This common standard not only gives health workers better insights on how to treat patients who may be served by multiple organizations but also ensures their privacy by imposing strict guidelines on how the data is used.
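
To make the data-trust model concrete, consider the minimal sketch below. It is a hypothetical Python illustration of the core idea: pooled records are released only for purposes each contributor has approved. The class and field names are our own invention, not any real system’s API, and a production data trust would add anonymization, access auditing, and legal governance on top.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    contributor: str             # nonprofit that supplied the record
    payload: dict                # the data itself (e.g. anonymized service stats)
    allowed_purposes: frozenset  # uses the contributor consented to

@dataclass
class DataTrust:
    """Pools records from member organizations and enforces purpose limitation."""
    records: list = field(default_factory=list)

    def contribute(self, record: Record) -> None:
        self.records.append(record)

    def query(self, requester: str, purpose: str) -> list:
        # A real trust would also authenticate and log the requester here.
        # Release only records whose contributors approved this purpose.
        return [r.payload for r in self.records if purpose in r.allowed_purposes]

# Example: two small nonprofits pool homelessness data for research use only.
trust = DataTrust()
trust.contribute(Record("ShelterA", {"clients_served": 120}, frozenset({"research"})))
trust.contribute(Record("ShelterB", {"clients_served": 85}, frozenset({"research", "advocacy"})))

print(trust.query("University Lab", purpose="research"))   # both records released
print(trust.query("Marketing Firm", purpose="marketing"))  # none: purpose not approved
```

The key design choice in this sketch is that purpose limitation is enforced at the point of data release rather than by policy alone, which is what lets even small nonprofits share data while honoring the intended-use restrictions described above.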

Diversification of voices. The conversation about development, ownership, and use of technology should expand to include philanthropists, activists, policy makers, and business leaders. In recent A.I. gatherings, we’ve brought together social-change activists and business leaders to facilitate discussions between those who understand the problems facing society and those who can build the solutions. Platforms such as data.org, launched by the Rockefeller Foundation and the Mastercard Center for Inclusive Growth, are furthering this type of dialogue by highlighting and funding A.I. solutions from around the world on issues such as improving economic well-being and creating safe and sustainable cities.

During our roundtable conversation at the World Economic Forum, Dan Huttenlocher, dean of the MIT Stephen A. Schwarzman College of Computing and board chair of the MacArthur Foundation, observed that “A.I. can help us leapfrog some of the societal challenges we face, but we have to design it to do so. There’s no such thing as a ‘good technology’ in and of itself — we have to make it work for us.”

Philanthropy occupies a position of financial privilege, moral responsibility, and public leadership. We must use that position as a platform for collaboration among those inside and outside of our field to build a future in which A.I. works safely and effectively to help solve humanity’s greatest challenges.
