Who is the human in human-centred AI?

Human-centred AI raises questions about morality, ethics, faith and spiritual expression.


Jaco J. Hamman
Professor of Religion, Psychology, and Culture, Vanderbilt University
  • Human-centred AI raises questions about who is included, with disparities in access, demographic bias and unintended consequences shaping its impact.
  • Global frameworks such as the World Economic Forum's Global Gender Gap Report and EU AI Act highlight the need for ethics, transparency and inclusivity.
  • Faith traditions and spirituality contribute wisdom, ethics and boundaries that can help guide AI development towards the common good.

The implementation and use of generative artificial intelligence (AI) are disrupting industries and come at significant financial and environmental costs. Developing human-centred AI and increasing human-centred skills are two responses in the search for responsible AI.

The World Economic Forum's Future of Jobs Report 2025, Stanford's AI Index Report 2025 and the AI Incident Database all highlight the power of large language models (LLMs). The reports identify the need for human-centred skills in the workplace and for ethical guidelines around a technology that comes with biases, hallucinations and risks.

The turn to human-centred AI and skills reveals a societal aspiration: that technological progress should, at its best, promote human flourishing alongside sustainable business practices and environmental stewardship. This turn accepts that humans often relate in profound ways to “persons and things” – including chatbots – which exposes a human vulnerability that can be exploited. Societies are recognizing the wide-ranging impact of building and using generative AI. The EU AI Act, agreed in 2023 as the world’s first comprehensive AI law, seeks to ensure that AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly.

A technology such as AI, in the varied ways we encounter algorithms, demands our shared discernment. Historian of technology Melvin Kranzberg reminds us that “technology is neither good nor bad; nor is it neutral”. Implied in this dictum is that technology functions as a power relation with unintended and extensive human, social and environmental consequences.

How, when and where a technology is applied by a specific agent has different outcomes. A collaborative, multistakeholder approach is needed to best guide humanity’s development, implementation and use of AI platforms.


Questions for human-centred AI

Developing human-centred AI and upskilling employees raise three critical questions:

1. Who is the human in human-centred efforts?

Is it a person in a C-suite, someone in middle-management implementing C-suite decisions, a worker on an assembly line, or might it be a person in a rural part of the world with limited internet access? Is the human someone waiting for full self-driving capacity for their electric vehicle, a person expecting rain for their subsistence crop during drought, or an islander anticipating the next high tide amid rising sea levels?


2. Who is imagining the human using an AI platform?

Is it a younger, male technologist or a multi-disciplinary team embracing a diverse and complex understanding of humanity? Women and persons of colour are greatly under-represented in AI specialist demographics.

3. What guardrails will be put in place to protect the humans impacted by an AI platform across demographic categories such as nationality, gender, religion, education, ability and class?

How the human in human-centred approaches is viewed and understood has significant implications. Vast differences in people’s lived experiences are confirmed by the World Economic Forum Global Gender Gap Report 2025 and by Gini indices. Technological progress often overlooks the diverse ways humanity manifests itself, along with the unintended consequences of such progress.


The role of faith and spirituality in guiding AI

Whereas disciplines such as psychology, sociology, anthropology and history can help those building AI platforms understand the humans engaging with their products, faith traditions – representing 75.8% of the world’s population (2020) – contribute uniquely to this conversation.

A surprising number of persons who do not identify with a religion but practice spirituality – the so-called “nones” – hold religious beliefs too. Faith traditions and spirituality add value in at least six concrete ways:

1. Religions have a deep understanding of human flourishing and environmental stewardship.

2. Religious scriptures, traditions and spiritual practices offer values such as peace, compassion and justice supporting ethical living.

3. Faith is practiced universally in contexts and situations often overlooked in the development and implementation of AI platforms.

4. Religious narratives remind us that boundaries support personal, societal and environmental well-being. Boundaries and rules are not necessarily stifling.

5. Religious movements have also led to some of the lowest moments in human history, instigating and sanctioning injustice, violence and destruction. Technology can learn from the mistakes religions made.

6. Faith traditions are the historical guardians of wisdom, which takes one beyond the data, information and knowledge driving algorithms.

Each of these ways in which faith traditions and spirituality can add value to a conversation on human-centred AI is layered and rich in potential.

Documents such as the 1934 Barmen Declaration against Christian Nationalism, the 2004 ecumenical Christian Accra Confession against economic and environmental injustices and the 2009 Charter for Compassion, emphasizing the wisdom of the Dalai Lama, can inform a discussion around human-centred AI. Recent reports, such as the interfaith statement towards a New Ethical Multilateralism and the Pontifical Jubilee Report, pursue the common good in a complex world facing a polycrisis.

Technological progress that serves the common good is possible. Creative collaboration between diverse stakeholders, including government, business, tech companies, the academy and faith communities holds possibility as a best practice.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.




© 2025 World Economic Forum