
The future of work starts with trust: How can we close the AI trust gap?

Responsible AI holds great potential for employers and employees


Jim Stratton
Chief Technology Officer, Workday

This article is part of: World Economic Forum Annual Meeting
  • Employers and employees are excited about the potential of AI, according to research by Workday.
  • Yet, both parties are also concerned that AI won't be deployed responsibly.
  • Collaboration between industry and government is essential to ensure AI's responsible, safe and secure development and deployment.

When it comes to the massive time-saving potential of artificial intelligence (AI), new Workday research, Closing the AI Trust Gap, finds that employees and company leaders are generally aligned. Both are excited, interested and eagerly anticipating what comes next.

However, despite much excitement, there’s also hesitancy. Our findings show that while business leaders and employees agree that AI holds great opportunities for business transformation, there is a lack of trust that it will be deployed responsibly. Employees show an even deeper level of scepticism than their leadership counterparts.

Only 62% of business leaders (C-suite or their direct reports) surveyed in our latest global study, for example, welcome AI, and only 62% are confident their organization will ensure AI is implemented in a responsible and trustworthy way. Among employees, these figures drop even lower, to 52% and 55% respectively.

A trust gap has formed

These findings underscore one thing we are certain of here at Workday: AI’s ability to unlock human potential in the workplace can only be fully realized if it is built on a foundation of trust. We believe the future of work starts here — with trust at the centre — and that this gap must be closed.

According to the World Economic Forum, prioritizing transparency, consistency and meeting user expectations is crucial to establishing trust in AI systems. This can only be achieved when companies have smart AI governance in place. Workday has been leading by example for nearly a decade through our commitment to the responsible development and deployment of AI technologies. This governance, partnership and advocacy benefits our customers, our employees and the world around us.

The scale of this challenge may seem daunting, but our experience has taught us that it can be addressed in measured steps.

Closing the trust gap for employees and industries

We believe AI should elevate humans, not displace them, and that trust in this technology must be earned through transparency.

Our research shows that employees are concerned that business priorities and a lack of understanding of AI will negatively impact how organizations approach the human side of an AI-augmented workforce. Case in point: 70% of business leaders agree AI should be developed in a way that easily allows for human review and intervention, yet 42% of employees believe their company does not have a clear understanding of which systems should be fully automated and which require human intervention. Perhaps more worryingly, 23% of employees are not confident their organization puts employee interests above its own when implementing AI.

What's more, we found that among company leaders and employees, the top two drivers of trusted AI are regulation of critical applications of AI and data, and organizational frameworks for ethical AI. Yet three in every four employees say their organization is not collaborating on AI regulation, and four in every five say their company has yet to share guidelines on responsible AI use.

In other words, even if an organization is engaging on the regulatory front and communicating ethical AI guidelines to its people, the message isn’t being received.

What to do? Organizations should consider four fundamental pillars when establishing a responsible AI (RAI) programme. Based on insights from our own programme, these cover principles, practices, people and policy. Here's how:

1. Principles: guiding ethical foundations

At the heart of any successful RAI programme lies a set of guiding principles that delineate the ethical boundaries and commitments of the organization. These principles form the bedrock upon which the entire AI strategy is built. The principles component encompasses the overarching values and ethical standards that should be integrated into every facet of AI development and deployment.

From fairness and transparency to accountability and privacy, these principles serve as the ethical compass, steering AI initiatives towards responsible outcomes. Aligning these principles with the organization's core values ensures a unified approach to ethical decision-making in developing and deploying AI applications.

2. Practices: building responsible infrastructure

To effectively translate principles into action, organizations must adopt a series of best practices that operationalize ethical considerations throughout the AI lifecycle. This includes the following:

Robust and scalable development tools

Utilize a risk-based RAI framework to assess use cases against principles, regulations and best-practice frameworks. Developers and product managers benefit from an RAI risk evaluation tool that helps them identify what makes a use case sensitive and guides them directly to the specific safeguards that address those risks, so they can manage them efficiently.
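To make this concrete, here is a minimal sketch of what such a risk-based evaluation could look like in code. The risk factors, weights and safeguard mappings are illustrative assumptions only, not a description of Workday's actual tool.

```python
# Hypothetical sketch of a risk-based RAI use-case evaluation.
# Risk factors, weights and safeguard mappings are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class UseCase:
    name: str
    processes_personal_data: bool
    affects_employment_decisions: bool
    fully_automated: bool
    safeguards: list[str] = field(default_factory=list)


def assess(use_case: UseCase) -> str:
    """Score a use case and attach the safeguards its risks call for."""
    score = 0
    if use_case.processes_personal_data:
        score += 1
        use_case.safeguards.append("data minimization and privacy review")
    if use_case.affects_employment_decisions:
        score += 2
        use_case.safeguards.append("bias testing with documented fairness criteria")
    if use_case.fully_automated:
        score += 2
        use_case.safeguards.append("human review and intervention path")
    if score >= 4:
        return "high risk: requires RAI advisory board sign-off"
    if score >= 2:
        return "sensitive: apply listed safeguards before release"
    return "lower risk: standard development practices apply"


candidate = UseCase(
    name="resume screening assistant",
    processes_personal_data=True,
    affects_employment_decisions=True,
    fully_automated=False,
)
print(assess(candidate))
print("Safeguards:", candidate.safeguards)
```

Even a simple rubric like this makes a use case's sensitivity, and the safeguards it triggers, explicit and reviewable rather than left to individual judgment.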

Transparency and disclosure

Also central to an RAI approach is customer transparency. To promote transparency regarding AI technology development processes, consider creating detailed fact sheets that provide insight into how the technology is built, tested, maintained and monitored. These help to facilitate understanding while mitigating risks. At Workday, we believe that empowering our customers to manage their data usage and customize the level of detail utilized by AI and machine learning technology is a key step in enhancing responsible design and data protection.
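As a rough illustration, such a fact sheet could also be captured as structured data that travels with the feature, so the same disclosures are available to customers and reviewers alike. The field names and values below are hypothetical, not a published Workday format.

```python
# Hypothetical AI fact sheet represented as structured data.
# Field names and values are illustrative assumptions, not a published format.
ai_fact_sheet = {
    "feature": "skills recommendation",
    "intended_use": "suggest relevant skills to employees and managers",
    "training_data": "aggregated, de-identified data from opted-in customers",
    "human_oversight": "recommendations are advisory; a person makes the final decision",
    "testing": ["accuracy evaluation", "bias and fairness review"],
    "monitoring": "model performance reviewed on a recurring schedule",
    "customer_controls": "administrators can configure or disable the feature",
}

for field_name, value in ai_fact_sheet.items():
    print(f"{field_name}: {value}")
```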

3. People: creating a culture of trust

Beyond algorithms and technological frameworks, AI is shaped and guided by the people who develop, deploy and utilize it. Within this pillar, there are a few key aspects that organizations should enable:

Leadership commitment

Guided by senior leadership commitment, an RAI programme should be championed and supported by an RAI advisory board composed of cross-functional executives, such as the chief compliance officer, chief diversity officer and chief technology officer. This group is responsible for ensuring adherence to ethical AI principles while maintaining human oversight and intervening in technology releases when necessary.

Dedicated resources

To effectively develop and maintain an RAI governance programme, a single person should be accountable for AI at an organization. We recommend appointing a chief responsible AI officer with a dedicated team of multidisciplinary experts to oversee programme design, ethical reviews and training. It is important that this team remains independent and does not take part in frontline AI development. As the scope of AI innovation expands, investment in the size of this team should grow accordingly.

Cross-company support

The effectiveness of an RAI programme stems not only from a dedicated team, but also from the joint efforts of all stakeholders in maintaining its ethical AI principles. A network of embedded RAI champions should be established within product and technology teams to ensure the company's ethical AI principles are upheld, while also providing governance as local ambassadors and guides.

4. Policy: shaping AI deployment through regulation

While the first three pillars serve as the internal foundation for RAI, public policy plays an equally crucial role in closing the AI trust gap more broadly. Ultimately, regulation establishes the infrastructure that enforces ethical principles and best practices and shapes the organizational culture surrounding AI. To strengthen the integration of ethical values throughout the entire AI ecosystem, organizations must collaborate with policymakers, navigate evolving AI regulations and develop responsible AI practices of their own.

At Workday, we have long recognized the importance of public policy and have been advocating for AI regulation since 2019. We have been working for years to help lay the foundations for robust AI regulation that builds trust and advances innovation. From the European Union, where we were encouraged to see the recent political agreement on the EU AI Act, to Washington, D.C., we’re proactively collaborating with policymakers to ensure regulation supports the development and deployment of responsible AI.

And, recognizing the importance of best practices in closing the trust gap, we worked with the Future of Privacy Forum and other companies to develop a roadmap for fostering responsible AI practices in the workplace. As we navigate this journey, it’s clear that collaboration between industry and government is essential to ensure AI's responsible, safe and secure development and deployment.

The pace of change that AI brings may be fast, but we can’t lose sight of what matters most in this world — building trust. Let’s close the gap, together.


The views expressed in this article are those of the author alone and not the World Economic Forum.
